Network Working Group                                    Brooks Hickman
Internet-Draft                                    Spirent Communications
Expiration Date: November 2002                             David Newman
                                                           Network Test
                                                        Saldju Tadjudin
                                                  Spirent Communications
                                                           Terry Martin
                                                          M2networx INC
                                                               May 2002

          Benchmarking Methodology for Firewall Performance

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Table of Contents

   1. Introduction
   2. Requirements
   3. Scope
   4. Test Setup
      4.1 Test Considerations
      4.2 Virtual Clients/Servers
      4.3 Test Traffic Requirements
      4.4 DUT/SUT Traffic Flows
      4.5 Multiple Client/Server Testing
      4.6 Network Address Translation (NAT)
      4.7 Rule Sets
      4.8 Web Caching
      4.9 Authentication
   5. Benchmarking Tests
      5.1 IP Throughput
      5.2 Concurrent TCP Connection Capacity
      5.3 Maximum TCP Connection Establishment Rate
      5.4 Maximum TCP Connection Tear Down Rate
      5.5 Denial Of Service Handling
      5.6 HTTP Transfer Rate
      5.7 HTTP Concurrent Transaction Capacity
      5.8 HTTP Transaction Rate
      5.9 Illegal Traffic Handling
      5.10 IP Fragmentation Handling
      5.11 Latency
   Appendices
      A. HyperText Transfer Protocol (HTTP)
      B. Connection Establishment Time Measurements
      C. Connection Tear Down Time Measurements
      D. References

1. Introduction

   This document provides methodologies for the performance
   benchmarking of firewalls in four areas: forwarding, connection
   handling, latency and filtering. In addition to defining the tests,
   this document also describes specific formats for reporting the
   results of the tests.
   A previous document, "Benchmarking Terminology for Firewall
   Performance" [1], defines many of the terms used in this document.
   The terminology document SHOULD be consulted before attempting to
   make use of this document.

2. Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119.

3. Scope

   Firewalls can provide a single point of defense between networks.
   Usually, a firewall protects private networks from the public or
   shared networks to which it is connected. A firewall can be as
   simple as a device that filters packets or as complex as a group of
   devices that combines packet filtering with application-level proxy
   or network translation services. This document will focus on
   developing benchmark tests of DUT/SUTs that are, wherever possible,
   independent of their implementation.

4. Test Setup

   Test configurations defined in this document are confined to
   dual-homed and tri-homed setups, as shown in figure 1 and figure 2
   respectively.

   Firewalls employing dual-homed configurations connect two networks.
   One interface of the firewall is attached to the unprotected
   network, typically the public network (Internet). The other
   interface is connected to the protected network, typically the
   internal LAN.

   In the case of dual-homed configurations, servers which are made
   accessible to the public (unprotected) network are attached to the
   private (protected) network.
   +----------+          +----------+          +----------+
   |          |          |          |          |          |
   | Servers/ |----------| DUT/SUT  |----------| Servers/ |
   | Clients  |          |          |          | Clients  |
   |          |          |          |          |          |
   +----------+          +----------+          +----------+

    Protected                                  Unprotected
     Network                                     Network

                    Figure 1 (Dual-Homed)

   Tri-homed [1] configurations employ a third segment called a
   Demilitarized Zone (DMZ). With tri-homed configurations, servers
   accessible to the public network are attached to the DMZ. Tri-homed
   configurations offer additional security by separating server(s)
   accessible to the public network from internal hosts.

   +----------+          +----------+          +----------+
   |          |          |          |          |          |
   | Clients  |----------| DUT/SUT  |----------| Servers/ |
   |          |          |          |          | Clients  |
   +----------+          |          |          +----------+
                         +----------+
    Protected                 |                Unprotected
     Network                  |                  Network
                              | DMZ
                              |
                        +-----------+
                        |           |
                        |  Servers  |
                        |           |
                        +-----------+

                    Figure 2 (Tri-Homed)

4.1 Test Considerations

4.2 Virtual Clients/Servers

   Since firewall testing may involve data sources which emulate
   multiple users or hosts, this methodology uses the terms virtual
   clients and virtual servers. For these firewall tests, virtual
   clients/servers specify application-layer entities which may not be
   associated with a unique physical interface. For example, four
   virtual clients may originate from the same data source [1]. The
   test report SHOULD indicate the number of virtual clients and
   virtual servers participating in the test.

   Testers MUST synchronize all data sources participating in a test.

4.3 Test Traffic Requirements

   While the function of a firewall is to enforce access control
   policies, the criteria by which those policies are defined vary
   depending on the implementation.
   Firewalls may use network-layer, transport-layer or, in many cases,
   application-layer criteria to make access-control decisions.

   For the purposes of benchmarking firewall performance, this
   document references HTTP 1.1 or higher as the application-layer
   entity, although the methodologies may be used as a template for
   benchmarking with other applications. Since testing may involve
   proxy-based DUT/SUTs, HTTP version considerations are discussed in
   appendix A.

4.4 DUT/SUT Traffic Flows

   Since the number of interfaces is not fixed, the traffic flows
   depend on the configuration used in benchmarking the DUT/SUT. Note
   that the term "traffic flows" is associated with client-to-server
   requests.

   For dual-homed configurations, there are two unique traffic flows:

      Client               Server
      ------               ------
      Protected      ->    Unprotected
      Unprotected    ->    Protected

   For tri-homed configurations, there are three unique traffic flows:

      Client               Server
      ------               ------
      Protected      ->    Unprotected
      Protected      ->    DMZ
      Unprotected    ->    DMZ

4.5 Multiple Client/Server Testing

   One or more clients may target multiple servers for a given
   application. Each virtual client MUST initiate connections in a
   round-robin fashion. For example, if the test consisted of six
   virtual clients targeting three servers, the pattern would be as
   follows:

      Client     Target Server (in order of request)
      #1         1 2 3 1...
      #2         2 3 1 2...
      #3         3 1 2 3...
      #4         1 2 3 1...
      #5         2 3 1 2...
      #6         3 1 2 3...

4.6 Network Address Translation (NAT)

   Many firewalls implement network address translation (NAT), a
   function which translates internal host IP addresses attached to
   the protected network to a virtual IP address for communicating
   across the unprotected network (Internet). This involves additional
   processing on the part of the DUT/SUT and may impact performance.
   Therefore, tests SHOULD be run with NAT disabled and with NAT
   enabled to determine the performance differential. The test report
   MUST indicate whether NAT was enabled or disabled.

4.7 Rule Sets

   Rule sets [1] are a collection of access control policies that
   determine which packets the DUT/SUT will forward and which it will
   reject [1]. Since the criteria by which these access control
   policies may be defined will vary depending on the capabilities of
   the DUT/SUT, the following is limited to guidelines for configuring
   rule sets when benchmarking the performance of the DUT/SUT.

   It is RECOMMENDED that a rule be entered for each host (virtual
   client). In addition, testing SHOULD be performed using different
   size rule sets to determine the impact on the performance of the
   DUT/SUT. Rule sets MUST be configured such that rules associated
   with actual test traffic are placed at the end of the rule set, not
   at the beginning.

   The DUT/SUT SHOULD be configured to deny access to all traffic not
   previously defined in the rule set. The test report SHOULD include
   the DUT/SUT's configured rule set(s).

4.8 Web Caching

   Some firewalls include caching agents to reduce network load. When
   a request is made through a caching agent, the agent attempts to
   service the response from its internal memory. The cache itself
   saves responses it receives, such as responses to HTTP GET
   requests. Testing SHOULD be performed with any caching agents on
   the DUT/SUT disabled.

4.9 Authentication

   Access control may involve authentication processes such as user,
   client or session authentication. Authentication is usually
   performed by devices external to the firewall itself, such as an
   authentication server(s), and may add to the latency of the system.
   Any authentication processes MUST be included as part of the
   connection setup process.

5. Benchmarking Tests
5.1 IP Throughput

5.1.1 Objective

   To determine the throughput of network-layer data traversing the
   DUT/SUT, as defined in RFC1242 [1]. Note that while RFC1242 uses
   the term frames, which is associated with the link layer, this
   procedure uses the term packets, since it references the network
   layer. This test is intended to baseline the ability of the DUT/SUT
   to forward packets at the network layer.

5.1.2 Setup Parameters

   The following parameters MUST be defined:

   Packet Size - Number of bytes in the IP packet, exclusive of any
   link-layer header or checksums.

   Test Duration - Duration of the test, expressed in seconds.

5.1.3 Procedure

   The tester will offer client/server traffic to the DUT/SUT,
   consisting of unicast IP packets. The tester MUST offer the packets
   at a constant rate. The test MAY consist of either bi-directional
   or unidirectional traffic; in the latter case, the client offers a
   unicast stream of packets to the server.

   The test MAY employ an iterative search algorithm. Each iteration
   involves the tester varying the intended load until it finds the
   maximum rate at which no packet loss occurs. Since backpressure
   mechanisms may be employed, resulting in the intended load and
   offered load being different, the test SHOULD be performed in
   either a packet-based or time-based manner as described in
   RFC2889 [7]. As with RFC1242, the term packet is used in place of
   frame. The duration of the test portion of each trial MUST be at
   least 30 seconds.

   When comparing DUT/SUTs with different MTUs, it is RECOMMENDED to
   limit the maximum IP packet size tested to the largest MTU
   supported by all of the DUT/SUTs.

5.1.4 Measurements

5.1.4.1 Network Layer

   Throughput - Maximum offered load, expressed in either bits per
   second or packets per second, at which no packet loss is detected.
   Forwarding Rate - The rate, expressed in either bits per second or
   packets per second, at which the device is observed to successfully
   forward packets to the correct destination interface in response to
   a specified offered load.

5.1.5 Reporting Format

   The test report MUST note the packet size(s), test duration,
   throughput and forwarding rate. If the test involved offering
   packets which target more than one segment (protected, unprotected
   or DMZ), the report MUST identify the results as an aggregate
   throughput measurement.

   The throughput results SHOULD be reported as a table with a row for
   each of the tested packet sizes. There SHOULD be columns for the
   packet size, the intended load, the offered load, the resultant
   throughput and the forwarding rate for each test.

   A log file MAY be generated which includes the packet size, test
   duration and, for each iteration:

      - Step iteration
      - Pass/fail status
      - Total packets offered
      - Total packets forwarded
      - Intended load
      - Offered load (if applicable)
      - Forwarding rate

5.2 Concurrent TCP Connection Capacity

5.2.1 Objective

   To determine the maximum number of concurrent TCP connections
   supported through or with the DUT/SUT, as defined in RFC2647 [1].

5.2.2 Setup Parameters

   The following parameters MUST be defined for all tests:

5.2.2.1 Transport-Layer Setup Parameters

   Connection Attempt Rate - The aggregate rate, expressed in
   connections per second, at which new TCP connection requests are
   attempted. The rate SHOULD be set at or lower than the maximum rate
   at which the DUT/SUT can accept connection requests.

   Age Time - The time, expressed in seconds, the DUT/SUT will keep a
   connection in its connection table after receiving a TCP FIN or RST
   packet.

5.2.2.2 Application-Layer Setup Parameters

   Validation Method - HTTP 1.1 or higher MUST be used for this test.
   Object Size - Defines the number of bytes, excluding any bytes
   associated with the HTTP header, to be transferred in response to
   an HTTP 1.1 or higher GET request.

5.2.3 Procedure

   An iterative search algorithm MAY be used to determine the maximum
   number of concurrent TCP connections supported through or with the
   DUT/SUT.

   For each iteration, the aggregate number of concurrent TCP
   connections attempted by the virtual client(s) will be varied. The
   destination address will be that of the server or that of the NAT
   proxy. The aggregate rate will be defined by the connection attempt
   rate, and connections will be attempted in a round-robin fashion
   (see section 4.5).

   To validate all connections, the virtual client(s) MUST request an
   object using an HTTP 1.1 or higher GET request. The requests MUST
   be initiated on each connection after all of the TCP connections
   have been established.

   When testing proxy-based DUT/SUTs, the virtual client(s) MUST
   request two objects using HTTP 1.1 or higher GET requests. The
   first GET request is required for connection establishment time
   measurements as specified in appendix B. The second request is used
   for validation, as previously mentioned. When comparing proxy-based
   and non-proxy-based DUT/SUTs, the test MUST be performed in the
   same manner.

   Between each iteration, it is RECOMMENDED that the tester issue a
   TCP RST referencing all connections attempted in the previous
   iteration, regardless of whether or not the connection attempt was
   successful. The tester will wait for the age time before continuing
   to the next iteration.

5.2.4 Measurements

5.2.4.1 Application-Layer Measurements

   Number of objects requested

   Number of objects returned

5.2.4.2 Transport-Layer Measurements

   Maximum concurrent connections - Total number of TCP connections
   open for the last successful iteration performed in the search
   algorithm.
   The following measurements SHOULD be performed on a per-iteration
   basis:

   Minimum connection establishment time - Lowest TCP connection
   establishment time measured, as defined in appendix B.

   Maximum connection establishment time - Highest TCP connection
   establishment time measured, as defined in appendix B.

   Average connection establishment time - The mean of all
   measurements of connection establishment times.

   Aggregate connection establishment time - The total of all
   measurements of connection establishment times.

5.2.5 Reporting Format

5.2.5.1 Application-Layer Reporting

   The test report MUST note the object size, number of completed
   requests and number of completed responses.

   The intermediate results of the search algorithm MAY be reported in
   a table format with a column for each iteration. There SHOULD be
   rows for the number of requests attempted, number of requests
   completed, number of responses attempted and number of responses
   completed. The table MAY be combined with the transport-layer
   reporting, provided that the table identifies these as
   application-layer measurements.

   Version information:

   The test report MUST note the version of the HTTP client(s) and
   server(s).

5.2.5.2 Transport-Layer Reporting

   The test report MUST note the connection attempt rate, age time and
   maximum concurrent connections measured.

   The intermediate results of the search algorithm MAY be reported in
   the format of a table with a column for each iteration. There
   SHOULD be rows for the total number of TCP connections attempted,
   total number of TCP connections completed, minimum TCP connection
   establishment time, maximum TCP connection establishment time,
   average connection establishment time and the aggregate connection
   establishment time.
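   The iterative search that section 5.2.3 permits is commonly
   realized as a binary search over the attempted connection count.
   The following is a minimal, illustrative sketch only;
   attempt_and_validate is a hypothetical probe standing in for one
   full iteration (establish the connections round-robin, validate
   each with an HTTP GET, reset and wait for the age time).

```python
def max_concurrent_connections(attempt_and_validate, low, high):
    """Binary search for the largest concurrent TCP connection count
    for which a full iteration passes.

    attempt_and_validate(n) is a hypothetical callable: it models one
    iteration of section 5.2.3 and returns True only if all n
    connections were established and validated.
    """
    best = 0
    while low <= high:
        mid = (low + high) // 2
        if attempt_and_validate(mid):
            best = mid        # iteration passed; search higher
            low = mid + 1
        else:
            high = mid - 1    # iteration failed; search lower
    return best

# Toy stand-in for a DUT/SUT whose connection table holds 50,000 entries:
print(max_concurrent_connections(lambda n: n <= 50_000, 1, 1_000_000))
# → 50000
```

   A real tester would replace the lambda with actual connection
   establishment and HTTP validation against the DUT/SUT.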
5.3 Maximum TCP Connection Establishment Rate

5.3.1 Objective

   To determine the maximum TCP connection establishment rate through
   or with the DUT/SUT, as defined by RFC2647 [1].

5.3.2 Setup Parameters

   The following parameters MUST be defined for all tests:

5.3.2.1 Transport-Layer Setup Parameters

   Number of Connections - Defines the aggregate number of TCP
   connections that must be established.

   Age Time - The time, expressed in seconds, the DUT/SUT will keep a
   connection in its state table after receiving a TCP FIN or RST
   packet.

5.3.2.2 Application-Layer Setup Parameters

   Validation Method - HTTP 1.1 or higher MUST be used for this test.

   Object Size - Defines the number of bytes, excluding any bytes
   associated with the HTTP header, to be transferred in response to
   an HTTP 1.1 or higher GET request.

5.3.3 Procedure

   An iterative search algorithm MAY be used to determine the maximum
   rate at which the DUT/SUT can accept TCP connection requests.

   For each iteration, the aggregate rate at which TCP connection
   requests are attempted by the virtual client(s) will be varied. The
   aggregate number of connections, defined by the number of
   connections parameter, will be attempted in a round-robin fashion
   (see section 4.5). The destination address will be that of the
   server or that of the NAT proxy.

   The same application-layer object transfers required for validation
   and establishment time measurements as described in the concurrent
   TCP connection capacity test MUST be performed.

   Between each iteration, it is RECOMMENDED that the tester issue a
   TCP RST referencing all connections attempted in the previous
   iteration, regardless of whether or not the connection attempt was
   successful. The tester will wait for the age time before continuing
   to the next iteration.
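   The per-iteration establishment-time summaries this test reports
   (minimum, maximum, average and aggregate, as defined in section
   5.2.4.2) reduce to simple arithmetic over the per-connection
   measurements taken per appendix B. A minimal sketch, with the
   measurement unit (milliseconds here) chosen purely for
   illustration:

```python
def establishment_time_stats(times):
    """Summarize per-connection TCP establishment times (appendix B)
    into the four per-iteration values of section 5.2.4.2."""
    return {
        "minimum": min(times),               # lowest measured time
        "maximum": max(times),               # highest measured time
        "average": sum(times) / len(times),  # mean of all measurements
        "aggregate": sum(times),             # total of all measurements
    }

# Three connections measured at 12 ms, 8 ms and 10 ms:
print(establishment_time_stats([12, 8, 10]))
# → {'minimum': 8, 'maximum': 12, 'average': 10.0, 'aggregate': 30}
```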
5.3.4 Measurements

5.3.4.1 Application-Layer Measurements

   Number of objects requested

   Number of objects returned

5.3.4.2 Transport-Layer Measurements

   Highest connection rate - Highest rate, in connections per second,
   for which the search algorithm passed.

   The following measurements SHOULD be performed on a per-iteration
   basis:

   Minimum connection establishment time - Lowest TCP connection
   establishment time measured, as defined in appendix B.

   Maximum connection establishment time - Highest TCP connection
   establishment time measured, as defined in appendix B.

   Average connection establishment time - The mean of all
   measurements of connection establishment times.

   Aggregate connection establishment time - The total of all
   measurements of connection establishment times.

5.3.5 Reporting Format

5.3.5.1 Application-Layer Reporting

   The test report MUST note the object size(s), number of completed
   requests and number of completed responses.

   The intermediate results of the search algorithm MAY be reported in
   a table format with a column for each iteration. There SHOULD be
   rows for the number of requests and responses completed. The table
   MAY be combined with the transport-layer reporting, provided that
   the table identifies these as application-layer measurements.

   Version information:

   The test report MUST note the version of the HTTP client(s) and
   server(s).

5.3.5.2 Transport-Layer Reporting

   The test report MUST note the number of connections, age time and
   highest connection rate measured.

   The intermediate results of the search algorithm MAY be reported in
   the format of a table with a column for each iteration.
   There SHOULD be rows for the connection attempt rate, total number
   of TCP connections attempted, total number of TCP connections
   completed, minimum TCP connection establishment time, maximum TCP
   connection establishment time, average connection establishment
   time and the aggregate connection establishment time.

5.4 Maximum TCP Connection Tear Down Rate

5.4.1 Objective

   To determine the maximum TCP connection tear down rate through or
   with the DUT/SUT, as defined by RFC2647 [1].

5.4.2 Setup Parameters

   Number of Connections - Defines the number of TCP connections that
   the tester will attempt to tear down.

   Age Time - The time, expressed in seconds, the DUT/SUT will keep a
   connection in its state table after receiving a TCP FIN or RST
   packet.

5.4.3 Procedure

   An iterative search algorithm MAY be used to determine the maximum
   TCP connection tear down rate. The test iterates through different
   TCP connection tear down rates with a fixed number of TCP
   connections.

   The virtual client(s) will initialize the test by establishing the
   number of TCP connections defined by the number of connections
   parameter. The virtual client(s) will then attempt to tear down all
   of the TCP connections at a rate defined by the tear down attempt
   rate. For benchmarking purposes, the tester MUST use a TCP FIN when
   initiating the connection tear down.

   In the case of proxy-based DUT/SUTs, the DUT/SUT will itself
   receive the final ACK in the three-way handshake when a connection
   is being torn down. For validation purposes, the virtual client(s)
   MAY verify that the DUT/SUT received the final ACK in the
   connection tear down exchange for all connections by transmitting a
   TCP segment referencing the previously torn down connection. A TCP
   RST should be received in response to the TCP segment.
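   The rate search described above can be sketched as a sweep over
   candidate tear down rates, keeping the highest rate at which every
   FIN-initiated tear down completes. This is illustrative only;
   run_iteration is a hypothetical callable standing in for one
   iteration (establish the fixed set of connections, tear them all
   down at the given rate, return how many tear downs completed).

```python
def highest_teardown_rate(candidate_rates, n_connections, run_iteration):
    """Return the highest tear down rate (connections per second) at
    which all n_connections TCP connections were successfully torn
    down, per section 5.4.3."""
    best = 0
    for rate in sorted(candidate_rates):
        completed = run_iteration(rate)   # one iteration at this rate
        if completed == n_connections:    # every tear down completed
            best = rate
    return best

# Toy DUT/SUT model that keeps up with tear downs until 8,000/s:
print(highest_teardown_rate(
    [2_000, 4_000, 8_000, 16_000], 10_000,
    lambda rate: 10_000 if rate <= 8_000 else 9_100))
# → 8000
```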
5.4.4 Measurements

   Highest connection tear down rate - Highest rate, in connections
   per second, at which all TCP connections were successfully torn
   down.

   Minimum connection tear down time - Lowest TCP connection tear down
   time measured, as defined in appendix C.

   Maximum connection tear down time - Highest TCP connection tear
   down time measured, as defined in appendix C.

   Average connection tear down time - The mean of all measurements of
   connection tear down times.

   Aggregate connection tear down time - The total of all measurements
   of connection tear down times.

5.4.5 Reporting Format

   The test report MUST note the number of connections, age time and
   highest connection tear down rate measured.

   The intermediate results of the search algorithm SHOULD be reported
   in the format of a table with a column for each iteration. There
   SHOULD be rows for the number of TCP tear downs attempted, number
   of TCP connection tear downs completed, minimum TCP connection tear
   down time, maximum TCP connection tear down time, average TCP
   connection tear down time and the aggregate TCP connection tear
   down time.

5.5 Denial Of Service Handling

5.5.1 Objective

   To determine the effect of a denial of service attack on the
   DUT/SUT's TCP connection establishment and/or HTTP transfer rates.
   The denial of service handling test MUST be run after obtaining
   baseline measurements from sections 5.3 and/or 5.6.

   The TCP SYN flood attack exploits TCP's three-way handshake
   mechanism by having an attacking source host generate TCP SYN
   packets with random source addresses towards a victim host, thereby
   consuming that host's resources.

5.5.2 Setup Parameters

   Use the same setup parameters as defined in section 5.3.2 or 5.6.2,
   depending on whether testing against the baseline TCP connection
   establishment rate test or the HTTP transfer rate test,
   respectively.
   In addition, the following setup parameter MUST be defined:

   SYN Attack Rate - Rate, expressed in packets per second, at which
   the server(s) or NAT proxy address is targeted with TCP SYN
   packets.

5.5.3 Procedure

   Use the same procedure as defined in section 5.3.3 or 5.6.3,
   depending on whether testing against the baseline TCP connection
   establishment rate test or the HTTP transfer rate test,
   respectively. In addition, the tester will generate TCP SYN packets
   targeting the server(s) IP address or NAT proxy address at a rate
   defined by the SYN attack rate.

   The tester originating the TCP SYN attack MUST be attached to the
   unprotected network. In addition, the tester MUST NOT respond to
   the SYN/ACK packets sent by the target server or NAT proxy in
   response to the SYN packets.

   Some firewalls employ mechanisms to guard against SYN attacks. If
   such mechanisms exist on the DUT/SUT, tests SHOULD be run with
   these mechanisms enabled to determine how well the DUT/SUT can
   maintain, under such attacks, the baseline connection establishment
   rates and HTTP transfer rates determined in section 5.3 and section
   5.6, respectively.

5.5.4 Measurements

   Perform the same measurements as defined in section 5.3.4 or 5.6.4,
   depending on whether testing against the baseline TCP connection
   establishment rate test or the HTTP transfer rate test,
   respectively.

   In addition, the tester SHOULD track TCP SYN packets associated
   with the SYN attack which the DUT/SUT forwards on the protected or
   DMZ interface(s).

5.5.5 Reporting Format

   The test SHOULD use the same reporting format as described in
   section 5.3.5 or 5.6.5, depending on whether testing against the
   baseline TCP connection establishment rate test or the HTTP
   transfer rate test, respectively.
710 In addition, the report MUST indicate a denial of service handling 711 test, the SYN attack rate, the number of TCP SYN attack packets transmitted 712 and the number of TCP SYN attack packets forwarded by the DUT/SUT. 713 The report MUST indicate whether or not the DUT has any SYN attack 714 protection mechanisms enabled. 716 5.6 HTTP Transfer Rate 718 5.6.1 Objective 720 To determine the transfer rate of HTTP requested objects traversing 721 the DUT/SUT. 723 5.6.2 Setup Parameters 725 The following parameters MUST be defined for all tests: 727 5.6.2.1 Transport-Layer Setup Parameters 729 Number of connections - Defines the aggregate number of connections 730 attempted. The number SHOULD be a multiple of the number of virtual 731 clients participating in the test. 732 5.6.2.2 Application-Layer Setup Parameters 734 Session type - The virtual clients/servers MUST use HTTP 1.1 or 735 higher. 737 GET requests per connection - Defines the number of HTTP 1.1 or 738 higher GET requests attempted per connection. 740 Object Size - Defines the number of bytes, excluding any bytes 741 associated with the HTTP header, to be transferred in response to an 742 HTTP 1.1 or higher GET request. 744 5.6.3 Procedure 746 Each HTTP 1.1 or higher client will request one or more objects from 747 an HTTP 1.1 or higher server using one or more HTTP GET requests. 748 The aggregate number of connections attempted, defined by number of 749 connections, MUST be evenly divided among all of the participating 750 virtual clients. 752 If the virtual client(s) make multiple HTTP GET requests per 753 connection, they MUST request the same object size for each GET 754 request. Multiple iterations of this test SHOULD be run using 755 different object sizes. 757 5.6.4 Measurements 759 5.6.4.1 Application-Layer measurements 761 Average Transfer Rate - The average transfer rate of the DUT/SUT 762 MUST be measured and shall be referenced to the requested object(s).
763 The measurement will start on transmission of the first bit of the 764 first requested object and end on transmission of the last bit of 765 the last requested object. The average transfer rate, in bits per 766 second, will be calculated using the following formula:

768                            OBJECTS * OBJECTSIZE * 8
769    TRANSFER RATE(bit/s) = --------------------------
770                                   DURATION

772 OBJECTS - Objects successfully transferred 774 OBJECTSIZE - Object size in bytes 776 DURATION - Aggregate transfer time based on aforementioned time 777 references. 779 5.6.4.2 Measurements at or below the Transport-Layer 781 The tester SHOULD make goodput[1] measurements for connection- 782 oriented protocols at or below the transport layer. Goodput 783 measurements MUST only reference the protocol's payload, excluding 784 any of the protocol's headers. In addition, the tester MUST exclude 785 any bits associated with connection establishment, connection 786 tear down, security associations or connection maintenance. 788 Since connection-oriented protocols require that data be 789 acknowledged, the offered load[6] will vary over the duration of the 790 test. When performing forwarding rate measurements, the tester 791 SHOULD measure the average forwarding rate over the duration of the 792 test. 794 5.6.5 Reporting Format 796 5.6.5.1 Application-Layer reporting 798 The test report MUST note the number of GET requests per connection 799 and the object size. 801 The transfer rate results SHOULD be reported in tabular form with 802 a row for each of the object sizes. There SHOULD be a column for the 803 object size, the number of completed requests, the number of 804 completed responses, and the transfer rate results for each test. 806 Failure analysis: 808 The test report SHOULD indicate the number and percentage of HTTP 809 GET requests or responses that failed to complete. 811 Version information: 813 The test report MUST note the version of the HTTP client(s) and 814 server(s).
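The transfer rate formula in section 5.6.4.1 reduces to a one-line computation; the example inputs below are illustrative, not mandated values:

```python
def transfer_rate(objects, object_size, duration):
    """Average transfer rate (bit/s) per section 5.6.4.1.

    objects     - number of objects successfully transferred
    object_size - object size in bytes (HTTP headers excluded)
    duration    - aggregate transfer time in seconds
    """
    return objects * object_size * 8 / duration

# 100 objects of 10,000 bytes transferred in 4 s -> 2 Mbit/s
rate = transfer_rate(objects=100, object_size=10_000, duration=4.0)
```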
816 5.6.5.2 Transport-Layer and below reporting 818 The test report MUST note the aggregate number of connections. In 819 addition, the report MUST identify the layer/protocol for which the 820 measurement was made. 822 The results SHOULD be in tabular form with a column for each 823 iteration of the test. There SHOULD be columns for transmitted bits, 824 retransmitted bits and the measured goodput. 826 Failure analysis: 828 The test report SHOULD indicate the number and percentage of 829 connections that failed to complete. 831 5.7 HTTP Concurrent Transaction Capacity 833 5.7.1 Objective 835 Determine the maximum number of concurrent or simultaneous HTTP 836 transactions the DUT/SUT can support. This test is intended to 837 find the maximum number of users that can simultaneously access 838 web objects. 840 5.7.2 Setup Parameters 842 GET request rate - The aggregate rate, expressed in requests per 843 second, at which HTTP 1.1 or higher GET requests are offered by the 844 virtual client(s). 846 Session type - The virtual clients/servers MUST use HTTP 1.1 or 847 higher. 849 5.7.3 Procedure 851 An iterative search algorithm MAY be used to determine the maximum 852 HTTP concurrent transaction capacity. 854 For each iteration, the virtual client(s) will vary the number of 855 concurrent or simultaneous HTTP transactions - that is, ongoing 856 GET requests. The HTTP 1.1 or higher virtual client(s) will request 857 one object, across each connection, from an HTTP 1.1 or higher 858 server using one HTTP GET request. The aggregate rate at which the 859 virtual client(s) offer the requests will be defined by the GET 860 request rate. 862 The object size requested MUST be large enough that the 863 transaction - that is, the request/response cycle - will exist for 864 the duration of the test. At the end of each iteration, the tester 865 MUST validate that all transactions are still active.
After all of 866 the transactions are checked, the transactions MAY be aborted. 868 5.7.4 Measurements 870 Maximum concurrent transactions - Total number of concurrent HTTP 871 transactions active for the last successful iteration performed in 872 the search algorithm. 874 5.7.5 Reporting Format 876 5.7.5.1 Application-Layer reporting 878 The test report MUST note the GET request rate and the maximum 879 concurrent transactions measured. 881 The intermediate results of the search algorithm MAY be reported 882 in a table format with a column for each iteration. There SHOULD be 883 rows for the number of concurrent transactions attempted, GET 884 request rate, number of aborted transactions and number of 885 transactions active at the end of the test iteration. 887 Version information: 889 The test report MUST note the version of HTTP client(s) and 890 server(s). 892 5.8 HTTP Transaction Rate 894 5.8.1 Objective 896 Determine the maximum HTTP transaction rate that a DUT/SUT can 897 sustain. 899 5.8.2 Setup Parameters 901 Session Type - HTTP 1.1 or higher MUST be used for this test. 903 Test Duration - Time, expressed in seconds, for which the 904 virtual client(s) will sustain the attempted GET request rate. 905 It is RECOMMENDED that the duration be at least 30 seconds. 907 Requests per connection - Number of object requests per connection. 909 Object Size - Defines the number of bytes, excluding any bytes 910 associated with the HTTP header, to be transferred in response to an 911 HTTP 1.1 or higher GET request. 913 5.8.3 Procedure 915 An iterative search algorithm MAY be used to determine the maximum 916 transaction rate that the DUT/SUT can sustain. 918 For each iteration, HTTP 1.1 or higher virtual client(s) will 919 vary the aggregate GET request rate offered to HTTP 1.1 or higher 920 server(s). The virtual client(s) will maintain the offered request 921 rate for the defined test duration. 
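The iterative searches that sections 5.7.3 and 5.8.3 leave to the tester are commonly realized as a binary search over the quantity being varied. A sketch under the assumption that the pass/fail outcome is monotone; `holds` stands in for a full test iteration and is purely illustrative:

```python
def search_maximum(lo, hi, holds):
    """Binary search for the largest value in [lo, hi] for which
    holds(value) is true -- e.g. the largest number of concurrent
    transactions that all remain active for a full iteration.
    Assumes holds() is monotone: once it fails, it keeps failing.
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if holds(mid):
            best = mid        # iteration passed; try a larger value
            lo = mid + 1
        else:
            hi = mid - 1      # iteration failed; try a smaller value
    return best

# Illustrative DUT/SUT that sustains at most 4096 concurrent transactions
maximum = search_maximum(1, 10_000, lambda n: n <= 4096)
```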
923 If the tester makes multiple HTTP GET requests per connection, it 924 MUST request the same object size for each GET request. 925 Multiple iterations of this test MAY be performed with objects of 926 different sizes. 928 5.8.4 Measurements 930 Maximum Transaction Rate - The maximum rate at which all 931 transactions -- that is, all request/response cycles -- are 932 completed. 934 Transaction Time - The tester SHOULD measure minimum, maximum and 935 average transaction times. The transaction time will start when the 936 virtual client issues the GET request and end when the requesting 937 virtual client receives the last bit of the requested object. 939 5.8.5 Reporting Format 941 The test report MUST note the test duration, object size, requests 942 per connection and the measured maximum transaction rate. 944 The intermediate results of the search algorithm MAY be reported 945 in a table format with a column for each iteration. There SHOULD be 946 rows for the GET request attempt rate, number of requests attempted, 947 number and percentage of requests completed, number of responses 948 attempted, number and percentage of responses completed, minimum 949 transaction time, average transaction time and maximum transaction 950 time. 952 Version information: 954 The test report MUST note the version of the HTTP client(s) and 955 server(s). 957 5.9 Illegal Traffic Handling 959 5.9.1 Objective 961 To determine the behavior of the DUT/SUT when presented with a 962 combination of both legal and illegal traffic flows. Note that 963 illegal traffic does not refer to an attack, but to traffic which 964 has been explicitly defined by a rule(s) to be dropped. 966 5.9.2 Setup Parameters 968 Setup parameters will use the same parameters as specified in the 969 HTTP transfer rate test.
In addition, the following setup 970 parameters MUST be defined: 972 Illegal traffic percentage - Percentage of HTTP 1.1 or higher 973 connections which have been explicitly defined in a rule(s) to be dropped. 975 5.9.3 Procedure 977 Each HTTP 1.1 or higher client will request one or more objects from 978 an HTTP 1.1 or higher server using one or more HTTP GET requests. 979 The aggregate number of connections attempted, defined by number of 980 connections, MUST be evenly divided among all of the participating 981 virtual clients. 983 The virtual client(s) MUST offer the connection requests, both legal 984 and illegal, in an evenly distributed manner. Many firewalls have 985 the capability to filter on different traffic criteria (IP 986 addresses, port numbers, etc.). Testers MAY run multiple 987 iterations of this test with the DUT/SUT configured to filter 988 on different traffic criteria. 990 5.9.4 Measurements 992 The tester SHOULD perform the same measurements as defined in the HTTP 993 transfer rate test (Section 5.6.4). Unlike the HTTP transfer rate test, the 994 tester MUST NOT include any bits which are associated with illegal 995 traffic in its forwarding rate measurements. 997 5.9.5 Reporting Format 999 The test report SHOULD be the same as specified in the HTTP transfer 1000 rate test (Section 5.6.5). 1002 In addition, the report MUST note the percentage of illegal HTTP 1003 connections. 1005 Failure analysis: 1007 The test report MUST note the number and percentage of illegal 1008 connections that were allowed by the DUT/SUT. 1010 5.10 IP Fragmentation Handling 1012 5.10.1 Objective 1014 To determine the performance impact when the DUT/SUT is presented 1015 with IP fragmented[5] traffic. IP packets which have been 1016 fragmented, due to crossing a network that supports a smaller 1017 MTU (Maximum Transmission Unit) than the actual IP packet, may 1018 require the firewall to perform re-assembly prior to the rule set 1019 being applied.
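One way to meet the requirement in section 5.9.3 that legal and illegal connection attempts be offered in an evenly distributed manner is to interleave them in proportion rather than bunching the illegal attempts together. This is an illustrative scheduling sketch, not a mandated algorithm:

```python
def interleave(total, illegal_pct):
    """Build a connection schedule mixing legal and illegal attempts.

    Returns a list of 'legal'/'illegal' labels of length 'total' in
    which the illegal attempts (illegal_pct percent of the total)
    are spread evenly across the schedule.
    """
    n_illegal = round(total * illegal_pct / 100)
    step = n_illegal / total if total else 0.0
    schedule, error = [], 0.0
    for _ in range(total):
        error += step
        if error >= 1.0:          # accumulated enough "debt" for an illegal attempt
            schedule.append("illegal")
            error -= 1.0
        else:
            schedule.append("legal")
    return schedule

# 10 connection attempts with 20% illegal traffic
sched = interleave(total=10, illegal_pct=20)
```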
1021 While IP fragmentation is a common form of attack, either on the 1022 firewall itself or on internal hosts, this test will focus on 1023 determining what effect the additional processing associated with the 1024 re-assembly of the packets has on the forwarding rate of the 1025 DUT/SUT. RFC 1858 addresses some fragmentation attacks that 1026 get around IP filtering processes used in routers and hosts. 1028 5.10.2 Setup Parameters 1030 The following parameters MUST be defined. 1032 5.10.2.1 Non-Fragmented Traffic Parameters 1034 Setup parameters will be the same as defined in the HTTP transfer 1035 rate test (Sections 5.6.2.1 and 5.6.2.2). 1037 5.10.2.2 Fragmented Traffic Parameters 1039 Packet size - Number of bytes in the IP/UDP packet, exclusive of 1040 link-layer headers and checksums, prior to fragmentation. 1042 MTU - Maximum transmission unit, expressed in bytes. For testing 1043 purposes, this MAY be configured to values smaller than the MTU 1044 supported by the link layer. 1046 Intended Load - Intended load, expressed as a percentage of media 1047 utilization. 1049 5.10.3 Procedure 1051 Each HTTP 1.1 or higher client will request one or more objects from 1052 an HTTP 1.1 or higher server using one or more HTTP GET requests. 1053 The aggregate number of connections attempted, defined by number of 1054 connections, MUST be evenly divided among all of the participating 1055 virtual clients. If the virtual client(s) make multiple HTTP GET 1056 requests per connection, they MUST request the same object size for 1057 each GET request. 1059 A tester attached to the unprotected side of the network will offer 1060 a unidirectional stream of unicast IP/UDP packets targeting a server 1061 attached to either the protected or DMZ network. The tester MUST offer 1062 the unidirectional stream over the duration of the test. 1064 Baseline measurements SHOULD be performed with IP filtering deny 1065 rule(s) to filter fragmented traffic.
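The packet size and MTU parameters in section 5.10.2.2 determine how many fragments the tester offers per packet. A sketch of the standard IP fragmentation arithmetic (20-byte IP header, fragment payloads in multiples of 8 bytes except the last, per RFC 791):

```python
def fragment_count(packet_size, mtu, ip_header=20):
    """Number of IP fragments produced for one packet.

    packet_size - total IP packet length in bytes before fragmentation
    mtu         - link MTU in bytes
    Each fragment carries its own IP header; the payload of every
    fragment except the last must be a multiple of 8 bytes.
    """
    if packet_size <= mtu:
        return 1
    payload = packet_size - ip_header
    per_fragment = (mtu - ip_header) // 8 * 8
    return -(-payload // per_fragment)  # ceiling division

# A 4020-byte IP/UDP packet crossing a 1500-byte MTU link
n = fragment_count(4020, 1500)
```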
If the DUT/SUT has logging 1066 capability, the log SHOULD be checked to determine if it contains 1067 the correct information regarding the fragmented traffic. 1069 The test SHOULD be repeated with the DUT/SUT rule set changed to 1070 allow the fragmented traffic through. When running multiple 1071 iterations of the test, it is RECOMMENDED to vary the MTU while 1072 keeping all other parameters constant. 1074 Then set up the DUT/SUT with the policy or rule set the manufacturer 1075 requires to protect against fragmentation attacks and 1076 repeat the measurements outlined in the baseline procedures. 1078 5.10.4 Measurements 1080 The tester SHOULD perform the same measurements as defined in the HTTP 1081 transfer rate test (Section 5.6.4). In addition: 1083 Transmitted UDP/IP Packets - Number of UDP/IP packets transmitted by 1084 the client. 1086 Received UDP/IP Packets - Number of UDP/IP packets received by the 1087 server. 1089 5.10.5 Reporting Format 1091 5.10.5.1 Non-Fragmented Traffic 1093 The test report SHOULD be the same as described in section 5.6.5. 1094 Note that any forwarding rate measurements for the HTTP traffic 1095 exclude any bits associated with the fragmented traffic which 1096 may be forwarded by the DUT/SUT. 1098 5.10.5.2 Fragmented Traffic 1100 The test report MUST note the packet size, MTU size, intended load, 1101 number of UDP/IP packets transmitted and number of UDP/IP packets 1102 forwarded. The test report SHOULD also note whether or not the 1103 DUT/SUT forwarded the offered UDP/IP traffic still fragmented. 1105 5.11 Latency 1107 5.11.1 Objective 1109 To determine the latency of network-layer or application-layer data 1110 traversing the DUT/SUT. RFC 1242 [3] defines latency. 1112 5.11.2 Setup Parameters 1114 The following parameters MUST be defined: 1116 5.11.2.1 Network-layer Measurements 1118 Packet size, expressed as the number of bytes in the IP packet, 1119 exclusive of link-layer headers and checksums. 1121 Intended load, expressed as a percentage of media utilization.
1123 Test duration, expressed in seconds. 1125 Test instruments MUST generate packets with unique timestamp 1126 signatures. 1128 5.11.2.2 Application-layer Measurements 1130 Object Size - Defines the number of bytes, excluding any bytes 1131 associated with the HTTP header, to be transferred in response to 1132 an HTTP 1.1 or higher GET request. Testers SHOULD use the minimum 1133 object size supported by the media, but MAY use other object 1134 sizes as well. 1136 Connection type. The tester MUST use one HTTP 1.1 or higher 1137 connection for latency measurements. 1139 Number of objects requested. 1141 Number of objects transferred. 1143 Test duration, expressed in seconds. 1145 Test instruments MUST generate packets with unique timestamp 1146 signatures. 1148 5.11.3 Network-layer procedure 1150 A client will offer a unidirectional stream of unicast packets to a 1151 server. The packets MUST use a connectionless protocol like IP or 1152 UDP/IP. 1154 The tester MUST offer packets in a steady state. As noted in the 1155 latency discussion in RFC 2544 [4], latency measurements MUST be 1156 taken at the throughput level -- that is, at the highest offered 1157 load with zero packet loss. Measurements taken at the throughput 1158 level are the only ones that can legitimately be termed latency. 1160 It is RECOMMENDED that implementers use offered loads not only at 1161 the throughput level, but also at load levels that are less than 1162 or greater than the throughput level. To avoid confusion with 1163 existing terminology, measurements from such tests MUST be labeled 1164 as delay rather than latency. 1166 If desired, the tester MAY use a step test in which offered loads 1167 increment or decrement through a range of load levels. 1169 The duration of the test portion of each trial MUST be at least 30 1170 seconds. 
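The step test described in section 5.11.3, together with its terminology rule (only the throughput-level measurement may be labeled latency; all others are delay), can be sketched as a simple test plan generator. The step size and load values are illustrative:

```python
def step_loads(throughput_pct, step=10, top=100):
    """Offered load levels (% of media utilization) for a step test.

    Labels the measurement taken at the throughput level 'latency'
    and every other level 'delay', per section 5.11.3.
    """
    plan = []
    load = step
    while load <= top:
        label = "latency" if load == throughput_pct else "delay"
        plan.append((load, label))
        load += step
    return plan

# DUT/SUT whose throughput (zero-loss) level is, say, 70% of line rate
plan = step_loads(throughput_pct=70)
```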
1172 5.11.4 Application layer procedure 1174 An HTTP 1.1 or higher client will request one or more objects from 1175 an HTTP 1.1 or higher server using one or more HTTP GET requests. If 1176 the tester makes multiple HTTP GET requests, it MUST request the 1177 same-sized object each time. Testers may run multiple iterations of 1178 this test with objects of different sizes. 1180 Implementers MAY configure the tester to run for a fixed duration. 1181 In this case, the tester MUST report the number of objects requested 1182 and returned for the duration of the test. For fixed-duration tests 1183 it is RECOMMENDED that the duration be at least 30 seconds. 1185 5.11.5 Measurements 1187 Minimum delay - The smallest delay incurred by data traversing the 1188 DUT/SUT at the network layer or application layer, as appropriate. 1190 Maximum delay - The largest delay incurred by data traversing the 1191 DUT/SUT at the network layer or application layer, as appropriate. 1193 Average delay - The mean of all measurements of delay incurred by 1194 data traversing the DUT/SUT at the network layer or application 1195 layer, as appropriate. 1197 Delay distribution - A set of histograms of all delay measurements 1198 observed for data traversing the DUT/SUT at the network layer or 1199 application layer, as appropriate. 1201 5.11.6 Network-layer reporting format 1203 The test report MUST note the packet size(s), offered load(s) and 1204 test duration used. 1206 The latency results SHOULD be reported in the format of a table with 1207 a row for each of the tested packet sizes. There SHOULD be columns 1208 for the packet size, the intended rate, the offered rate, and the 1209 resultant latency or delay values for each test. 1211 5.11.7 Application-layer reporting format 1213 The test report MUST note the object size(s) and number of requests 1214 and responses completed. If applicable, the report MUST note the 1215 test duration if a fixed duration was used.
1217 The latency results SHOULD be reported in the format of a table with 1218 a row for each of the object sizes. There SHOULD be columns for the 1219 object size, the number of completed requests, the number of 1220 completed responses, and the resultant latency or delay values for 1221 each test. 1223 Failure analysis: 1225 The test report SHOULD indicate the number and percentage of HTTP 1226 GET requests or responses that failed to complete within the test 1227 duration. 1229 Version information: 1231 The test report MUST note the version of the HTTP client and server. 1233 APPENDIX A: HTTP (HyperText Transfer Protocol) 1235 The most common versions of HTTP in use today are HTTP/1.0 and 1236 HTTP/1.1, with the main difference being in regard to persistent 1237 connections. HTTP 1.0, by default, does not support persistent 1238 connections. A separate TCP connection is opened for each 1239 GET request the client wants to initiate and closed after the 1240 requested object transfer is completed. While some implementations of 1241 HTTP/1.0 support persistence through the use of a keep-alive, 1242 there is no official specification for how the keep-alive operates. 1243 In addition, HTTP 1.0 proxies do not support persistent connections, as 1244 they do not recognize the Connection header. 1246 HTTP/1.1, by default, does support persistent connections and 1247 is therefore the version that is referenced in this methodology. 1248 Proxy based DUT/SUTs may monitor the TCP connection and, after a 1249 timeout, close the connection if no activity is detected. The 1250 duration of this timeout is not defined in the HTTP/1.1 1251 specification and will vary between DUT/SUTs. If the DUT/SUT 1252 closes inactive connections, the aging timer on the DUT SHOULD 1253 be configured for a duration that exceeds the test time.
1255 While this document cannot foresee future changes to HTTP 1256 and their impact on the methodologies defined herein, such 1257 changes should be accommodated so that newer versions of 1258 HTTP may be used in benchmarking firewall performance. 1260 APPENDIX B: Connection Establishment Time Measurements 1262 For purposes of benchmarking firewall performance, the connection 1263 establishment time will be considered the interval between the 1264 transmission of the first bit of the first octet of the packet 1265 carrying the connection request to the DUT/SUT interface and the 1266 receipt of the last bit of the last octet of the last packet of 1267 the connection setup traffic received on the client or server, 1268 depending on whether a given connection requires an even or odd 1269 number of messages, respectively. 1271 Some connection oriented protocols, such as TCP, involve an odd 1272 number of messages when establishing a connection. In the case of 1273 proxy based DUT/SUTs, the DUT/SUT will terminate the connection, 1274 setting up a separate connection to the server. Since, in such 1275 cases, the tester does not own both sides of the connection, 1276 measurements will be made in two different ways. While the following 1277 describes the measurements with reference to TCP, the methodology 1278 may be used with other connection oriented protocols which involve 1279 an odd number of messages. 1281 For non-proxy based DUT/SUTs, the establishment time shall be 1282 directly measured and is considered to be from the time the first 1283 bit of the first SYN packet is transmitted by the client to the 1284 time the last bit of the final ACK in the three-way handshake is 1285 received by the target server.
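For the non-proxy case above, the measurement reduces to a difference between two tester timestamps. A sketch with illustrative timestamp values (seconds):

```python
def establishment_time(t_first_syn_bit, t_final_ack_bit):
    """Non-proxy TCP connection establishment time (Appendix B):
    the interval from the client transmitting the first bit of the
    first SYN to the server receiving the last bit of the final ACK
    of the three-way handshake.
    """
    if t_final_ack_bit < t_first_syn_bit:
        raise ValueError("handshake cannot complete before it starts")
    return t_final_ack_bit - t_first_syn_bit

# Illustrative timestamps captured by the tester
setup = establishment_time(t_first_syn_bit=0.000, t_final_ack_bit=0.045)
```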
1287 If the DUT/SUT is proxy based, the connection establishment time is 1288 considered to be from the time the first bit of the first SYN packet 1289 is transmitted by the client to the time the client transmits the first 1290 bit of the first acknowledged TCP datagram (t4-t0 in the following 1291 timeline).

1293 t0: Client sends a SYN.
1294 t1: Proxy sends a SYN/ACK.
1295 t2: Client sends the final ACK.
1296 t3: Proxy establishes separate connection with server.
1297 t4: Client sends TCP datagram to server.
1298 *t5: Proxy sends ACK of the datagram to client.

1300 * While t5 is not considered part of the TCP connection establishment, 1301 acknowledgement of t4 must be received for the connection to be 1302 considered successful. 1304 APPENDIX C: Connection Tear Down Time Measurements 1306 The TCP connection tear down time will be considered the interval 1307 from the transmission of the first bit of the first TCP FIN packet 1308 transmitted by the tester requesting a connection tear down to receipt 1309 of the ACK packet on the same tester interface. 1311 Appendix D. References 1313 [1] Newman, D., "Benchmarking Terminology for Firewall Devices", 1314 RFC 2647, August 1999. 1316 [2] Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., 1317 Leach, P. and T. Berners-Lee, "Hypertext Transfer Protocol -- HTTP/1.1", 1318 RFC 2616, June 1999. 1320 [3] Bradner, S., "Benchmarking Terminology for Network 1321 Interconnection Devices", RFC 1242, July 1991. 1323 [4] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network 1324 Interconnect Devices", RFC 2544, March 1999. 1326 [5] Clark, D., "IP Datagram Reassembly Algorithms", RFC 815, 1327 July 1982. 1329 [6] Mandeville, R., "Benchmarking Terminology for LAN Switching 1330 Devices", RFC 2285, February 1998. 1332 [7] Mandeville, R. and J. Perser, "Benchmarking Methodology for LAN 1333 Switching Devices", RFC 2889, August 2000.