INTERNET-DRAFT
Expires in six months

         Benchmarking Methodology for Network Interconnect Devices

Status of this Document

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six
months and may be updated, replaced, or obsoleted by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as ``work in progress.''

To learn the current status of any Internet-Draft, please check the
``1id-abstracts.txt'' listing contained in the Internet-Drafts
Shadow Directories on ds.internic.net (US East Coast), nic.nordu.net
(Europe), ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific
Rim).

Distribution of this document is unlimited. Please send comments to
bmwg@harvard.edu or to the editor.

Abstract

This document discusses and defines a number of tests that may be
used to describe the performance characteristics of a network
interconnecting device. In addition to defining the tests this
document also describes specific formats for reporting the results
of the tests. Appendix A lists the tests and conditions that we
believe should be included for specific cases and gives additional
information about testing practices. Appendix B is a reference
listing of maximum frame rates to be used with specific frame sizes
on various media and Appendix C gives some examples of frame formats
to be used in testing.

1. Introduction
Vendors often engage in "specsmanship" in an attempt to give their
products a better position in the marketplace. This often involves
"smoke & mirrors" to confuse the potential users of the products.
This document and follow-up memos attempt to define a specific set
of tests that vendors can use to measure and report the performance
characteristics of network devices. The results of these tests will
provide the user comparable data from different vendors with which
to evaluate these devices.

A previous document, "Benchmarking Terminology for Network
Interconnect Devices" (RFC 1242), defined many of the terms that are
used in this document. The terminology document should be consulted
before attempting to make use of this document.

2. Real world
In producing this document the authors attempted to keep in mind the
requirement that apparatus to perform the described tests must
actually be built. We do not know of "off the shelf" equipment
available to implement all of the tests but it is our opinion that
such equipment can be constructed.

3. Tests to be run
There are a number of tests described in this document. Not all of
the tests apply to all types of devices. It is expected that a
vendor will perform all of the tests that apply to a specific type
of product. The authors understand that it will take a considerable
period of time to perform all of the recommended tests under all of
the recommended conditions. We believe that the results are worth
the effort. Appendix A lists the tests and conditions that we
believe should be included for specific cases.

4. Evaluating the results
Performing all of the recommended tests will result in a great deal
of data. Much of this data will not apply to the evaluation of the
devices under each circumstance. For example, the rate at which a
router forwards IPX frames will be of little use in selecting a
router for an environment that does not (and will not) support that
protocol. Evaluating even that data which is relevant to a
particular network installation will require experience which may
not be readily available.

5. Requirements
In this document, the words that are used to define the significance
of each particular requirement are capitalized.
These words are:

* "MUST"
  This word or the adjective "REQUIRED" means that the item is an
  absolute requirement of the specification.

* "SHOULD"
  This word or the adjective "RECOMMENDED" means that there may
  exist valid reasons in particular circumstances to ignore this
  item, but the full implications should be understood and the case
  carefully weighed before choosing a different course.

* "MAY"
  This word or the adjective "OPTIONAL" means that this item is
  truly optional. One vendor may choose to include the item because
  a particular marketplace requires it or because it enhances the
  product, for example; another vendor may omit the same item.

An implementation is not compliant if it fails to satisfy one or
more of the MUST requirements for the protocols it implements. An
implementation that satisfies all the MUST and all the SHOULD
requirements for its protocols is said to be "unconditionally
compliant"; one that satisfies all the MUST requirements but not all
the SHOULD requirements for its protocols is said to be
"conditionally compliant".

6. Device set up
Before starting to perform the tests, the device to be tested MUST
be configured following the instructions provided to the user.
Specifically, it is expected that all of the supported protocols
will be configured and enabled during this set up (see Appendix A).
It is expected that all of the tests will be run without changing
the configuration or setup of the device in any way other than that
required to do the specific test. For example, it is not acceptable
to change the size of frame handling buffers between tests of frame
handling rates or to disable all but one transport protocol when
testing the throughput of that protocol. It is necessary to modify
the configuration when starting a test to determine the effect of
filters on throughput, but the only change MUST be to enable the
specific filter. The device set up SHOULD include the normally
recommended routing update intervals and keep-alive frequency. The
specific version of the software and the exact device configuration,
including which device functions are disabled, used during the tests
SHOULD be included as part of the report of the results.

7. Frame formats
The formats of the test frames to use for TCP/IP over Ethernet are
shown in Appendix C: Test Frame Formats. It is expected that these
exact frame formats will be used in the tests described in this
document for this protocol/media combination and that these frames
will be used as a template for testing other protocol/media
combinations. The specific formats that are used to define the test
frames for a particular test series MUST be included in the report
of the results.

8. Frame sizes
All of the described tests SHOULD be performed at a number of frame
sizes. Specifically, the sizes SHOULD include the maximum and
minimum legitimate sizes for the protocol under test on the media
under test and enough sizes in between to be able to get a full
characterization of the device performance.

Except where noted, it is expected that at least five frame sizes
will be tested for each test condition.

Theoretically the minimum size UDP Echo request frame would consist
of an IP header (minimum length 20 octets), a UDP header (8 octets)
and whatever MAC level header is required by the media in use. The
theoretical maximum frame size is determined by the size of the
length field in the IP header. In almost all cases the actual
maximum and minimum sizes are determined by the limitations of the
media.
In theory it would be ideal to distribute the frame sizes in a way
that would evenly distribute the theoretical frame rates. These
recommendations incorporate this theory but specify frame sizes
which are easy to understand and remember. In addition, many of the
same frame sizes are specified on each of the media types to allow
for easy performance comparisons.

The inclusion of an unrealistically small frame size on some of the
media types (i.e. with little or no space for data) is to help
characterize the per-frame processing overhead of the network
connection device.

8.1 Frame sizes to be used on Ethernet
   64, 128, 256, 512, 1024, 1280, 1518

These sizes include the maximum and minimum frame sizes permitted by
the Ethernet standard and a selection of sizes between these
extremes with a finer granularity for the smaller frame sizes and
higher frame rates.

8.2 Frame sizes to be used on 4Mb and 16Mb token ring
   54, 64, 128, 256, 1024, 1518, 2048, 4472

The frame size recommendations for token ring assume that there is
no RIF field in the frames of routed protocols. A RIF field would be
present in any direct source route bridge performance test. The
minimum size frame for UDP on token ring is 54 octets. The maximum
size of 4472 octets is recommended for 16Mb token ring instead of
the theoretical size of 17.9Kb because of the size limitations
imposed by many token ring interfaces. The remainder of the sizes
are selected to permit direct comparisons with other types of media.
An IP (i.e. not UDP) frame may be used in addition if a higher data
rate is desired, in which case the minimum frame size is 46 octets.

8.3 Frame sizes to be used on FDDI
   54, 64, 128, 256, 1024, 1518, 2048, 4472

The minimum size frame for UDP on FDDI is 53 octets; a minimum size
of 54 is recommended to allow direct comparison to token ring
performance. The maximum size of 4472 is recommended instead of the
theoretical maximum size of 4500 octets to permit the same type of
comparison. An IP (i.e. not UDP) frame may be used in addition if a
higher data rate is desired, in which case the minimum frame size is
45 octets.

8.4 Frame sizes in the presence of disparate MTUs

When the interconnect device supports connecting links with
disparate MTUs, the frame sizes for the link with the *larger* MTU
SHOULD be used, up to the limit of the protocol being tested. If the
interconnect device does not support the fragmenting of frames in
the presence of an MTU mismatch, the forwarding rate for that frame
size shall be reported as zero (0).

For example, the test of IP forwarding with a bridge or router that
joins FDDI and Ethernet should use the frame sizes of FDDI when
going from the FDDI to the Ethernet link. If the bridge does not
support IP fragmentation, the forwarding rate for those frames too
large for Ethernet should be reported as zero.

9. Verifying received frames
The test equipment SHOULD discard any frames received during a test
run that are not actual forwarded test frames. For example,
keep-alive and routing update frames SHOULD NOT be included in the
count of received frames. In any case, the test equipment SHOULD
verify the length of the received frames and check that they match
the expected length.

Preferably, the test equipment SHOULD include sequence numbers in
the transmitted frames and check for these numbers on the received
frames. If this is done, the reported results SHOULD include, in
addition to the number of frames dropped, the number of frames that
were received out of order, the number of duplicate frames received,
and the number of gaps in the received frame numbering sequence.
This functionality is required for some of the described tests.
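As an illustration of the sequence number checking described above,
the counts called for (dropped, duplicate, out of order, and gaps)
can all be derived from the received sequence numbers. The following
Python sketch is illustrative only and not part of the methodology;
it assumes frames were transmitted with sequence numbers 0 through
sent_count - 1 and that the receiver logs the numbers in arrival
order:

   import collections

   def sequence_report(received, sent_count):
       # "received" holds the sequence numbers recovered from the
       # test frames, in arrival order.
       counts = collections.Counter(received)
       duplicates = sum(n - 1 for n in counts.values() if n > 1)
       dropped = sent_count - len(counts)
       # simple proxy: count arrivals that precede a lower number
       out_of_order = sum(1 for a, b in zip(received, received[1:])
                          if b < a)
       # a gap is a run of one or more consecutive missing numbers
       missing = sorted(set(range(sent_count)) - set(counts))
       gaps = sum(1 for i, m in enumerate(missing)
                  if i == 0 or missing[i - 1] != m - 1)
       return {"dropped": dropped, "duplicates": duplicates,
               "out of order": out_of_order, "gaps": gaps}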
10. Modifiers
It might be useful to know the device performance under a number of
conditions; some of these conditions are noted below. It is expected
that the reported results will include as many of these conditions
as the test equipment is able to generate. The suite of tests SHOULD
be first run without any modifying conditions and then repeated
under each of the conditions separately. To preserve the ability to
compare the results of these tests, any frames that are required to
generate the modifying conditions (management queries for example)
will be included in the same data stream as the normal test frames
in place of one of the test frames and not be supplied to the device
on a separate network port.

10.1 Broadcast frames
In most router designs special processing is required when frames
addressed to the hardware broadcast address are received. In bridges
(or in bridge mode on routers) these broadcast frames must be
flooded to a number of ports. The stream of test frames SHOULD be
augmented with 1% of frames addressed to the hardware broadcast
address. The specific frames that should be used are included in the
test frame format document. The broadcast frames SHOULD be evenly
distributed throughout the data stream, for example, every 100th
frame.

It is understood that a level of broadcast frames of 1% is much
higher than many networks experience but, as in drug toxicity
evaluations, the higher level is required to be able to gauge the
effect which would otherwise often fall within the normal
variability of the system performance. Due to design factors some
test equipment will not be able to generate a level of alternate
frames this low. In these cases it is expected that the percentage
will be as small as the equipment can provide and that the actual
level will be described in the report of the test results.
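The even distribution described above amounts to replacing every
100th test frame with a broadcast frame. A minimal sketch, assuming
frame generator functions test_frame() and broadcast_frame() that
are not defined here:

   def modified_stream(n_frames, percent=1):
       # Replace every k-th test frame with a modifier frame so the
       # modifier frames are evenly distributed (1% -> every 100th).
       # Modifier frames take the place of test frames; they are not
       # added to the stream on a separate port (section 10).
       interval = round(100 / percent)
       for i in range(1, n_frames + 1):
           yield broadcast_frame() if i % interval == 0 else test_frame()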
10.2 Management frames
Most data networks now make use of management protocols such as
SNMP. In many environments there can be a number of management
stations sending queries to the same device at the same time.

The stream of test frames SHOULD be augmented with one management
query sent as the first frame of each second during the duration of
the trial. The result of the query must fit into one response frame.
The response frame SHOULD be verified by the test equipment. One
example of the specific query frame that should be used is shown in
Appendix C.

10.3 Routing update frames
The processing of dynamic routing protocol updates could have a
significant impact on the ability of a router to forward data
frames. The stream of test frames SHOULD be augmented with one
routing update frame transmitted as the first frame of the trial.
Routing update frames SHOULD be sent at the rate specified in
Appendix C for the specific routing protocol being used in the test.
Two routing update frames are defined in Appendix C for the TCP/IP
over Ethernet example. The routing frames are designed to change the
routing to a number of networks that are not involved in the
forwarding of the test data. The first frame sets the routing table
state to "A", the second one changes the state to "B". The frames
MUST be alternated during the trial.

The test SHOULD verify that the routing update was processed by the
device under test.

10.4 Filters
Filters are added to routers and bridges to selectively inhibit the
forwarding of frames that would normally be forwarded. This is
usually done to implement security controls on the data that is
accepted between one area and another. Different products have
different capabilities to implement filters.

The device SHOULD first be configured to add one filter condition
and the tests performed. This filter SHOULD permit the forwarding of
the test data stream. In routers this filter SHOULD be of the form:

   forward input_protocol_address to output_protocol_address

In bridges the filter SHOULD be of the form:

   forward destination_hardware_address

The device SHOULD then be reconfigured to implement a total of 25
filters. The first 24 of these filters SHOULD be of the form:

   block input_protocol_address to output_protocol_address

The 24 input and output protocol addresses SHOULD NOT be any that
are represented in the test data stream. The last filter SHOULD
permit the forwarding of the test data stream. By "first" and "last"
we mean to ensure that in the second case, 25 conditions must be
checked before the data frames will match the conditions that permit
the forwarding of the frame.

The exact filter configuration command lines used SHOULD be included
with the report of the results.

10.4.1 Filter Addresses
Two sets of filter addresses are required: one for the single filter
case and one for the 25 filter case.

The single filter case should permit traffic from IP address
198.18.1.2 to IP address 198.19.65.2 and deny all other traffic.

The 25 filter case should use the following sequence:

   allow aa.ba.1.1 to aa.ba.100.1
   allow aa.ba.2.2 to aa.ba.101.2
   allow aa.ba.3.3 to aa.ba.103.3
   ...
   allow aa.ba.12.12 to aa.ba.112.12
   allow aa.bc.1.2 to aa.bc.65.1
   allow aa.ba.13.13 to aa.ba.113.13
   allow aa.ba.14.14 to aa.ba.114.14
   ...
   allow aa.ba.24.24 to aa.ba.124.24
   deny all else

All previous filter conditions should be cleared from the router
before this sequence is entered. The sequence is selected to test
whether the router sorts the filter conditions or accepts them in
the order in which they were entered. Both of these procedures will
result in a greater reduction in performance than will some form of
hash coding.
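For test equipment that drives the device configuration
automatically, the 25 filter conditions above can be generated
rather than entered by hand. A sketch follows; note that the listed
sequence is not perfectly regular (100.1 and 101.2 versus 112.12 and
124.24), so the uniform aa.ba.(100+n).n pattern of the later entries
is an assumption here:

   def filter_sequence():
       # 24 "allow" entries that never match the test stream, with
       # the entry that does permit the stream placed 13th, so that
       # 25 conditions guard the forwarding decision (section
       # 10.4.1).
       rules = ["allow aa.ba.%d.%d to aa.ba.%d.%d" % (n, n, 100 + n, n)
                for n in range(1, 25)]
       rules.insert(12, "allow aa.bc.1.2 to aa.bc.65.1")
       rules.append("deny all else")
       return rules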
11. Protocol addresses
It is easier to implement these tests using a single logical stream
of data, with one source protocol address and one destination
protocol address; for some conditions, such as the filters described
above, it is also a practical requirement. Networks in the real
world are not limited to single streams of data, however. The test
suite SHOULD be first run with a single protocol (or hardware for
bridge tests) source and destination address pair. The tests SHOULD
then be repeated using random destination addresses. For routers the
addresses SHOULD be random over a range of 256 networks; for bridges
they SHOULD be random over the full MAC address range. The specific
address ranges to use for IP are shown in Appendix C.
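A sketch of drawing the random destinations for the router case
follows. The pool used here (node 2 on each of the "output" side
phantom router networks of Appendix C) is an illustrative
assumption; Appendix C defines the assigned ranges:

   import random

   def random_destinations(n_frames):
       # Section 11: for routers the destination addresses SHOULD be
       # random over a range of 256 networks; this sketch draws from
       # the 198.19.65.0 - 198.19.254.0 networks of C.2.2.2.
       networks = ["198.19.%d.2" % n for n in range(65, 255)]
       for _ in range(n_frames):
           yield random.choice(networks)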
12. Route Set Up
It is not expected that all of the routing information necessary to
forward the test stream, especially in the multiple address case,
will be manually set up. At the start of each trial a routing update
MUST be sent to the device. This routing update MUST include all of
the network addresses that will be required for the trial. All of
the addresses SHOULD resolve to the same "next-hop" and it is
expected that this will be the address of the receiving side of the
test equipment. This routing update will have to be repeated at the
interval required by the routing protocol being used. An example of
the format and repetition interval of the update frames is given in
Appendix C.

13. Bidirectional traffic
Normal network activity is not all in a single direction. To test
the bidirectional performance of a device, the test series SHOULD be
run with the same data rate being offered from each direction. The
sum of the data rates should not exceed the theoretical limit for
the media.

14. Single stream path
The full suite of tests SHOULD be run along with whatever modifier
conditions are relevant using a single input and output network port
on the device. If the internal design of the device has multiple
distinct pathways, for example, multiple interface cards each with
multiple network ports, then all possible types of pathways SHOULD
be tested separately.

15. Multi-port
Many current router and bridge products provide many network ports
in the same device. In performing these tests, the first half of the
ports are designated as "input ports" and the other half are
designated as "output ports". These ports SHOULD be evenly
distributed across the device architecture. For example, if a device
has two interface cards each of which has four ports, two ports on
each interface card are designated as input and two are designated
as output.

The specified tests are run with the same data rate being offered to
each of the input ports. The addresses in the input data streams
SHOULD be set so that a frame will be directed to each of the output
ports in sequence. The stream offered to input one SHOULD consist of
a series of frames (one destined to each of the output ports), as
SHOULD the frame stream offered to input two. The same configuration
MAY be used to perform a bidirectional multi-stream test. In this
case all of the ports are considered both input and output ports and
each data stream MUST consist of frames addressed to all of the
other ports.
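The "directed to each of the output ports in sequence" addressing
above is a per-input round robin over the output port addresses. A
sketch, with the port-to-address mapping assumed rather than
specified:

   import itertools

   def input_stream(output_addresses):
       # Each input port offers frames destined to each of the
       # output ports in sequence, repeating (section 15).
       return itertools.cycle(output_addresses)

   # Example: four "output" ports, two on each interface card.
   stream = input_stream(["198.19.1.2", "198.19.2.2",
                          "198.19.3.2", "198.19.4.2"])
   first_eight = [next(stream) for _ in range(8)]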
16. Multiple protocols
This document does not address the issue of testing the effects of a
mixed protocol environment other than to suggest that if such tests
are wanted then frames SHOULD be distributed between all of the test
protocols. The distribution MAY approximate the conditions on the
network in which the device would be used.

17. Multiple frame sizes
This document does not address the issue of testing the effects of a
mixed frame size environment other than to suggest that if such
tests are wanted then frames SHOULD be distributed between all of
the listed sizes for the protocol under test. The distribution MAY
approximate the conditions on the network in which the device would
be used.

18. Testing performance beyond a single device
In the performance testing of a single device, the paradigm can be
described as applying some input to a device under test and
monitoring the output. The results can then be used to form a basis
for the characterization of that device under those test conditions.

This model is useful when the test input and output are homogeneous
(e.g., 64-byte IP, 802.3 frames into the device under test; 64-byte
IP, 802.3 frames out), or the method of test can distinguish between
dissimilar input/output (e.g., 1518-byte IP, 802.3 frames in;
576-byte, fragmented IP, X.25 frames out).

By extending the single device test model, reasonable benchmarks
regarding multiple devices or heterogeneous environments may be
collected. In this extension, the single device under test is
replaced by a system of interconnected network devices. This test
methodology would support the benchmarking of a variety of
device/media/service/protocol combinations. For example, a
configuration for a LAN-to-WAN-to-LAN test might be:

   (1) 802.3 -> device 1 -> X.25 @ 64kbps -> device 2 -> 802.3

Or a mixed LAN configuration might be:

   (2) 802.3 -> device 1 -> FDDI -> device 2 -> FDDI ->
       device 3 -> 802.3

In both examples 1 and 2, end-to-end benchmarks of each system could
be empirically ascertained. Other behavior may be characterized
through the use of intermediate devices. In example 2, the
configuration may be used to give an indication of the FDDI-to-FDDI
capability exhibited by device 2.

Because multiple devices are treated as a single system, there are
limitations to this methodology. For instance, this methodology may
yield an aggregate benchmark for a tested system. That benchmark
alone, however, may not necessarily reflect asymmetries in behavior
between the devices, latencies introduced by other apparatus (e.g.,
CSUs/DSUs, switches), etc.

Further, care must be taken when comparing benchmarks of different
systems by ensuring that the features and configuration of the
tested systems have the appropriate common denominators to allow
comparison.

The maximum frame rate that should be used when testing WAN
connections SHOULD be greater than the listed theoretical maximum
rate for the frame size on that speed connection. The higher rate
for WAN tests is to compensate for the fact that some vendors employ
various forms of header compression. See Appendix A.

19. Maximum frame rate
The maximum frame rate that should be used when testing LAN
connections SHOULD be the listed theoretical maximum rate for the
frame size on the media. A list of maximum frame rates for LAN
connections is included in Appendix B.

20. Bursty traffic
It is convenient to measure the device performance under steady
state load but this is an unrealistic way to gauge the functioning
of a device since actual network traffic normally consists of bursts
of frames. Some of the tests described below SHOULD be performed
with both steady state traffic and with traffic consisting of
repeated bursts of frames. The frames within a burst are transmitted
with the minimum legitimate inter-frame gap.

The objective of the test is to determine the minimum interval
between bursts which the device under test can process with no frame
loss. During each test the number of frames in each burst is held
constant and the inter-burst interval varied. Tests SHOULD be run
with burst sizes of 16, 64, 256 and 1024 frames.

21. Frames per token
Although it is possible to configure some token ring and FDDI
interfaces to transmit more than one frame each time that the token
is received, most of the network devices currently available
transmit only one frame per token. These tests SHOULD first be
performed while transmitting only one frame per token.

Some current high-performance workstation servers do transmit more
than one frame per token on FDDI to maximize throughput. Since this
may be a common feature in future workstations and servers,
interconnect devices with FDDI interfaces SHOULD be tested with 1,
4, 8, and 16 frames per token. The reported frame rate SHOULD be the
average rate of frame transmission over the total trial period.

22. Trial description
A particular test consists of multiple trials. Each trial returns
one piece of information, for example the loss rate at a particular
input frame rate. Each trial consists of a number of phases:

a) If the test device is a router, send the routing update to
   the "input" port and pause two seconds to be sure that the
   routing has settled.

b) Send the "learning frames" to the "output" port and wait 2
   seconds to be sure that the learning has settled. Bridge
   learning frames are frames with source addresses that are the
   same as the destination addresses used by the test frames.
   Learning frames for other protocols are used to prime the
   address resolution tables in the device. The formats of the
   learning frames that should be used are shown in the Test
   Frame Formats document.

c) Run the test trial.

d) Wait two seconds for any residual frames to be received.

e) Wait at least five seconds for the device to restabilize.

23. Trial duration
The aim of these tests is to determine the rate continuously
supportable by the device. The actual duration of the test trials
must be a compromise between this aim and the duration of the
benchmarking test suite. The duration of the test portion of each
trial SHOULD be at least 60 seconds. The tests that involve some
form of "binary search" to determine the exact result, for example
the throughput test, MAY use a shorter trial duration to minimize
the length of the search procedure, but it is expected that the
final determination will be made with full length trials.

24. Address resolution
The test device SHOULD be able to respond to address resolution
requests sent by the device under test wherever the protocol
requires such a process.

25. Benchmarking tests
Note: The notation "type of data stream" refers to the above
modifications to a frame stream with a constant inter-frame gap, for
example, the addition of traffic filters to the configuration of the
device under test.

25.1 Throughput
Objective:
To determine the device throughput as defined in RFC 1242.

Procedure:
Send a specific number of frames at a specific rate through the
device and then count the frames that are transmitted by the device.
If the count of offered frames is equal to the count of received
frames, the rate of the offered stream is raised and the test is
rerun. If fewer frames are received than were transmitted, the rate
of the offered stream is reduced and the test is rerun.

The throughput is the fastest rate at which the count of test frames
transmitted by the DUT is equal to the number of test frames sent to
it by the test equipment.
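The rate adjustment described above is normally implemented as a
binary search between a rate known to pass and a rate known to fail.
A sketch follows; run_trial() is a placeholder for the
tester-specific measurement step, not a defined interface:

   def run_trial(rate, offered):
       # Placeholder: offer "offered" frames at "rate" frames/sec
       # through the DUT and return the count forwarded.
       raise NotImplementedError

   def find_throughput(max_rate, resolution=100, offered=1_000_000):
       # Binary search for the highest offered rate (frames/sec) at
       # which no test frames are lost (section 25.1).  max_rate is
       # the theoretical maximum for the media and frame size.
       lo, hi = 0, max_rate            # lo always passes, hi fails
       while hi - lo > resolution:
           rate = (lo + hi) // 2
           if run_trial(rate, offered) == offered:
               lo = rate               # no loss: try a faster rate
           else:
               hi = rate               # loss seen: back off
       return lo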
Reporting format:
The results of the throughput test SHOULD be reported in the form of
a graph. If it is, the x coordinate SHOULD be the frame size and the
y coordinate SHOULD be the frame rate. There SHOULD be at least two
lines on the graph: one line showing the theoretical frame rate for
the media at the various frame sizes, and a second line plotting the
test results. Additional lines MAY be used on the graph to report
the results for each type of data stream tested. Text accompanying
the graph SHOULD indicate the protocol, data stream format, and type
of media used in the tests.

We assume that if a single value is desired for advertising purposes
the vendor will select the rate for the minimum frame size for the
media. If this is done then the figure MUST be expressed in frames
per second. The rate MAY also be expressed in bits (or bytes) per
second if the vendor so desires. The statement of performance MUST
include a/ the measured maximum frame rate, b/ the size of the frame
used, c/ the theoretical limit of the media for that frame size, and
d/ the type of protocol used in the test. Even if a single value is
used as part of the advertising copy, the full table of results
SHOULD be included in the product data sheet.

25.2 Latency
Objective:
To determine the latency as defined in RFC 1242.

Procedure:
First determine the throughput for the device at each of the listed
frame sizes.

Send a stream of frames at a particular frame size through the
device at the determined throughput rate to a specific destination.
The stream SHOULD be at least 120 seconds in duration. An
identifying tag SHOULD be included in one frame after 60 seconds,
with the type of tag being implementation dependent. The time at
which this frame is fully transmitted is recorded, i.e. the last bit
has been transmitted (timestamp A). The receiver logic in the test
equipment MUST be able to recognize the tag information in the frame
stream and record the time at which the entire tagged frame was
received (timestamp B).

The latency is timestamp B minus timestamp A minus the transit time
for a frame of the tested size on the tested media. This calculation
may result in a negative value for those devices that begin to
transmit the output frame before the entire input frame has been
received.

The test MUST be repeated at least 20 times with the reported value
being the average of the recorded values.

This test SHOULD be performed with the test frame addressed to the
same destination as the rest of the data stream and also with each
of the test frames addressed to a new destination network.
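The latency arithmetic, including the negative result that a
cut-through device can produce, in a short sketch; here the transit
time for the tested frame size and media is assumed to be its
transmission time at the media bit rate:

   def latency(timestamp_a, timestamp_b, frame_bits, media_bps):
       # Section 25.2: latency = B - A - transit time of a frame of
       # the tested size on the tested media.  A: tagged frame fully
       # transmitted; B: tagged frame fully received.
       return timestamp_b - timestamp_a - frame_bits / media_bps

   # Illustration: a 64-byte (512 bit) frame on 10 Mb/s Ethernet
   # received 80 us after transmission gives 80 - 51.2 = 28.8 us.
   print(latency(0.0, 80e-6, 512, 10e6))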
Reporting format:
The latency results SHOULD be reported in the format of a table with
a row for each of the tested frame sizes. There SHOULD be columns
for the frame size, the rate at which the latency test was run for
that frame size, the media types tested, and the resultant latency
values for each type of data stream tested.

25.3 Frame loss rate
Objective:
To determine the frame loss rate, as defined in RFC 1242, of a
device throughout the entire range of input data rates and frame
sizes.

Procedure:
Send a specific number of frames at a specific rate through the
device to be tested and count the frames that are transmitted by the
device. The frame loss rate at each point is calculated using the
following equation:

   ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run for the frame rate that corresponds to
100% of the maximum rate for the frame size on the input media.
Repeat the procedure for the rate that corresponds to 90% of the
maximum rate used and then for 80% of this rate. This sequence
SHOULD be continued (at reducing 10% intervals) until there are two
successive trials in which no frames are lost. The maximum
granularity of the trials MUST be 10% of the maximum rate; a finer
granularity is encouraged.
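The descending rate sweep above, with its two-clean-trials stopping
rule, in sketch form (run_trial() is the same assumed helper used in
the throughput sketch):

   def frame_loss_sweep(max_rate, offered=1_000_000, step=10):
       # Section 25.3: start at 100% of the maximum rate and reduce
       # in steps of at most 10% until two successive trials show no
       # frame loss.
       results, clean = [], 0
       percent = 100
       while percent > 0 and clean < 2:
           received = run_trial(max_rate * percent // 100, offered)
           loss = (offered - received) * 100 / offered
           results.append((percent, loss))
           clean = clean + 1 if loss == 0 else 0
           percent -= step
       return results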
Reporting format:
The results of the frame loss rate test SHOULD be plotted as a
graph. If this is done then the X axis MUST be the input frame rate
as a percent of the theoretical rate for the media at the specific
frame size. The Y axis MUST be the percent loss at the particular
input rate. The left end of the X axis and the bottom of the Y axis
MUST be 0 percent; the right end of the X axis and the top of the Y
axis MUST be 100 percent. Multiple lines on the graph MAY be used to
report the frame loss rate for different frame sizes, protocols, and
types of data streams.

Note: See sections 18 and 19 for the maximum frame rates that SHOULD
be used.

25.4 Back-to-back frames
Objective:
To characterize the ability of a device to process back-to-back
frames as defined in RFC 1242.

Procedure:
Send a burst of frames with minimum inter-frame gaps to the device
and count the number of frames forwarded by the device. If the count
of transmitted frames is equal to the number of frames forwarded,
the length of the burst is increased and the test is rerun. If the
number of forwarded frames is less than the number transmitted, the
length of the burst is reduced and the test is rerun.

The back-to-back value is the number of frames in the longest burst
that the device will handle without the loss of any frames.

The trial length MUST be at least 2 seconds and SHOULD be repeated
at least 50 times with the average of the recorded values being
reported.

Reporting format:
The back-to-back results SHOULD be reported in the format of a table
with a row for each of the tested frame sizes. There SHOULD be
columns for the frame size and for the resultant average frame count
for each type of data stream tested. The standard deviation for each
measurement MAY also be reported.

25.5 System recovery
Objective:
To characterize the speed at which a device recovers from an
overload condition.

Procedure:
First determine the throughput for the device at each of the listed
frame sizes.

Send a stream of frames at a rate 110% of the recorded throughput
rate or the maximum rate for the media, whichever is lower, for at
least 60 seconds. At Timestamp A reduce the frame rate to 50% of the
above rate and record the time of the last frame lost (Timestamp B).
The system recovery time is determined by subtracting Timestamp A
from Timestamp B. The test SHOULD be repeated a number of times,
with the average of the recorded values being reported.

Reporting format:
The system recovery results SHOULD be reported in the format of a
table with a row for each of the tested frame sizes. There SHOULD be
columns for the frame size, the frame rate used as the throughput
rate for each type of data stream tested, and the measured recovery
time for each type of data stream tested.

25.6 Reset
Objective:
To characterize the speed at which a device recovers from a device
or software reset.

Procedure:
First determine the throughput for the device for the minimum frame
size on the media used in the testing.

Send a continuous stream of frames at the determined throughput rate
for the minimum sized frames. Cause a reset in the device. Monitor
the output until frames begin to be forwarded and record the time
that the last frame (Timestamp A) of the initial stream and the
first frame of the new stream (Timestamp B) are received.

A power interruption reset test is performed as above except that
the power to the device should be interrupted for 10 seconds in
place of causing a reset.

This test SHOULD only be run using frames addressed to networks
directly connected to the device under test so that there is no
requirement to delay until a routing update is received.

The reset value is obtained by subtracting Timestamp A from
Timestamp B.

Hardware and software resets, as well as a power interruption,
SHOULD be tested.

Reporting format:
The reset value SHOULD be reported in a simple set of statements,
one for each reset type.

26. Security Considerations
Security issues are not addressed in this document.

27. Editors' Addresses

Scott Bradner
Holyoke Center                          Phone: +1 617 495-3864
Harvard University                      Fax:   +1 617 495-0914
Cambridge, MA 02138                     Email: sob@harvard.edu

Jim McQuaid
Wandel & Goltermann Technologies, Inc   Phone: +1 919 941-4730
P. O. Box 13585                         Fax:   +1 919 941-5751
Research Triangle Park, NC 27709        Email: mcquaid@wg.com
Appendix A: Testing Considerations

A.1 Scope Of This Appendix
This appendix discusses certain issues in the benchmarking
methodology where experience or judgement may play a role in the
tests selected to be run or in the approach to constructing the test
with a particular device. As such, this appendix MUST NOT be read as
an amendment to the methodology described in the body of this
document but as a guide to testing practice.

1. Typical testing practice has been to enable all protocols to
   be tested and conduct all testing with no further
   configuration of protocols, even though a given set of trials
   may exercise only one protocol at a time. This minimizes the
   opportunities to "tune" a device under test for a single
   protocol.

2. The least common denominator of the available filter functions
   should be used to ensure that there is a basis for comparison
   between vendors. Because of product differences, those
   conducting and evaluating tests must make a judgement about
   this issue.

3. Architectural factors may need to be considered. For example,
   first perform the tests with the stream going between ports on
   the same interface card and then repeat the tests with the
   stream going into a port on one interface card and out of a
   port on a second interface card. There will almost always be a
   best case and worst case configuration for a given device
   under test architecture.

4. Testing done using traffic streams consisting of mixed
   protocols has not shown much difference from testing with
   individual protocols. That is, if protocol A testing and
   protocol B testing give two different performance results,
   mixed protocol testing appears to give a result which is the
   average of the two.

5. Wide Area Network (WAN) performance may be tested by setting
   up two identical devices connected by the appropriate short-
   haul versions of the WAN modems. Performance is then measured
   between a LAN interface on one device and a LAN interface on
   the other device.

   The maximum frame rate to be used for LAN-WAN-LAN
   configurations is a judgement that can be based on known
   characteristics of the overall system, including compression
   effects, fragmentation, and gross link speeds. Practice
   suggests that the rate should be at least 110% of the slowest
   link speed. Substantive issues of testing compression itself
   are beyond the scope of this document.

Appendix B: Maximum frame rates reference

(Provided by Roger Beeman)

   Size      Ethernet   16Mb Token Ring    FDDI
   (bytes)     (pps)         (pps)         (pps)

     64        14880         24691        152439
    128         8445         13793         85616
    256         4528          7326         45620
    512         2349          3780         23585
    768         1586          2547         15903
   1024         1197          1921         11996
   1280          961          1542          9630
   1518          812          1302          8138

Ethernet size
   Preamble     64 bits
   Frame     8 x N bits
   Gap          96 bits

16Mb Token Ring size
   SD            8 bits
   AC            8 bits
   FC            8 bits
   DA           48 bits
   SA           48 bits
   RI           48 bits ( 06 30 00 12 00 30 )
   SNAP
     DSAP        8 bits
     SSAP        8 bits
     Control     8 bits
     Vendor     24 bits
     Type       16 bits
   Data      8 x ( N - 18 ) bits
   FCS          32 bits
   ED            8 bits
   FS            8 bits

   No accounting for token or idles between packets (theoretical
   minimums hard to pin down)

FDDI size
   Preamble     64 bits
   SD            8 bits
   FC            8 bits
   DA           48 bits
   SA           48 bits
   SNAP
     DSAP        8 bits
     SSAP        8 bits
     Control     8 bits
     Vendor     24 bits
     Type       16 bits
   Data      8 x ( N - 18 ) bits
   FCS          32 bits
   ED            4 bits
   FS           12 bits

   No accounting for token or idles between packets (theoretical
   minimums hard to pin down)
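The Ethernet column of the table follows directly from the bit
accounting above; a sketch of the arithmetic for 10 Mb/s Ethernet
(the other media follow the same pattern with their own per-frame
overheads):

   def ethernet_max_pps(frame_bytes, line_rate_bps=10_000_000):
       # Appendix B: preamble 64 bits + frame 8 x N bits + gap 96
       # bits occupy the wire for each frame.
       return line_rate_bps // (64 + 8 * frame_bytes + 96)

   # ethernet_max_pps(64) -> 14880 and ethernet_max_pps(1518) -> 812,
   # matching the table above.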
Appendix C: Test Frame Formats

This appendix defines the frame formats that may be used with these
tests. It also includes protocol specific parameters for TCP/IP over
Ethernet to be used with the tests as an example.

C.1. Introduction
The general logic used in the selection of the parameters and the
design of the frame formats is explained for each case within the
TCP/IP section. The same logic has been used in the other sections.
Comments are used in these sections only if there is a protocol
specific feature to be explained. Parameters and frame formats for
additional protocols can be defined by the reader by using the same
logic.

C.2. TCP/IP Information
The following section deals with the TCP/IP protocol suite.

C.2.1 Frame Type
An application level datagram echo request is used for the test data
frame in the protocols that support such a function. A datagram
protocol is used to minimize the chance that a router might expect a
specific session initialization sequence, as might be the case for a
reliable stream protocol. A specific defined protocol is used
because some routers verify the protocol field and refuse to forward
unknown protocols.

For TCP/IP a UDP Echo Request is used.

C.2.2 Protocol Addresses
Two sets of addresses must be defined: first, the addresses assigned
to the router ports, and second, the addresses that are to be used
in the frames themselves and in the routing updates.

The following specific network addresses have been assigned to the
BMWG by the NIC for this purpose. This assignment was made to
minimize the chance of conflict in case a testing device were to be
accidentally connected to part of the Internet.

C.2.2.1 Router port protocol addresses
Half of the ports on a multi-port router are referred to as "input"
ports and the other half as "output" ports even though some of the
tests use all ports both as input and output. A contiguous series of
IP Class C network addresses from 198.18.1.0 to 198.18.64.0 has been
assigned for use on the "input" ports. A second series from
198.19.1.0 to 198.19.64.0 has been assigned for use on the "output"
ports. In all cases the router port is node 1 on the appropriate
network. For example, a two port device would have an IP address of
198.18.1.1 on one port and 198.19.1.1 on the other port.

Some of the tests described in the methodology memo make use of an
SNMP management connection to the device under test. The management
access address for the device is assumed to be the first of the
"input" ports (198.18.1.1).

C.2.2.2 Frame addresses
Some of the described tests assume adjacent network routing (the
reboot time test for example). The IP address used in the test frame
is that of node 2 on the appropriate Class C network (198.19.1.2 for
example).

If the test involves non-adjacent network routing the phantom
routers are located at node 10 of each of the appropriate Class C
networks. A series of Class C network addresses from 198.18.65.0 to
198.18.254.0 has been assigned for use as the networks accessible
through the phantom routers on the "input" side of the device under
test. The series of Class C networks from 198.19.65.0 to
198.19.254.0 has been assigned to be used as the networks visible
through the phantom routers on the "output" side of the device under
test.
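The address plan above reduces to a simple mapping from port number
to network; a small helper sketch (illustrative only, the assigned
ranges above are what govern):

   def port_address(side, port):
       # C.2.2.1: "input" ports live on 198.18.<port>.0, "output"
       # ports on 198.19.<port>.0, and the router port is node 1.
       assert side in ("input", "output") and 1 <= port <= 64
       return "%s.%d.1" % ("198.18" if side == "input" else "198.19",
                           port)

   # A two port device: 198.18.1.1 and 198.19.1.1; the SNMP
   # management address is the first "input" port, 198.18.1.1.
   print(port_address("input", 1), port_address("output", 1))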
C.2.3 Routing Update Frequency
The update interval for each routing protocol may have to be
determined by the specifications of the individual protocol. For IP
RIP, Cisco IGRP, and OSPF a routing update frame or frames should
precede each stream of test frames by 5 seconds. This frequency is
sufficient for trial durations of up to 60 seconds. Routing updates
must be mixed with the stream of test frames if longer trial periods
are selected. The frequency of updates should be taken from the
following table:

   IP-RIP   30 sec
   IGRP     90 sec
   OSPF     90 sec

C.2.4 Frame Formats - detailed discussion

C.2.4.1 Learning Frame
In most protocols a procedure is used to determine the mapping
between the protocol node address and the MAC address. The Address
Resolution Protocol (ARP) is used to perform this function in
TCP/IP. No such procedure is required in XNS or IPX because the MAC
address is used as the protocol node address.

In the ideal case the tester would be able to respond to ARP
requests from the device under test. In cases where this is not
possible an ARP request should be sent to the router's "output"
port. This request should be seen as coming from the immediate
destination of the test frame stream (i.e. the phantom router or the
end node if adjacent network routing is being used). It is assumed
that the router will cache the MAC address of the requesting device.
The ARP request should be sent 5 seconds before the test frame
stream starts in each trial. Trial lengths longer than 50 seconds
may require that the router be configured for an extended ARP
timeout.

C.2.4.2 Routing Update Frame
If the test does not involve adjacent net routing the tester must
supply proper routing information using a routing update. A single
routing update is used before each trial on each "destination" port
(see section C.2.4). This update includes the network addresses that
are reachable through a phantom router on the network attached to
the port. For a full mesh test, one destination network address is
present in the routing update for each of the "input" ports. The
test stream on each "input" port consists of a repeating sequence of
frames, one to each of the "output" ports.

C.2.4.3 Management Query Frame
The management overhead test uses SNMP to query a set of variables
that should be present in all devices that support SNMP. The
variables are read by an NMS at the appropriate intervals. The list
of variables to retrieve follows:

   sysUpTime
   ifInOctets
   ifOutOctets
   ifInUcastPkts
   ifOutUcastPkts

C.2.4.4 Test Frames
The test frame is a UDP Echo Request with enough data to fill out
the required frame size. The data should not be all bits off or all
bits on since these patterns can cause a "bit stuffing" process to
be used to maintain clock synchronization on WAN links. This process
will result in a longer frame than was intended.
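The test frame of section C.2.6.4 below can be generated
programmatically. A sketch using only the Python standard library;
the MAC and IP arguments are placeholders, the UDP checksum is left
at zero as the format below permits, and the Ethernet FCS is assumed
to be appended by the interface hardware:

   import struct, socket

   def ip_checksum(header):
       # One's-complement sum of the header's 16-bit words.
       total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
       while total >> 16:
           total = (total & 0xFFFF) + (total >> 16)
       return ~total & 0xFFFF

   def udp_echo_frame(dst_mac, src_mac, src_ip, dst_ip, frame_size=64):
       # MAC arguments are 6-octet byte strings.  Lengths per the
       # C.2.6.4 table: total length = frame size - 18 (14 Ethernet
       # header + 4 FCS), UDP message length = total length - 20.
       total_len = frame_size - 18
       udp_len = total_len - 20
       ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, total_len, 0, 0,
                        0x0A, 17, 0,      # TTL 10, protocol 17 (UDP)
                        socket.inet_aton(src_ip),
                        socket.inet_aton(dst_ip))
       ip = ip[:10] + struct.pack("!H", ip_checksum(ip)) + ip[12:]
       udp = struct.pack("!HHHH", 0xC020, 7, udp_len, 0)  # port 7 = echo
       data = bytes(i % 256 for i in range(udp_len - 8))  # incrementing
       return dst_mac + src_mac + b"\x08\x00" + ip + udp + data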
C.2.6 Frame Formats - TCP/IP on Ethernet
Each of the frames below is described for the first pair of device
ports, i.e. "input" port #1 and "output" port #1. Addresses must be
changed if the frame is to be used for other ports.

C.2.6.1 Learning Frame

ARP Request on Ethernet

-- DATAGRAM HEADER
offset  data (hex)          description
00      FF FF FF FF FF FF   dest MAC address
                            send to broadcast address
06      xx xx xx xx xx xx   set to source MAC address
12      08 06               ARP type
14      00 01               hardware type
                            Ethernet = 1
16      08 00               protocol type
                            IP = 800
18      06                  hardware address length
                            48 bits on Ethernet
19      04                  protocol address length
                            4 octets for IP
20      00 01               opcode
                            request = 1
22      xx xx xx xx xx xx   source MAC address
28      xx xx xx xx         source IP address
32      FF FF FF FF FF FF   requesting DUT's MAC address
38      xx xx xx xx         DUT's IP address

C.2.6.2 Routing Update Frame

-- DATAGRAM HEADER
offset  data (hex)          description
00      FF FF FF FF FF FF   dest MAC address is broadcast
06      xx xx xx xx xx xx   source hardware address
12      08 00               type

-- IP HEADER
14      45                  IP version - 4,
                            header length (4 byte units) - 5
15      00                  service field
16      00 EE               total length
18      00 00               ID
20      40 00               flags (3 bits) - 4 (do not fragment),
                            fragment offset - 0
22      0A                  TTL
23      11                  protocol - 17 (UDP)
24      C4 8D               header checksum
26      xx xx xx xx         source IP address
30      xx xx xx            destination IP address
33      FF                  host part = FF for broadcast

-- UDP HEADER
34      02 08               source port
                            208 = RIP
36      02 08               destination port
                            208 = RIP
38      00 DA               UDP message length
40      00 00               UDP checksum

-- RIP packet
42      02                  command = response
43      01                  version = 1
44      00 00               0

-- net 1
46      00 02               family = IP
48      00 00               0
50      xx xx xx            net 1 IP address
53      00                  net not node
54      00 00 00 00         0
58      00 00 00 00         0
62      00 00 00 07         metric 7

-- net 2
66      00 02               family = IP
68      00 00               0
70      xx xx xx            net 2 IP address
73      00                  net not node
74      00 00 00 00         0
78      00 00 00 00         0
82      00 00 00 07         metric 7

-- net 3
86      00 02               family = IP
88      00 00               0
90      xx xx xx            net 3 IP address
93      00                  net not node
94      00 00 00 00         0
98      00 00 00 00         0
102     00 00 00 07         metric 7

-- net 4
106     00 02               family = IP
108     00 00               0
110     xx xx xx            net 4 IP address
113     00                  net not node
114     00 00 00 00         0
118     00 00 00 00         0
122     00 00 00 07         metric 7

-- net 5
126     00 02               family = IP
128     00 00               0
130     xx xx xx            net 5 IP address
133     00                  net not node
134     00 00 00 00         0
138     00 00 00 00         0
142     00 00 00 07         metric 7

-- net 6
146     00 02               family = IP
148     00 00               0
150     xx xx xx            net 6 IP address
153     00                  net not node
154     00 00 00 00         0
158     00 00 00 00         0
162     00 00 00 07         metric 7

C.2.6.3 Management Query Frame

To be defined.
C.2.6.4 Test Frames

UDP echo request on Ethernet

-- DATAGRAM HEADER
offset  data (hex)          description
00      xx xx xx xx xx xx   set to dest MAC address
06      xx xx xx xx xx xx   set to source MAC address
12      08 00               type

-- IP HEADER
14      45                  IP version - 4,
                            header length (4 byte units) - 5
15      00                  TOS
16      00 2E               total length*
18      00 00               ID
20      00 00               flags (3 bits) - 0,
                            fragment offset - 0
22      0A                  TTL
23      11                  protocol - 17 (UDP)
24      C4 8D               header checksum*
26      xx xx xx xx         set to source IP address**
30      xx xx xx xx         set to destination IP address**

-- UDP HEADER
34      C0 20               source port
36      00 07               destination port
                            07 = Echo
38      00 1A               UDP message length*
40      00 00               UDP checksum

-- UDP DATA
42      00 01 02 03 04 05 06 07   some data***
50      08 09 0A 0B 0C 0D 0E 0F

*   - change for different length frames

**  - change for different logical streams

*** - fill remainder of frame with incrementing octets, repeated if
      required by frame length

Values to be used in Total Length and UDP message length fields:

   frame size   total length   UDP message length
        64          00 2E            00 1A
       128          00 6E            00 5A
       256          00 EE            00 DA
       512          01 EE            01 DA
       768          02 EE            02 DA
      1024          03 EE            03 DA
      1280          04 EE            04 DA
      1518          05 DC            05 C8
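The two length columns above follow from the frame size: the IP
total length excludes the 14-octet Ethernet header and 4-octet FCS,
and the UDP message length further excludes the 20-octet IP header
(the 1518 row yields 1500, the maximum Ethernet payload). A sketch
of the relation:

   def length_fields(frame_size):
       # Total length = frame size - 14 (Ethernet header) - 4 (FCS);
       # UDP message length = IP payload = total length - 20.
       total = frame_size - 18
       return total, total - 20

   # length_fields(64)   -> (46, 26)     i.e. 00 2E / 00 1A
   # length_fields(1518) -> (1500, 1480) i.e. 05 DC / 05 C8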