INTERNET-DRAFT
Expires in six months

        Benchmarking Methodology for Network Interconnect Devices

Status of this Document

This document is an Internet-Draft.  Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups.  Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.
It is inappropriate to use Internet-Drafts as reference material or to
cite them other than as ``work in progress.''

To learn the current status of any Internet-Draft, please check the
``1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow
Directories on ds.internic.net (US East Coast), nic.nordu.net (Europe),
ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific Rim).

Distribution of this document is unlimited.  Please send comments to
bmwg@harvard.edu or to the editor.

Abstract

This document discusses and defines a number of tests that may be used
to describe the performance characteristics of a network
interconnecting device.  In addition to defining the tests, this
document also describes specific formats for reporting the results of
the tests.  Appendix A lists the tests and conditions that we believe
should be included for specific cases and gives additional information
about testing practices.  Appendix B is a reference listing of maximum
frame rates to be used with specific frame sizes on various media and
Appendix C gives some examples of frame formats to be used in testing.

1. Introduction

Vendors often engage in "specsmanship" in an attempt to give their
products a better position in the marketplace.  This often involves
"smoke & mirrors" to confuse the potential users of the products.  This
document and follow-up memos attempt to define a specific set of tests
that vendors can use to measure and report the performance
characteristics of network devices.  The results of these tests will
provide the user comparable data from different vendors with which to
evaluate these devices.

A previous document, "Benchmarking Terminology for Network Interconnect
Devices" (RFC 1242), defined many of the terms that are used in this
document.  The terminology document should be consulted before
attempting to make use of this document.

2. Real world

In producing this document the authors attempted to keep in mind the
requirement that apparatus to perform the described tests must actually
be built.  We do not know of "off the shelf" equipment available to
implement all of the tests, but it is our opinion that such equipment
can be constructed.

3. Tests to be run

There are a number of tests described in this document.  Not all of the
tests apply to all types of devices.  It is expected that a vendor will
perform all of the tests that apply to a specific type of product.  The
authors understand that it will take a considerable period of time to
perform all of the recommended tests under all of the recommended
conditions.  We believe that the results are worth the effort.
Appendix A lists the tests and conditions that we believe should be
included for specific cases.

4. Evaluating the results

Performing all of the recommended tests will result in a great deal of
data.  Much of this data will not apply to the evaluation of the
devices under each circumstance.  For example, the rate at which a
router forwards IPX frames will be of little use in selecting a router
for an environment that does not (and will not) support that protocol.
Evaluating even that data which is relevant to a particular network
installation will require experience which may not be readily
available.

5. Requirements

In this document, the words that are used to define the significance of
each particular requirement are capitalized.  These words are:

   * "MUST"
     This word or the adjective "REQUIRED" means that the item is an
     absolute requirement of the specification.

   * "SHOULD"
     This word or the adjective "RECOMMENDED" means that there may
     exist valid reasons in particular circumstances to ignore this
     item, but the full implications should be understood and the case
     carefully weighed before choosing a different course.

   * "MAY"
     This word or the adjective "OPTIONAL" means that this item is
     truly optional.  One vendor may choose to include the item because
     a particular marketplace requires it or because it enhances the
     product, for example; another vendor may omit the same item.

An implementation is not compliant if it fails to satisfy one or more
of the MUST requirements for the protocols it implements.  An
implementation that satisfies all the MUST and all the SHOULD
requirements for its protocols is said to be "unconditionally
compliant"; one that satisfies all the MUST requirements but not all
the SHOULD requirements for its protocols is said to be "conditionally
compliant".

6. Device set up

Before starting to perform the tests, the device to be tested MUST be
configured following the instructions provided to the user.
Specifically, it is expected that all of the supported protocols will
be configured and enabled during this set up (see Appendix A).  It is
expected that all of the tests will be run without changing the
configuration or setup of the device in any way other than that
required to do the specific test.  For example, it is not acceptable to
change the size of frame handling buffers between tests of frame
handling rates or to disable all but one transport protocol when
testing the throughput of that protocol.  It is necessary to modify the
configuration when starting a test to determine the effect of filters
on throughput, but the only change MUST be to enable the specific
filter.  The device set up SHOULD include the normally recommended
routing update intervals and keep-alive frequency.  The specific
version of the software and the exact device configuration, including
what device functions are disabled, used during the tests SHOULD be
included as part of the report of the results.

7. Frame formats

The formats of the test frames to use for TCP/IP over Ethernet are
shown in Appendix C: Test Frame Formats.  It is expected that these
exact frame formats will be used in the tests described in this
document for this protocol/media combination and that these frames
will be used as a template for testing other protocol/media
combinations.  The specific formats that are used to define the test
frames for a particular test series MUST be included in the report of
the results.

8. Frame sizes

All of the described tests SHOULD be performed at a number of frame
sizes.  Specifically, the sizes SHOULD include the maximum and minimum
legitimate sizes for the protocol under test on the media under test
and enough sizes in between to be able to get a full characterization
of the device performance.

Except where noted, it is expected that at least five frame sizes will
be tested for each test condition.

Theoretically the minimum size UDP Echo request frame would consist of
an IP header (minimum length 20 octets), a UDP header (8 octets) and
whatever MAC level header is required by the media in use.  The
theoretical maximum frame size is determined by the size of the length
field in the IP header.  In almost all cases the actual maximum and
minimum sizes are determined by the limitations of the media.
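
As a sanity check of these bounds, the arithmetic for Ethernet can be
worked through directly.  The Python sketch below assumes the
conventional accounting in which the frame size counts everything from
the MAC header through the FCS; the resulting UDP data sizes (18 and
1472 octets) agree with the UDP message lengths tabulated in Appendix C
(26 and 1480 octets once the 8-octet UDP header is added back in).

   MAC_HEADER = 14    # Ethernet: dest (6) + source (6) + type (2)
   FCS = 4            # frame check sequence
   IP_HEADER = 20     # minimum IP header, no options
   UDP_HEADER = 8

   for frame_size in (64, 1518):     # Ethernet minimum and maximum
       udp_data = frame_size - MAC_HEADER - IP_HEADER - UDP_HEADER - FCS
       print(frame_size, udp_data)   # 64 -> 18, 1518 -> 1472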

In theory it would be ideal to distribute the frame sizes in a way that
would evenly distribute the theoretical frame rates.  These
recommendations incorporate this theory but specify frame sizes which
are easy to understand and remember.  In addition, many of the same
frame sizes are specified on each of the media types to allow for easy
performance comparisons.

The inclusion of an unrealistically small frame size on some of the
media types (i.e. with little or no space for data) is to help
characterize the per-frame processing overhead of the network
connection device.

8.1 Frame sizes to be used on Ethernet

   64, 128, 256, 512, 1024, 1280, 1518

These sizes include the maximum and minimum frame sizes permitted by
the Ethernet standard and a selection of sizes between these extremes
with a finer granularity for the smaller frame sizes and higher frame
rates.

8.2 Frame sizes to be used on 4Mb and 16Mb token ring

   54, 64, 128, 256, 1024, 1518, 2048, 4472

The frame size recommendations for token ring assume that there is no
RIF field in the frames of routed protocols.  A RIF field would be
present in any direct source route bridge performance test.  The
minimum size frame for UDP on token ring is 54 octets.  The maximum
size of 4472 octets is recommended for 16Mb token ring instead of the
theoretical size of 17.9Kb because of the size limitations imposed by
many token ring interfaces.  The remainder of the sizes are selected to
permit direct comparisons with other types of media.  An IP (i.e. not
UDP) frame may be used in addition if a higher data rate is desired, in
which case the minimum frame size is 46 octets.

8.3 Frame sizes to be used on FDDI

   54, 64, 128, 256, 1024, 1518, 2048, 4472

The minimum size frame for UDP on FDDI is 53 octets; a minimum size of
54 is recommended to allow direct comparison to token ring performance.
The maximum size of 4472 is recommended instead of the theoretical
maximum size of 4500 octets to permit the same type of comparison.  An
IP (i.e. not UDP) frame may be used in addition if a higher data rate
is desired, in which case the minimum frame size is 45 octets.

9. Verifying received frames

The test equipment SHOULD discard any frames received during a test run
that are not actual forwarded test frames.  For example, keep-alive and
routing update frames SHOULD NOT be included in the count of received
frames.  In any case, the test equipment SHOULD verify the length of
the received frames and check that they match the expected length.

Preferably, the test equipment SHOULD include sequence numbers in the
transmitted frames and check for these numbers on the received frames.
If this is done, the reported results SHOULD include, in addition to
the number of frames dropped, the number of frames that were received
out of order, the number of duplicate frames received and the number of
gaps in the received frame numbering sequence.  This functionality is
required for some of the described tests.
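
One plausible way to derive the four reported counts from a trace of
received sequence numbers is sketched below in Python.  It assumes the
tester numbers frames consecutively from zero; the classification rules
(a number below the running expectation is out of order, a jump above
it opens a gap) are an illustration, not a mandated algorithm.

   def classify(received, sent_count):
       """Return (dropped, duplicates, out_of_order, gaps) counts."""
       seen = set()
       duplicates = out_of_order = gaps = 0
       expected = 0                  # next number if nothing is lost
       for seq in received:
           if seq in seen:
               duplicates += 1
               continue
           seen.add(seq)
           if seq < expected:
               out_of_order += 1     # arrived after a later number
           elif seq > expected:
               gaps += 1             # a hole opened in the sequence
               expected = seq + 1
           else:
               expected = seq + 1
       dropped = sent_count - len(seen)
       return dropped, duplicates, out_of_order, gaps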

10. Modifiers

It might be useful to know the device performance under a number of
conditions; some of these conditions are noted below.  It is expected
that the reported results will include as many of these conditions as
the test equipment is able to generate.  The suite of tests SHOULD be
first run without any modifying conditions and then repeated under each
of the conditions separately.  To preserve the ability to compare the
results of these tests, any frames that are required to generate the
modifying conditions (management queries for example) will be included
in the same data stream as the normal test frames in place of one of
the test frames and not be supplied to the device on a separate network
port.

10.1 Broadcast frames

In most router designs special processing is required when frames
addressed to the hardware broadcast address are received.  In bridges
(or in bridge mode on routers) these broadcast frames must be flooded
to a number of ports.  The stream of test frames SHOULD be augmented
with 1% frames addressed to the hardware broadcast address.  The
specific frames that should be used are included in the test frame
format document.  The broadcast frames SHOULD be evenly distributed
throughout the data stream, for example, every 100th frame.

It is understood that a level of broadcast frames of 1% is much higher
than many networks experience but, as in drug toxicity evaluations, the
higher level is required to be able to gauge the effect which would
otherwise often fall within the normal variability of the system
performance.  Due to design factors some test equipment will not be
able to generate a level of alternate frames this low.  In these cases
it is expected that the percentage would be as small as the equipment
can provide and that the actual level be described in the report of the
test results.
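
A minimal Python sketch of this modifier is shown below.  Note that the
broadcast frames replace test frames rather than being added to the
stream, so the offered load is unchanged; make_test_frame and
make_broadcast_frame are placeholders standing in for the frame
builders of Appendix C.

   def modified_stream(count, make_test_frame, make_broadcast_frame):
       """Yield the test stream with every 100th frame a broadcast."""
       for i in range(count):
           if i % 100 == 99:             # every 100th frame (1%)
               yield make_broadcast_frame()
           else:
               yield make_test_frame(i)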

10.2 Management frames

Most data networks now make use of management protocols such as SNMP.
In many environments there can be a number of management stations
sending queries to the same device at the same time.

The stream of test frames SHOULD be augmented with one management query
as the first frame sent each second during the duration of the trial.
The result of the query must fit into one response frame.  The response
frame SHOULD be verified by the test equipment.  One example of the
specific query frame that should be used is shown in Appendix C.

10.3 Routing update frames

The processing of dynamic routing protocol updates could have a
significant impact on the ability of a router to forward data frames.
The stream of test frames SHOULD be augmented with one routing update
frame transmitted as the first frame of the trial.  Routing update
frames SHOULD be sent at the rate specified in Appendix C for the
specific routing protocol being used in the test.  Two routing update
frames are defined in Appendix C for the TCP/IP over Ethernet example.
The routing frames are designed to change the routing to a number of
networks that are not involved in the forwarding of the test data.  The
first frame sets the routing table state to "A", the second one changes
the state to "B".  The frames MUST be alternated during the trial.

The test SHOULD verify that the routing update was processed by the
device under test.

10.4 Filters

Filters are added to routers and bridges to selectively inhibit the
forwarding of frames that would normally be forwarded.  This is usually
done to implement security controls on the data that is accepted
between one area and another.  Different products have different
capabilities to implement filters.

The device SHOULD be first configured to add one filter condition and
the tests performed.  This filter SHOULD permit the forwarding of the
test data stream.  In routers this filter SHOULD be of the form:

   forward input_protocol_address to output_protocol_address

In bridges the filter SHOULD be of the form:

   forward destination_hardware_address

The device SHOULD then be reconfigured to implement a total of 25
filters.  The first 24 of these filters SHOULD be of the form:

   block input_protocol_address to output_protocol_address

The 24 input and output protocol addresses SHOULD NOT be any that are
represented in the test data stream.  The last filter SHOULD permit the
forwarding of the test data stream.  By "first" and "last" we mean to
ensure that in the second case, 25 conditions must be checked before
the data frames will match the conditions that permit the forwarding of
the frame.

The exact filter configuration command lines used SHOULD be included
with the report of the results.

10.4.1 Filter Addresses

Two sets of filter addresses are required, one for the single filter
case and one for the 25 filter case.

The single filter case should permit traffic from IP address 198.18.1.2
to IP address 198.19.65.2 and deny all other traffic.

The 25 filter case should use the following sequence:

   allow aa.ba.1.1 to aa.ba.100.1
   allow aa.ba.2.2 to aa.ba.101.2
   allow aa.ba.3.3 to aa.ba.103.3
   ...
   allow aa.ba.12.12 to aa.ba.112.12
   allow aa.bc.1.2 to aa.bc.65.1
   allow aa.ba.13.13 to aa.ba.113.13
   allow aa.ba.14.14 to aa.ba.114.14
   ...
   allow aa.ba.24.24 to aa.ba.124.24
   deny all else

All previous filter conditions should be cleared from the router before
this sequence is entered.  The sequence is selected to test whether the
router sorts the filter conditions or accepts them in the order that
they were entered.  Both of these procedures will result in a greater
reduction in performance than will some form of hash coding.
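
The listing can be generated mechanically, as the Python sketch below
illustrates.  The "aa.ba"/"aa.bc" prefixes are the placeholders used
above; note that the third published entry (aa.ba.103.3) departs from
the otherwise regular 100+n pattern of the surrounding entries, which
this sketch assumes to be the intent.

   def filter_sequence():
       """Return the 25 allow filters plus the trailing deny."""
       rules = []
       for n in range(1, 13):                         # entries 1-12
           rules.append("allow aa.ba.%d.%d to aa.ba.%d.%d"
                        % (n, n, 100 + n, n))
       rules.append("allow aa.bc.1.2 to aa.bc.65.1")  # permits the test stream
       for n in range(13, 25):                        # entries 14-25
           rules.append("allow aa.ba.%d.%d to aa.ba.%d.%d"
                        % (n, n, 100 + n, n))
       rules.append("deny all else")
       return rules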

11. Protocol addresses

It is easier to implement these tests using a single logical stream of
data, with one source protocol address and one destination protocol
address; for some conditions, like the filters described above, it is a
practical requirement.  Networks in the real world are not limited to
single streams of data.  The test suite SHOULD be first run with a
single protocol (or hardware for bridge tests) source and destination
address pair.  The tests SHOULD then be repeated using random
destination addresses.  When testing routers the addresses SHOULD be
random over a range of 256 networks; when testing bridges they SHOULD
be random over the full MAC address range.  The specific address ranges
to use for IP are shown in Appendix C.

12. Route Set Up

It is not expected that all of the routing information necessary to
forward the test stream, especially in the multiple address case, will
be manually set up.  At the start of each trial a routing update MUST
be sent to the device.  This routing update MUST include all of the
network addresses that will be required for the trial.  All of the
addresses SHOULD resolve to the same "next-hop" and it is expected that
this will be the address of the receiving side of the test equipment.
This routing update will have to be repeated at the interval required
by the routing protocol being used.  An example of the format and
repetition interval of the update frames is given in Appendix C.

13. Bidirectional traffic

Normal network activity is not all in a single direction.  To test the
bidirectional performance of a device, the test series SHOULD be run
with the same data rate being offered from each direction.  The sum of
the data rates should not exceed the theoretical limit for the media.

14. Single stream path

The full suite of tests SHOULD be run, along with whatever modifier
conditions are relevant, using a single input and output network port
on the device.  If the internal design of the device has multiple
distinct pathways, for example, multiple interface cards each with
multiple network ports, then all possible types of pathways SHOULD be
tested separately.

15. Multi-port

Many current router and bridge products provide many network ports in
the same device.  In performing these tests, first half of the ports
are designated as "input ports" and half are designated as "output
ports".  These ports SHOULD be evenly distributed across the device
architecture.  For example, if a device has two interface cards each of
which has four ports, two ports on each interface card are designated
as input and two are designated as output.

The specified tests are run using the same data rate being offered to
each of the input ports.  The addresses in the input data streams
SHOULD be set so that a frame will be directed to each of the output
ports in sequence.  The stream offered to input one SHOULD consist of a
series of frames (one destined to each of the output ports), as SHOULD
the frame stream offered to input two.  The same configuration MAY be
used to perform a bidirectional multi-stream test.  In this case all of
the ports are considered both input and output ports and each data
stream MUST consist of frames addressed to all of the other ports.
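
A sketch of this destination rotation is shown below; the port naming
is illustrative only.  Each input port's stream simply cycles through
the output ports in order.

   def destination_sequence(output_ports, frame_count):
       """Yield the output port targeted by each successive frame."""
       for i in range(frame_count):
           yield output_ports[i % len(output_ports)]

   # Example: 4 output ports; an input stream cycles 1, 2, 3, 4, 1, ...
   list(destination_sequence([1, 2, 3, 4], 8))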

16. Multiple protocols

This document does not address the issue of testing the effects of a
mixed protocol environment other than to suggest that if such tests are
wanted then frames SHOULD be distributed between all of the test
protocols.  The distribution MAY approximate the conditions on the
network in which the device would be used.

17. Multiple frame sizes

This document does not address the issue of testing the effects of a
mixed frame size environment other than to suggest that if such tests
are wanted then frames SHOULD be distributed between all of the listed
sizes for the protocol under test.  The distribution MAY approximate
the conditions on the network in which the device would be used.

18. Testing performance beyond a single device

In the performance testing of a single device, the paradigm can be
described as applying some input to a device under test and monitoring
the output.  The results can be used to form a basis of
characterization of that device under those test conditions.

This model is useful when the test input and output are homogeneous
(e.g., 64-byte IP, 802.3 frames into the device under test; 64-byte
IP, 802.3 frames out), or the method of test can distinguish between
dissimilar input/output (e.g., 1518-byte, IP, 802.3 frames in;
576-byte, fragmented IP, X.25 frames out).

By extending the single device test model, reasonable benchmarks
regarding multiple devices or heterogeneous environments may be
collected.  In this extension, the single device under test is replaced
by a system of interconnected network devices.  This test methodology
would support the benchmarking of a variety of
device/media/service/protocol combinations.  For example, a
configuration for a LAN-to-WAN-to-LAN test might be:

   (1) 802.3 -> device 1 -> X.25 @ 64kbps -> device 2 -> 802.3

Or a mixed LAN configuration might be:

   (2) 802.3 -> device 1 -> FDDI -> device 2 -> FDDI -> device 3
       -> 802.3

In both examples 1 and 2, end-to-end benchmarks of each system could be
empirically ascertained.  Other behavior may be characterized through
the use of intermediate devices.  In example 2, the configuration may
be used to give an indication of the FDDI to FDDI capability exhibited
by device 2.

Because multiple devices are treated as a single system, there are
limitations to this methodology.  For instance, this methodology may
yield an aggregate benchmark for a tested system.  That benchmark
alone, however, may not necessarily reflect asymmetries in behavior
between the devices, latencies introduced by other apparatus (e.g.,
CSUs/DSUs, switches), etc.

Further, care must be used when comparing benchmarks of different
systems by ensuring that the devices' features/configuration of the
tested systems have the appropriate common denominators to allow
comparison.

The maximum frame rate that should be used when testing WAN connections
SHOULD be greater than the listed theoretical maximum rate for the
frame size on that speed connection.  The higher rate for WAN tests is
to compensate for the fact that some vendors employ various forms of
header compression.  See Appendix A.

19. Maximum frame rate

The maximum frame rate that should be used when testing LAN connections
SHOULD be the listed theoretical maximum rate for the frame size on the
media.  A list of maximum frame rates for LAN connections is included
in Appendix B.

20. Bursty traffic

It is convenient to measure the device performance under steady state
load but this is an unrealistic way to gauge the functioning of a
device since actual network traffic normally consists of bursts of
frames.  Some of the tests described below SHOULD be performed with
both steady state traffic and with traffic consisting of repeated
bursts of frames.  The frames within a burst are transmitted with the
minimum legitimate inter-frame gap.

The objective of the test is to determine the minimum interval between
bursts which the device under test can process with no frame loss.
During each test the number of frames in each burst is held constant
and the inter-burst interval varied.  Tests SHOULD be run with burst
sizes of 16, 64, 256 and 1024 frames.
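
This document does not prescribe a search strategy for the inter-burst
interval.  One option, sketched below in Python, is a binary search
between an interval known to fail and one known to pass; burst_passes
is an assumed trial primitive that offers repeated bursts at a fixed
burst size and reports whether every frame was forwarded.  The search
would be repeated for each of the four burst sizes.

   def min_interburst_interval(burst_passes, lo=0.0, hi=1.0,
                               resolution=1e-4):
       """Smallest inter-burst interval (seconds) with no frame loss.

       lo must be a failing interval and hi a passing one to start.
       """
       while hi - lo > resolution:
           mid = (lo + hi) / 2
           if burst_passes(mid):
               hi = mid        # no loss: try a shorter interval
           else:
               lo = mid        # frames lost: back the interval off
       return hi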

21. Frames per token

Although it is possible to configure some token ring and FDDI
interfaces to transmit more than one frame each time that the token is
received, most of the network devices currently available transmit only
one frame per token.  These tests SHOULD first be performed while
transmitting only one frame per token.

Some current high-performance workstation servers do transmit more than
one frame per token on FDDI to maximize throughput.  Since this may be
a common feature in future workstations and servers, interconnect
devices with FDDI interfaces SHOULD be tested with 1, 4, 8, and 16
frames per token.  The reported frame rate SHOULD be the average rate
of frame transmission over the total trial period.

22. Trial description

A particular test consists of multiple trials.  Each trial returns one
piece of information, for example the loss rate at a particular input
frame rate.  Each trial consists of a number of phases:

   a) If the test device is a router, send the routing update to the
      "input" port and pause two seconds to be sure that the routing
      has settled.

   b) Send the "learning frames" to the "output" port and wait two
      seconds to be sure that the learning has settled.  Bridge
      learning frames are frames with source addresses that are the
      same as the destination addresses used by the test frames.
      Learning frames for other protocols are used to prime the
      address resolution tables in the device.  The formats of the
      learning frame that should be used are shown in the Test Frame
      Formats document.

   c) Run the test trial.

   d) Wait two seconds for any residual frames to be received.

   e) Wait for at least five seconds for the device to restabilize.

23. Trial duration

The aim of these tests is to determine the rate continuously
supportable by the device.  The actual duration of the test trials
must be a compromise between this aim and the duration of the
benchmarking test suite.  The duration of the test portion of each
trial SHOULD be at least 60 seconds.  The tests that involve some form
of "binary search", for example the throughput test, to determine the
exact result MAY use a shorter trial duration to minimize the length
of the search procedure, but it is expected that the final
determination will be made with full length trials.

24. Address resolution

The test device SHOULD be able to respond to address resolution
requests sent by the device under test wherever the protocol requires
such a process.

25. Benchmarking tests

Note: The notation "type of data stream" refers to the above
modifications to a frame stream with a constant inter-frame gap, for
example, the addition of traffic filters to the configuration of the
device under test.

25.1 Throughput

Objective:
To determine the device throughput as defined in RFC 1242.

Procedure:
Send a specific number of frames at a specific rate through the device
and then count the frames that are transmitted by the device.  If the
count of offered frames is equal to the count of received frames, the
rate of the offered stream is raised and the test rerun.  If fewer
frames are received than were transmitted, the rate of the offered
stream is reduced and the test is rerun.

The throughput is the fastest rate at which the count of test frames
transmitted by the device is equal to the number of test frames sent
to it by the test equipment.
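
Section 23 permits a binary search to home in on this rate.  The
Python sketch below is one way to drive it, assuming a trial primitive
trial_passes(rate) that offers frames at the given rate for the trial
duration and returns true when the received count equals the offered
count.

   def throughput(trial_passes, max_rate_fps, resolution_fps=1):
       """Fastest frame rate at which no offered frame is lost."""
       if trial_passes(max_rate_fps):
           return max_rate_fps       # loss-free at the media maximum
       lo, hi = 0, max_rate_fps      # lo passes trivially, hi fails
       while hi - lo > resolution_fps:
           rate = (lo + hi) // 2
           if trial_passes(rate):
               lo = rate             # no loss: raise the offered rate
           else:
               hi = rate             # loss: reduce the offered rate
       return lo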

Reporting format:
The results of the throughput test SHOULD be reported in the form of a
graph.  If it is, the x coordinate SHOULD be the frame size and the y
coordinate SHOULD be the frame rate.  There SHOULD be at least two
lines on the graph.  There SHOULD be one line showing the theoretical
frame rate for the media at the various frame sizes.  The second line
SHOULD be the plot of the test results.  Additional lines MAY be used
on the graph to report the results for each type of data stream
tested.  Text accompanying the graph SHOULD indicate the protocol,
data stream format, and type of media used in the tests.

We assume that if a single value is desired for advertising purposes
the vendor will select the rate for the minimum frame size for the
media.  If this is done then the figure MUST be expressed in frames
per second.  The rate MAY also be expressed in bits (or bytes) per
second if the vendor so desires.  The statement of performance MUST
include a/ the measured maximum frame rate, b/ the size of the frame
used, c/ the theoretical limit of the media for that frame size, and
d/ the type of protocol used in the test.  Even if a single value is
used as part of the advertising copy, the full table of results SHOULD
be included in the product data sheet.

25.2 Latency

Objective:
To determine the latency as defined in RFC 1242.

Procedure:
First determine the throughput for the device at each of the listed
frame sizes.

Send a stream of frames at a particular frame size through the device
at the determined throughput rate to a specific destination.  The
stream SHOULD be at least 120 seconds in duration.  An identifying tag
SHOULD be included in one frame after 60 seconds, with the type of tag
being implementation dependent.  The time at which this frame is fully
transmitted, i.e. the last bit has been transmitted, is recorded
(timestamp A).  The receiver logic in the test equipment MUST be able
to recognize the tag information in the frame stream and record the
time at which the entire tagged frame was received (timestamp B).

The latency is timestamp B minus timestamp A minus the transit time
for a frame of the tested size on the tested media.  This calculation
may result in a negative value for those devices that begin to
transmit the output frame before the entire input frame has been
received.

The test MUST be repeated at least 20 times with the reported value
being the average of the recorded values.

This test SHOULD be performed with the test frame addressed to the
same destination as the rest of the data stream and also with each of
the test frames addressed to a new destination network.

Reporting format:
The latency results SHOULD be reported in the format of a table with a
row for each of the tested frame sizes.  There SHOULD be columns for
the frame size, the rate at which the latency test was run for that
frame size, the media types tested, and the resultant latency values
for each type of data stream tested.
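
A worked example of the calculation, with illustrative numbers for a
64-octet frame on 10Mb Ethernet, may make the sign convention clearer:

   frame_bits = 64 * 8                  # 64-octet tagged frame
   media_bps = 10_000_000               # 10 Mb/s Ethernet
   transit = frame_bits / media_bps     # 51.2 microseconds on the wire

   timestamp_a = 60.000000              # tagged frame fully transmitted
   timestamp_b = 60.000130              # tagged frame fully received

   latency = timestamp_b - timestamp_a - transit
   print("%.1f microseconds" % (latency * 1e6))   # 78.8 microseconds

A cut-through device that starts transmitting before the tagged frame
has been fully received could make this result negative, as noted
above.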

25.3 Frame loss rate

Objective:
To determine the frame loss rate, as defined in RFC 1242, of a device
throughout the entire range of input data rates and frame sizes.

Procedure:
Send a specific number of frames at a specific rate through the device
to be tested and count the frames that are transmitted by the device.
The frame loss rate at each point is calculated using the following
equation:

   ( ( input_count - output_count ) * 100 ) / input_count

The first trial SHOULD be run for the frame rate that corresponds to
100% of the maximum rate for the frame size on the input media.
Repeat the procedure for the rate that corresponds to 90% of the
maximum rate used and then for 80% of this rate.  This sequence SHOULD
be continued (at reducing 10% intervals) until there are two
successive trials in which no frames are lost.  The maximum
granularity of the trials MUST be 10% of the maximum rate; a finer
granularity is encouraged.

Reporting format:
The results of the frame loss rate test SHOULD be plotted as a graph.
If this is done then the X axis MUST be the input frame rate as a
percent of the theoretical rate for the media at the specific frame
size.  The Y axis MUST be the percent loss at the particular input
rate.  The left end of the X axis and the bottom of the Y axis MUST be
0 percent; the right end of the X axis and the top of the Y axis MUST
be 100 percent.  Multiple lines on the graph MAY be used to report the
frame loss rate for different frame sizes, protocols, and types of
data streams.

Note: See section 18 for the maximum frame rates that SHOULD be used.
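
The sweep can be driven mechanically.  The Python sketch below assumes
a trial primitive run_trial(rate) returning the input and output
counts for one trial; it steps down in the coarsest permitted 10%
increments and stops after two successive loss-free trials.

   def frame_loss_sweep(run_trial, max_rate_fps):
       """Return a list of (percent of max rate, percent loss)."""
       results = []
       zero_loss_run = 0
       percent = 100
       while percent > 0 and zero_loss_run < 2:
           offered = max_rate_fps * percent // 100
           input_count, output_count = run_trial(offered)
           loss = (input_count - output_count) * 100 / input_count
           results.append((percent, loss))
           zero_loss_run = zero_loss_run + 1 if loss == 0 else 0
           percent -= 10        # 10% is the coarsest allowed step
       return results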

25.4 Back-to-back frames

Objective:
To characterize the ability of a device to process back-to-back frames
as defined in RFC 1242.

Procedure:
Send a burst of frames with minimum inter-frame gaps to the device and
count the number of frames forwarded by the device.  If the count of
transmitted frames is equal to the number of frames forwarded, the
length of the burst is increased and the test is rerun.  If the number
of forwarded frames is less than the number transmitted, the length of
the burst is reduced and the test is rerun.

The back-to-back value is the number of frames in the longest burst
that the device will handle without the loss of any frames.

The trial length MUST be at least 2 seconds and SHOULD be repeated at
least 50 times with the average of the recorded values being reported.

Reporting format:
The back-to-back results SHOULD be reported in the format of a table
with a row for each of the tested frame sizes.  There SHOULD be
columns for the frame size and for the resultant average frame count
for each type of data stream tested.  The standard deviation for each
measurement MAY also be reported.

25.5 System recovery

Objective:
To characterize the speed at which a device recovers from an overload
condition.

Procedure:
First determine the throughput for the device at each of the listed
frame sizes.

Send a stream of frames at a rate 110% of the recorded throughput rate
or the maximum rate for the media, whichever is lower, for at least 60
seconds.  At Timestamp A reduce the frame rate to 50% of the above
rate and record the time of the last frame lost (Timestamp B).  The
system recovery time is determined by subtracting Timestamp A from
Timestamp B.  The test SHOULD be repeated a number of times, with the
average of the recorded values being reported.

Reporting format:
The system recovery results SHOULD be reported in the format of a
table with a row for each of the tested frame sizes.  There SHOULD be
columns for the frame size, the frame rate used as the throughput rate
for each type of data stream tested, and the measured recovery time
for each type of data stream tested.

25.6 Reset

Objective:
To characterize the speed at which a device recovers from a device or
software reset.

Procedure:
First determine the throughput for the device for the minimum frame
size on the media used in the testing.

Send a continuous stream of frames at the determined throughput rate
for the minimum sized frames.  Cause a reset in the device.  Monitor
the output until frames begin to be forwarded and record the time that
the last frame of the initial stream (Timestamp A) and the first frame
of the new stream (Timestamp B) are received.

A power interruption reset test is performed as above except that the
power to the device should be interrupted for 10 seconds in place of
causing a reset.

This test SHOULD only be run using frames addressed to networks
directly connected to the device under test so that there is no
requirement to delay until a routing update is received.

The reset value is obtained by subtracting Timestamp A from
Timestamp B.

Hardware and software resets, as well as a power interruption, SHOULD
be tested.

Reporting format:
The reset value SHOULD be reported in a simple set of statements, one
for each reset type.

26. Security Considerations

Security issues are not addressed in this document.

27. Editor's Address

   Scott Bradner                           Phone: +1 617 495-3864
   Holyoke Center                          Fax:   +1 617 495-0914
   Harvard University                      Email: sob@harvard.edu
   Cambridge, MA 02138

   Jim McQuaid                             Phone: +1 919 941-4730
   Wandel & Goltermann Technologies, Inc   Fax:   +1 919 941-5751
   P. O. Box 13585                         Email: mcquaid@wg.com
   Research Triangle Park, NC 27709

Appendix A: Testing Considerations

A.1 Scope Of This Appendix

This appendix discusses certain issues in the benchmarking methodology
where experience or judgement may play a role in the tests selected to
be run or in the approach to constructing the test with a particular
device.  As such, this appendix MUST NOT be read as an amendment to
the methodology described in the body of this document but as a guide
to testing practice.

1. Typical testing practice has been to enable all protocols to be
   tested and conduct all testing with no further configuration of
   protocols, even though a given set of trials may exercise only one
   protocol at a time.  This minimizes the opportunities to "tune" a
   device under test for a single protocol.

2. The least common denominator of the available filter functions
   should be used to ensure that there is a basis for comparison
   between vendors.  Because of product differences, those conducting
   and evaluating tests must make a judgement about this issue.

3. Architectural considerations may need to be taken into account.
   For example, first perform the tests with the stream going between
   ports on the same interface card and then repeat the tests with the
   stream going into a port on one interface card and out of a port on
   a second interface card.  There will almost always be a best case
   and worst case configuration for a given device under test
   architecture.

4. Testing done using traffic streams consisting of mixed protocols
   has not shown much difference between testing with individual
   protocols.
   That is, if protocol A testing and protocol B testing give two
   different performance results, mixed protocol testing appears to
   give a result which is the average of the two.

5. Wide Area Network (WAN) performance may be tested by setting up two
   identical devices connected by the appropriate short-haul versions
   of the WAN modems.  Performance is then measured between a LAN
   interface on one device to a LAN interface on the other device.

   The maximum frame rate to be used for LAN-WAN-LAN configurations is
   a judgement that can be based on known characteristics of the
   overall system including compression effects, fragmentation, and
   gross link speeds.  Practice suggests that the rate should be at
   least 110% of the slowest link speed.  Substantive issues of
   testing compression itself are beyond the scope of this document.

Appendix B: Maximum frame rates reference

(Provided by Roger Beeman)

   Size      Ethernet   16Mb Token Ring   FDDI
   (bytes)   (pps)      (pps)             (pps)

     64      14880      24691             152439
    128       8445      13793              85616
    256       4528       7326              45620
    512       2349       3780              23585
    768       1586       2547              15903
   1024       1197       1921              11996
   1280        961       1542               9630
   1518        812       1302               8138

Ethernet size
   Preamble    64 bits
   Frame       8 x N bits
   Gap         96 bits

16Mb Token Ring size
   SD          8 bits
   AC          8 bits
   FC          8 bits
   DA          48 bits
   SA          48 bits
   RI          48 bits ( 06 30 00 12 00 30 )
   SNAP
     DSAP      8 bits
     SSAP      8 bits
     Control   8 bits
     Vendor    24 bits
     Type      16 bits
   Data        8 x ( N - 18 ) bits
   FCS         32 bits
   ED          8 bits
   FS          8 bits

   No accounting for token or idles between packets (theoretical
   minimums hard to pin down)

FDDI size
   Preamble    64 bits
   SD          8 bits
   FC          8 bits
   DA          48 bits
   SA          48 bits
   SNAP
     DSAP      8 bits
     SSAP      8 bits
     Control   8 bits
     Vendor    24 bits
     Type      16 bits
   Data        8 x ( N - 18 ) bits
   FCS         32 bits
   ED          4 bits
   FS          12 bits

   No accounting for token or idles between packets (theoretical
   minimums hard to pin down)
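
The table can be cross-checked from the bit counts listed above:
summing the fixed fields with the 8 x ( N - 18 ) data term gives
8N + 136 bits per 16Mb token ring frame and 8N + 144 bits per FDDI
frame, while an Ethernet frame occupies 8N + 160 bits on the wire
(64 bits of preamble plus a 96 bit gap).  The Python sketch below
reproduces every row of the table.

   def pps(bits_per_frame, line_rate_bps):
       # frames per second, truncated as in the table
       return line_rate_bps // bits_per_frame

   for n in (64, 128, 256, 512, 768, 1024, 1280, 1518):
       enet = pps(8 * n + 160, 10_000_000)    # preamble + frame + gap
       tr16 = pps(8 * n + 136, 16_000_000)    # fixed token ring overhead
       fddi = pps(8 * n + 144, 100_000_000)   # fixed FDDI overhead
       print("%5d %8d %8d %8d" % (n, enet, tr16, fddi))

The 64-octet row, for example, prints 14880, 24691 and 152439,
matching the table.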

Appendix C: Test Frame Formats

This appendix defines the frame formats that may be used with these
tests.  It also includes protocol specific parameters for TCP/IP over
Ethernet to be used with the tests as an example.

C.1 Introduction

The general logic used in the selection of the parameters and the
design of the frame formats is explained for each case within the
TCP/IP section.  The same logic has been used in the other sections.
Comments are used in these sections only if there is a protocol
specific feature to be explained.  Parameters and frame formats for
additional protocols can be defined by the reader by using the same
logic.

C.2 TCP/IP Information

The following section deals with the TCP/IP protocol suite.

C.2.1 Frame Type

An application level datagram echo request is used for the test data
frame in the protocols that support such a function.  A datagram
protocol is used to minimize the chance that a router might expect a
specific session initialization sequence, as might be the case for a
reliable stream protocol.  A specific defined protocol is used because
some routers verify the protocol field and refuse to forward unknown
protocols.

For TCP/IP a UDP Echo Request is used.

C.2.2 Protocol Addresses

Two sets of addresses must be defined: first the addresses assigned to
the router ports, and second the addresses that are to be used in the
frames themselves and in the routing updates.

The following specific network addresses have been assigned to the
BMWG by the NIC for this purpose.  This assignment was made to
minimize the chance of conflict in case a testing device were to be
accidentally connected to part of the Internet.

C.2.2.1 Router port protocol addresses

Half of the ports on a multi-port router are referred to as "input"
ports and the other half as "output" ports even though some of the
tests use all ports both as input and output.  A contiguous series of
IP Class C network addresses from 198.18.1.0 to 198.18.64.0 has been
assigned for use on the "input" ports.  A second series from
198.19.1.0 to 198.19.64.0 has been assigned for use on the "output"
ports.  In all cases the router port is node 1 on the appropriate
network.  For example, a two port device would have an IP address of
198.18.1.1 on one port and 198.19.1.1 on the other port.

Some of the tests described in the methodology memo make use of an
SNMP management connection to the device under test.  The management
access address for the device is assumed to be the first of the
"input" ports (198.18.1.1).

C.2.2.2 Frame addresses

Some of the described tests assume adjacent network routing (the
reboot time test for example).  The IP address used in the test frame
is that of node 2 on the appropriate Class C network (198.19.1.2 for
example).

If the test involves non-adjacent network routing the phantom routers
are located at node 10 of each of the appropriate Class C networks.  A
series of Class C network addresses from 198.18.65.0 to 198.18.254.0
has been assigned for use as the networks accessible through the
phantom routers on the "input" side of the device under test.  The
series of Class C networks from 198.19.65.0 to 198.19.254.0 has been
assigned to be used as the networks visible through the phantom
routers on the "output" side of the device under test.

C.2.3 Routing Update Frequency

The update interval for each routing protocol may have to be
determined by the specifications of the individual protocol.  For IP
RIP, Cisco IGRP and OSPF a routing update frame or frames should
precede each stream of test frames by 5 seconds.  This frequency is
sufficient for trial durations of up to 60 seconds.  Routing updates
must be mixed with the stream of test frames if longer trial periods
are selected.  The frequency of updates should be taken from the
following table:

   IP-RIP   30 sec
   IGRP     90 sec
   OSPF     90 sec
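
The address plan above is mechanical enough to compute.  The Python
helper below is illustrative only (the function name and dictionary
keys are not part of the plan); it follows C.2.2.1 and C.2.2.2 in
placing the router port at node 1, the test end node at node 2, and
the phantom router at node 10 of each port's network.

   def port_addresses(port_index, side):
       """side is "input" or "output"; ports are numbered from 1."""
       net = 18 if side == "input" else 19
       return {
           "router_port":    "198.%d.%d.1" % (net, port_index),
           "test_end_node":  "198.%d.%d.2" % (net, port_index),
           "phantom_router": "198.%d.%d.10" % (net, port_index),
       }

   port_addresses(1, "input")   # router port 198.18.1.1, end node
                                # 198.18.1.2, phantom router 198.18.1.10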

C.2.4 Frame Formats - detailed discussion

C.2.4.1 Learning Frame

In most protocols a procedure is used to determine the mapping between
the protocol node address and the MAC address.  The Address Resolution
Protocol (ARP) is used to perform this function in TCP/IP.  No such
procedure is required in XNS or IPX because the MAC address is used as
the protocol node address.

In the ideal case the tester would be able to respond to ARP requests
from the device under test.  In cases where this is not possible an
ARP request should be sent to the router's "output" port.  This
request should be seen as coming from the immediate destination of the
test frame stream (i.e. the phantom router, or the end node if
adjacent network routing is being used).  It is assumed that the
router will cache the MAC address of the requesting device.  The ARP
request should be sent 5 seconds before the test frame stream starts
in each trial.  Trial lengths of longer than 50 seconds may require
that the router be configured for an extended ARP timeout.

C.2.4.2 Routing Update Frame

If the test does not involve adjacent net routing the tester must
supply proper routing information using a routing update.  A single
routing update is used before each trial on each "destination" port
(see section C.2.4).  This update includes the network addresses that
are reachable through a phantom router on the network attached to the
port.  For a full mesh test, one destination network address is
present in the routing update for each of the "input" ports.  The test
stream on each "input" port consists of a repeating sequence of
frames, one to each of the "output" ports.

C.2.4.3 Management Query Frame

The management overhead test uses SNMP to query a set of variables
that should be present in all devices that support SNMP.  The
variables are read by an NMS at the appropriate intervals.  The list
of variables to retrieve follows:

   sysUpTime
   ifInOctets
   ifOutOctets
   ifInUcastPkts
   ifOutUcastPkts

C.2.4.4 Test Frames

The test frame is a UDP Echo Request with enough data to fill out the
required frame size.  The data should not be all bits off or all bits
on since these patterns can cause a "bit stuffing" process to be used
to maintain clock synchronization on WAN links.  This process will
result in a longer frame than was intended.
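
A fill pattern matching the test frames of this appendix (which start
their data at 00 and increment, wrapping as needed) can be produced as
below; the helper name is illustrative.

   def udp_fill(length):
       # incrementing octets, wrapping at 256, as in frame C.2.6.4
       return bytes(i % 256 for i in range(length))

   udp_fill(18).hex()   # the 18 data octets of a 64-octet Ethernet
                        # test frame: '000102030405060708090a0b0c0d0e0f1011'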

C.2.6 Frame Formats - TCP/IP on Ethernet

Each of the frames below is described for the first pair of device
ports, i.e. "input" port #1 and "output" port #1.  Addresses must be
changed if the frame is to be used for other ports.

C.2.6.1 Learning Frame

ARP Request on Ethernet

-- DATAGRAM HEADER
offset  data (hex)          description
00      FF FF FF FF FF FF   dest MAC address
                            send to broadcast address
06      xx xx xx xx xx xx   set to source MAC address
12      08 06               ARP type
14      00 01               hardware type
                            Ethernet = 1
16      08 00               protocol type
                            IP = 800
18      06                  hardware address length
                            48 bits on Ethernet
19      04                  protocol address length
                            4 octets for IP
20      00 01               opcode
                            request = 1
22      xx xx xx xx xx xx   source MAC address
28      xx xx xx xx         source IP address
32      FF FF FF FF FF FF   requesting DUT's MAC address
38      xx xx xx xx         DUT's IP address

C.2.6.2 Routing Update Frame

-- DATAGRAM HEADER
offset  data (hex)          description
00      FF FF FF FF FF FF   dest MAC address is broadcast
06      xx xx xx xx xx xx   source hardware address
12      08 00               type

-- IP HEADER
14      45                  IP version - 4,
                            header length (4 byte units) - 5
15      00                  service field
16      00 EE               total length
18      00 00               ID
20      40 00               flags (3 bits) - 4 (do not fragment),
                            fragment offset - 0
22      0A                  TTL
23      11                  protocol - 17 (UDP)
24      C4 8D               header checksum
26      xx xx xx xx         source IP address
30      xx xx xx            destination IP address
33      FF                  host part = FF for broadcast

-- UDP HEADER
34      02 08               source port
                            208 = RIP
36      02 08               destination port
                            208 = RIP
38      00 DA               UDP message length
40      00 00               UDP checksum

-- RIP packet
42      02                  command = response
43      01                  version = 1
44      00 00               0

-- net 1
46      00 02               family = IP
48      00 00               0
50      xx xx xx            net 1 IP address
53      00                  net not node
54      00 00 00 00         0
58      00 00 00 00         0
62      00 00 00 07         metric 7

-- net 2
66      00 02               family = IP
68      00 00               0
70      xx xx xx            net 2 IP address
73      00                  net not node
74      00 00 00 00         0
78      00 00 00 00         0
82      00 00 00 07         metric 7

-- net 3
86      00 02               family = IP
88      00 00               0
90      xx xx xx            net 3 IP address
93      00                  net not node
94      00 00 00 00         0
98      00 00 00 00         0
102     00 00 00 07         metric 7

-- net 4
106     00 02               family = IP
108     00 00               0
110     xx xx xx            net 4 IP address
113     00                  net not node
114     00 00 00 00         0
118     00 00 00 00         0
122     00 00 00 07         metric 7

-- net 5
126     00 02               family = IP
128     00 00               0
130     xx xx xx            net 5 IP address
133     00                  net not node
134     00 00 00 00         0
138     00 00 00 00         0
142     00 00 00 07         metric 7

-- net 6
146     00 02               family = IP
148     00 00               0
150     xx xx xx            net 6 IP address
153     00                  net not node
154     00 00 00 00         0
158     00 00 00 00         0
162     00 00 00 07         metric 7

C.2.6.3 Management Query Frame

To be defined.

C.2.6.4 Test Frames

UDP echo request on Ethernet

-- DATAGRAM HEADER
offset  data (hex)          description
00      xx xx xx xx xx xx   set to dest MAC address
06      xx xx xx xx xx xx   set to source MAC address
12      08 00               type

-- IP HEADER
14      45                  IP version - 4,
                            header length (4 byte units) - 5
15      00                  TOS
16      00 2E               total length*
18      00 00               ID
20      00 00               flags (3 bits) - 0,
                            fragment offset - 0
22      0A                  TTL
23      11                  protocol - 17 (UDP)
24      C4 8D               header checksum*
26      xx xx xx xx         set to source IP address**
30      xx xx xx xx         set to destination IP address**

-- UDP HEADER
34      C0 20               source port
36      00 07               destination port
                            07 = Echo
38      00 1A               UDP message length*
40      00 00               UDP checksum

-- UDP DATA
42      00 01 02 03 04 05 06 07   some data***
50      08 09 0A 0B 0C 0D 0E 0F

*   - change for different length frames

**  - change for different logical streams

*** - fill remainder of frame with incrementing octets, repeated if
      required by frame length

Values to be used in Total Length and UDP message length fields:

   frame size   total length   UDP message length
       64           00 2E            00 1A
      128           00 6E            00 5A
      256           00 EE            00 DA
      512           01 EE            01 DA
      768           02 EE            02 DA
     1024           03 EE            03 DA
     1280           04 EE            04 DA
     1518           05 DC            05 C8

Benchmarking Methodology Working Group           Scott Bradner
Internet Draft - November 1994                   Harvard University
                                                 Jim McQuaid
                                                 Wandel & Goltermann
                                                 Technologies, Inc