2 Network Working Group Scott Bradner 3 Internet Draft Harvard University 4 Expires in six months Jim McQuaid 5 Wandel & Goltermann 6 August 1995 8 Benchmarking Methodology for Network Interconnect Devices 10 12 Status of this Document 13 This document is an Internet-Draft. Internet-Drafts are working 14 documents of the Internet Engineering Task Force (IETF), its areas, 15 and its working groups. Note that other groups may also distribute 16 working documents as Internet-Drafts. 18 Internet-Drafts are draft documents valid for a maximum of six months 19 and may be updated, replaced, or obsoleted by other documents at any 20 time. It is inappropriate to use Internet-Drafts as reference 21 material or to cite them other than as ``work in progress.'' 23 To learn the current status of any Internet-Draft, please check the 24 ``1id-abstracts.txt'' listing contained in the Internet-Drafts Shadow 25 Directories on ds.internic.net (US East Coast), nic.nordu.net 26 (Europe), ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific 27 Rim). 29 Distribution of this document is unlimited. Please send comments to 30 bmwg@harvard.edu or to the editors. 32 Abstract 33 This document discusses and defines a number of tests that may be 34 used to describe the performance characteristics of a network 35 interconnecting device. In addition to defining the tests this 36 document also describes specific formats for reporting the results of 37 the tests.
Appendix A lists the tests and conditions that we believe 38 should be included for specific cases and gives additional 39 information about testing practices. Appendix B is a reference 40 listing of maximum frame rates to be used with specific frame sizes 41 on various media and Appendix C gives some examples of frame formats 42 to be used in testing. 44 1. Introduction 45 Vendors often engage in "specsmanship" in an attempt to give their 46 products a better position in the marketplace. This often involves 47 "smoke & mirrors" to confuse the potential users of the products. 49 This document defines a specific set of tests that vendors can use to 50 measure and report the performance characteristics of network 51 devices. The results of these tests will provide the user comparable 52 data from different vendors with which to evaluate these devices. 54 A previous document, "Benchmarking Terminology for Network 55 Interconnect Devices" (RFC 1242), defined many of the terms that are 56 used in this document. The terminology document should be consulted 57 before attempting to make use of this document. 59 2. Real world 60 In producing this document the authors attempted to keep in mind the 61 requirement that apparatus to perform the described tests must 62 actually be built. We do not know of "off the shelf" equipment 63 available to implement all of the tests but it is our opinion that 64 such equipment can be constructed. 66 3. Tests to be run 67 There are a number of tests described in this document. Not all of 68 the tests apply to all types of devices under test (DUTs). Vendors 69 should perform all of the tests that can be supported by a specific 70 type of product. The authors understand that it will take a 71 considerable period of time to perform all of the recommended tests 72 under all of the recommended conditions. We believe that the results 73 are worth the effort. 
Appendix A lists some of the tests and 74 conditions that we believe should be included for specific cases. 76 4. Evaluating the results 77 Performing all of the recommended tests will result in a great deal 78 of data. Much of this data will not apply to the evaluation of the 79 devices under each circumstance. For example, the rate at which a 80 router forwards IPX frames will be of little use in selecting a 81 router for an environment that does not (and will not) support that 82 protocol. Evaluating even that data which is relevant to a 83 particular network installation will require experience which may not 84 be readily available. Furthermore, selection of the tests to be run 85 and evaluation of the test data must be done with an understanding of 86 generally accepted testing practices regarding repeatability, 87 variance and statistical significance of small numbers of trials. 89 5. Requirements 90 In this document, the words that are used to define the significance 91 of each particular requirement are capitalized. These words are: 93 * "MUST" This word, or the words "REQUIRED" and "SHALL" mean that 94 the item is an absolute requirement of the specification. 96 * "SHOULD" This word or the adjective "RECOMMENDED" means that there 97 may exist valid reasons in particular circumstances to ignore this 98 item, but the full implications should be understood and the case 99 carefully weighed before choosing a different course. 101 * "MAY" This word or the adjective "OPTIONAL" means that this item 102 is truly optional. One vendor may choose to include the item because 103 a particular marketplace requires it or because it enhances the 104 product, for example; another vendor may omit the same item. 106 An implementation is not compliant if it fails to satisfy one or more 107 of the MUST requirements for the protocols it implements. 
An 108 implementation that satisfies all the MUST and all the SHOULD 109 requirements for its protocols is said to be "unconditionally 110 compliant"; one that satisfies all the MUST requirements but not all 111 the SHOULD requirements for its protocols is said to be 112 "conditionally compliant". 114 6. Test set up 115 The ideal way to implement this series of tests is to use a tester 116 with both transmitting and receiving ports. Connections are made 117 from the sending ports of the tester to the receiving ports of the 118 DUT and from the sending ports of the DUT back to the tester. (see 119 figure 1) Since the tester both sends the test traffic and receives 120 it back, after the traffic has been forwarded by the DUT, the tester 121 can easily determine if all of the transmitted packets were received 122 and verify that the correct packets were received. The same 123 functionality can be obtained with separate transmitting and 124 receiving devices (see figure 2) but unless they are remotely 125 controlled by some computer in a way that simulates the single 126 tester, the labor required to accurately perform some of the tests 127 (particularly the throughput test) can be prohibitive. 129 +------------+ 130 | | 131 +------------| tester |<-------------+ 132 | | | | 133 | +------------+ | 134 | | 135 | +------------+ | 136 | | | | 137 +----------->| DUT |--------------+ 138 | | 139 +------------+ 141 figure 1 143 +--------+ +------------+ +----------+ 144 | | | | | | 145 | sender |-------->| DUT |--------->| receiver | 146 | | | | | | 147 +--------+ +------------+ +----------+ 149 figure 2 151 6.1 Test set up for multiple media types 152 Two different setups could be used to test a DUT which is used in 153 real-world networks to connect networks of differing media type, 154 local Ethernet to a backbone FDDI ring for example. The tester could 155 support both media types in which case the set up shown in figure 1 156 would be used.
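The figure 1 arrangement, in which the tester both generates the test stream and checks what returns from the DUT, can be sketched in a few lines. This is a minimal illustration only; the names (`Tester`, `transmit`, `verify`) are hypothetical stand-ins for real test equipment, and the checks mirror the received-frame verification described in section 10:

```python
# Minimal sketch of the single-tester loopback model of figure 1.
# Frames carry sequence numbers so the tester can report drops,
# duplicates, reordering and bad lengths (cf. section 10).

class Tester:
    def __init__(self):
        self.sent = []

    def transmit(self, count, size):
        """Generate `count` test frames of `size` octets, sequence-numbered."""
        self.sent = [{"seq": i, "len": size} for i in range(count)]
        return list(self.sent)

    def verify(self, received):
        """Compare the returned stream against what was transmitted."""
        seqs = [f["seq"] for f in received]
        expected_len = self.sent[0]["len"] if self.sent else 0
        return {
            "dropped": len(self.sent) - len(set(seqs)),
            "duplicates": len(seqs) - len(set(seqs)),
            "out_of_order": sum(1 for a, b in zip(seqs, seqs[1:]) if b < a),
            "bad_length": sum(1 for f in received if f["len"] != expected_len),
        }
```

With a lossless DUT every counter is zero; a dropped, duplicated or reordered frame shows up in the corresponding count.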
158 Two identical DUTs are used in the other test set up. (see figure 3) 159 In many cases this set up may more accurately simulate the real 160 world, for example, connecting two LANs together with a WAN link or 161 high speed backbone. This set up would not be as good at simulating 162 a system where clients on an Ethernet LAN were interacting with a 163 server on an FDDI backbone. 165 +-----------+ 166 | | 167 +---------------------| tester |<---------------------+ 168 | | | | 169 | +-----------+ | 170 | | 171 | +----------+ +----------+ | 172 | | | | | | 173 +------->| DUT 1 |-------------->| DUT 2 |---------+ 174 | | | | 175 +----------+ +----------+ 177 figure 3 179 7. DUT set up 180 Before starting to perform the tests, the DUT to be tested MUST be 181 configured following the instructions provided to the user. 182 Specifically, it is expected that all of the supported protocols will 183 be configured and enabled during this set up (See Appendix A). It is 184 expected that all of the tests will be run without changing the 185 configuration or setup of the DUT in any way other than that required 186 to do the specific test. For example, it is not acceptable to change 187 the size of frame handling buffers between tests of frame handling 188 rates or to disable all but one transport protocol when testing the 189 throughput of that protocol. It is necessary to modify the 190 configuration when starting a test to determine the effect of filters 191 on throughput, but the only change MUST be to enable the specific 192 filter. The DUT set up SHOULD include the normally recommended 193 routing update intervals and keep alive frequency. The specific 194 version of the software and the exact DUT configuration, including 195 what functions are disabled, used during the tests MUST be included 196 as part of the report of the results. 198 8. Frame formats 199 The formats of the test frames to use for TCP/IP over Ethernet are 200 shown in Appendix C: Test Frame Formats.
These exact frame formats 201 SHOULD be used in the tests described in this document for this 202 protocol/media combination, and these frames SHOULD be used as a 203 template for testing other protocol/media combinations. The specific 204 formats that are used to define the test frames for a particular test 205 series MUST be included in the report of the results. 207 9. Frame sizes 208 All of the described tests SHOULD be performed at a number of frame 209 sizes. Specifically, the sizes SHOULD include the maximum and minimum 210 legitimate sizes for the protocol under test on the media under test 211 and enough sizes in between to be able to get a full characterization 212 of the DUT performance. Except where noted, at least five frame 213 sizes SHOULD be tested for each test condition. 215 Theoretically the minimum size UDP Echo request frame would consist 216 of an IP header (minimum length 20 octets), a UDP header (8 octets) 217 and whatever MAC level header is required by the media in use. The 218 theoretical maximum frame size is determined by the size of the 219 length field in the IP header. In almost all cases the actual 220 maximum and minimum sizes are determined by the limitations of the 221 media. 223 In theory it would be ideal to distribute the frame sizes in a way 224 that would evenly distribute the theoretical frame rates. These 225 recommendations incorporate this theory but specify frame sizes which 226 are easy to understand and remember. In addition, many of the same 227 frame sizes are specified on each of the media types to allow for 228 easy performance comparisons. 230 Note: The inclusion of an unrealistically small frame size on some of 231 the media types (i.e. with little or no space for data) is to help 232 characterize the per-frame processing overhead of the DUT.
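For Ethernet the theoretical maximum frame rate mentioned above is easy to compute: each frame on the wire is preceded by an 8-octet preamble/start-of-frame delimiter and followed by a minimum 12-octet inter-frame gap, for 20 octets of per-frame overhead. A short sketch for the 10 Mb/s rates (the values correspond to the per-size maxima tabulated in Appendix B):

```python
# Sketch: theoretical maximum Ethernet frame rates for the recommended
# frame sizes.  Beyond the frame itself, each frame on the wire costs
# 8 octets of preamble/SFD plus a 12-octet minimum inter-frame gap.

ETHERNET_OVERHEAD = 8 + 12   # octets of per-frame overhead on the wire

def max_frame_rate(frame_size, bits_per_second=10_000_000):
    """Theoretical maximum frames/second for a frame size in octets."""
    bits_per_frame = (frame_size + ETHERNET_OVERHEAD) * 8
    return bits_per_second // bits_per_frame

for size in (64, 128, 256, 512, 1024, 1280, 1518):
    print(size, max_frame_rate(size))   # 64 octets -> 14880 frames/s
```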
234 9.1 Frame sizes to be used on Ethernet 236 64, 128, 256, 512, 1024, 1280, 1518 238 These sizes include the maximum and minimum frame sizes permitted 239 by the Ethernet standard and a selection of sizes between these 240 extremes with a finer granularity for the smaller frame sizes and 241 higher frame rates. 243 9.2 Frame sizes to be used on 4Mb and 16Mb token ring 245 54, 64, 128, 256, 1024, 1518, 2048, 4472 247 The frame size recommendations for token ring assume that there is 248 no RIF field in the frames of routed protocols. A RIF field would 249 be present in any direct source route bridge performance test. 250 The minimum size frame for UDP on token ring is 54 octets. The 251 maximum size of 4472 octets is recommended for 16Mb token ring 252 instead of the theoretical size of 17.9Kb because of the size 253 limitations imposed by many token ring interfaces. The remainder 254 of the sizes are selected to permit direct comparisons with other 255 types of media. An IP (i.e. not UDP) frame may be used in 256 addition if a higher data rate is desired, in which case the 257 minimum frame size is 46 octets. 259 9.3 Frame sizes to be used on FDDI 261 54, 64, 128, 256, 1024, 1518, 2048, 4472 263 The minimum size frame for UDP on FDDI is 53 octets; the minimum 264 size of 54 is recommended to allow direct comparison to token ring 265 performance. The maximum size of 4472 is recommended instead of 266 the theoretical maximum size of 4500 octets to permit the same 267 type of comparison. An IP (i.e. not UDP) frame may be used in 268 addition if a higher data rate is desired, in which case the 269 minimum frame size is 45 octets. 271 9.4 Frame sizes in the presence of disparate MTUs 272 When the interconnect DUT supports connecting links with disparate 273 MTUs, the frame sizes for the link with the *larger* MTU SHOULD be 274 used, up to the limit of the protocol being tested.
If the 275 interconnect DUT does not support the fragmenting of frames in the 276 presence of MTU mismatch, the forwarding rate for that frame size 277 shall be reported as zero. 279 For example, the test of IP forwarding with a bridge or router 280 that joins FDDI and Ethernet should use the frame sizes of FDDI 281 when going from the FDDI to the Ethernet link. If the bridge does 282 not support IP fragmentation, the forwarding rate for those frames 283 too large for Ethernet should be reported as zero. 285 10. Verifying received frames 286 The test equipment SHOULD discard any frames received during a test 287 run that are not actual forwarded test frames. For example, keep- 288 alive and routing update frames SHOULD NOT be included in the count 289 of received frames. In any case, the test equipment SHOULD verify 290 the length of the received frames and check that they match the 291 expected length. 293 Preferably, the test equipment SHOULD include sequence numbers in the 294 transmitted frames and check for these numbers on the received 295 frames. If this is done, the reported results SHOULD include in 296 addition to the number of frames dropped, the number of frames that 297 were received out of order, the number of duplicate frames received 298 and the number of gaps in the received frame numbering sequence. 299 This functionality is required for some of the described tests. 301 11. Modifiers 302 It might be useful to know the DUT performance under a number of 303 conditions; some of these conditions are noted below. The reported 304 results SHOULD include as many of these conditions as the test 305 equipment is able to generate. The suite of tests SHOULD be first 306 run without any modifying conditions and then repeated under each of 307 the conditions separately. 
To preserve the ability to compare the 308 results of these tests any frames that are required to generate the 309 modifying conditions (management queries for example) will be 310 included in the same data stream as the normal test frames in place 311 of one of the test frames and not be supplied to the DUT on a 312 separate network port. 314 11.1 Broadcast frames 315 In most router designs special processing is required when frames 316 addressed to the hardware broadcast address are received. In 317 bridges (or in bridge mode on routers) these broadcast frames must 318 be flooded to a number of ports. The stream of test frames SHOULD 319 be augmented with 1% frames addressed to the hardware broadcast 320 address. The frames sent to the broadcast address should be of a 321 type that the router will not need to process. The aim of this 322 test is to determine if there is any effect on the forwarding rate 323 of the other data in the stream. The specific frames that should 324 be used are included in the test frame format document. The 325 broadcast frames SHOULD be evenly distributed throughout the data 326 stream, for example, every 100th frame. 328 The same test SHOULD be performed on bridge-like DUTs but in this 329 case the broadcast packets will be processed and flooded to all 330 outputs. 332 It is understood that a level of broadcast frames of 1% is much 333 higher than many networks experience but, as in drug toxicity 334 evaluations, the higher level is required to be able to gauge the 335 effect which would otherwise often fall within the normal 336 variability of the system performance. Due to design factors some 337 test equipment will not be able to generate a level of alternate 338 frames this low. In these cases the percentage SHOULD be as small 339 as the equipment can provide, and the actual level SHOULD be 340 described in the report of the test results.
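The 1% broadcast modifier described above, with the broadcast frames evenly distributed and taking the place of test frames rather than arriving on a separate port, can be sketched as follows (`interleave_broadcast` is a hypothetical helper for illustration, not part of the methodology):

```python
# Sketch of the section 11.1 broadcast modifier: replace every 100th
# test frame with a broadcast frame, so the modifier frames occupy
# slots in the normal test stream rather than adding extra load.

def interleave_broadcast(test_frames, period=100):
    """Return the stream with every `period`-th frame made a broadcast."""
    stream = []
    for i, frame in enumerate(test_frames):
        if i % period == period - 1:   # every 100th frame => 1% of the stream
            stream.append({"dst": "ff:ff:ff:ff:ff:ff",
                           "payload": frame["payload"]})
        else:
            stream.append(frame)
    return stream
```

Note that the total frame count is unchanged, which preserves comparability with the unmodified test runs.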
342 11.2 Management frames 343 Most data networks now make use of management protocols such as 344 SNMP. In many environments there can be a number of management 345 stations sending queries to the same DUT at the same time. 347 The stream of test frames SHOULD be augmented with one management 348 query as the first frame sent each second during the duration of 349 the trial. The result of the query must fit into one response 350 frame. The response frame SHOULD be verified by the test 351 equipment. One example of the specific query frame that should be 352 used is shown in Appendix C. 354 11.3 Routing update frames 355 The processing of dynamic routing protocol updates could have a 356 significant impact on the ability of a router to forward data 357 frames. The stream of test frames SHOULD be augmented with one 358 routing update frame transmitted as the first frame of the 359 trial. Routing update frames SHOULD be sent at the 360 rate specified in Appendix C for the specific routing protocol 361 being used in the test. Two routing update frames are defined in 362 Appendix C for the TCP/IP over Ethernet example. The routing 363 frames are designed to change the routing to a number of networks 364 that are not involved in the forwarding of the test data. The 365 first frame sets the routing table state to "A", the second one 366 changes the state to "B". The frames MUST be alternated during 367 the trial. 369 The test SHOULD verify that the routing update was processed by 370 the DUT. 372 11.4 Filters 373 Filters are added to routers and bridges to selectively inhibit 374 the forwarding of frames that would normally be forwarded. This 375 is usually done to implement security controls on the data that is 376 accepted between one area and another. Different products have 377 different capabilities to implement filters. 379 The DUT SHOULD be first configured to add one filter condition and 380 the tests performed.
This filter SHOULD permit the forwarding of 381 the test data stream. In routers this filter SHOULD be of the 382 form: 384 forward input_protocol_address to output_protocol_address 386 In bridges the filter SHOULD be of the form: 388 forward destination_hardware_address 390 The DUT SHOULD then be reconfigured to implement a total of 25 391 filters. The first 24 of these filters SHOULD be of the form: 393 block input_protocol_address to output_protocol_address 395 The 24 input and output protocol addresses SHOULD NOT be any that 396 are represented in the test data stream. The last filter SHOULD 397 permit the forwarding of the test data stream. By "first" and 398 "last" we mean to ensure that in the second case, 25 conditions 399 must be checked before the data frames will match the conditions 400 that permit the forwarding of the frame. Of course, if the DUT 401 reorders the filters or does not use a linear scan of the filter 402 rules the effect of the sequence in which the filters are input is 403 properly lost. 405 The exact filter configuration command lines used SHOULD be 406 included with the report of the results. 408 11.4.1 Filter Addresses 409 Two sets of filter addresses are required, one for the single 410 filter case and one for the 25 filter case. 412 The single filter case should permit traffic from IP address 413 198.18.1.2 to IP address 198.19.65.2 and deny all other 414 traffic. 416 The 25 filter case should follow the following sequence. 418 deny aa.ba.1.1 to aa.ba.100.1 419 deny aa.ba.2.2 to aa.ba.101.2 420 deny aa.ba.3.3 to aa.ba.103.3 421 ... 422 deny aa.ba.12.12 to aa.ba.112.12 423 allow aa.bc.1.2 to aa.bc.65.1 424 deny aa.ba.13.13 to aa.ba.113.13 425 deny aa.ba.14.14 to aa.ba.114.14 426 ... 427 deny aa.ba.24.24 to aa.ba.124.24 428 deny all else 430 All previous filter conditions should be cleared from the 431 router before this sequence is entered.
The sequence is 432 selected to test whether the router sorts the filter 433 conditions or accepts them in the order that they were entered. 434 Both of these procedures will result in a greater impact on 435 performance than will some form of hash coding. 437 12. Protocol addresses 438 It is easier to implement these tests using a single logical stream 439 of data, with one source protocol address and one destination 440 protocol address; for some conditions, like the filters described 441 above, a single stream is a practical requirement. Networks in the real world are not 442 limited to single streams of data. The test suite SHOULD be first run 443 with a single protocol (or hardware for bridge tests) source and 444 destination address pair. The tests SHOULD then be repeated 445 using a random destination address. While testing routers the 446 addresses SHOULD be random and uniformly distributed over a range of 447 256 networks and random and uniformly distributed over the full MAC 448 range for bridges. The specific address ranges to use for IP are 449 shown in Appendix C. 451 13. Route Set Up 452 It is not reasonable to expect that all of the routing information necessary to 453 forward the test stream, especially in the multiple address case, 454 will be manually set up. At the start of each trial a routing update 455 MUST be sent to the DUT. This routing update MUST include all of the 456 network addresses that will be required for the trial. All of the 457 addresses SHOULD resolve to the same "next-hop". Normally this will 458 be the address of the receiving side of the test equipment. This 459 routing update will have to be repeated at the interval required by 460 the routing protocol being used. An example of the format and 461 repetition interval of the update frames is given in Appendix C. 463 14. Bidirectional traffic 464 Normal network activity is not all in a single direction.
To test 465 the bidirectional performance of a DUT, the test series SHOULD be run 466 with the same data rate being offered from each direction. The sum of 467 the data rates should not exceed the theoretical limit for the media. 469 15. Single stream path 470 The full suite of tests SHOULD be run along with whatever modifier 471 conditions are relevant using a single input and output network 472 port on the DUT. If the internal design of the DUT has multiple 473 distinct pathways, for example, multiple interface cards each with 474 multiple network ports, then all possible types of pathways SHOULD be 475 tested separately. 477 16. Multi-port 478 Many current router and bridge products provide many network ports in 479 the same module. In performing these tests, half of the ports 480 are designated as "input ports" and half are designated as "output 481 ports". These ports SHOULD be evenly distributed across the DUT 482 architecture. For example if a DUT has two interface cards each of 483 which has four ports, two ports on each interface card are designated 484 as input and two are designated as output. The specified tests are 485 run using the same data rate being offered to each of the input 486 ports. The addresses in the input data streams SHOULD be set so that 487 a frame will be directed to each of the output ports in sequence so 488 that all "output" ports will get an even distribution of packets from 489 this input. The same configuration MAY be used to perform a 490 bidirectional multi-stream test. In this case all of the ports are 491 considered both input and output ports and each data stream MUST 492 consist of frames addressed to all of the other ports.
   Consider the following 6 port DUT:

                --------------
       ---------| in A  out X|--------
       ---------| in B  out Y|--------
       ---------| in C  out Z|--------
                --------------

   The addressing of the data streams for each of the inputs SHOULD
   be:

      stream sent to input A:
         packet to out X, packet to out Y, packet to out Z
      stream sent to input B:
         packet to out X, packet to out Y, packet to out Z
      stream sent to input C:
         packet to out X, packet to out Y, packet to out Z

   Note that these streams each follow the same sequence, so that 3
   packets will arrive at output X at the same time, then 3 packets at
   Y, then 3 packets at Z. This procedure ensures that, as in the real
   world, the DUT will have to deal with multiple packets addressed to
   the same output at the same time.

17. Multiple protocols

   This document does not address the issue of testing the effects of
   a mixed protocol environment other than to suggest that, if such
   tests are wanted, frames SHOULD be distributed among all of the
   test protocols. The distribution MAY approximate the conditions on
   the network in which the DUT would be used.

18. Multiple frame sizes

   This document does not address the issue of testing the effects of
   a mixed frame size environment other than to suggest that, if such
   tests are wanted, frames SHOULD be distributed among all of the
   listed sizes for the protocol under test. The distribution MAY
   approximate the conditions on the network in which the DUT would be
   used. The authors do not know how the results of such a test would
   be interpreted other than to directly compare multiple DUTs in some
   very specific simulated network.

19. Testing performance beyond a single DUT

   In the performance testing of a single DUT, the paradigm can be
   described as applying some input to a DUT and monitoring the
   output. The results can be used to form a basis for the
   characterization of that device under those test conditions.

   This model is useful when the test input and output are homogeneous
   (e.g., 64-byte IP, 802.3 frames into the DUT; 64-byte IP, 802.3
   frames out) or when the method of test can distinguish between
   dissimilar input/output (e.g., 1518-byte IP, 802.3 frames in;
   576-byte, fragmented IP, X.25 frames out).

   By extending the single DUT test model, reasonable benchmarks
   regarding multiple DUTs or heterogeneous environments may be
   collected. In this extension, the single DUT is replaced by a
   system of interconnected network DUTs. This test methodology would
   support the benchmarking of a variety of device/media/service/
   protocol combinations. For example, a configuration for a
   LAN-to-WAN-to-LAN test might be:

      (1) 802.3 -> DUT 1 -> X.25 @ 64kbps -> DUT 2 -> 802.3

   Or a mixed LAN configuration might be:

      (2) 802.3 -> DUT 1 -> FDDI -> DUT 2 -> FDDI -> DUT 3 -> 802.3

   In both examples 1 and 2, end-to-end benchmarks of each system
   could be empirically ascertained. Other behavior may be
   characterized through the use of intermediate devices. In example
   2, the configuration may be used to give an indication of the
   FDDI-to-FDDI capability exhibited by DUT 2.

   Because multiple DUTs are treated as a single system, there are
   limitations to this methodology. For instance, this methodology may
   yield an aggregate benchmark for a tested system. That benchmark
   alone, however, may not necessarily reflect asymmetries in behavior
   between the DUTs, latencies introduced by other apparatus (e.g.,
   CSUs/DSUs, switches), etc.

   Further, care must be used when comparing benchmarks of different
   systems by ensuring that the DUTs' features/configurations of the
   tested systems have the appropriate common denominators to allow
   comparison.

20. Maximum frame rate

   The maximum frame rate used when testing LAN connections SHOULD be
   the listed theoretical maximum rate for the frame size on the
   media.

   The maximum frame rate used when testing WAN connections SHOULD be
   greater than the listed theoretical maximum rate for the frame size
   on that speed of connection. The higher rate for WAN tests
   compensates for the fact that some vendors employ various forms of
   header compression.

   A list of maximum frame rates for LAN connections is included in
   Appendix B.

21. Bursty traffic

   It is convenient to measure the DUT performance under steady state
   load, but this is an unrealistic way to gauge the functioning of a
   DUT, since actual network traffic normally consists of bursts of
   frames. Some of the tests described below SHOULD be performed with
   both steady state traffic and with traffic consisting of repeated
   bursts of frames. The frames within a burst are transmitted with
   the minimum legitimate inter-frame gap.

   The objective of the test is to determine the minimum interval
   between bursts which the DUT can process with no frame loss. During
   each test the number of frames in each burst is held constant and
   the inter-burst interval is varied. Tests SHOULD be run with burst
   sizes of 16, 64, 256, and 1024 frames.

22. Frames per token

   Although it is possible to configure some token ring and FDDI
   interfaces to transmit more than one frame each time the token is
   received, most of the network devices currently available transmit
   only one frame per token. These tests SHOULD first be performed
   while transmitting only one frame per token.

   Some current high-performance workstation servers do transmit more
   than one frame per token on FDDI to maximize throughput. Since this
   may be a common feature in future workstations and servers,
   interconnect devices with FDDI interfaces SHOULD be tested with 1,
   4, 8, and 16 frames per token. The reported frame rate SHOULD be
   the average rate of frame transmission over the total trial period.

23. Trial description

   A particular test consists of multiple trials. Each trial returns
   one piece of information, for example the loss rate at a particular
   input frame rate. Each trial consists of a number of phases:

   a) If the DUT is a router, send the routing update to the "input"
      port and pause two seconds to be sure that the routing has
      settled.

   b) Send the "learning frames" to the "output" port and wait 2
      seconds to be sure that the learning has settled. Bridge
      learning frames are frames with source addresses that are the
      same as the destination addresses used by the test frames.
      Learning frames for other protocols are used to prime the
      address resolution tables in the DUT. The formats of the
      learning frames that should be used are shown in the Test Frame
      Formats document.

   c) Run the test trial.

   d) Wait two seconds for any residual frames to be received.

   e) Wait at least five seconds for the DUT to restabilize.

24. Trial duration

   The aim of these tests is to determine the rate continuously
   supportable by the DUT. The actual duration of the test trials must
   be a compromise between this aim and the duration of the
   benchmarking test suite. The duration of the test portion of each
   trial SHOULD be at least 60 seconds. Tests that involve some form
   of "binary search" to determine the exact result, for example the
   throughput test, MAY use a shorter trial duration to minimize the
   length of the search procedure, but the final determination SHOULD
   be made with full-length trials.

25. Address resolution

   The DUT SHOULD be able to respond to address resolution requests
   sent by the tester wherever the protocol requires such a process.

26. Benchmarking tests

   Note: The notation "type of data stream" refers to the above
   modifications to a frame stream with a constant inter-frame gap,
   for example, the addition of traffic filters to the configuration
   of the DUT.

26.1 Throughput

   Objective:
      To determine the DUT throughput as defined in RFC 1242.

   Procedure:
      Send a specific number of frames at a specific rate through the
      DUT and then count the frames that are transmitted by the DUT.
      If the count of offered frames is equal to the count of received
      frames, the rate of the offered stream is raised and the test is
      rerun. If fewer frames are received than were transmitted, the
      rate of the offered stream is reduced and the test is rerun.

      The throughput is the fastest rate at which the count of test
      frames transmitted by the DUT is equal to the number of test
      frames sent to it by the test equipment.

   Reporting format:
      The results of the throughput test SHOULD be reported in the
      form of a graph. If they are, the x coordinate SHOULD be the
      frame size and the y coordinate SHOULD be the frame rate. There
      SHOULD be at least two lines on the graph: one showing the
      theoretical frame rate for the media at the various frame sizes
      and a second plotting the test results. Additional lines MAY be
      used on the graph to report the results for each type of data
      stream tested. Text accompanying the graph SHOULD indicate the
      protocol, data stream format, and type of media used in the
      tests.

      We assume that if a single value is desired for advertising
      purposes, the vendor will select the rate for the minimum frame
      size for the media. If this is done, the figure MUST be
      expressed in frames per second. The rate MAY also be expressed
      in bits (or bytes) per second if the vendor so desires. The
      statement of performance MUST include a) the measured maximum
      frame rate, b) the size of the frame used, c) the theoretical
      limit of the media for that frame size, and d) the type of
      protocol used in the test. Even if a single value is used as
      part of the advertising copy, the full table of results SHOULD
      be included in the product data sheet.

26.2 Latency

   Objective:
      To determine the latency as defined in RFC 1242.

   Procedure:
      First determine the throughput for the DUT at each of the listed
      frame sizes. Send a stream of frames at a particular frame size
      through the DUT at the determined throughput rate to a specific
      destination. The stream SHOULD be at least 120 seconds in
      duration. An identifying tag SHOULD be included in one frame
      after 60 seconds, with the type of tag being implementation
      dependent. The time at which this frame is fully transmitted,
      i.e. the last bit has been transmitted, is recorded (timestamp
      A). The receiver logic in the test equipment MUST be able to
      recognize the tag information in the frame stream and record the
      time at which the entire tagged frame was received (timestamp
      B).

      The latency is timestamp B minus timestamp A minus the transit
      time for a frame of the tested size on the tested media. This
      calculation may result in a negative value for those DUTs that
      begin to transmit the output frame before the entire input frame
      has been received.

      The test MUST be repeated at least 20 times, with the reported
      value being the average of the recorded values.

      This test SHOULD be performed with the test frame addressed to
      the same destination as the rest of the data stream and also
      with each of the test frames addressed to a new destination
      network.
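The latency arithmetic in the procedure above can be sketched as follows. This is a minimal illustration, assuming timestamps in seconds and a known media bit rate for the transit-time term; it is not part of the methodology.

```python
def latency(timestamp_a, timestamp_b, frame_bytes, media_bps):
    """Latency per section 26.2: timestamp B minus timestamp A minus
    the transit time of the tested frame size on the tested media.
    The result may be negative for a DUT that begins transmitting the
    output frame before the entire input frame has been received."""
    transit = (frame_bytes * 8) / media_bps
    return timestamp_b - timestamp_a - transit

def reported_latency(trials):
    """The reported value is the average of at least 20 recorded
    values."""
    assert len(trials) >= 20, "test MUST be repeated at least 20 times"
    return sum(trials) / len(trials)
```

For example, a 64-byte frame on 10 Mbit/s media has a transit time of 512 / 10,000,000 = 51.2 microseconds, which is subtracted from the raw timestamp difference.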
   Reporting format:
      The latency results SHOULD be reported in the format of a table
      with a row for each of the tested frame sizes. There SHOULD be
      columns for the frame size, the rate at which the latency test
      was run for that frame size, the media types tested, and the
      resultant latency values for each type of data stream tested.

26.3 Frame loss rate

   Objective:
      To determine the frame loss rate, as defined in RFC 1242, of a
      DUT throughout the entire range of input data rates and frame
      sizes.

   Procedure:
      Send a specific number of frames at a specific rate through the
      DUT to be tested and count the frames that are transmitted by
      the DUT. The frame loss rate at each point is calculated using
      the following equation:

         ( ( input_count - output_count ) * 100 ) / input_count

      The first trial SHOULD be run at the frame rate that corresponds
      to 100% of the maximum rate for the frame size on the input
      media. Repeat the procedure at the rate that corresponds to 90%
      of the maximum rate used and then at 80% of this rate. This
      sequence SHOULD be continued (at reducing 10% intervals) until
      there are two successive trials in which no frames are lost. The
      maximum granularity of the trials MUST be 10% of the maximum
      rate; a finer granularity is encouraged.

   Reporting format:
      The results of the frame loss rate test SHOULD be plotted as a
      graph. If this is done, the X axis MUST be the input frame rate
      as a percent of the theoretical rate for the media at the
      specific frame size. The Y axis MUST be the percent loss at the
      particular input rate. The left end of the X axis and the bottom
      of the Y axis MUST be 0 percent; the right end of the X axis and
      the top of the Y axis MUST be 100 percent. Multiple lines on the
      graph MAY be used to report the frame loss rate for different
      frame sizes, protocols, and types of data streams.
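The loss equation and the descending-rate sweep above can be sketched as follows. The `run_trial` callback standing in for real test equipment is hypothetical; the sketch assumes it returns the offered and forwarded frame counts for one trial.

```python
def frame_loss_rate(input_count, output_count):
    """Percent loss per section 26.3:
    ((input_count - output_count) * 100) / input_count"""
    return ((input_count - output_count) * 100) / input_count

def loss_sweep(run_trial, max_rate):
    """Run trials at 100%, 90%, 80%, ... of max_rate, stopping after
    two successive trials with no frame loss.  Returns a list of
    (percent of max rate, percent loss) pairs."""
    results = []
    zero_streak = 0
    percent = 100
    while percent > 0 and zero_streak < 2:
        sent, received = run_trial(max_rate * percent / 100)
        loss = frame_loss_rate(sent, received)
        results.append((percent, loss))
        zero_streak = zero_streak + 1 if loss == 0 else 0
        percent -= 10
    return results

# Example: a simulated DUT that forwards at most 85 frames per second
results = loss_sweep(lambda rate: (round(rate), min(round(rate), 85)), 100)
```

With the simulated 85-frame/s DUT the sweep records loss at 100% and 90%, then stops after zero-loss trials at 80% and 70%.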
   Note: See section 20 for the maximum frame rates that SHOULD be
   used.

26.4 Back-to-back frames

   Objective:
      To characterize the ability of a DUT to process back-to-back
      frames as defined in RFC 1242.

   Procedure:
      Send a burst of frames with minimum inter-frame gaps to the DUT
      and count the number of frames forwarded by the DUT. If the
      count of transmitted frames is equal to the number of frames
      forwarded, the length of the burst is increased and the test is
      rerun. If the number of forwarded frames is less than the number
      transmitted, the length of the burst is reduced and the test is
      rerun.

      The back-to-back value is the number of frames in the longest
      burst that the DUT will handle without the loss of any frames.
      The trial length MUST be at least 2 seconds and the trial SHOULD
      be repeated at least 50 times, with the average of the recorded
      values being reported.

   Reporting format:
      The back-to-back results SHOULD be reported in the format of a
      table with a row for each of the tested frame sizes. There
      SHOULD be columns for the frame size and for the resultant
      average frame count for each type of data stream tested. The
      standard deviation for each measurement MAY also be reported.

26.5 System recovery

   Objective:
      To characterize the speed at which a DUT recovers from an
      overload condition.

   Procedure:
      First determine the throughput for the DUT at each of the listed
      frame sizes.

      Send a stream of frames at a rate that is 110% of the recorded
      throughput rate or the maximum rate for the media, whichever is
      lower, for at least 60 seconds. At Timestamp A, reduce the frame
      rate to 50% of the above rate and record the time of the last
      frame lost (Timestamp B). The system recovery time is determined
      by subtracting Timestamp A from Timestamp B. The test SHOULD be
      repeated a number of times, with the average of the recorded
      values being reported.
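A minimal sketch of the recovery-time calculation in the procedure above. The list of per-frame loss timestamps is an assumed input; real test equipment would record these during the trial.

```python
def system_recovery_time(timestamp_a, loss_timestamps):
    """Section 26.5: Timestamp A is the moment the offered rate drops
    to 50% of the overload rate; Timestamp B is the time of the last
    frame lost after that.  Recovery time is Timestamp B minus
    Timestamp A."""
    after = [t for t in loss_timestamps if t >= timestamp_a]
    if not after:
        return 0.0  # no loss after the rate reduction
    timestamp_b = max(after)
    return timestamp_b - timestamp_a
```

Losses recorded before Timestamp A belong to the deliberate overload phase and are excluded from the calculation.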
   Reporting format:
      The system recovery results SHOULD be reported in the format of
      a table with a row for each of the tested frame sizes. There
      SHOULD be columns for the frame size, the frame rate used as the
      throughput rate for each type of data stream tested, and the
      measured recovery time for each type of data stream tested.

26.6 Reset

   Objective:
      To characterize the speed at which a DUT recovers from a device
      or software reset.

   Procedure:
      First determine the throughput for the DUT for the minimum frame
      size on the media used in the testing.

      Send a continuous stream of frames at the determined throughput
      rate for the minimum sized frames. Cause a reset in the DUT.
      Monitor the output until frames begin to be forwarded and record
      the times at which the last frame of the initial stream
      (Timestamp A) and the first frame of the new stream (Timestamp
      B) are received. A power interruption reset test is performed as
      above, except that the power to the DUT should be interrupted
      for 10 seconds in place of causing a reset.

      This test SHOULD only be run using frames addressed to networks
      directly connected to the DUT so that there is no requirement to
      delay until a routing update is received.

      The reset value is obtained by subtracting Timestamp A from
      Timestamp B.

      Hardware and software resets, as well as a power interruption,
      SHOULD be tested.

   Reporting format:
      The reset value SHOULD be reported in a simple set of
      statements, one for each reset type.

27. Security Considerations

   Security issues are not addressed in this document.

28. Editors' Addresses

   Scott Bradner
   Harvard University                      Phone: +1 617 495-3864
   1350 Mass. Ave, room 813                Fax:   +1 617 496-8500
   Cambridge, MA 02138                     Email: sob@harvard.edu

   Jim McQuaid
   Wandel & Goltermann Technologies, Inc   Phone: +1 919 941-4730
   P. O. Box 13585                         Fax:   +1 919 941-5751
   Research Triangle Park, NC 27709        Email: mcquaid@wg.com

Appendix A: Testing Considerations

A.1 Scope Of This Appendix

   This appendix discusses certain issues in the benchmarking
   methodology where experience or judgment may play a role in the
   tests selected to be run or in the approach to constructing the
   test with a particular DUT. As such, this appendix MUST NOT be read
   as an amendment to the methodology described in the body of this
   document but as a guide to testing practice.

   1. Typical testing practice has been to enable all protocols to be
      tested and conduct all testing with no further configuration of
      protocols, even though a given set of trials may exercise only
      one protocol at a time. This minimizes the opportunities to
      "tune" a DUT for a single protocol.

   2. The least common denominator of the available filter functions
      should be used to ensure that there is a basis for comparison
      between vendors. Because of product differences, those
      conducting and evaluating tests must make a judgment about this
      issue.

   3. Architectural considerations may need to be taken into account.
      For example, first perform the tests with the stream going
      between ports on the same interface card and then repeat the
      tests with the stream going into a port on one interface card
      and out of a port on a second interface card. There will almost
      always be a best case and a worst case configuration for a given
      DUT architecture.

   4. Testing done using traffic streams consisting of mixed protocols
      has not shown much difference from testing with individual
      protocols. That is, if protocol A testing and protocol B testing
      give two different performance results, mixed protocol testing
      appears to give a result which is the average of the two.

   5. Wide Area Network (WAN) performance may be tested by setting up
      two identical devices connected by the appropriate short-haul
      versions of the WAN modems. Performance is then measured between
      a LAN interface on one DUT and a LAN interface on the other DUT.

   The maximum frame rate to be used for LAN-WAN-LAN configurations is
   a judgment that can be based on known characteristics of the
   overall system, including compression effects, fragmentation, and
   gross link speeds. Practice suggests that the rate should be at
   least 110% of the slowest link speed. Substantive issues of testing
   compression itself are beyond the scope of this document.

Appendix B: Maximum frame rates reference

   (Provided by Roger Beeman, Cisco Systems)

   Size      Ethernet   16Mb Token Ring     FDDI
   (bytes)   (pps)      (pps)               (pps)

      64     14880      24691              152439
     128      8445      13793               85616
     256      4528       7326               45620
     512      2349       3780               23585
     768      1586       2547               15903
    1024      1197       1921               11996
    1280       961       1542                9630
    1518       812       1302                8138

   Ethernet size
      Preamble      64 bits
      Frame      8 x N bits
      Gap           96 bits

   16Mb Token Ring size
      SD       8 bits
      AC       8 bits
      FC       8 bits
      DA      48 bits
      SA      48 bits
      RI      48 bits ( 06 30 00 12 00 30 )
      SNAP
        DSAP     8 bits
        SSAP     8 bits
        Control  8 bits
        Vendor  24 bits
        Type    16 bits
      Data  8 x ( N - 18 ) bits
      FCS     32 bits
      ED       8 bits
      FS       8 bits

      Tokens or idles between packets are not included

   FDDI size
      Preamble  64 bits
      SD         8 bits
      FC         8 bits
      DA        48 bits
      SA        48 bits
      SNAP
        DSAP     8 bits
        SSAP     8 bits
        Control  8 bits
        Vendor  24 bits
        Type    16 bits
      Data  8 x ( N - 18 ) bits
      FCS     32 bits
      ED       4 bits
      FS      12 bits

Appendix C: Test Frame Formats

   This appendix defines the frame formats that may be used with these
   tests. It also includes protocol specific parameters for TCP/IP
   over Ethernet to be used with the tests as an example.

C.1. Introduction

   The general logic used in the selection of the parameters and the
   design of the frame formats is explained for each case within the
   TCP/IP section. The same logic has been used in the other sections.
   Comments are used in these sections only if there is a protocol
   specific feature to be explained. Parameters and frame formats for
   additional protocols can be defined by the reader by using the same
   logic.

C.2. TCP/IP Information

   The following section deals with the TCP/IP protocol suite.

C.2.1 Frame Type

   An application level datagram echo request is used for the test
   data frame in the protocols that support such a function. A
   datagram protocol is used to minimize the chance that a router
   might expect a specific session initialization sequence, as might
   be the case for a reliable stream protocol. A specific defined
   protocol is used because some routers verify the protocol field and
   refuse to forward unknown protocols.

   For TCP/IP a UDP Echo Request is used.

C.2.2 Protocol Addresses

   Two sets of addresses must be defined: first, the addresses
   assigned to the router ports, and second, the addresses that are to
   be used in the frames themselves and in the routing updates.

   The network addresses 198.18.0.0 through 198.19.255.255 have been
   assigned to the BMWG by the IANA for this purpose. This assignment
   was made to minimize the chance of conflict in case a testing
   device were to be accidentally connected to part of the Internet.
   The specific use of the addresses is detailed below.

C.2.2.1 Router port protocol addresses

   Half of the ports on a multi-port router are referred to as "input"
   ports and the other half as "output" ports, even though some of the
   tests use all ports both as input and output. A contiguous series
   of IP Class C network addresses from 198.18.1.0 to 198.18.64.0 has
   been assigned for use on the "input" ports. A second series from
   198.19.1.0 to 198.19.64.0 has been assigned for use on the "output"
   ports. In all cases the router port is node 1 on the appropriate
   network. For example, a two port DUT would have an IP address of
   198.18.1.1 on one port and 198.19.1.1 on the other port.

   Some of the tests described in the methodology memo make use of an
   SNMP management connection to the DUT. The management access
   address for the DUT is assumed to be the first of the "input" ports
   (198.18.1.1).

C.2.2.2 Frame addresses

   Some of the described tests assume adjacent network routing (the
   reset test, for example). The IP address used in the test frame is
   that of node 2 on the appropriate Class C network (198.19.1.2, for
   example).

   If the test involves non-adjacent network routing, the phantom
   routers are located at node 10 of each of the appropriate Class C
   networks. A series of Class C network addresses from 198.18.65.0 to
   198.18.254.0 has been assigned for use as the networks accessible
   through the phantom routers on the "input" side of the DUT. The
   series of Class C networks from 198.19.65.0 to 198.19.254.0 has
   been assigned to be used as the networks visible through the
   phantom routers on the "output" side of the DUT.

C.2.3 Routing Update Frequency

   The update interval for each routing protocol may have to be
   determined by the specifications of the individual protocol. For IP
   RIP, Cisco IGRP, and OSPF, a routing update frame or frames should
   precede each stream of test frames by 5 seconds. This frequency is
   sufficient for trial durations of up to 60 seconds. Routing updates
   must be mixed with the stream of test frames if longer trial
   periods are selected. The frequency of updates should be taken from
   the following table:

      IP-RIP   30 sec
      IGRP     90 sec
      OSPF     90 sec

C.2.4 Frame Formats - detailed discussion

C.2.4.1 Learning Frame

   In most protocols a procedure is used to determine the mapping
   between the protocol node address and the MAC address. The Address
   Resolution Protocol (ARP) is used to perform this function in
   TCP/IP. No such procedure is required in XNS or IPX because the MAC
   address is used as the protocol node address.

   In the ideal case the tester would be able to respond to ARP
   requests from the DUT. In cases where this is not possible, an ARP
   request should be sent to the router's "output" port. This request
   should be seen as coming from the immediate destination of the test
   frame stream (i.e., the phantom router (figure 2) or the end node
   if adjacent network routing is being used). It is assumed that the
   router will cache the MAC address of the requesting device. The ARP
   request should be sent 5 seconds before the test frame stream
   starts in each trial. Trial lengths of longer than 50 seconds may
   require that the router be configured for an extended ARP timeout.

           +--------+          +------------+
           |        |          |  phantom   |------ P LAN A
   IN A----|  DUT   |----------|            |------ P LAN B
           |        |  OUT A   |  router    |------ P LAN C
           +--------+          +------------+
                       figure 2

   In the case where full routing is being used

C.2.4.2 Routing Update Frame

   If the test does not involve adjacent net routing, the tester must
   supply proper routing information using a routing update. A single
   routing update is used before each trial on each "destination" port
   (see section C.2.4). This update includes the network addresses
   that are reachable through a phantom router on the network attached
   to the port. For a full mesh test, one destination network address
   is present in the routing update for each of the "input" ports. The
   test stream on each "input" port consists of a repeating sequence
   of frames, one to each of the "output" ports.

C.2.4.3 Management Query Frame

   The management overhead test uses SNMP to query a set of variables
   that should be present in all DUTs that support SNMP. The variables
   are read by an NMS at the appropriate intervals. The list of
   variables to retrieve follows:

      sysUpTime
      ifInOctets
      ifOutOctets
      ifInUcastPkts
      ifOutUcastPkts

C.2.4.4 Test Frames

   The test frame is a UDP Echo Request with enough data to fill out
   the required frame size. The data should not be all bits off or all
   bits on, since these patterns can cause a "bit stuffing" process to
   be used to maintain clock synchronization on WAN links. This
   process would result in a longer frame than was intended.

C.2.4.5 Frame Formats - TCP/IP on Ethernet

   Each of the frames below is described for the 1st pair of DUT
   ports, i.e. "input" port #1 and "output" port #1. Addresses must be
   changed if the frame is to be used for other ports.
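The payload rule in C.2.4.4 (incrementing octets, never all-zeros or all-ones) can be sketched as follows. The 46-octet header overhead is an assumption for this illustration, covering the 14-octet MAC header, 20-octet IP header, 8-octet UDP header, and 4-octet FCS of the Ethernet encapsulation; other media will differ.

```python
def udp_echo_payload(frame_size, header_overhead=46):
    """Fill the UDP data area with incrementing octets (00 01 02 ...),
    repeating after 0xFF, per C.2.4.4.  The pattern is neither
    all-zeros nor all-ones, so it cannot trigger bit stuffing on WAN
    links, which would lengthen the frame."""
    n = frame_size - header_overhead
    return bytes(i % 256 for i in range(n))
```

For a 64-octet frame this yields the 18 data octets shown in the UDP DATA section of the test frame layout below.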
C.2.6.1 Learning Frame

   ARP Request on Ethernet

   -- DATAGRAM HEADER
   offset   data (hex)          description
   00       FF FF FF FF FF FF   dest MAC address, sent to
                                broadcast address
   06       xx xx xx xx xx xx   set to source MAC address
   12       08 06               ARP type
   14       00 01               hardware type, Ethernet = 1
   16       08 00               protocol type, IP = 800
   18       06                  hardware address length, 48 bits
                                on Ethernet
   19       04                  protocol address length, 4 octets
                                for IP
   20       00 01               opcode, request = 1
   22       xx xx xx xx xx xx   source MAC address
   28       xx xx xx xx         source IP address
   32       FF FF FF FF FF FF   requesting DUT's MAC address
   38       xx xx xx xx         DUT's IP address

C.2.6.2 Routing Update Frame

   -- DATAGRAM HEADER
   offset   data (hex)          description
   00       FF FF FF FF FF FF   dest MAC address is broadcast
   06       xx xx xx xx xx xx   source hardware address
   12       08 00               type

   -- IP HEADER
   14       45                  IP version - 4, header length
                                (4 byte units) - 5
   15       00                  service field
   16       00 EE               total length
   18       00 00               ID
   20       40 00               flags (3 bits) - 4 (do not
                                fragment), fragment offset - 0
   22       0A                  TTL
   23       11                  protocol - 17 (UDP)
   24       C4 8D               header checksum
   26       xx xx xx xx         source IP address
   30       xx xx xx            destination IP address
   33       FF                  host part = FF for broadcast

   -- UDP HEADER
   34       02 08               source port, 208 = RIP
   36       02 08               destination port, 208 = RIP
   38       00 DA               UDP message length
   40       00 00               UDP checksum

   -- RIP packet
   42       02                  command = response
   43       01                  version = 1
   44       00 00               0

   -- net 1
   46       00 02               family = IP
   48       00 00               0
   50       xx xx xx            net 1 IP address
   53       00                  net not node
   54       00 00 00 00         0
   58       00 00 00 00         0
   62       00 00 00 07         metric 7

   -- net 2
   66       00 02               family = IP
   68       00 00               0
   70       xx xx xx            net 2 IP address
   73       00                  net not node
   74       00 00 00 00         0
   78       00 00 00 00         0
   82       00 00 00 07         metric 7

   -- net 3
   86       00 02               family = IP
   88       00 00               0
   90       xx xx xx            net 3 IP address
   93       00                  net not node
   94       00 00 00 00         0
   98       00 00 00 00         0
   102      00 00 00 07         metric 7

   -- net 4
   106      00 02               family = IP
   108      00 00               0
   110      xx xx xx            net 4 IP address
   113      00                  net not node
   114      00 00 00 00         0
   118      00 00 00 00         0
   122      00 00 00 07         metric 7

   -- net 5
   126      00 02               family = IP
   128      00 00               0
   130      xx xx xx            net 5 IP address
   133      00                  net not node
   134      00 00 00 00         0
   138      00 00 00 00         0
   142      00 00 00 07         metric 7

   -- net 6
   146      00 02               family = IP
   148      00 00               0
   150      xx xx xx            net 6 IP address
   153      00                  net not node
   154      00 00 00 00         0
   158      00 00 00 00         0
   162      00 00 00 07         metric 7

C.2.6.3 Management Query Frame

   To be defined.

C.2.6.4 Test Frames

   UDP echo request on Ethernet

   -- DATAGRAM HEADER
   offset   data (hex)          description
   00       xx xx xx xx xx xx   set to dest MAC address
   06       xx xx xx xx xx xx   set to source MAC address
   12       08 00               type

   -- IP HEADER
   14       45                  IP version - 4, header length
                                (4 byte units) - 5
   15       00                  TOS
   16       00 2E               total length*
   18       00 00               ID
   20       00 00               flags (3 bits) - 0, fragment
                                offset - 0
   22       0A                  TTL
   23       11                  protocol - 17 (UDP)
   24       C4 8D               header checksum*
   26       xx xx xx xx         set to source IP address**
   30       xx xx xx xx         set to destination IP address**

   -- UDP HEADER
   34       C0 20               source port
   36       00 07               destination port, 07 = Echo
   38       00 1A               UDP message length*
   40       00 00               UDP checksum

   -- UDP DATA
   42       00 01 02 03 04 05 06 07   some data***
   50       08 09 0A 0B 0C 0D 0E 0F

   *   - change for different length frames
   **  - change for different logical streams
   *** - fill remainder of frame with incrementing octets,
         repeated if required by frame length

   Values to be used in the Total Length and UDP message length
   fields (the UDP message length is always the Total Length minus
   the 20-octet IP header):

   frame size   total length   UDP message length
        64         00 2E            00 1A
       128         00 6E            00 5A
       256         00 EE            00 DA
       512         01 EE            01 DA
       768         02 EE            02 DA
      1024         03 EE            03 DA
      1280         04 EE            04 DA
      1518         05 DC            05 C8
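The Total Length and UDP message length values can be derived from the frame size rather than tabulated. A sketch of the arithmetic: the IP Total Length is the frame size minus the 18 octets of MAC header and FCS, and the UDP message length is the Total Length minus the 20-octet IP header (so a 64-octet frame gives 46 = 0x2E and 26 = 0x1A, and a 256-octet frame gives 0xEE and 0xDA).

```python
def length_fields(frame_size):
    """Derive the IP Total Length and UDP message length for a test
    frame of the given Ethernet frame size (FCS included):
    Total Length = frame size - 14 (MAC header) - 4 (FCS),
    UDP length   = Total Length - 20 (IP header)."""
    total_length = frame_size - 18
    udp_length = total_length - 20
    return total_length, udp_length
```

Computing the fields this way also extends naturally to frame sizes not listed in the table.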