Network Working Group                                        Hardev Soor
INTERNET-DRAFT                                               Debra Stopp
Expires in: December 1999                            Ixia Communications

                                                           Ralph Daniels
                                                          Netcom Systems
                                                               June 1999

              Methodology for IP Multicast Benchmarking
                    <draft-ietf-bmwg-mcastm-01.txt>

Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC 2026.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups.  Note that
other groups may also distribute working documents as Internet-
Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

The purpose of this draft is to describe methodology specific to the
benchmarking of multicast IP forwarding devices.  It builds upon the
tenets set forth in RFC 2544, RFC 2432 and other IETF Benchmarking
Methodology Working Group (BMWG) efforts.  This document seeks to
extend these efforts to the multicast paradigm.

The BMWG produces two major classes of documents: Benchmarking
Terminology documents and Benchmarking Methodology documents.  The
Terminology documents present the benchmarks and other related terms.
The Methodology documents define the procedures required to collect
the benchmarks cited in the corresponding Terminology documents.
1. Introduction

This document defines a specific set of tests that vendors can use to
measure and report the performance characteristics and forwarding
capabilities of network devices that support IP multicast protocols.
The results of these tests will provide the user with comparable data
from different vendors with which to evaluate these devices.

A previous document, "Terminology for IP Multicast Benchmarking"
(RFC 2432), defined many of the terms that are used in this document.
The terminology document should be consulted before attempting to
make use of this document.

This methodology focuses on one source to many destinations, although
many of the tests described may be extended to multiple-source to
multiple-destination IP multicast communication.

2. Key Words to Reflect Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

3. Test Set Up

Figure 1 shows a typical setup for an IP multicast test, with one
source to multiple destinations, although this MAY be extended to
multiple sources to multiple destinations.

                      +------------+      +----------------+
   +--------+         |            |----->|                |
   |        |         |            |      | destination(1) |
   | source |-------->|            |      |                |
   |        |         |            |      +----------------+
   +--------+         |   D U T    |      +----------------+
                      |            |----->|                |
                      |            |      | destination(2) |
                      |            |      |                |
                      |            |      +----------------+
                      |            |            . . .
                      |            |      +----------------+
                      |            |      |                |
                      |            |----->| destination(n) |
                      |            |      |                |
                      |            |      +----------------+
                      |            |
                      +------------+

                              Figure 1

Generally, the destination ports first join the desired number of
multicast groups by sending IGMP Join Group messages to the DUT/SUT.
To verify that all destination ports successfully joined the
appropriate groups, the source port MUST transmit IP multicast frames
destined for these groups.  The destination ports MAY send IGMP Leave
Group messages after the transmission of IP multicast frames to clear
the IGMP table of the DUT/SUT.

In addition, all transmitted frames MUST contain a recognizable
pattern that can be filtered on in order to ensure the receipt of
only the frames that are involved in the test.

3.1 Test Considerations

3.1.1 IGMP Support

Each of the receiving ports should support, and be able to test with,
both IGMP version 1 and IGMP version 2.

Each receiving port should be able to respond to IGMP queries during
the test.

Each receiving port should also send an IGMP Leave Group message
(when running IGMP version 2) after each test.

3.1.2 Group Addresses

The Class D group address should be changed between tests.  Many DUTs
have memory or cache that is not cleared properly and can bias the
results.

The following group address ranges are recommended for use in a test:

   224.0.1.27  - 224.0.1.255
   224.0.5.128 - 224.0.5.255
   224.0.6.128 - 224.0.6.255

If the number of group addresses accommodated by these ranges does
not satisfy the requirements of the test, then these ranges may be
overlapped.
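As an illustration (not part of the methodology), the following
Python sketch enumerates the recommended ranges and hands out a fresh
set of group addresses for each test, wrapping around when the ranges
are exhausted.  The helper names are arbitrary choices of the sketch.

import ipaddress
from itertools import cycle

# Recommended group address ranges from Section 3.1.2 (inclusive).
RANGES = [
    ("224.0.1.27",  "224.0.1.255"),
    ("224.0.5.128", "224.0.5.255"),
    ("224.0.6.128", "224.0.6.255"),
]

def group_addresses():
    """Yield every Class D address in the recommended ranges, in order."""
    for first, last in RANGES:
        lo = int(ipaddress.IPv4Address(first))
        hi = int(ipaddress.IPv4Address(last))
        for value in range(lo, hi + 1):
            yield ipaddress.IPv4Address(value)

# Rotate through the pool so consecutive tests use different addresses;
# wrapping around corresponds to "overlapping" the ranges.
pool = cycle(group_addresses())

def next_groups(count):
    """Return 'count' fresh group addresses for the next test."""
    return [str(next(pool)) for _ in range(count)]

if __name__ == "__main__":
    print(next_groups(3))   # e.g. three groups for the first trial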
3.1.3 Frame Sizes

Each test should be run with different multicast frame sizes.  The
recommended frame sizes are 64, 128, 256, 512, 1024, 1280, and 1518
byte frames.

3.1.4 TTL

The source frames should have a TTL value large enough to accommodate
the DUT/SUT.

4. Forwarding and Throughput

This section contains the description of the tests that are related
to the characterization of the packet forwarding of a DUT/SUT in a
multicast environment.  Some metrics extend the concept of throughput
presented in RFC 1242.  The notion of Forwarding Rate is cited in
RFC 2285.

4.1 Mixed Class Throughput

Definition

The maximum rate at which none of the offered frames, comprised from
a unicast Class and a multicast Class, to be forwarded are dropped by
the device across a fixed number of ports.

Procedure

Multicast and unicast traffic are mixed together in the same
aggregated traffic stream in order to simulate a non-homogeneous
networking environment.  While the multicast traffic is transmitted
from one source to multiple destinations, the unicast traffic MAY be
evenly distributed across the DUT/SUT architecture.  In addition, the
DUT/SUT SHOULD learn the appropriate unicast IP addresses, either by
sending ARP frames from each unicast address, sending a RIP packet,
or by assigning static entries into the DUT/SUT address table.

The rates at which traffic is transmitted for both traffic classes
MUST be set up in one of two ways (a calculation sketch for both
follows this procedure):

a) A percentage of the bandwidth is allocated for each traffic class,
   and frames for each class are transmitted at the rate equal to the
   allocated bandwidth.  For example, 64 byte frames can be
   transmitted at a theoretical maximum rate of 148810 frames/second.
   If 80 percent of the bandwidth is allocated for unicast traffic
   and 20 percent for multicast traffic, then unicast traffic will be
   sent at a maximum rate of 119048 frames/second and the multicast
   traffic at a rate of 29762 frames/second.

b) The transmission rate is fixed for both traffic classes and a
   percentage of the number of frames for each traffic class is
   specified.  For example, if a fixed rate of 100% of the
   theoretical maximum is desired, then 64 byte frames will be sent
   at 148810 frames/second for both unicast and multicast traffic.
   If 80 percent of the frames are to be unicast and 20 percent
   multicast, then for a duration of 10 seconds, 1190480 frames of
   unicast and 297620 frames of multicast will be sent.  This fixed
   rate scenario actually over-subscribes the bandwidth, potentially
   causing congestion in the DUT/SUT.

The transmission of the frames MUST be set up so that they form a
deterministic distribution while still maintaining the specified
bandwidth and transmission rates.  See Appendix A for a discussion on
determining an even distribution.

Similar to the Frame loss rate test in RFC 2544, the first trial
SHOULD be run at the frame rate that corresponds to 100% of the
maximum rate for the frame size on the input media.  Repeat the
procedure for the rate that corresponds to 90% of the maximum rate
used and then for 80% of this rate.  This sequence SHOULD be
continued (at reducing 10% intervals) until there are two successive
trials in which no frames are lost.  The maximum granularity of the
trials MUST be 10% of the maximum rate; a finer granularity is
encouraged.
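The following Python sketch, illustrative only, reproduces the two
rate set-up calculations above for 64-byte frames on 100 Mbps
Ethernet (the medium implied by the 148810 frames/second figure).
The function names and the rounding to whole frames are choices of
this sketch, not requirements of the methodology.

# Illustrative only: the two rate set-up methods of Section 4.1.
LINE_RATE_BPS = 100_000_000          # 100 Mbps Ethernet (assumption)
OVERHEAD_BYTES = 20                  # preamble (8) + inter-frame gap (12)

def max_frame_rate(frame_size):
    """Theoretical maximum frames/second for a given frame size."""
    return round(LINE_RATE_BPS / (8 * (frame_size + OVERHEAD_BYTES)))

def method_a(frame_size, unicast_pct, multicast_pct):
    """Split the bandwidth between the classes (Section 4.1, option a)."""
    maximum = max_frame_rate(frame_size)
    return (maximum * unicast_pct // 100,      # unicast frames/second
            maximum * multicast_pct // 100)    # multicast frames/second

def method_b(frame_size, rate_pct, unicast_pct, multicast_pct, duration_s):
    """Fix the rate and split the frame counts (Section 4.1, option b)."""
    total = max_frame_rate(frame_size) * rate_pct // 100 * duration_s
    return (total * unicast_pct // 100,        # unicast frames to send
            total * multicast_pct // 100)      # multicast frames to send

print(max_frame_rate(64))            # 148810 frames/second
print(method_a(64, 80, 20))          # (119048, 29762)
print(method_b(64, 100, 80, 20, 10)) # (1190480, 297620)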
Result

Transmit and receive rates, in frames per second, for each source and
destination port for both unicast and multicast traffic, for each
offered transmit rate in the trial sequence.  The ratio of the
unicast traffic versus multicast traffic SHOULD be reported.  The
result report SHOULD contain the number of frames transmitted and
received per port per class type (unicast and multicast traffic),
reported in number of frames and percent loss per port.

4.2 Scaled Group Forwarding Matrix

Definition:

A table that demonstrates Forwarding Rate as a function of tested
multicast groups for a fixed number of tested DUT/SUT ports.

Procedure:

Multicast traffic is sent at a fixed percent of line rate with a
fixed number of receive ports at a fixed frame length.

The receive ports will join an initial number of groups and the
sender will transmit to the same groups after a certain delay (a few
seconds).

Then the receive ports will join an incremental number of groups and
the transmit port will send to all groups joined (initial plus
incremental).

The receive ports will continue joining in this incremental fashion
until a user-defined maximum is reached.

Results:

For each group load the result WILL display frame rate, frames
transmitted, total frames received, total frame loss, and percent
loss.  The frame loss per receive port per group SHOULD also be
available.

4.3 Aggregated Multicast Throughput

Definition:

The maximum rate at which none of the offered frames to be forwarded
through N destination interfaces of the same multicast group are
dropped.

Procedure:

Multicast traffic is sent at a fixed percent of line rate with a
fixed number of groups at a fixed frame length for a fixed duration
of time.

The initial number of receive ports will join the group(s) and the
sender will transmit to the same groups after a certain delay (a few
seconds).

Then an incremental or decremental number of receive ports will join
the same groups, and the multicast traffic is sent as stated.

Receive ports will continue to be added or deleted and the multicast
traffic sent until a user-defined maximum number of ports is reached.

Results:

For each number of receive ports the result WILL display frame rate,
frames transmitted, total frames received, total frame loss, and
percent loss.  The frame loss per receive port per group SHOULD also
be available.
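For illustration, the sketch below shows the shape of the incremental
scaling loop shared by Sections 4.2 and 4.3.  The run_trial() hook is
hypothetical and stands in for whatever mechanism the tester uses to
offer traffic and count received frames; only the bookkeeping is
shown.

# Illustrative only: the incremental scaling loop of Section 4.2.
import time

def scaled_group_matrix(run_trial, initial, increment, maximum,
                        join_delay_s=2.0):
    """Grow the number of groups with a fixed set of receive ports."""
    results = []
    groups = initial
    while groups <= maximum:
        # Receive ports join 'groups' groups, then wait before sending.
        time.sleep(join_delay_s)
        sent, received = run_trial(num_groups=groups)
        loss = sent - received
        results.append({
            "groups": groups,
            "frames_tx": sent,
            "frames_rx": received,
            "frame_loss": loss,
            "percent_loss": 100.0 * loss / sent if sent else 0.0,
        })
        groups += increment
    return results

The Section 4.3 loop has the same structure, except that the stepped
variable is the number of receive ports rather than the number of
groups.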
4.4 Encapsulation (Tunneling) Throughput

This sub-section provides the description of tests that help in
obtaining throughput measurements when a DUT/SUT or a set of DUTs is
acting as tunnel endpoints.  Figure 2 presents the scenario for these
tests.

   Client A       DUT/SUT A     Network     DUT/SUT B       Client B

                  ----------                ----------
                  |        |     ------     |        |
   -------(a)  (b)|        |(c) (      ) (d)|        |(e)  (f)-------
   ||||||| -----> |        |--->(      )--->|        |-----> |||||||
   -------        |        |     ------     |        |       -------
                  |        |                |        |
                  ----------                ----------

                              Figure 2

A tunnel is created between DUT/SUT A (the encapsulator) and DUT/SUT
B (the decapsulator).  Client A acts as the source and Client B is
the destination.  Client B joins a multicast group (for example,
224.0.1.1) by sending an IGMP Join message to DUT/SUT B.  When Client
A wants to transmit traffic to Client B, it sends the multicast
traffic to DUT/SUT A, which encapsulates the multicast frames and
sends them to DUT/SUT B; DUT/SUT B decapsulates the frames and
forwards them to Client B.

4.4.1 Encapsulation Throughput

Definition

The maximum rate at which frames offered to a DUT/SUT are
encapsulated and correctly forwarded by the DUT/SUT without loss.

Procedure

To test the forwarding rate of the DUT/SUT when it has to go through
the process of encapsulation, a test port B is placed at the other
end of DUT/SUT A (Figure 3) to receive the encapsulated frames and
measure the throughput.  A test port A is used to generate the
multicast frames that will be passed through the tunnel.

The following is the test setup:

   Test port A       DUT/SUT A              Test port B

                     ---------- (c')    (d')---------
                     |        |------------>|       |
   -------(a)     (b)|        |             |       |
   ||||||| --------> |        |    ------   ---------
   -------           |        |(c)( N/W  )
                     |        |-->(      )
                     ----------    ------

                              Figure 3

In Figure 2, a tunnel is created with the local IP address of DUT/SUT
A as the beginning of the tunnel (point c) and the IP address of
DUT/SUT B as the end of the tunnel (point d).  DUT/SUT B is assumed
to have the tunneling protocol enabled so that the frames can be
decapsulated.  When test port B is inserted between DUT/SUT A and
DUT/SUT B (Figure 3), the endpoint of the tunnel has to be
re-configured to point to test port B's IP address.  For example, in
Figure 3, point c' would be assigned as the beginning of the tunnel
and point d' as the end of the tunnel.  Test port B acts as the end
of the tunnel, and it does not have to support any tunneling protocol
since the frames do not have to be decapsulated.  Instead, the
received encapsulated frames are used to calculate the throughput and
other necessary measurements.

Result

Throughput in frames per second for each destination port.  The
results should also contain the number of frames transmitted and
received per port.
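Note that the frames on the tunnel segment carry an extra outer
header, so that segment saturates at a lower native frame rate than
the input media.  The sketch below estimates the highest native rate
that does not over-subscribe the tunnel segment, assuming a 20-byte
outer IPv4 header (an IP-in-IP style tunnel); this document does not
mandate any particular tunneling protocol, so the header size is
purely an assumption of the sketch.

# Illustrative only: headroom estimate for the tunnel segment.
LINE_RATE_BPS = 100_000_000   # 100 Mbps media on the tunnel segment
ETH_OVERHEAD = 20             # preamble (8) + inter-frame gap (12)
TUNNEL_HEADER = 20            # outer IPv4 header (assumed IP-in-IP)

def max_rate(frame_size):
    """Theoretical maximum frames/second for a given frame size."""
    return LINE_RATE_BPS / (8 * (frame_size + ETH_OVERHEAD))

def max_offered_native_rate(native_frame_size):
    """Highest native multicast rate whose encapsulated form still
    fits on the tunnel segment without over-subscribing it."""
    encapsulated_size = native_frame_size + TUNNEL_HEADER
    return min(max_rate(native_frame_size), max_rate(encapsulated_size))

print(round(max_offered_native_rate(64)))   # ~120192 fps for 64-byte frames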
4.4.2 Decapsulation Throughput

Definition

The maximum rate at which frames offered to a DUT/SUT are
decapsulated and correctly forwarded by the DUT/SUT without loss.

Procedure

The decapsulation process returns the tunneled unicast frames back to
their multicast format.  This test measures the throughput of the
DUT/SUT when it has to perform the process of decapsulation;
therefore, a test port C is used at the end of the tunnel to receive
the decapsulated frames (Figure 4).

   Test port A    DUT/SUT A    Test port B    DUT/SUT B    Test port C

                  ----------                  ----------
                  |        |                  |        |
   -------(a)  (b)|        |(c)   ------   (d)|        |(e)  (f)-------
   ||||||| -----> |        |----> |||||| ---->|        |-----> |||||||
   -------        |        |      ------      |        |       -------
                  |        |                  |        |
                  ----------                  ----------

                              Figure 4

In Figure 4, the encapsulation process takes place in DUT/SUT A.
This may affect the throughput of DUT/SUT B.  Therefore, two test
ports should be used to separate the encapsulation and decapsulation
processes.  Client A is replaced with test port A, which generates
the multicast frames that will be encapsulated by DUT/SUT A.  Another
test port B is inserted between DUT/SUT A and DUT/SUT B to receive
the encapsulated frames and forward them to DUT/SUT B.  Test port C
receives the decapsulated frames and measures the throughput.

Result

Throughput in frames per second for each destination port.  The
results should also contain the number of frames transmitted and
received per port.

4.4.3 Re-encapsulation Throughput

Definition

The maximum rate at which frames of one encapsulated format offered
to a DUT/SUT are converted to another encapsulated format and
correctly forwarded by the DUT/SUT without loss.

Procedure

Re-encapsulation takes place in DUT/SUT B after test port C has
received the decapsulated frames.  These decapsulated frames are
re-encapsulated in a new encapsulation format and sent back to test
port B, which measures the throughput.  See Figure 5.

   Test port A    DUT/SUT A    Test port B    DUT/SUT B    Test port C

                  ----------                  ----------
                  |        |                  |        |
   -------(a)  (b)|        |(c)   ------   (d)|        |(e)  (f)-------
   ||||||| -----> |        |----> |||||| <--->|        |<----> |||||||
   -------        |        |      ------      |        |       -------
                  |        |                  |        |
                  ----------                  ----------

                              Figure 5

Result

Throughput in frames per second for each destination port.  The
results should also contain the number of frames transmitted and
received per port.

5. Forwarding Latency

This section presents methodologies relating to the characterization
of the forwarding latency of a DUT/SUT in a multicast environment.
It extends the concept of latency characterization presented in
RFC 2544.

5.1 Multicast Latency

Definition

The set of individual latencies from a single input port on the DUT
or SUT to all tested ports belonging to the destination multicast
group.

Procedure

As in RFC 2544, a tagged frame containing a timestamp is sent halfway
through the transmission and is used to calculate latency.  In the
multicast situation, a tagged frame is sent to all destinations for
each multicast group, and latency is calculated on a per-multicast-
group basis.  Note that this test MUST be run using a transmission
rate that is less than the multicast throughput of the DUT/SUT.

Result

The latency value for each multicast group address per port.  An
aggregate latency MAY also be reported.

5.2 Min/Max/Average Multicast Latency

Definition:

The difference between the maximum latency measurement and the
minimum latency measurement from the set of latencies produced by the
Multicast Latency benchmark.

Procedure:

For the entire duration of the latency test, the smallest latency,
the largest latency, the sum of the latencies, and the number of
latency samples should be tracked per receive port.

The test can also increment bucket counters, each representing a
latency range, which can be used to create a histogram.  From the
histogram and the minimum, maximum, and average values, the test
results can show the jitter.

Results:

For each port the results WILL display the number of frames, the
minimum latency, the maximum latency, and the average latency.  The
results SHOULD also display the histogram of latencies.
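As an illustration of the bookkeeping described above, the sketch
below computes the per-port minimum, maximum, and average latency
plus a fixed-width histogram from a list of per-frame latencies.  The
unit and the bucket width are arbitrary choices of the sketch.

# Illustrative only: Section 5.2 statistics for one receive port.
def latency_summary(latencies_us, bucket_width_us=10.0):
    if not latencies_us:
        return None
    smallest = min(latencies_us)
    largest = max(latencies_us)
    average = sum(latencies_us) / len(latencies_us)
    histogram = {}                      # bucket start -> frame count
    for value in latencies_us:
        bucket = int(value // bucket_width_us) * bucket_width_us
        histogram[bucket] = histogram.get(bucket, 0) + 1
    return {
        "frames": len(latencies_us),
        "min": smallest,
        "max": largest,
        "avg": average,
        "jitter": largest - smallest,   # the min/max difference of 5.2
        "histogram": dict(sorted(histogram.items())),
    }

print(latency_summary([102.4, 98.7, 110.3, 99.1]))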
6. Overhead

This section presents methodology relating to the characterization of
the overhead delays associated with explicit operations found in
multicast environments.

6.1 Group Join Delay

Definition:

The time duration it takes a DUT/SUT to start forwarding multicast
packets from the time a successful IGMP group membership report has
been issued to the DUT/SUT.

Procedure:

Traffic is sent on the source port at the same time as the IGMP Join
Group message is transmitted from the destination ports.  The join
delay is the difference in time between when the IGMP Join is sent
and when the first frame is received.

One key is to transmit at the fastest rate at which the DUT/SUT can
handle multicast frames, in order to get the best resolution in the
join delay measurement.  However, the frames should not be
transmitted so fast that they are dropped by the DUT/SUT.  Traffic
should be sent at the throughput rate determined by the forwarding
tests of Section 4.

Results:

The join delay for each port.  The error or granularity of the
timestamp should be reported; this granularity may be within 20
nanoseconds of the result.

6.2 Group Leave Delay

Definition

The time duration it takes a DUT/SUT to cease forwarding multicast
packets after a corresponding IGMP "Leave Group" message has been
successfully offered to the DUT/SUT.

Procedure

Traffic is sent on the source port at the same time as the IGMP Leave
Group messages are transmitted from the destination ports.  The
frames on both the source and destination ports are sent with
timestamps inserted.  The Group Leave Delay is the difference between
timestamp A of the first IGMP Leave Group frame sent and timestamp B
of the last frame received on that destination port:

   Group Leave Delay = timestamp B - timestamp A

Traffic should be sent at the throughput rate determined by the
forwarding tests of Section 4.

Result

Group Leave Delay values for each multicast group address on each
destination port.  Also, the number of frames transmitted and
received, and the percent loss, may be displayed.

7. Capacity

This section presents methodology relating to the identification of
the multicast group limits of a DUT/SUT.

7.1 Multicast Group Capacity

Definition:

The maximum number of multicast groups a DUT/SUT can support while
maintaining the ability to forward multicast frames to all multicast
groups registered to that DUT/SUT.

Procedure:

One or more receiving ports will join an initial number of groups.
Then, after a delay, the source port will transmit to each group at a
transmission rate that the DUT/SUT can handle.  If all frames sent
are forwarded and received, the receiving ports will join an
incremental number of groups.  Then, after a delay, the source port
will transmit to all groups at a transmission rate that the DUT/SUT
can handle.  If all frames sent are forwarded and received, the
receiving ports will continue joining and testing until a frame fails
to be forwarded and received.

The group capacity resolution will be the incremental value, so the
capacity could be greater than the last group count that passed but
less than the one that failed.

Once a capacity is determined, the test should be re-run with greater
delays after the join and a slower transmission rate.  The initial
group count should be raised to about five less than the previously
determined capacity, and the incremental value should be set to one.
A sketch of this search appears below.

Results:

The number of groups that passed versus the number of groups that
failed.  When frames fail to be forwarded, the results SHOULD give
details about how many frames did and did not get forwarded, and
which groups did and did not get forwarded.  Also, the frame rate MAY
be reported.
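The following sketch, illustrative only, shows one way to drive the
coarse-then-fine capacity search described in the procedure above.
The trial_passes() hook is hypothetical: it is assumed to join the
given number of groups, transmit to all of them at a rate the DUT/SUT
can handle, and report whether every frame was forwarded and
received.

# Illustrative only: Section 7.1 group capacity search.
def group_capacity(trial_passes, initial, increment):
    # Coarse pass: grow the group count by 'increment' until a failure.
    groups = initial
    last_passed = None
    while trial_passes(groups):
        last_passed = groups
        groups += increment
    if last_passed is None:
        return 0          # even the initial group count failed

    # Fine pass: restart just below the coarse result with increment 1
    # (the document also suggests longer join delays and a slower rate).
    groups = max(1, last_passed - 5)
    capacity = 0
    while trial_passes(groups):
        capacity = groups
        groups += 1
    # If even the restart point fails, 0 is returned; a real harness
    # would refine the search further from a lower starting point.
    return capacity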
Appendix A: Determining an Even Distribution

A.1 Scope of This Appendix

This appendix discusses the suggested approach to configuring the
deterministic distribution methodology for tests that involve both
multicast and unicast traffic classes in an aggregated traffic
stream.  As such, this appendix MUST NOT be read as an amendment to
the methodology described in the body of this document, but as a
guide to testing practice.

It is important to understand and fully define the distribution of
frames among all multicast and unicast destinations.  If the
distribution is not well defined or understood, the throughput and
forwarding metrics are not meaningful.

In a homogeneous environment, a large, single burst of multicast
frames may be followed by a large burst of unicast frames.  This is a
very different distribution than that of a non-homogeneous
environment, where the multicast and unicast frames are intermingled
throughout the entire transmission.

The recommended distribution is that of the non-homogeneous
environment because it more closely represents a real-world scenario.
The distribution is modeled by calculating the number of multicast
frames per destination port as a burst, then calculating the number
of unicast frames to transmit as a percentage of the total frames
transmitted.  The overall effect of the distribution is small bursts
of multicast frames intermingled with small bursts of unicast frames.

Example

This example illustrates the distribution algorithm for a 100 Mbps
rate.  A sketch that reproduces these calculations follows the
example.

   Frame size = 64
   Duration of test = 10 seconds
   Transmission rate = 100% of maximum rate
   Mapping for unicast traffic:   Port 1 to Port 2
                                  Port 3 to Port 4

   Mapping for multicast traffic: Port 1 to Ports 2, 3, 4
   Number of multicast group addresses per destination port = 3
   Multicast groups joined by Port 2: 224.0.1.27
                                      224.0.1.28
                                      224.0.1.29
   Multicast groups joined by Port 3: 224.0.1.30
                                      224.0.1.31
                                      224.0.1.32
   Multicast groups joined by Port 4: 224.0.1.33
                                      224.0.1.34
                                      224.0.1.35

   Percentage of unicast frames   = 20
   Percentage of multicast frames = 80
   Total number of frames to be transmitted = 148810 fps * 10 sec
                                            = 1488100 frames
   Number of unicast frames   = 20/100 * 1488100 = 297620 frames
   Number of multicast frames = 80/100 * 1488100 = 1190480 frames

   Unicast burst size   = 20 * 9 = 180 frames
   Multicast burst size = 80 * 9 = 720 frames
   (one unicast burst plus one multicast burst = 900 frames per cycle)
   Loop counter = 1488100 / 900 = 1653.4444 (rounded off to 1653)

Therefore, the actual number of frames that will be transmitted is:

   Unicast frames   = 1653 * 180 = 297540 frames
   Multicast frames = 1653 * 720 = 1190160 frames

The following pattern will be established:

   UUUMMMMMMMMMMMMUUUMMMMMMMMMMMMUUUMMMMMMMMMMMMUUUMMMMMMMMMMMM

   where U represents 60 unicast frames   (UUU          = 180 frames)
         M represents 60 multicast frames (MMMMMMMMMMMM = 720 frames)
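The burst calculation in the example can be reproduced with the short
sketch below (illustrative only); the scale factor of 9 frames per
percentage point is taken from the example itself rather than
mandated anywhere in this document.

# Illustrative only: the Appendix A burst calculation.
FRAMES_PER_PERCENT = 9        # burst scaling used in the example above

def burst_distribution(max_rate_fps, duration_s, unicast_pct, multicast_pct):
    total_frames = max_rate_fps * duration_s
    unicast_burst = unicast_pct * FRAMES_PER_PERCENT      # e.g. 180
    multicast_burst = multicast_pct * FRAMES_PER_PERCENT  # e.g. 720
    cycle = unicast_burst + multicast_burst               # e.g. 900
    loops = round(total_frames / cycle)                   # e.g. 1653
    return {
        "unicast_burst": unicast_burst,
        "multicast_burst": multicast_burst,
        "loops": loops,
        "unicast_frames": loops * unicast_burst,          # e.g. 297540
        "multicast_frames": loops * multicast_burst,      # e.g. 1190160
    }

print(burst_distribution(148810, 10, 20, 80))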
"Use of Keywords in RFCs to Reflect Requirement 665 Levels, RFC 2119, March 1997 667 [Du98] Dubray, K., "Terminology for IP Multicast Benchmarking", 668 RFC 2432, October 1998. 670 [Hu95] Huitema, C. "Routing in the Internet." Prentice-Hall, 1995. 672 [Ka98] Kosiur, D., "IP Multicasting: the Complete Guide to Interactive 673 Corporate Networks", John Wiley & Sons, Inc, 1998. 675 [Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching 676 Devices", RFC 2285, February 1998. 678 [Mt98] Maufer, T. "Deploying IP Multicast in the Enterprise." 679 Prentice-Hall, 1998. 681 [Se98] Semeria, C. and Maufer, T. "Introduction to IP Multicast 682 Routing." http://www.3com.com/nsc/501303.html 3Com Corp., 683 1998. 685 6. Author's Address 687 Hardev Soor 688 Ixia Communications 689 4505 Las Virgenes Road, Suite 209 690 Calabasas, CA 91302 691 USA 693 Phone: 818 871 1800 694 EMail: hardev@ixiacom.com 696 Debra Stopp 697 Ixia Communications 698 4505 Las Virgenes Road, Suite 209 699 Calabasas, CA 91302 700 USA 702 Phone: 818 871 1800 703 EMail: debby@ixiacom.com 705 Ralph Daniels 706 Netcom Systems 707 948 Loop Road 708 Clayton, NC 27520 709 USA 711 Phone: 919 550 9475 712 EMail: Ralph_Daniels@NetcomSystems.com