Network Working Group                                     Debra Stopp
                                                          Hardev Soor
INTERNET-DRAFT                                                   IXIA
Expires in: August 2001

              Methodology for IP Multicast Benchmarking

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2001).  All Rights Reserved.
Abstract

   The purpose of this draft is to describe methodology specific to
   the benchmarking of multicast IP forwarding devices.  It builds
   upon the tenets set forth in RFC 2544, RFC 2432 and other IETF
   Benchmarking Methodology Working Group (BMWG) efforts.  This
   document seeks to extend these efforts to the multicast paradigm.

   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related
   terms.  The Methodology documents define the procedures required
   to collect the benchmarks cited in the corresponding Terminology
   documents.

Table of Contents

   1. INTRODUCTION...................................................3
   2. KEY WORDS TO REFLECT REQUIREMENTS..............................3
   3. TEST SET UP....................................................3
   3.1. Test Considerations..........................................4
   3.1.1. IGMP Support...............................................4
   3.1.2. Group Addresses............................................5
   3.1.3. Frame Sizes................................................5
   3.1.4. TTL........................................................5
   3.2. Layer 2 Support..............................................5
   4. FORWARDING AND THROUGHPUT......................................5
   4.1. Mixed Class Throughput.......................................6
   4.2. Scaled Group Forwarding Matrix...............................7
   4.3. Aggregated Multicast Throughput..............................7
   4.4. Encapsulation/Decapsulation (Tunneling) Throughput...........8
   4.4.1. Encapsulation Throughput...................................9
   4.4.2. Decapsulation Throughput...................................9
   4.4.3. Re-encapsulation Throughput...............................10
   5. FORWARDING LATENCY............................................10
   5.1. Multicast Latency...........................................11
   5.2. Min/Max Multicast Latency...................................13
   6. OVERHEAD......................................................14
   6.1. Group Join Delay............................................14
   6.2. Group Leave Delay...........................................15
   7. CAPACITY......................................................16
   7.1. Multicast Group Capacity....................................16
   8. INTERACTION...................................................16
   8.1. Forwarding Burdened Multicast Latency.......................17
   8.2. Forwarding Burdened Group Join Delay........................17
   9. SECURITY CONSIDERATIONS.......................................17
   10. ACKNOWLEDGEMENTS.............................................17
   11. REFERENCES...................................................18
   12. AUTHOR'S ADDRESSES...........................................19
   13. FULL COPYRIGHT STATEMENT.....................................19
   APPENDIX A: DETERMINING AN EVEN DISTRIBUTION.....................20

1. Introduction

   This document defines a specific set of tests that vendors can use
   to measure and report the performance characteristics and
   forwarding capabilities of network devices that support IP
   multicast protocols.  The results of these tests will provide the
   user with comparable data from different vendors with which to
   evaluate these devices.

   A previous document, "Terminology for IP Multicast Benchmarking"
   (RFC 2432), defined many of the terms that are used in this
   document.  The terminology document should be consulted before
   attempting to make use of this document.

   This methodology will focus on one source to many destinations,
   although many of the tests described may be extended to use
   multiple-source to multiple-destination IP multicast
   communication.

2. Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119.

3. Test set up

   Figure 1 shows a typical setup for an IP multicast test, with one
   source to multiple destinations, although this MAY be extended to
   multiple sources to multiple destinations.

                 +------------+       +----------------+
                 |        (-)-------->|     Egress     |
   +--------+    |            |       | destination(E1)|
   |        |    |            |       +----------------+
   | source |--->(|)Ingress   |       +----------------+
   |        |    |            |       |     Egress     |
   +--------+    |  D U T (-)-------->| destination(E2)|
                 |            |       +----------------+
                 |            |             . . .
                 |            |       +----------------+
                 |            |       |     Egress     |
                 |        (-)-------->| destination(En)|
                 +------------+       +----------------+

                              Figure 1
                              ---------

   If the multicast metrics are to be taken across multiple devices
   forming a System Under Test (SUT), then test packets are offered
   to a single ingress interface on a device of the SUT, subsequently
   routed across the SUT topology, and finally forwarded to the test
   apparatus' packet-receiving components by the test egress
   interface(s) of devices in the SUT.  Figure 2 offers an example
   SUT test topology.  If a SUT is tested, the details of the test
   topology MUST be disclosed with the corresponding test results.

   +--------+                   +----------------+      +--------+
   |        |   +------------+  |DUT B Egress E0(-)---->|        |
   |        |   |DUT A       |->|                |      |        |
   | Test   |   |            |  |      Egress E1(-)---->| Test   |
   | App.   |-->(-)Ingress, I|  +----------------+      | App.   |
   | Traffic|   |            |  +----------------+      | Traffic|
   | Src.   |   |            |->|DUT C Egress E2(-)---->| Dest.  |
   |        |   +------------+  |                |      |        |
   |        |                   |      Egress En(-)---->|        |
   +--------+                   +----------------+      +--------+

                              Figure 2
                              ---------

   Generally, the destination ports first join the desired number of
   multicast groups by sending IGMP Join Group messages to the
   DUT/SUT.  To verify that all destination ports successfully joined
   the appropriate groups, the source port MUST transmit IP multicast
   frames destined for these groups.  The destination ports MAY send
   IGMP Leave Group messages after the transmission of IP multicast
   frames to clear the IGMP table of the DUT/SUT.

   In addition, all transmitted frames MUST contain a recognizable
   pattern that can be filtered on in order to ensure the receipt of
   only the frames that are involved in the test.

3.1. Test Considerations

   The procedures outlined below are written without regard for
   specific physical layer or link layer protocols.  The methodology
   further assumes a uniform medium topology.  Issues regarding mixed
   transmission media, such as speed mismatches, header differences,
   etc., are not specifically addressed.  Moreover, no provisions are
   made for traffic-affecting factors, such as congestion control or
   service differentiation mechanisms.  Modifications to the
   specified collection procedures might need to be made to
   accommodate the transmission media actually tested.  These
   accommodations MUST be presented with the test results.

3.1.1. IGMP Support

   Each of the destination ports should support, and be able to test,
   IGMP versions 1, 2 and 3.  The minimum requirement, however, is
   IGMP version 2.

   Each destination port should be able to respond to IGMP queries
   during the test.

   Each destination port should also send a Leave Group message (when
   running IGMP version 2) after each test.

3.1.2. Group Addresses

   The Class D Group address SHOULD be changed between tests.
   Many DUTs have memory or cache that is not cleared properly and
   can bias the results.

   The following group addresses are recommended for use in a test:

      224.0.1.27  - 224.0.1.255
      224.0.5.128 - 224.0.5.255
      224.0.6.128 - 224.0.6.255

   If the number of group addresses accommodated by these ranges does
   not satisfy the requirements of the test, then these ranges may be
   overlapped.  The total number of configured group addresses must
   be less than or equal to the IGMP table size of the DUT/SUT.

3.1.3. Frame Sizes

   Each test SHOULD be run with different multicast frame sizes.  The
   recommended frame sizes are 64, 128, 256, 512, 1024, 1280, and
   1518 byte frames.

3.1.4. TTL

   The source frames should have a TTL value large enough to
   accommodate the DUT/SUT.

3.2. Layer 2 Support

   Each of the destination ports should support the GARP/GMRP
   protocols in order to join groups on Layer 2 DUTs/SUTs.

4. Forwarding and Throughput

   This section contains the description of the tests that are
   related to the characterization of the packet forwarding of a
   DUT/SUT in a multicast environment.  Some metrics extend the
   concept of throughput presented in RFC 1242.  The notion of
   Forwarding Rate is cited in RFC 2285.

4.1. Mixed Class Throughput

   Objective

   To determine the maximum throughput rate at which none of the
   offered frames to be forwarded, composed of a unicast class and a
   multicast class, are dropped by the device across a fixed number
   of ports as defined in RFC 2432.

   Procedure

   Multicast and unicast traffic are mixed together in the same
   aggregated traffic stream in order to simulate a non-homogenous
   networking environment.  While the multicast traffic is
   transmitted from one source to multiple destinations, the unicast
   traffic MAY be evenly distributed across the DUT/SUT architecture.
   In addition, the DUT/SUT MUST learn the appropriate unicast IP
   addresses, either by sending ARP frames from each unicast address,
   by sending a RIP packet, or by assigning static entries in the
   DUT/SUT address table.

   The mixture of multicast and unicast traffic MUST be set up in one
   of two ways:

   a) As a percentage of the total traffic flow employing maximum
      bandwidth utilization.  Thus, each type of traffic is
      transmitted at the maximum available bandwidth.  This also
      implies that the intended load, regardless of the type of
      traffic, remains constant.

   b) As a percentage of the total traffic flow employing a
      proportionate bandwidth utilization.  Thus, each type of
      traffic is transmitted at a fraction of the available bandwidth
      proportional to the specified ratio.  This also implies that
      the intended load for each traffic type varies in proportion to
      its specified ratio.

   The transmission of the frames MUST be set up so that they form a
   deterministic distribution while still maintaining the specified
   forwarding rates.  See Appendix A for a discussion of
   non-homogenous vs. homogenous packet distribution.

   Similar to the Frame loss rate test in RFC 2544, the first trial
   SHOULD be run at the frame rate that corresponds to 100% of the
   maximum rate for the frame size on the input media.  Repeat the
   procedure for the rate that corresponds to 90% of the maximum
   rate, and then for 80% of the maximum rate.  This sequence SHOULD
   be continued (at reducing 10% intervals) until there are two
   successive trials in which no frames are lost.  The maximum
   granularity of the trials MUST be 10% of the maximum rate; a finer
   granularity is encouraged.

   Result

   Parameters to be measured SHOULD include the frame loss and
   percent loss for each class of traffic per destination port.  The
   ratio of unicast traffic to multicast traffic MUST be reported.
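   The descending-rate trial sequence described in the procedure
   above can be sketched as follows.  This is a minimal illustration,
   not part of the draft; `run_trial` is a hypothetical stand-in for
   the tester, which offers the mixed-class stream at a given percent
   of the maximum rate and returns the number of frames lost.

   ```python
   def mixed_class_throughput(run_trial, step_pct=10):
       """Sketch of the Section 4.1 trial sequence: start at 100% of
       the maximum rate for the frame size on the input media and
       step down by 10% (the coarsest granularity allowed) until two
       successive trials show no frame loss.

       run_trial(rate_pct) is a hypothetical tester hook returning
       the frame loss observed at rate_pct percent of maximum rate.
       """
       results = []          # (rate_pct, frames_lost) per trial
       lossless_streak = 0
       rate = 100
       while rate > 0:
           lost = run_trial(rate)
           results.append((rate, lost))
           lossless_streak = lossless_streak + 1 if lost == 0 else 0
           if lossless_streak == 2:   # two successive lossless trials
               break
           rate -= step_pct
       return results
   ```

   A finer step than 10% simply means passing a smaller `step_pct`,
   which the procedure encourages.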
   The nature of the traffic stream contributing to the result MUST
   be reported.  All required reporting parameters of mixed class
   throughput MUST be reflected in the results report, such as the
   transmitted packet size(s) and offered load of the packet stream.

4.2. Scaled Group Forwarding Matrix

   Objective

   To produce a table that demonstrates Forwarding Rate as a function
   of tested multicast groups for a fixed number of tested DUT/SUT
   ports.

   Procedure

   Multicast traffic is sent at a fixed percentage of the maximum
   offered load, with a fixed number of tester receive ports, at a
   fixed frame length.

   The receive ports SHOULD continue joining incrementally, 10
   multicast groups at a time, until a user-defined maximum is
   reached.

   Results

   Parameters to be measured SHOULD include the frame loss and
   percent loss per destination port for each multicast group
   address.

   The nature of the traffic stream contributing to the result MUST
   be reported.  All required reporting parameters MUST be reflected
   in the results report, such as the transmitted packet size(s) and
   offered load of the packet stream.

4.3. Aggregated Multicast Throughput

   Objective

   The maximum rate at which none of the offered frames to be
   forwarded through N destination interfaces of the same multicast
   group are dropped.

   Procedure

   Multicast traffic is sent at a fixed percentage of the maximum
   offered load, with a fixed number of groups, at a fixed frame
   length, for a fixed duration of time.

   The initial number of tester receive ports will join the group(s),
   and the sender will transmit to the same groups after a certain
   delay (a few seconds).

   Then an incremental number of receive ports will join the same
   groups, and the multicast traffic is again sent as stated.
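   The receive-port scaling procedure of this test, continued until a
   user-defined maximum number of ports, can be sketched as below.
   This is an illustrative sketch only; `run_iteration`, the port
   counts, and the step size are hypothetical stand-ins, not values
   from the draft.

   ```python
   def aggregated_throughput_sweep(run_iteration, initial_ports=2,
                                   step=2, max_ports=8):
       """Sketch of the Section 4.3 procedure: an initial set of
       tester receive ports joins the group(s) and traffic is sent
       after a settling delay; then more ports join and traffic is
       sent again, until a user-defined maximum number of ports.

       run_iteration(n_ports) is a hypothetical hook performing one
       join-settle-transmit cycle and returning the frame loss.
       """
       results = {}
       n = initial_ports
       while n <= max_ports:
           results[n] = run_iteration(n)   # loss at n receive ports
           n += step
       return results
   ```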
   The receive ports will continue to be added, and multicast traffic
   sent, until a user-defined maximum number of ports is reached.

   Results

   Parameters to be measured SHOULD include the frame loss and
   percent loss per destination port for each multicast group
   address.

   The nature of the traffic stream contributing to the result MUST
   be reported.  All required reporting parameters of aggregated
   throughput MUST be reflected in the results report, such as the
   transmitted packet size(s) and offered load of the packet stream.

4.4. Encapsulation/Decapsulation (Tunneling) Throughput

   This sub-section provides the description of tests that help in
   obtaining throughput measurements when a DUT/SUT or a set of DUTs
   are acting as tunnel endpoints.  Figure 3 presents an example of a
   tunneled network.

   Client A      DUT/SUT A      Network      DUT/SUT B      Client B

                 ----------                  ----------
                |          |     ------     |          |
   -----(a)  (b)|          |(c) (      ) (d)|          |(e)  (f)-----
   ||||| ------>|          |--->(      )--->|          |------> |||||
   -----        |          |     ------     |          |        -----
                |          |                |          |
                 ----------                  ----------

                              Figure 3
                              --------

   A tunnel is created between DUT/SUT A (the encapsulator) and
   DUT/SUT B (the decapsulator).  Client A is acting as the source
   and Client B is the destination.  Client B joins a multicast group
   (for example, 224.0.1.1) by sending an IGMP Join message to
   DUT/SUT B.  Client A now wants to transmit some traffic to Client
   B.  It sends the multicast traffic to DUT/SUT A, which
   encapsulates the multicast frames and sends them to DUT/SUT B;
   DUT/SUT B decapsulates the frames and forwards them to Client B.

4.4.1. Encapsulation Throughput

   Objective

   The maximum rate at which frames offered to a DUT/SUT are
   encapsulated and correctly forwarded by the DUT/SUT without loss.

   Procedure

   Traffic is sent through a DUT/SUT that has been configured to
   encapsulate the frames.  Traffic is received on a test port prior
   to decapsulation, and throughput is calculated based on RFC 2544.

   Results

   Parameters to be measured SHOULD include the measured throughput
   per tunnel.

   The nature of the traffic stream contributing to the result MUST
   be reported.  All required reporting parameters of encapsulation
   throughput MUST be reflected in the results report, such as the
   transmitted packet size(s) and offered load of the packet stream.

4.4.2. Decapsulation Throughput

   Objective

   The maximum rate at which frames offered to a DUT/SUT are
   decapsulated and correctly forwarded by the DUT/SUT without loss.

   Procedure

   Encapsulated traffic is sent through a DUT/SUT that has been
   configured to decapsulate the frames.  Traffic is received on a
   test port after decapsulation, and throughput is calculated based
   on RFC 2544.

   Results

   Parameters to be measured SHOULD include the measured throughput
   per tunnel.

   The nature of the traffic stream contributing to the result MUST
   be reported.  All required reporting parameters of decapsulation
   throughput MUST be reflected in the results report, such as the
   transmitted packet size(s) and offered load of the packet stream.

4.4.3. Re-encapsulation Throughput

   Objective

   The maximum rate at which frames of one encapsulated format
   offered to a DUT/SUT are converted to another encapsulated format
   and correctly forwarded by the DUT/SUT without loss.

   Procedure

   Traffic is sent through a DUT/SUT that has been configured to
   encapsulate frames into one format and then re-encapsulate them
   into another format.  Traffic is received on a test port after all
   decapsulation is complete, and throughput is calculated based on
   RFC 2544.

   Results

   Parameters to be measured SHOULD include the measured throughput
   per tunnel.

   The nature of the traffic stream contributing to the result MUST
   be reported.  All required reporting parameters of
   re-encapsulation throughput MUST be reflected in the results
   report, such as the transmitted packet size(s) and offered load of
   the packet stream.

5. Forwarding Latency

   This section presents methodologies relating to the
   characterization of the forwarding latency of a DUT/SUT in a
   multicast environment.  It extends the concept of latency
   characterization presented in RFC 2544.

   In order to lessen the effect of packet buffering in the DUT/SUT,
   the latency tests MUST be run such that the offered load is less
   than the multicast throughput of the DUT/SUT as determined in the
   previous section.  The tests should also take into account the
   DUT's/SUT's need to cache the traffic in its IP cache, fastpath
   cache or shortcut tables, since the initial part of the traffic
   will be used to build these tables.

   Lastly, RFC 1242 and RFC 2544 draw a distinction between two
   classes of devices: "store and forward" and "bit-forwarding."
   Each class impacts how latency is collected and subsequently
   presented.  See the related RFCs for more information.  In
   practice, much test equipment will collect the latency measurement
   for one class or the other and, if needed, mathematically derive
   the reported value by the addition or subtraction of values
   accounting for the medium propagation delay of the packet, bit
   times to the timestamp trigger within the packet, etc.  Test
   equipment vendors SHOULD provide documentation regarding the
   composition and calculation of the latency values being reported.
   The user of this data SHOULD understand the nature of the latency
   values being reported, especially when comparing results collected
   from multiple test vendors.
   (E.g., if test vendor A presents a "store and forward" latency
   result and test vendor B presents a "bit-forwarding" latency
   result, the user may erroneously conclude that the DUT has two
   differing sets of latency values.)

5.1. Multicast Latency

   Objective

   To produce a set of multicast latency measurements from a single,
   multicast ingress port of a DUT or SUT through multiple, egress
   multicast ports of that same DUT or SUT, as provided for by the
   metric "Multicast Latency" in RFC 2432.

   The procedures highlighted below attempt to draw from the
   collection methodology for latency in RFC 2544 to the degree
   possible.  The methodology addresses two topological scenarios:
   one for single device (DUT) characterization, and a second for
   multiple device (SUT) characterization.

   Procedure

   If the test trial is to characterize latency across a single
   Device Under Test (DUT), an example test topology might take the
   form of Figure 1 in section 3.  That is, a single DUT with one
   ingress interface receiving the multicast test traffic from the
   packet-transmitting component of the test apparatus, and n egress
   interfaces on the same DUT forwarding the multicast test traffic
   back to the packet-receiving component of the test apparatus.
   Note that n reflects the number of TESTED egress interfaces on the
   DUT actually expected to forward the test traffic (as opposed to
   configured but untested, non-forwarding interfaces, for example).

   If the multicast latencies are to be taken across multiple devices
   forming a System Under Test (SUT), an example test topology might
   take the form of Figure 2 in section 3.

   The trial duration SHOULD be 120 seconds.  Departures from the
   suggested traffic class guidelines MUST be disclosed with the
   respective trial results.  The nature of the latency measurement,
   "store and forward" or "bit forwarding," MUST be associated with
   the related test trial(s) and disclosed in the results report.

   End-to-end reachability of the test traffic path SHOULD be
   verified prior to the engagement of a test trial.  This implies
   that subsequent measurements are intended to characterize the
   latency across the tested device's or devices' normal traffic
   forwarding path (e.g., faster hardware-based engines) as opposed
   to a non-standard traffic processing path (e.g., slower,
   software-based exception handlers).  If the test trial is to be
   executed with the intent of characterizing a non-optimal
   forwarding condition, then a description of the exception
   processing conditions being characterized MUST be included with
   the trial's results.

   A test traffic stream is presented to the DUT.  At the mid-point
   of the trial's duration, the test apparatus MUST inject a uniquely
   identifiable ("tagged") packet into the test traffic packets being
   presented.  This tagged packet will be the basis for the latency
   measurements.  By "uniquely identifiable," it is meant that the
   test apparatus MUST be able to discern the "tagged" packet from
   the other packets comprising the test traffic set.  A packet
   generation timestamp, Timestamp A, reflecting the completion of
   the transmission of the tagged packet by the test apparatus, MUST
   be determined.

   The test apparatus then monitors packets from the DUT's tested
   egress port(s) for the expected tagged packet(s) until the
   cessation of traffic generation at the end of the configured trial
   duration.  A value of the Offered Load presented to the DUT/SUT
   MUST be noted.

   The test apparatus MUST record the time of the successful
   detection of a tagged packet from a tested egress interface with a
   timestamp, Timestamp B.
   A set of Timestamp B values MUST be collected for all tested
   egress interfaces of the DUT/SUT.

   A trial MUST be considered INVALID should any of the following
   conditions occur in the collection of the trial data:

   - Forwarded test packets directed to improper destinations.
   - Unexpected differences between Intended Load and Offered Load,
     or unexpected differences between Offered Load and the resulting
     Forwarding Rate(s) on the DUT/SUT egress ports.
   - Forwarded test packets improperly formed or packet header fields
     improperly manipulated.
   - Failure to forward required tagged packet(s) on all expected
     egress interfaces.
   - Reception of a tagged packet by the test apparatus outside the
     configured test duration interval or 5 seconds, whichever is
     greater.

   Data from invalid trials SHOULD be considered inconclusive.  Data
   from invalid trials MUST NOT form the basis of comparison.

   The set of latency measurements, M, composed from each latency
   measurement taken from every ingress/tested egress interface
   pairing, MUST be determined from a valid test trial:

      M = { (Timestamp B(E0) - Timestamp A),
            (Timestamp B(E1) - Timestamp A), ...
            (Timestamp B(En) - Timestamp A) }

   where (E0 ... En) represents the range of all tested egress
   interfaces and Timestamp B represents a tagged packet detection
   event for a given DUT/SUT tested egress interface.

   Results

   Two types of information MUST be reported: 1) the set of latency
   measurements and 2) the significant environmental, methodological,
   or device particulars giving insight into the test or its results.

   Specifically, when reporting the results of a VALID test trial,
   the set of ALL latencies related to the tested ingress interface
   and each tested egress DUT/SUT interface MUST be presented.  The
   time units of the presented latency MUST be uniform and of
   sufficient precision for the medium or media being tested.
   Results MAY be offered in tabular format and SHOULD preserve the
   relationship of latency to ingress/egress interface to assist in
   trending across multiple trials.

   The Offered Load of the test traffic presented to the DUT/SUT, the
   size of the "tagged" packet, the trial duration, and the nature
   (i.e., store-and-forward or bit-forwarding) of the trial's
   measurement MUST be associated with any reported test trial's
   result.

5.2. Min/Max Multicast Latency

   Objective

   The difference between the maximum latency measurement and the
   minimum latency measurement from the set of latencies produced by
   the Multicast Latency benchmark.

   Procedure

   Collect a set of multicast latency measurements, as prescribed in
   section 5.1.  This will produce a set of multicast latencies, M,
   where M is composed of individual forwarding latencies between DUT
   packet ingress and DUT packet egress port pairs.  E.g.:

      M = { L(I,E1), L(I,E2), ..., L(I,En) }

   where L is the latency between a tested ingress port, I, of the
   DUT, and Ex is a specific, tested multicast egress port of the
   DUT.  E1 through En are unique egress ports on the DUT.

   From the collected multicast latency measurements in set M,
   identify MAX(M), where MAX is a function that yields the largest
   latency value from set M.

   Identify MIN(M), where MIN is a function that yields the smallest
   latency value from set M.

   The Min/Max value is determined from the following formula:

      Result = MAX(M) - MIN(M)

   Results

   The result MUST be represented as a single numerical value in time
   units consistent with the corresponding latency measurements.  In
   addition, the number of tested egress ports on the DUT MUST be
   reported.

   The nature of the traffic stream contributing to the result MUST
   be reported.
All required reporting parameters of multicast latency MUST be reflected in the min/max results report, such as the transmitted packet size(s) and the offered load of the packet stream in which the tagged packet was presented to the DUT.

6. Overhead

This section presents methodology relating to the characterization of the overhead delays associated with explicit operations found in multicast environments.

6.1. Group Join Delay

Objective

To determine the time duration it takes a DUT/SUT to start forwarding multicast packets from the time a successful IGMP group membership report has been issued to the DUT/SUT.

Procedure

Traffic is sent on the source port at the same time the IGMP Join Group message is transmitted from the destination ports. The join delay is the difference in time between when the IGMP Join is sent (timestamp A) and when the first frame is forwarded to a receiving member port (timestamp B):

   Group Join delay = timestamp B - timestamp A

One key is to transmit at the fastest rate at which the DUT/SUT can handle multicast frames; this yields the best resolution and the least margin of error in the join delay. However, the frames must not be transmitted so fast that they are dropped by the DUT/SUT. Traffic should be sent at the throughput rate determined by the forwarding tests of section 4.

Results

The parameter to be measured is the join delay time for each multicast group address per destination port. In addition, the number of frames transmitted and received and the percent loss may be reported.

6.2. Group Leave Delay

Objective

To determine the time duration it takes a DUT/SUT to cease forwarding multicast packets after a corresponding IGMP "Leave Group" message has been successfully offered to the DUT/SUT.
Procedure

Traffic is sent on the source port at the same time the IGMP Leave Group messages are transmitted from the destination ports. The leave delay is the difference in time between when the IGMP Leave is sent (timestamp A) and when the last frame is forwarded to a receiving member port (timestamp B):

   Group Leave delay = timestamp B - timestamp A

One key is to transmit at the fastest rate at which the DUT/SUT can handle multicast frames; this yields the best resolution and the least margin of error in the leave delay. However, the frames must not be transmitted so fast that they are dropped by the DUT/SUT. Traffic should be sent at the throughput rate determined by the forwarding tests of section 4.

Results

The parameter to be measured is the leave delay time for each multicast group address per destination port. In addition, the number of frames transmitted and received and the percent loss may be reported.

7. Capacity

This section offers terms relating to the identification of the multicast group limits of a DUT/SUT.

7.1. Multicast Group Capacity

Objective

To determine the maximum number of multicast groups a DUT/SUT can support while maintaining the ability to forward multicast frames to all multicast groups registered to that DUT/SUT.

Procedure

One or more destination ports of the DUT/SUT join an initial number of groups.

Then, after a delay sufficient for all ports to join, the source port transmits to each group at a transmission rate that the DUT/SUT can handle without dropping IP multicast frames.

If all frames sent are forwarded by the DUT/SUT and received, the test iteration is said to pass at the current capacity.

If the iteration passes at that capacity, the test adds a user-defined incremental number of groups to each receive port.
The iteration is then run again at the new group level, and the capacity is tested as stated above.

Once the test fails at a given capacity, the capacity is stated to be that of the last iteration that passed.

Results

The parameter to be measured is the total number of group addresses to which frames were successfully forwarded with no loss.

In addition, the nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters MUST be reflected in the results report, such as the transmitted packet size(s) and the offered load of the packet stream.

8. Interaction

Network forwarding devices are generally required to provide more functionality than just the forwarding of traffic. Moreover, network forwarding devices may be asked to provide those functions in a variety of environments. This section offers terms to assist in the characterization of DUT/SUT behavior in consideration of potentially interacting factors.

8.1. Forwarding Burdened Multicast Latency

The Multicast Latency metrics can be influenced by forcing the DUT/SUT to perform extra processing of packets while multicast traffic is being forwarded for latency measurements. In this test, a set of ports on the tester is designated as source and destination, similar to the generic IP multicast test setup. In addition to this setup, another set of ports is selected to transmit multicast traffic destined to multicast group addresses that have not been joined by this additional set of ports.

For example, if ports 1, 2, 3, and 4 form the burdened response setup (setup A), which is used to obtain the latency metrics, and ports 5, 6, 7, and 8 form the non-burdened response setup (setup B), which will afflict the burdened response setup, then setup B traffic will be sent to multicast group addresses not joined by the ports in this setup.
By sending such multicast traffic, the DUT/SUT will perform a lookup on those packets that affects the processing of setup A traffic.

8.2. Forwarding Burdened Group Join Delay

The port configuration in this test is similar to the one described in section 8.1, but in this test the multicast traffic is not sent by the ports in setup B. Instead, the setup A traffic must be influenced in a way that affects the DUT's/SUT's ability to process Group Join messages. Therefore, in this test, the ports in setup B send a set of IGMP Group Join messages while the ports in setup A are joining their own set of group addresses. Since the two sets of group addresses are independent of each other, the group join delay for setup A may differ from the case in which no other group addresses were being joined.

9. Security Considerations

As this document is solely for the purpose of providing metric methodology and describes neither a protocol nor a protocol's implementation, there are no security considerations associated with this document.

10. Acknowledgements

The authors would like to acknowledge the following individuals for their help and participation in the compilation and editing of this document: Ralph Daniels, Netcom Systems, who made significant contributions to earlier versions of this draft, and Kevin Dubray, Juniper Networks.

11. References

[Br91] Bradner, S., "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

[Br96] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

[Br97] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", RFC 2119, March 1997.

[Du98] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC 2432, October 1998.

[Hu95] Huitema, C., "Routing in the Internet",
Prentice-Hall, 1995.

[Ka98] Kosiur, D., "IP Multicasting: The Complete Guide to Interactive Corporate Networks", John Wiley & Sons, Inc., 1998.

[Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, February 1998.

[Mt98] Maufer, T., "Deploying IP Multicast in the Enterprise", Prentice-Hall, 1998.

[Se98] Semeria, C. and T. Maufer, "Introduction to IP Multicast Routing", http://www.3com.com/nsc/501303.html, 3Com Corp., 1998.

12. Authors' Addresses

Debra Stopp
IXIA
26601 W. Agoura Rd.
Calabasas, CA 91302
USA

Phone: 818 871 1800
EMail: debby@ixiacom.com

Hardev Soor
IXIA
26601 W. Agoura Rd.
Calabasas, CA 91302
USA

Phone: 818 871 1800
EMail: hardev@ixiacom.com

13. Full Copyright Statement

Copyright (C) The Internet Society (date). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

Appendix A: Determining an Even Distribution

It is important to understand and fully define the distribution of frames among all multicast and unicast destinations.
If the distribution is not well defined or understood, the throughput and forwarding metrics are not meaningful.

In a homogeneous environment, a large single burst of multicast frames may be followed by a large burst of unicast frames. This is a very different distribution from that of a non-homogeneous environment, where the multicast and unicast frames are intermingled throughout the entire transmission.

The recommended distribution is that of the non-homogeneous environment, because it more closely represents a real-world scenario. The distribution is modeled by calculating the number of multicast frames per destination port as a burst, then calculating the number of unicast frames to transmit as a percentage of the total frames transmitted. The overall effect of the distribution is small bursts of multicast frames intermingled with small bursts of unicast frames.
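The burst-interleaving model above can be sketched as follows. The burst size, total frame count, and unicast percentage are illustrative assumptions, not values mandated by this methodology:

```python
# Sketch of the recommended non-homogeneous distribution: small bursts
# of multicast frames intermingled with small bursts of unicast frames.
# Burst size and unicast percentage are illustrative assumptions.

def interleaved_schedule(total_frames, unicast_pct, burst_size):
    """Return a transmit schedule alternating multicast/unicast bursts."""
    unicast_total = int(total_frames * unicast_pct / 100)
    multicast_total = total_frames - unicast_total
    schedule = []
    while multicast_total > 0 or unicast_total > 0:
        take_m = min(burst_size, multicast_total)
        schedule.extend(["mcast"] * take_m)
        multicast_total -= take_m
        take_u = min(burst_size, unicast_total)
        schedule.extend(["ucast"] * take_u)
        unicast_total -= take_u
    return schedule

sched = interleaved_schedule(total_frames=20, unicast_pct=25, burst_size=4)
```

A homogeneous schedule, by contrast, would emit all multicast frames in one large burst followed by all unicast frames; the function above avoids that by capping each burst at burst_size and alternating.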