Network Working Group                                       Debra Stopp
                                                             Hardev Soor
INTERNET-DRAFT                                                      IXIA
Expires in: November 2002

               Methodology for IP Multicast Benchmarking

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC 2026.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Copyright Notice

Copyright (C) The Internet Society (2002). All Rights Reserved.

Abstract

The purpose of this draft is to describe methodology specific to the benchmarking of multicast IP forwarding devices. It builds upon the tenets set forth in RFC 2544, RFC 2432 and other IETF Benchmarking Methodology Working Group (BMWG) efforts. This document seeks to extend these efforts to the multicast paradigm.

The BMWG produces two major classes of documents: Benchmarking Terminology documents and Benchmarking Methodology documents. The Terminology documents present the benchmarks and other related terms. The Methodology documents define the procedures required to collect the benchmarks cited in the corresponding Terminology documents.
51 Table of Contents 53 1. INTRODUCTION...................................................3 55 2. KEY WORDS TO REFLECT REQUIREMENTS..............................3 57 3. TEST SET UP....................................................3 58 3.1. Test Considerations..........................................5 59 3.1.1. IGMP Support..............................................5 60 3.1.2. Group Addresses...........................................5 61 3.1.3. Frame Sizes...............................................5 62 3.1.4. TTL.......................................................6 63 3.1.5. Trial Duration............................................6 64 3.2. Layer 2 Support..............................................6 65 4. FORWARDING AND THROUGHPUT......................................6 66 4.1. Mixed Class Throughput.......................................6 67 4.2. Scaled Group Forwarding Matrix...............................7 68 4.3. Aggregated Multicast Throughput..............................8 69 4.4. Encapsulation/Decapsulation (Tunneling) Throughput...........9 70 4.4.1. Encapsulation Throughput..................................9 71 4.4.2. Decapsulation Throughput..................................9 72 4.4.3. Re-encapsulation Throughput..............................10 73 5. FORWARDING LATENCY............................................10 74 5.1. Multicast Latency...........................................11 75 5.2. Min/Max Multicast Latency...................................13 76 6. OVERHEAD......................................................14 77 6.1. Group Join Delay............................................14 78 6.2. Group Leave Delay...........................................15 79 7. CAPACITY......................................................16 80 7.1. Multicast Group Capacity....................................16 81 8. INTERACTION...................................................16 82 8.1. Forwarding Burdened Multicast Latency.......................17 83 8.2. Forwarding Burdened Group Join Delay........................17 84 9. SECURITY CONSIDERATIONS.......................................17 86 10. ACKNOWLEDGEMENTS.............................................17 88 11. REFERENCES...................................................18 90 12. AUTHOR'S ADDRESSES...........................................19 92 13. FULL COPYRIGHT STATEMENT.....................................19 93 1. Introduction 95 This document defines a specific set of tests that vendors can use 96 to measure and report the performance characteristics and 97 forwarding capabilities of network devices that support IP 98 multicast protocols. The results of these tests will provide the 99 user comparable data from different vendors with which to evaluate 100 these devices. 102 A previous document, " Terminology for IP Multicast Benchmarking" 103 (RFC 2432), defined many of the terms that are used in this 104 document. The terminology document should be consulted before 105 attempting to make use of this document. 107 This methodology will focus on one source to many destinations, 108 although many of the tests described may be extended to use 109 multiple source to multiple destination IP multicast communication. 111 2. Key Words to Reflect Requirements 113 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL 114 NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" 115 in this document are to be interpreted as described in RFC 2119. 
116 RFC 2119 defines the use of these key words to help make the intent 117 of standards track documents as clear as possible. While this 118 document uses these keywords, this document is not a standards 119 track document. 121 3. Test set up 123 The set of methodologies presented in this draft are for single 124 ingress, multiple egress scenarios as exemplified by Figures 1 and 125 2. Methodologies for multiple ingress, multiple egress scenarios 126 are beyond the scope of this document. 128 Figure 1 shows a typical setup for an IP multicast test, with one 129 source to multiple destinations. 131 +----------------+ 132 +------------+ | Egress | 133 +--------+ | (-)-------->| destination(E1)| 134 | | | | | | 135 | source |------->(|)Ingress | +----------------+ 136 | | | | +----------------+ 137 +--------+ | D U T (-)-------->| Egress | 138 | | | destination(E2)| 139 | | | | 140 | | +----------------+ 141 | | . . . 142 | | +----------------+ 143 | | | Egress | 144 | (-)-------->| destination(En)| 145 | | | | 146 +------------+ +----------------+ 148 Figure 1 149 --------- 151 If the multicast metrics are to be taken across multiple devices 152 forming a System Under Test (SUT), then test packets are offered to 153 a single ingress interface on a device of the SUT, subsequently 154 routed across the SUT topology, and finally forwarded to the test 155 apparatus' packet-receiving components by the test egress 156 interface(s) of devices in the SUT. Figure 2 offers an example SUT 157 test topology. If a SUT is tested, the details of the test 158 topology MUST be disclosed with the corresponding test results. 160 +--------+ +----------------+ +--------+ 161 | | +------------+ |DUT B Egress E0(-)-->| | 162 | | |DUT A |--->| | | | 163 | Test | | | | Egress E1(-)-->| Test | 164 | App. |--->(-)Ingress, I | +----------------+ | App. | 165 | Traffic| | | +----------------+ | Traffic| 166 | Src. | | |--->|DUT C Egress E2(-)-->| Dest. | 167 | | +------------+ | | | | 168 | | | Egress En(-)-->| | 169 +--------+ +----------------+ +--------+ 171 Figure 2 172 --------- 174 Generally, the destination ports first join the desired number of 175 multicast groups by sending IGMP Join Group messages to the 176 DUT/SUT. To verify that all destination ports successfully joined 177 the appropriate groups, the source port MUST transmit IP multicast 178 frames destined for these groups. The destination ports MAY send 179 IGMP Leave Group messages after the transmission of IP Multicast 180 frames to clear the IGMP table of the DUT/SUT. 182 In addition, test equipment MUST validate the correct and proper 183 forwarding actions of the devices they test in order to ensure the 184 receipt of only the frames that are involved in the test. 186 3.1. Test Considerations 188 The procedures outlined below are written without regard for 189 specific physical layer or link layer protocols. The methodology 190 further assumes a uniform medium topology. Issues regarding mixed 191 transmission media, such as speed mismatch, headers differences, 192 etc., are not specifically addressed. Flow control, QoS and other 193 traffic-affecting mechanisms MUST be disabled. Modifications to 194 the specified collection procedures might need to be made to 195 accommodate the transmission media actually tested. These 196 accommodations MUST be presented with the test results. 198 3.1.1. IGMP Support 200 Each of the destination ports should support and be able to test 201 all IGMP versions 1, 2 and 3. 
The minimum requirement, however, is IGMP version 2.

Each destination port should be able to respond to IGMP queries during the test.

Each destination port should also send LEAVE (running IGMP version 2) after each test.

3.1.2. Group Addresses

The Class D Group address SHOULD be changed between tests. Many DUTs have memory or cache that is not cleared properly and can bias the results.

The following group addresses are recommended for use in a test:

   224.0.1.27-224.0.1.255
   224.0.5.128-224.0.5.255
   224.0.6.128-224.0.6.255

If the number of group addresses accommodated by these ranges does not satisfy the requirements of the test, then these ranges may be overlapped. The total number of configured group addresses must be less than or equal to the IGMP table size of the DUT/SUT.

3.1.3. Frame Sizes

Each test SHOULD be run with different Multicast Frame Sizes. The recommended frame sizes are 64, 128, 256, 512, 1024, 1280, and 1518 byte frames.

3.1.4. TTL

The source frames should have a TTL value large enough to accommodate the DUT/SUT.

3.1.5. Trial Duration

The duration of the test portion of each trial SHOULD be at least 30 seconds. This parameter MUST be included as part of the results reporting for each methodology.

3.2. Layer 2 Support

Each of the destination ports should support GARP/GMRP protocols to join groups on Layer 2 DUTs/SUTs.

4. Forwarding and Throughput

This section contains the description of the tests that are related to the characterization of the packet forwarding of a DUT/SUT in a multicast environment. Some metrics extend the concept of throughput presented in RFC 1242. The notion of Forwarding Rate is cited in RFC 2285.

4.1. Mixed Class Throughput

Objective

To determine the maximum throughput rate at which none of the offered frames to be forwarded, comprising both a unicast Class and a multicast Class, are dropped by the device across a fixed number of ports, as defined in RFC 2432.

Procedure

Multicast and unicast traffic are mixed together in the same aggregated traffic stream in order to simulate a non-homogeneous networking environment. The DUT/SUT MUST learn the appropriate unicast IP addresses, either by sending ARP frames from each unicast address, by sending a RIP packet, or by assigning static entries into the DUT/SUT address table.

The mixture of multicast and unicast traffic MUST be set up in one of two ways:

a) As an input frame rate for each class of traffic [Br91] or as a percentage of media_maximum-octets [Ma98]. The frame rate should be specified independently for each traffic class.

b) As an aggregate rate (given either in frames per second or as a percentage), with the ratio of multicast to unicast traffic declared.

While the multicast traffic is transmitted from one source to multiple destinations, the unicast traffic MAY be evenly distributed across the DUT/SUT architecture. Unicast traffic distribution can be either non-meshed or meshed [Ma98], as specified in RFC 2544 or RFC 2889.

Throughput measurement is defined in RFC 1242 [Br91]. A search algorithm MUST be utilized to determine the maximum offered frame rate with a zero frame loss rate.
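The choice of search algorithm is left to the tester. The following Python fragment is a minimal sketch of one possible binary search over the offered load; send_and_count() and resolution_fps are hypothetical placeholders for the tester-specific hook that offers the mixed-class stream at a given rate and returns the number of frames lost, and for the desired rate resolution. Neither is defined by this document.

   # Sketch of a zero-loss throughput search (illustrative only).
   # send_and_count(rate_fps) is a hypothetical tester hook that offers
   # traffic at rate_fps for the trial duration and returns the number
   # of frames lost.
   def find_zero_loss_rate(max_rate_fps, resolution_fps, send_and_count):
       low, high = 0.0, float(max_rate_fps)
       best = 0.0
       while high - low > resolution_fps:
           rate = (low + high) / 2.0
           if send_and_count(rate) == 0:
               best, low = rate, rate   # no loss: try a higher rate
           else:
               high = rate              # loss observed: try a lower rate
       return best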
Result

Parameters to be measured MUST include the aggregate offered load, number of multicast frames offered, number of unicast frames offered, number of multicast frames received, number of unicast frames received and transmit duration of offered frames.

4.2. Scaled Group Forwarding Matrix

Objective

To produce a table that demonstrates Forwarding Rate as a function of the number of tested multicast groups for a fixed number of tested DUT/SUT ports.

Procedure

Multicast traffic is sent at a fixed percent of maximum offered load with a fixed number of receive ports of the tester at a fixed frame length.

On each iteration, the receive ports SHOULD incrementally join 10 multicast groups until a user-defined maximum number of groups is reached.

Results

Parameters to be measured MUST include the offered load and forwarding rate as a function of the total number of multicast groups, for each test iteration.

The nature of the traffic stream contributing to the result MUST be reported, specifically the number of source and destination ports within the multicast group. In addition, all other reporting parameters of the scaled group forwarding matrix methodology MUST be reflected in the results report, such as the transmitted packet size(s) and offered load of the packet stream for each source port.

Result reports MUST include the following parameters for each iteration: the number of frames offered, the number of frames received per group, the number of multicast groups, the forwarding rate in frames per second, and the transmit duration of offered frames. Constructing a table that contains the forwarding rate vs. number of groups is desirable.

4.3. Aggregated Multicast Throughput

Objective

The maximum rate at which none of the offered frames to be forwarded through N destination interfaces of the same multicast group is dropped.

Procedure

Multicast traffic is sent at a fixed percent of maximum offered load with a fixed number of groups at a fixed frame length for a fixed duration of time.

The initial number of receive ports of the tester will join the group(s) and the sender will transmit to the same groups after a certain delay (a few seconds).

If any frame loss is detected, one receive port MUST leave the group(s) and the sender will transmit again. Continue in this iterative fashion until either there are no ports left joined to the multicast group(s) OR 0% frame loss is achieved.

Results

Parameters to be measured MUST include the maximum offered load at which no frame loss occurred (as defined by RFC 2544).

The nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters of aggregated throughput MUST be reflected in the results report, such as the initial number of receive ports, the final number of receive ports, total number of multicast group addresses, the transmitted packet size(s), offered load of the packet stream and transmit duration of offered frames.

Constructing a table from the measurements might be useful in illustrating the effect of modifying the number of active egress ports on the tested system.
4.4. Encapsulation/Decapsulation (Tunneling) Throughput

This sub-section provides the description of tests that help in obtaining throughput measurements when a DUT/SUT or a set of DUTs is acting as tunnel endpoints.

4.4.1. Encapsulation Throughput

Objective

The maximum rate at which frames offered to a DUT/SUT are encapsulated and correctly forwarded by the DUT/SUT without loss.

Procedure

Traffic is sent through a DUT/SUT that has been configured to encapsulate the frames. Traffic is received on a test port prior to decapsulation and throughput is calculated based on RFC 2544.

Results

Parameters to be measured SHOULD include the measured throughput per tunnel.

The nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters of encapsulation throughput MUST be reflected in the results report, such as the transmitted packet size(s), offered load of the packet stream and transmit duration of offered frames.

4.4.2. Decapsulation Throughput

Objective

The maximum rate at which frames offered to a DUT/SUT are decapsulated and correctly forwarded by the DUT/SUT without loss.

Procedure

Encapsulated traffic is sent through a DUT/SUT that has been configured to decapsulate the frames. Traffic is received on a test port after decapsulation and throughput is calculated based on RFC 2544.

Results

Parameters to be measured SHOULD include the measured throughput per tunnel.

The nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters of decapsulation throughput MUST be reflected in the results report, such as the transmitted packet size(s), offered load of the packet stream and transmit duration of offered frames.

4.4.3. Re-encapsulation Throughput

Objective

The maximum rate at which frames of one encapsulated format offered to a DUT/SUT are converted to another encapsulated format and correctly forwarded by the DUT/SUT without loss.

Procedure

Traffic is sent through a DUT/SUT that has been configured to encapsulate frames into one format, then re-encapsulate the frames into another format. Traffic is received on a test port after all decapsulation is complete and throughput is calculated based on RFC 2544.

Results

Parameters to be measured SHOULD include the measured throughput per tunnel.

The nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters of re-encapsulation throughput MUST be reflected in the results report, such as the transmitted packet size(s), offered load of the packet stream and transmit duration of offered frames.

5. Forwarding Latency

This section presents methodologies relating to the characterization of the forwarding latency of a DUT/SUT in a multicast environment. It extends the concept of latency characterization presented in RFC 2544.

In order to lessen the effect of packet buffering in the DUT/SUT, the latency tests MUST be run such that the offered load is less than the multicast throughput of the DUT/SUT as determined in the previous section. The tests should also take into account the DUT's/SUT's need to cache the traffic in its IP cache, fastpath cache or shortcut tables, since the initial part of the traffic will be utilized to build these tables.
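As a minimal illustration only, the offered load for a latency trial might be derived from the multicast throughput measured in section 4; the 0.9 fraction below is an assumption of this example, not a value required by this document.

   # Sketch: derive a latency-trial offered load below the measured
   # multicast throughput (illustrative only; the 0.9 factor is an
   # arbitrary example, not a value required by this document).
   def latency_trial_load(measured_throughput_fps, fraction=0.9):
       if not 0.0 < fraction < 1.0:
           raise ValueError("offered load must remain below throughput")
       return measured_throughput_fps * fraction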
Lastly, RFC 1242 and RFC 2544 draw a distinction between two classes of devices: "store and forward" and "bit-forwarding." Each class impacts how latency is collected and subsequently presented. See the related RFCs for more information. In practice, much of the test equipment will collect the latency measurement for one class or the other and, if needed, mathematically derive the reported value by the addition or subtraction of values accounting for medium propagation delay of the packet, bit times to the timestamp trigger within the packet, etc. Test equipment vendors SHOULD provide documentation regarding the composition and calculation of the latency values being reported. The user of this data SHOULD understand the nature of the latency values being reported, especially when comparing results collected from multiple test vendors. (E.g., if test vendor A presents a "store and forward" latency result and test vendor B presents a "bit-forwarding" latency result, the user may erroneously conclude the DUT has two differing sets of latency values.)

5.1. Multicast Latency

Objective

To produce a set of multicast latency measurements from a single, multicast ingress port of a DUT or SUT through multiple, egress multicast ports of that same DUT or SUT, as provided for by the metric "Multicast Latency" in RFC 2432.

The procedures highlighted below attempt to draw from the collection methodology for latency in RFC 2544 to the degree possible. The methodology addresses two topological scenarios: one for single device (DUT) characterization; a second scenario is presented for multiple device (SUT) characterization.

Procedure

If the test trial is to characterize latency across a single Device Under Test (DUT), an example test topology might take the form of Figure 1 in section 3. That is, a single DUT with one ingress interface receiving the multicast test traffic from the packet-transmitting component of the test apparatus and n egress interfaces on the same DUT forwarding the multicast test traffic back to the packet-receiving component of the test apparatus. Note that n reflects the number of TESTED egress interfaces on the DUT actually expected to forward the test traffic (as opposed to configured but untested, non-forwarding interfaces, for example).

If the multicast latencies are to be taken across multiple devices forming a System Under Test (SUT), an example test topology might take the form of Figure 2 in section 3.

The trial duration SHOULD be 120 seconds. Departures from the suggested traffic class guidelines MUST be disclosed with the respective trial results. The nature of the latency measurement, "store and forward" or "bit forwarding," MUST be associated with the related test trial(s) and disclosed in the results report.

End-to-end reachability of the test traffic path SHOULD be verified prior to the engagement of a test trial. This implies that subsequent measurements are intended to characterize the latency across the tested device's or devices' normal traffic forwarding path (e.g., faster, hardware-based engines) as opposed to a non-standard traffic processing path (e.g., slower, software-based exception handlers).
If the test trial is to be executed with the intent of characterizing a non-optimal forwarding condition, then a description of the exception processing conditions being characterized MUST be included with the trial's results.

A test traffic stream is presented to the DUT. At the mid-point of the trial's duration, the test apparatus MUST inject a uniquely identifiable ("tagged") packet into the test traffic packets being presented. This tagged packet will be the basis for the latency measurements. By "uniquely identifiable," it is meant that the test apparatus MUST be able to discern the "tagged" packet from the other packets comprising the test traffic set. A packet generation timestamp, Timestamp A, reflecting the completion of the transmission of the tagged packet by the test apparatus, MUST be determined.

The test apparatus then monitors packets from the DUT's tested egress port(s) for the expected tagged packet(s) until the cessation of traffic generation at the end of the configured trial duration. A value of the Offered Load presented to the DUT/SUT MUST be noted.

The test apparatus MUST record the time of the successful detection of a tagged packet from a tested egress interface with a timestamp, Timestamp B. A set of Timestamp B values MUST be collected for all tested egress interfaces of the DUT/SUT.

A trial MUST be considered INVALID should any of the following conditions occur in the collection of the trial data:

. Forwarded test packets directed to improper destinations.
. Unexpected differences between Intended Load and Offered Load, or unexpected differences between Offered Load and the resulting Forwarding Rate(s) on the DUT/SUT egress ports.
. Forwarded test packets improperly formed or packet header fields improperly manipulated.
. Failure to forward required tagged packet(s) on all expected egress interfaces.
. Reception of a tagged packet by the test apparatus outside the configured test duration interval or 5 seconds, whichever is greater.

Data from invalid trials SHOULD be considered inconclusive. Data from invalid trials MUST NOT form the basis of comparison.

The set of latency measurements, M, composed from each latency measurement taken from every ingress/tested egress interface pairing, MUST be determined from a valid test trial:

   M = { (Timestamp B(E0) - Timestamp A),
         (Timestamp B(E1) - Timestamp A), ...
         (Timestamp B(En) - Timestamp A) }

where (E0 ... En) represents the range of all tested egress interfaces and Timestamp B represents a tagged packet detection event for a given DUT/SUT tested egress interface.

Results

Two types of information MUST be reported: 1) the set of latency measurements and 2) the significant environmental, methodological, or device particulars giving insight into the test or its results.

Specifically, when reporting the results of a VALID test trial, the set of ALL latencies related to the tested ingress interface and each tested egress DUT/SUT interface MUST be presented. The time units of the presented latency MUST be uniform and of sufficient precision for the medium or media being tested. Results MAY be offered in tabular format and SHOULD preserve the relationship of latency to ingress/egress interface to assist in trending across multiple trials.
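For illustration only, the following Python sketch shows one way the set M could be assembled from a valid trial's timestamps and kept keyed by egress interface for tabular reporting. The interface names, timestamp values and implied time unit below are hypothetical examples, not values defined by this document.

   # Sketch: build the latency set M from tagged-packet timestamps
   # (illustrative only).  timestamp_a is the generation timestamp of
   # the tagged packet; timestamps_b maps each tested egress interface
   # to the detection timestamp of the tagged packet on that interface.
   # All values are assumed to share the same time unit.
   def latency_set(timestamp_a, timestamps_b):
       return {egress: t_b - timestamp_a
               for egress, t_b in timestamps_b.items()}

   # Hypothetical example, preserving the latency/egress relationship:
   m = latency_set(1000000, {"E0": 1012345, "E1": 1011872, "E2": 1013004})
   for egress in sorted(m):
       print(egress, m[egress])

The Min/Max Multicast Latency of section 5.2 then follows from such a set as max(m.values()) - min(m.values()).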
The Offered Load of the test traffic presented to the DUT/SUT, the size of the "tagged" packet, the transmit duration of offered frames and the nature (i.e., store-and-forward or bit-forwarding) of the trial's measurement MUST be associated with any reported test trial's result.

5.2. Min/Max Multicast Latency

Objective

The difference between the maximum latency measurement and the minimum latency measurement from a collected set of latencies produced by the Multicast Latency benchmark.

Procedure

Collect a set of multicast latency measurements, as prescribed in section 5.1. This will produce a set of multicast latencies, M, where M is composed of individual forwarding latencies between DUT packet ingress and DUT packet egress port pairs. E.g.:

   M = {L(I,E1), L(I,E2), ..., L(I,En)}

where L is the latency between a tested ingress port, I, of the DUT, and Ex a specific, tested multicast egress port of the DUT. E1 through En are unique egress ports on the DUT.

From the collected multicast latency measurements in set M, identify MAX(M), where MAX is a function that yields the largest latency value from set M.

Identify MIN(M), where MIN is a function that yields the smallest latency value from set M.

The Min/Max value is determined from the following formula:

   Result = MAX(M) - MIN(M)

Results

The result MUST be represented as a single numerical value in time units consistent with the corresponding latency measurements. In addition, the number of tested egress ports on the DUT MUST be reported.

The nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters of multicast latency MUST be reflected in the min/max results report, such as the transmitted packet size(s), the offered load of the packet stream in which the tagged packet was presented to the DUT and the transmit duration of offered frames.

6. Overhead

This section presents methodology relating to the characterization of the overhead delays associated with explicit operations found in multicast environments.

6.1. Group Join Delay

Objective

The time duration it takes a DUT/SUT to start forwarding multicast packets from the time a successful IGMP group membership report has been issued to the DUT/SUT.

Procedure

Traffic is sent on the source port at the same time as the IGMP Join Group message is transmitted from the destination ports. The join delay is the difference in time between when the IGMP Join is sent (timestamp A) and when the first frame is forwarded to a receiving member port (timestamp B).

   Group Join delay = timestamp B - timestamp A

One of the keys is to transmit at the fastest rate at which the DUT/SUT can handle multicast frames. This gives the best resolution and the least margin of error in the Join Delay. However, the frames must not be transmitted so fast that frames are dropped by the DUT/SUT. Traffic should be sent at the throughput rate determined by the forwarding tests of section 4.

Results

The parameter to be measured is the join delay time for each multicast group address per destination port. In addition, the number of frames transmitted and received and the percent loss may be reported.
6.2. Group Leave Delay

Objective

The time duration it takes a DUT/SUT to cease forwarding multicast packets after a corresponding IGMP "Leave Group" message has been successfully offered to the DUT/SUT.

Procedure

Traffic is sent on the source port at the same time as the IGMP Leave Group messages are transmitted from the destination ports. The leave delay is the difference in time between when the IGMP Leave is sent (timestamp A) and when the last frame is forwarded to a receiving member port (timestamp B).

   Group Leave delay = timestamp B - timestamp A

One of the keys is to transmit at the fastest rate at which the DUT/SUT can handle multicast frames. This gives the best resolution and the least margin of error in the Leave Delay. However, the frames must not be transmitted so fast that frames are dropped by the DUT/SUT. Traffic should be sent at the throughput rate determined by the forwarding tests of section 4.

Results

The parameter to be measured is the leave delay time for each multicast group address per destination port. In addition, the number of frames transmitted and received and the percent loss may be reported.

7. Capacity

This section offers methodology relating to the identification of multicast group limits of a DUT/SUT.

7.1. Multicast Group Capacity

Objective

The maximum number of multicast groups a DUT/SUT can support while maintaining the ability to forward multicast frames to all multicast groups registered to that DUT/SUT.

Procedure

One or more destination ports of the DUT/SUT will join an initial number of groups.

Then, after a delay (enough time for all ports to join), the source port will transmit to each group at a transmission rate that the DUT/SUT can handle without dropping IP Multicast frames.

If all frames sent are forwarded by the DUT/SUT and received, the test iteration is said to pass at the current capacity.

If the iteration passes at the current capacity, the test will add a user-defined incremental number of groups to each receive port.

The iteration is then run again at the new group level and the capacity is tested as stated above.

Once the test fails at a given capacity, the reported capacity is that of the last iteration that passed.

Results

The parameter to be measured is the total number of group addresses that were successfully forwarded with no loss.

In addition, the nature of the traffic stream contributing to the result MUST be reported. All required reporting parameters MUST be reflected in the results report, such as the transmitted packet size(s) and offered load of the packet stream.

8. Interaction

Network forwarding devices are generally required to provide more functionality than just the forwarding of traffic. Moreover, network-forwarding devices may be asked to provide those functions in a variety of environments. This section offers methodology to assist in the characterization of DUT/SUT behavior in consideration of potentially interacting factors.

8.1. Forwarding Burdened Multicast Latency

The Multicast Latency metrics can be influenced by forcing the DUT/SUT to perform extra processing of packets while multicast traffic is being forwarded for latency measurements. In this test, a set of ports on the tester will be designated to be source and destination, similar to the generic IP Multicast test setup.
In addition to this setup, another set of ports will be selected to transmit multicast traffic that is destined to multicast group addresses that have not been joined by this additional set of ports.

For example, if ports 1, 2, 3, and 4 form the burdened response setup (setup A), which is used to obtain the latency metrics, and ports 5, 6, 7, and 8 form the non-burdened response setup (setup B), which will burden the response of setup A, then setup B will transmit traffic destined to multicast group addresses that have not been joined by the ports in that setup. By sending such multicast traffic, the DUT/SUT will perform a lookup on those packets that will affect the processing of setup A traffic.

8.2. Forwarding Burdened Group Join Delay

The port configuration in this test is similar to the one described in section 8.1, but in this test, the ports in setup B do not send the multicast traffic. Rather, setup A traffic must be influenced in a way that affects the DUT's/SUT's ability to process Group Join messages. Therefore, in this test, the ports in setup B will send a set of IGMP Group Join messages while the ports in setup A are also joining their own set of group addresses. Since the two sets of group addresses are independent of each other, the group join delay for setup A may be different from the case in which no other group addresses were being joined.

9. Security Considerations

As this document is solely for the purpose of providing metric methodology and describes neither a protocol nor a protocol's implementation, there are no security considerations associated with this document.

10. Acknowledgements

The authors would like to acknowledge the following individuals for their help and participation in the compilation and editing of this document: Ralph Daniels, Netcom Systems, who made significant contributions to earlier versions of this draft; Daniel Bui, IXIA; and Kevin Dubray, Juniper Networks.

11. References

[Br91] Bradner, S., "Benchmarking Terminology for Network Interconnection Devices", RFC 1242, July 1991.

[Br96] Bradner, S. and J. McQuaid, "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999.

[Br97] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", RFC 2119, March 1997.

[Du98] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC 2432, October 1998.

[Hu95] Huitema, C., "Routing in the Internet", Prentice-Hall, 1995.

[Ka98] Kosiur, D., "IP Multicasting: the Complete Guide to Interactive Corporate Networks", John Wiley & Sons, Inc., 1998.

[Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching Devices", RFC 2285, February 1998.

[Mt98] Maufer, T., "Deploying IP Multicast in the Enterprise", Prentice-Hall, 1998.

[Se98] Semeria, C. and T. Maufer, "Introduction to IP Multicast Routing", http://www.3com.com/nsc/501303.html, 3Com Corp., 1998.

12. Author's Addresses

Debra Stopp
IXIA
26601 W. Agoura Rd.
Calabasas, CA 91302
USA

Phone: 818 871 1800
EMail: debby@ixiacom.com

Hardev Soor
IXIA
26601 W. Agoura Rd.
Calabasas, CA 91302
USA

Phone: 818 871 1800
EMail: hardev@ixiacom.com

13. Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.
This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.