2 Network Working Group Debra Stopp 3 INTERNET-DRAFT Ixia 4 Expires in: August 2003 Brooks Hickman 5 Spirent Communications 6 July 2003 8 Methodology for IP Multicast Benchmarking 9 11 Status of this Memo 13 This document is an Internet-Draft and is in full conformance with 14 all provisions of Section 10 of RFC2026. 16 Internet-Drafts are working documents of the Internet Engineering 17 Task Force (IETF), its areas, and its working groups. Note that 18 other groups may also distribute working documents as Internet- 19 Drafts. 21 Internet-Drafts are draft documents valid for a maximum of six 22 months and may be updated, replaced, or obsoleted by other 23 documents at any time. It is inappropriate to use Internet-Drafts 24 as reference material or to cite them other than as "work in 25 progress." 27 The list of current Internet-Drafts can be accessed at 28 http://www.ietf.org/ietf/1id-abstracts.txt 30 The list of Internet-Draft Shadow Directories can be accessed at 31 http://www.ietf.org/shadow.html. 33 Copyright Notice 35 Copyright (C) The Internet Society (2003). All Rights Reserved. 37 Abstract 39 The purpose of this document is to describe methodology specific to 40 the benchmarking of multicast IP forwarding devices. It builds upon 41 the tenets set forth in RFC 2544, RFC 2432 and other IETF 42 Benchmarking Methodology Working Group (BMWG) efforts. This 43 document seeks to extend these efforts to the multicast paradigm. 45 The BMWG produces two major classes of documents: Benchmarking 46 Terminology documents and Benchmarking Methodology documents. The 47 Terminology documents present the benchmarks and other related 48 terms. The Methodology documents define the procedures required to 49 collect the benchmarks cited in the corresponding Terminology 50 documents. 52 Table of Contents 54 1. INTRODUCTION...................................................3 56 2. KEY WORDS TO REFLECT REQUIREMENTS..............................3 58 3. TEST SET UP....................................................3 59 3.1. Test Considerations..........................................5 60 3.1.1. IGMP Support..............................................5 61 3.1.2. Group Addresses...........................................5 62 3.1.3. Frame Sizes...............................................6 63 3.1.4. TTL.......................................................6 64 3.1.5. Trial Duration............................................6 65 4. FORWARDING AND THROUGHPUT......................................6 66 4.1. Mixed Class Throughput.......................................7 67 4.2. Scaled Group Forwarding Matrix...............................8 68 4.3. Aggregated Multicast Throughput..............................9 69 4.4. Encapsulation/Decapsulation (Tunneling) Throughput..........10 70 4.4.1. Encapsulation Throughput.................................10 71 4.4.2. Decapsulation Throughput.................................12 72 4.4.3. Re-encapsulation Throughput..............................14 73 5. FORWARDING LATENCY............................................16 74 5.1. Multicast Latency...........................................16 75 5.2. Min/Max Multicast Latency...................................18 76 6.
OVERHEAD......................................................20 77 6.1. Group Join Delay............................................20 78 6.2. Group Leave Delay...........................................22 79 7. CAPACITY......................................................24 80 7.1. Multicast Group Capacity....................................24 81 8. INTERACTION...................................................25 82 8.1. Forwarding Burdened Multicast Latency.......................25 83 8.2. Forwarding Burdened Group Join Delay........................26 84 9. SECURITY CONSIDERATIONS.......................................28 86 10. ACKNOWLEDGEMENTS.............................................28 88 11. CONTRIBUTIONS................................................28 90 12. REFERENCES...................................................29 92 13. AUTHOR'S ADDRESSES...........................................30 94 14. FULL COPYRIGHT STATEMENT.....................................30 95 1. Introduction 97 This document defines tests for measuring and reporting the 98 throughput, forwarding, latency and IGMP group membership 99 characteristics of devices that support IP multicast protocols. 100 The results of these tests will provide the user with meaningful 101 data on multicast performance. 103 A previous document, " Terminology for IP Multicast Benchmarking" 104 (RFC 2432), defined many of the terms that are used in this 105 document. The terminology document should be consulted before 106 attempting to make use of this document. 108 This methodology will focus on one source to many destinations, 109 although many of the tests described may be extended to use 110 multiple source to multiple destination topologies. 112 2. Key Words to Reflect Requirements 114 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL 115 NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" 116 in this document are to be interpreted as described in RFC 2119. 117 RFC 2119 defines the use of these key words to help make the intent 118 of standards track documents as clear as possible. While this 119 document uses these keywords, this document is not a standards 120 track document. 122 3. Test set up 124 The set of methodologies presented in this document are for single 125 ingress, multiple egress multicast scenarios as exemplified by 126 Figures 1 and 2. Methodologies for multiple ingress and multiple 127 egress multicast scenarios are beyond the scope of this document. 129 Figure 1 shows a typical setup for an IP multicast test, with one 130 source to multiple destinations. 132 +------------+ +--------------+ 133 | | | destination | 134 +--------+ | Egress(-)------->| test | 135 | source | | | | port(E1) | 136 | test |------>(|)Ingress | +--------------+ 137 | port | | | +--------------+ 138 +--------+ | Egress(-)------->| destination | 139 | | | test | 140 | | | port(E2) | 141 | DUT | +--------------+ 142 | | . . . 143 | | +--------------+ 144 | | | destination | 145 | Egress(-)------->| test | 146 | | | port(En) | 147 +------------+ +--------------+ 149 Figure 1 150 --------- 152 If the multicast metrics are to be taken across multiple devices 153 forming a System Under Test (SUT), then test frames are offered to 154 a single ingress interface on a device of the SUT, subsequently 155 forwarded across the SUT topology, and finally forwarded to the 156 test apparatus' frame-receiving components by the test egress 157 interface(s) of devices in the SUT. 
Figure 2 offers an example SUT 158 test topology. If a SUT is tested, the test topology and all 159 relevant configuration details MUST be disclosed with the 160 corresponding test results. 162 *-----------------------------------------* 163 | | 164 +--------+ | +----------------+ | +--------+ 165 | | | +------------+ |DUT B Egress E0(-)-|->| | 166 | | | |DUT A |--->| | | | | 167 | source | | | | | Egress E1(-)-|->| dest. | 168 | test |--|->(-)Ingress, I | +----------------+ | | test | 169 | port | | | | +----------------+ | | port | 170 | | | | |--->|DUT C Egress E2(-)-|->| | 171 | | | +------------+ | | | | | 172 | | | | Egress En(-)-|->| | 173 +--------+ | +----------------+ | +--------+ 174 | | 175 *------------------SUT--------------------* 177 Figure 2 178 --------- 180 Generally, the destination test ports first join the desired number 181 of multicast groups by sending IGMP Group Report messages to the 182 DUT/SUT. To verify that all destination test ports successfully 183 joined the appropriate groups, the source test port MUST transmit 184 IP multicast frames destined for these groups. After test 185 completion, the destination test ports MAY send IGMP Leave Group 186 messages to clear the IGMP table of the DUT/SUT. 188 In addition, test equipment MUST validate the correct and proper 189 forwarding actions of the devices they test in order to ensure the 190 receipt of the frames that are involved in the test. 192 3.1. Test Considerations 194 The methodology assumes a uniform medium topology. Issues regarding 195 mixed transmission media, such as speed mismatch, headers 196 differences, etc., are not specifically addressed. Flow control, 197 QoS and other non-essential traffic or traffic-affecting mechanisms 198 affecting the variable under test MUST be disabled. Modifications 199 to the collection procedures might need to be made to accommodate 200 the transmission media actually tested. These accommodations MUST 201 be presented with the test results. 203 An actual flow of test traffic MAY be required to prime related 204 mechanisms, (e.g., process RPF events, build device caches, etc.) 205 to optimally forward subsequent traffic. Therefore, prior to 206 running any tests that require forwarding of multicast or unicast 207 packets, the test apparatus MUST generate test traffic utilizing 208 the same addressing characteristics to the DUT/SUT that will 209 subsequently be used to measure the DUT/SUT response. The test 210 monitor should ensure the correct forwarding of traffic by the 211 DUT/SUT. The priming action need only be repeated to keep the 212 associated information current. 214 3.1.1. IGMP Support 216 All of the ingress and egress interfaces MUST support a version of 217 IGMP. The IGMP version on the ingress interface MUST be the same 218 version of IGMP that is being tested on the egress interfaces. 220 Each of the ingress and egress interfaces SHOULD be able to respond 221 to IGMP queries during the test. 223 Each of the ingress and egress interfaces SHOULD also send LEAVE 224 (running IGMP version 2 or later) after each test. 226 3.1.2. Group Addresses 228 It is intended that the collection of benchmarks prescribed in this 229 document be executed in an isolated lab environment. That is to 230 say, the test traffic offered the tested devices MUST NOT traverse 231 a live internet, intranet, or other production network. 233 There is no restriction to the use of multicast addresses to 234 compose the test traffic other than those assignments imposed by 235 IANA. 
The IANA assignments MUST be regarded for operational 236 consistency. For multicast address assignments see: 238 http://www.iana.org/assignments/multicast-addresses 240 Address selection does not need to be restricted to 241 Administratively Scoped IP Multicast addresses. 243 3.1.3. Frame Sizes 245 Each test SHOULD be run with different multicast frame sizes. For 246 Ethernet, the recommended sizes are 64, 128, 256, 512, 1024, 1280, 247 and 1518 byte frames. 249 Other link layer technologies MAY be used. The minimum and maximum 250 frame lengths of the link layer technology in use SHOULD be tested. 252 When testing with different frame sizes, the DUT/SUT configuration 253 MUST remain the same. 255 3.1.4. TTL 257 The data plane test traffic should have a TTL value large enough to 258 traverse the DUT/SUT. 260 The TTL in IGMP control plane messages MUST be in compliance with 261 the version of IGMP in use. 263 3.1.5. Trial Duration 265 The duration of the test portion of each trial SHOULD be at least 266 30 seconds. This parameter MUST be included as part of the results 267 reporting for each methodology. 269 4. Forwarding and Throughput 271 This section contains the description of the tests that are related 272 to the characterization of the frame forwarding of a DUT/SUT in a 273 multicast environment. Some metrics extend the concept of throughput 274 presented in RFC 1242. Forwarding Rate is cited in RFC 2285 [Ma98]. 276 4.1. Mixed Class Throughput 278 Objective: 280 To determine the throughput of a DUT/SUT when both unicast class 281 frames and multicast class frames are offered simultaneously to a 282 fixed number of interfaces as defined in RFC 2432. 284 Procedure: 286 Multicast and unicast traffic are mixed together in the same 287 aggregated traffic stream in order to simulate the non-homogenous 288 networking environment. 290 The following events MUST occur before offering test traffic: 292 o All destination test ports configured to receive multicast 293 traffic MUST join all configured multicast groups; 294 o The DUT/SUT MUST learn the appropriate unicast and 295 multicast addresses; and 296 o Group membership and unicast address learning MUST be 297 verified through some externally observable method. 299 The intended load [Ma98] SHOULD be configured as alternating 300 multicast class frames and unicast class frames to a single ingress 301 interface. The unicast class frames MUST be configured to transmit 302 in an unweighted round-robin fashion to all of the destination 303 ports. 305 Mixed class throughput measurement is defined in RFC2432 [Du98]. A 306 search algorithm MUST be utilized to determine the Mixed Class 307 Throughput. The ratio of unicast to multicast frames MUST remain 308 the same when varying the intended load. 310 Reporting Format: 312 The following configuration parameters MUST be reflected in the 313 test report: 315 o Frame size(s) 316 o Number of tested egress interfaces on the DUT/SUT 317 o Test duration 318 o IGMP version 319 o Total number of multicast groups 320 o Traffic distribution for unicast and multicast traffic 321 classes 322 o The ratio of multicast to unicast class traffic 324 The following results MUST be reflected in the test report: 326 o Mixed Class Throughput as defined in RFC2432 [Du98], 327 including: Throughput per unicast and multicast traffic 328 classes. 
330 The Mixed Class Throughput results for each test SHOULD be reported 331 in the form of a table with a row for each of the tested frame 332 sizes per the recommendations in section 3.1.3. Each row SHOULD 333 specify the intended load, number of multicast frames offered, 334 number of unicast frames offered and measured throughput per class. 336 4.2. Scaled Group Forwarding Matrix 338 Objective: 340 To determine Forwarding Rate as a function of tested multicast 341 groups for a fixed number of tested DUT/SUT ports. 343 Procedure: 345 This is an iterative procedure. The destination test port(s) MUST 346 join an initial number of multicast groups on the first iteration. 347 All destination test ports configured to receive multicast traffic 348 MUST join all configured multicast groups. The recommended number 349 of groups to join on the first iteration is 10 groups. Multicast 350 traffic is subsequently transmitted to all groups joined on this 351 iteration and the forwarding rate is measured. 353 The number of multicast groups joined by each destination test port 354 is then incremented, or scaled, by an additional number of 355 multicast groups. The recommended granularity of additional groups 356 to join per iteration is 10, although the tester MAY choose a finer 357 granularity. Multicast traffic is subsequently transmitted to all 358 groups joined during this iteration and the forwarding rate is 359 measured. 361 The total number of multicast groups joined MUST NOT exceed the 362 multicast group capacity of the DUT/SUT. The Group Capacity 363 (Section 7.1) results MUST be known prior to running this test. 365 Reporting Format: 367 The following configuration parameters MUST be reflected in the 368 test report: 370 o Frame size(s) 371 o Number of tested egress interfaces on the DUT/SUT 372 o Test duration 373 o IGMP version 375 The following results MUST be reflected in the test report: 377 o The total number of multicast groups joined for that 378 iteration 379 o Forwarding rate determined for that iteration 381 The Scaled Group Forwarding results for each test SHOULD be 382 reported in the form of a table with a row representing each 383 iteration of the test. Each row or iteration SHOULD specify the 384 total number of groups joined for that iteration, offered load, 385 total number of frames transmitted, total number of frames received 386 and the aggregate forwarding rate determined for that iteration. 388 4.3. Aggregated Multicast Throughput 390 Objective: 392 To determine the maximum rate at which none of the offered frames 393 to be forwarded through N destination interfaces of the same 394 multicast groups are dropped. 396 Procedure: 398 Offer multicast traffic at an initial maximum offered load to a 399 fixed set of interfaces with a fixed number of groups at a fixed 400 frame length for a fixed duration of time. All destination test 401 ports MUST join all specified multicast groups. 403 If any frame loss is detected, the offered load is decreased and 404 the sender will transmit again. An iterative search algorithm MUST 405 be utilized to determine the maximum offered frame rate with a zero 406 frame loss. 408 Each iteration will involve varying the offered load of the 409 multicast traffic, while keeping the set of interfaces, number of 410 multicast groups, frame length and test duration fixed, until the 411 maximum rate at which none of the offered frames are dropped is 412 determined.
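The form of the iterative search is not mandated by this document. As a non-normative illustration, the sketch below performs a binary search on the offered load; the send_and_count() helper, the use of frames per second as the load unit, and the termination resolution are assumptions of the example rather than requirements of this methodology.

   # Non-normative sketch of a zero-loss throughput search (Section 4.3).
   # send_and_count(load_fps) is a hypothetical test-apparatus call that
   # offers multicast traffic at load_fps for the fixed trial duration
   # and returns (frames_offered, frames_received), aggregated over all
   # tested egress interfaces and multicast groups.

   def aggregated_multicast_throughput(max_load_fps, send_and_count,
                                       resolution_fps=1.0):
       lo, hi = 0.0, float(max_load_fps)
       best = 0.0
       while hi - lo > resolution_fps:
           load = (lo + hi) / 2.0
           offered, received = send_and_count(load)
           if received == offered:      # zero frame loss at this load
               best, lo = load, load    # remember it and search higher
           else:
               hi = load                # loss observed: search lower
       return best                      # maximum offered load with no loss

The same search structure can be reused for the searches called for in Sections 4.1 and 4.4, provided the traffic mix or encapsulation configuration is held constant across iterations.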
414 Parameters to be measured MUST include the maximum offered load at 415 which no frame loss occurred. Other offered loads MAY be measured 416 for diagnostic purposes. 418 Reporting Format: 420 The following configuration parameters MUST be reflected in the 421 test report: 423 o Frame size(s) 424 o Number of tested egress interfaces on the DUT/SUT 425 o Test duration 426 o IGMP version 427 o Total number of multicast groups 429 The following results MUST be reflected in the test report: 431 o Aggregated Multicast Throughput as defined in RFC2432 432 [Du98] 434 The Aggregated Multicast Throughput results SHOULD be reported in 435 the format of a table with a row for each of the tested frame sizes 436 per the recommendations in section 3.1.3. Each row or iteration 437 SHOULD specify offered load, total number of offered frames and the 438 measured Aggregated Multicast Throughput. 440 4.4. Encapsulation/Decapsulation (Tunneling) Throughput 442 This sub-section provides the description of tests related to the 443 determination of throughput measurements when a DUT/SUT or a set of 444 DUTs are acting as tunnel endpoints. 446 4.4.1. Encapsulation Throughput 448 Objective: 450 To determine the maximum rate at which frames offered to one 451 ingress interface of a DUT/SUT are encapsulated and correctly 452 forwarded on one or more egress interfaces of the DUT/SUT without 453 loss. 455 Procedure: 457 Source DUT/SUT Destination 458 Test Port Test Port(s) 459 +---------+ +-----------+ +---------+ 460 | | | | | | 461 | | | Egress|--(Tunnel)-->| | 462 | | | | | | 463 | |------->|Ingress | | | 464 | | | | | | 465 | | | Egress|--(Tunnel)-->| | 466 | | | | | | 467 +---------+ +-----------+ +---------+ 469 Figure 3 470 --------- 472 Figure 3 shows the setup for testing the encapsulation throughput 473 of the DUT/SUT. One or more tunnels are created between each 474 egress interface of the DUT/SUT and a destination test port. Non- 475 Encapsulated multicast traffic will then be offered by the source 476 test port, encapsulated by the DUT/SUT and forwarded to the 477 destination test port(s). 479 The DUT/SUT SHOULD be configured such that the traffic across each 480 egress interface will consist of either: 482 a) A single tunnel encapsulating one or more multicast address 483 groups OR 484 b) Multiple tunnels, each encapsulating one or more multicast 485 address groups. 487 The number of multicast groups per tunnel MUST be the same when the 488 DUT/SUT is configured in a multiple tunnel configuration. In 489 addition, it is RECOMMENDED to test with the same number of tunnels 490 on each egress interface. All destination test ports MUST join all 491 multicast group addresses offered by the source test port. Each 492 egress interface MUST be configured with the same MTU. 494 Note: when offering large frame sizes, the encapsulation process 495 may require the DUT/SUT to fragment the IP datagrams prior to being 496 forwarded on the egress interface. It is RECOMMENDED to limit the 497 offered frame size such that no fragmentation is required by the 498 DUT/SUT. 500 A search algorithm MUST be utilized to determine the encapsulation 501 throughput as defined in [Du98].
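As a non-normative aid, the sketch below shows one way to pre-screen the offered frame sizes against the egress MTU so that the RECOMMENDED no-fragmentation condition holds. The per-frame encapsulation overhead is an assumption that depends on the tunnel type under test (for example, basic GRE over IPv4 adds roughly 24 octets) and is not specified by this document.

   # Non-normative sketch: screen un-encapsulated frame sizes so that the
   # encapsulated IP datagram still fits the egress MTU (Section 4.4.1).
   # ENCAP_OVERHEAD_OCTETS is an assumed value; substitute the overhead
   # of the tunnel type actually configured on the DUT/SUT.

   ETH_HEADER_AND_FCS = 18        # 14-octet Ethernet header + 4-octet FCS
   ENCAP_OVERHEAD_OCTETS = 24     # e.g. 20-octet outer IPv4 + 4-octet GRE

   def fits_without_fragmentation(frame_size, egress_mtu,
                                  overhead=ENCAP_OVERHEAD_OCTETS):
       ip_datagram = frame_size - ETH_HEADER_AND_FCS
       return ip_datagram + overhead <= egress_mtu

   # Example: of the recommended Ethernet sizes, keep those that need no
   # fragmentation on a 1500-octet egress MTU.
   recommended = [64, 128, 256, 512, 1024, 1280, 1518]
   usable = [s for s in recommended if fits_without_fragmentation(s, 1500)]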
503 Reporting Format: 505 The following configuration parameters MUST be reflected in the 506 test report: 508 o Number of tested egress interfaces on the DUT/SUT 509 o Test duration 510 o IGMP version 511 o Total number of multicast groups 512 o MTU size of DUT/SUT interfaces 513 o Originating un-encapsulated frame size 514 o Number of tunnels per egress interface 515 o Number of multicast groups per tunnel 517 The following results MUST be reflected in the test report: 519 o Measured Encapsulated Throughput as defined in RFC2432 520 [Du98] 521 o Encapsulated frame size 523 The Encapsulated Throughput results SHOULD be reported in the form 524 of a table and specific to this test there SHOULD be rows for each 525 originating un-encapsulated frame size. Each row or iteration 526 SHOULD specify the offered load, encapsulation method, encapsulated 527 frame size, total number of offered frames, and the encapsulation 528 throughput. 530 4.4.2. Decapsulation Throughput 532 Objective: 534 To determine the maximum rate at which frames offered to one 535 ingress interface of a DUT/SUT are decapsulated and correctly 536 forwarded by the DUT/SUT on one or more egress interfaces without 537 loss. 539 Procedure: 541 Source DUT/SUT Destination 542 Test Port Test Port(s) 543 +---------+ +-----------+ +---------+ 544 | | | | | | 545 | | | Egress|------->| | 546 | | | | | | 547 | |--(Tunnel)-->|Ingress | | | 548 | | | | | | 549 | | | Egress|------->| | 550 | | | | | | 551 +---------+ +-----------+ +---------+ 553 Figure 4 554 --------- 556 Figure 4 shows the setup for testing the decapsulation throughput 557 of the DUT/SUT. One or more tunnels are created between the source 558 test port and the DUT/SUT. Encapsulated multicast traffic will 559 then be offered by the source test port, decapsulated by the 560 DUT/SUT and forwarded to the destination test port(s). 562 The DUT/SUT SHOULD be configured such that the traffic across the 563 ingress interface will consist of either: 565 a) A single tunnel encapsulating one or more multicast address 566 groups OR 567 b) Multiple tunnels, each encapsulating one or more multicast 568 address groups. 570 The number of multicast groups per tunnel MUST be the same when the 571 DUT/SUT is configured in a multiple tunnel configuration. All 572 destination test ports MUST join all multicast group addresses 573 offered by the source test port. Each egress interface MUST 574 be configured with the same MTU. 576 A search algorithm MUST be utilized to determine the decapsulation 577 throughput as defined in [Du98]. 579 When making performance comparisons between the encapsulation and 580 decapsulation process of the DUT/SUT, the offered frame sizes 581 SHOULD reflect the encapsulated frame sizes reported in the 582 encapsulation test (See section 4.4.1) in place of those noted in 583 section 3.1.3. 
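When such a comparison is intended, the offered sizes can be derived from the Section 3.1.3 sizes plus the encapsulation overhead observed in the Section 4.4.1 test. The sketch below is a non-normative illustration; the 24-octet overhead is an assumed value for a simple IPv4 GRE tunnel and should be replaced by the overhead of the encapsulation actually tested.

   # Non-normative sketch: derive the encapsulated frame sizes to offer
   # in the decapsulation test (Section 4.4.2) from the un-encapsulated
   # sizes of Section 3.1.3, so results remain comparable with the
   # encapsulation test of Section 4.4.1.

   def encapsulated_frame_size(original_size, encap_overhead=24):
       # The frame grows by the tunnel header(s) inserted in front of the
       # original IP datagram; link-layer header and FCS are unchanged.
       return original_size + encap_overhead

   offered_sizes = [encapsulated_frame_size(s)
                    for s in (64, 128, 256, 512, 1024, 1280)]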
585 Reporting Format: 587 The following configuration parameters MUST be reflected in the 588 test report: 590 o Number of tested egress interfaces on the DUT/SUT 591 o Test duration 592 o IGMP version 593 o Total number of multicast groups 594 o Originating encapsulation format 595 o Originating encapsulated frame size 596 o Number of tunnels 597 o Number of multicast groups per tunnel 599 The following results MUST be reflected in the test report: 601 o Measured Decapsulated Throughput as defined in RFC2432 602 [Du98] 603 o Decapsulated frame size 605 The Decapsulated Throughput results SHOULD be reported in the 606 format of a table and specific to this test there SHOULD be rows 607 for each originating encapsulated frame size. Each row or 608 iteration SHOULD specify the offered load, decapsulated frame size, 609 total number of offered frames and the decapsulation throughput. 611 4.4.3. Re-encapsulation Throughput 613 Objective: 615 To determine the maximum rate at which frames of one encapsulated 616 format offered to one ingress interface of a DUT/SUT are converted 617 to another encapsulated format and correctly forwarded by the 618 DUT/SUT on one or more egress interfaces without loss. 620 Procedure: 622 Source DUT/SUT Destination 623 Test Port Test Port(s) 624 +---------+ +---------+ +---------+ 625 | | | | | | 626 | | | Egress|-(Tunnel)->| | 627 | | | | | | 628 | |-(Tunnel)->|Ingress | | | 629 | | | | | | 630 | | | Egress|-(Tunnel)->| | 631 | | | | | | 632 +---------+ +---------+ +---------+ 634 Figure 5 635 --------- 637 Figure 5 shows the setup for testing the Re-encapsulation 638 throughput of the DUT/SUT. The source test port will offer 639 encapsulated traffic of one type to the DUT/SUT, which has been 640 configured to re-encapsulate the offered frames using a different 641 encapsulation format. The DUT/SUT will then forward the re- 642 encapsulated frames to the destination test port(s). 644 The DUT/SUT SHOULD be configured such that the traffic across the 645 ingress and each egress interface will consist of either: 647 a) A single tunnel encapsulating one or more multicast address 648 groups OR 649 b) Multiple tunnels, each encapsulating one or more multicast 650 address groups. 652 The number of multicast groups per tunnel MUST be the same when the 653 DUT/SUT is configured in a multiple tunnel configuration. In 654 addition, the DUT/SUT SHOULD be configured such that the number of 655 tunnels on the ingress and each egress interface are the same. All 656 destination test ports MUST join all multicast group addresses 657 offered by the source test port. Each egress interface MUST be 658 configured with the same MTU. 660 Note that when offering large frame sizes, the encapsulation 661 process may require the DUT/SUT to fragment the IP datagrams prior 662 to being forwarded on the egress interface. It is RECOMMENDED to 663 limit the offered frame sizes, such that no fragmentation is 664 required by the DUT/SUT. 666 A search algorithm MUST be utilized to determine the re- 667 encapsulation throughput as defined in [Du98].
669 Reporting Format: 671 The following configuration parameters MUST be reflected in the 672 test report: 674 o Number of tested egress interfaces on the DUT/SUT 675 o Test duration 676 o IGMP version 677 o Total number of multicast groups 678 o MTU size of DUT/SUT interfaces 679 o Originating encapsulation format 680 o Originating encapsulated frame size 681 o Number of tunnels per interface 682 o Number of multicast groups per tunnel 684 The following results MUST be reflected in the test report: 686 o Measured Re-encapsulated Throughput as defined in RFC2432 687 [Du98] 688 o Re-encapsulated frame size 690 The Re-encapsulated Throughput results SHOULD be reported in the 691 format of a table and specific to this test there SHOULD be rows 692 for each originating encapsulated frame size. Each row or 693 iteration SHOULD specify the offered load, re-encapsulated frame size, 694 total number of offered frames and the Re-encapsulated Throughput. 696 5. Forwarding Latency 698 This section presents methodologies relating to the 699 characterization of the forwarding latency of a DUT/SUT in a 700 multicast environment. It extends the concept of latency 701 characterization presented in RFC 2544. 703 The offered load accompanying the latency-measured packet can 704 affect the DUT/SUT packet buffering, which may subsequently impact 705 measured packet latency. This SHOULD be a consideration when 706 selecting the intended load for the described methodologies below. 708 RFC 1242 and RFC 2544 draw a distinction between device types: 709 "store and forward" and "bit-forwarding." Each type impacts how 710 latency is collected and subsequently presented. See the related 711 RFCs for more information. 713 5.1. Multicast Latency 715 Objective: 717 To produce a set of multicast latency measurements from a single, 718 multicast ingress interface of a DUT/SUT through multiple, egress 719 multicast interfaces of that same DUT/SUT as provided for by the 720 metric "Multicast Latency" in RFC 2432 [Du98]. 722 The procedures below draw from the collection methodology for 723 latency in RFC 2544 [Br96]. The methodology addresses two 724 topological scenarios: one for a single device (DUT) 725 characterization; a second scenario is presented for multiple device 726 (SUT) characterization. 728 Procedure: 730 If the test trial is to characterize latency across a single Device 731 Under Test (DUT), an example test topology might take the form of 732 Figure 1 in section 3. That is, a single DUT with one ingress 733 interface receiving the multicast test traffic from the frame- 734 transmitting component of the test apparatus and n egress 735 interfaces on the same DUT forwarding the multicast test traffic 736 back to the frame-receiving component of the test apparatus. Note 737 that n reflects the number of TESTED egress interfaces on the DUT 738 actually expected to forward the test traffic (as opposed to 739 configured but untested, non-forwarding interfaces, for example). 741 If the multicast latencies are to be taken across multiple devices 742 forming a System Under Test (SUT), an example test topology might 743 take the form of Figure 2 in section 3. 745 The trial duration SHOULD be 120 seconds to be consistent with RFC 746 2544 [Br96]. The nature of the latency measurement, "store and 747 forward" or "bit forwarding," MUST be associated with the related 748 test trial(s) and disclosed in the results report. 750 A test traffic stream is presented to the DUT.
It is RECOMMENDED to 751 offer traffic at the measured aggregated multicast throughput rate 752 (Section 4.3). At the mid-point of the trial's duration, the test 753 apparatus MUST inject a uniquely identifiable ("tagged") frame into 754 the test traffic frames being presented. This tagged frame will be 755 the basis for the latency measurements. By "uniquely identifiable," 756 it is meant that the test apparatus MUST be able to discern the 757 "tagged" frame from the other frames comprising the test traffic 758 set. A frame generation timestamp, Timestamp A, reflecting the 759 completion of the transmission of the tagged frame by the test 760 apparatus, MUST be determined. 762 The test apparatus will monitor frames from the DUT's tested egress 763 interface(s) for the expected tagged frame(s) and MUST record the 764 time of the successful detection of a tagged frame from a tested 765 egress interface with a timestamp, Timestamp B. A set of Timestamp 766 B values MUST be collected for all tested egress interfaces of the 767 DUT/SUT. See RFC 1242 [Br91] for additional discussion regarding 768 store and forward devices and bit forwarding devices. 770 A trial MUST be considered INVALID should any of the following 771 conditions occur in the collection of the trial data: 773 o Unexpected differences between Intended Load and Offered 774 Load or unexpected differences between Offered Load and the 775 resulting Forwarding Rate(s) on the DUT/SUT egress ports. 776 o Forwarded test frames improperly formed or frame header 777 fields improperly manipulated. 778 o Failure to forward required tagged frame(s) on all expected 779 egress interfaces. 780 o Reception of tagged frames by the test apparatus more than 781 5 seconds after the cessation of test traffic by the source 782 test port. 784 The set of latency measurements, M, composed from each latency 785 measurement taken from every ingress/tested egress interface 786 pairing MUST be determined from a valid test trial: 788 M = { (Timestamp B(E0) - Timestamp A), 789 (Timestamp B(E1) - Timestamp A), ... 790 (Timestamp B(En) - Timestamp A) } 792 where (E0 ... En) represents the range of all tested egress 793 interfaces and Timestamp B represents a tagged frame detection 794 event for a given DUT/SUT tested egress interface. 796 A more continuous profile MAY be built from a series of individual 797 measurements. 799 Reporting Format: 801 The following configuration parameters MUST be reflected in the 802 test report: 804 o Frame size(s) 805 o Number of tested egress interfaces on the DUT/SUT 806 o Test duration 807 o IGMP version 808 o Offered load 809 o Total number of multicast groups 811 The following results MUST be reflected in the test report: 813 o The set of all latencies with respective time units related 814 to the tested ingress and each tested egress DUT/SUT 815 interface. 817 The time units of the presented latency MUST be uniform and with 818 sufficient precision for the medium or media being tested. 820 The results MAY be offered in a tabular format and should preserve 821 the relationship of latency to ingress/egress interface for each 822 multicast group to assist in trending across multiple trials. 824 5.2. Min/Max Multicast Latency 826 Objective: 828 To determine the difference between the maximum latency measurement 829 and the minimum latency measurement from a collected set of 830 latencies produced by the Multicast Latency benchmark. 
832 Procedure: 834 Collect a set of multicast latency measurements over a single test 835 duration, as prescribed in section 5.1. This will produce a set of 836 multicast latencies, M, where M is composed of individual 837 forwarding latencies between DUT frame ingress and DUT frame egress 838 port pairs. E.g.: 840 M = {L(I,E1),L(I,E2), ..., L(I,En)} 842 where L is the latency between a tested ingress interface, I, of 843 the DUT, and Ex, a specific, tested multicast egress interface of 844 the DUT. E1 through En are unique egress interfaces on the DUT. 846 From the collected multicast latency measurements in set M, 847 identify MAX(M), where MAX is a function that yields the largest 848 latency value from set M. 850 Identify MIN(M), where MIN is a function that yields the smallest 851 latency value from set M. 853 The Max/Min value is determined from the following formula: 855 Result = MAX(M) - MIN(M) 857 Reporting Format: 859 The following configuration parameters MUST be reflected in the 860 test report: 862 o Frame size(s) 863 o Number of tested egress interfaces on the DUT/SUT 864 o Test duration 865 o IGMP version 866 o Offered load 867 o Total number of multicast groups 869 The following results MUST be reflected in the test report: 871 o The Max/Min value 873 The following results SHOULD be reflected in the test report: 875 o The set of all latencies with respective time units related 876 to the tested ingress and each tested egress DUT/SUT 877 interface. 879 The time units of the presented latency MUST be uniform and with 880 sufficient precision for the medium or media being tested. 882 The results MAY be offered in a tabular format and should preserve 883 the relationship of latency to ingress/egress interface for each 884 multicast group.
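As a non-normative illustration of Sections 5.1 and 5.2, the sketch below builds the latency set M from the tagged-frame timestamps and derives the Max/Min value; the variable names and the microsecond unit are assumptions of the example, not part of the methodology.

   # Non-normative sketch: build the latency set M of Section 5.1 and the
   # Max/Min value of Section 5.2.  timestamp_a is the transmit timestamp
   # of the tagged frame (Timestamp A); timestamps_b maps each tested
   # egress interface to the timestamp of the tagged frame's detection on
   # that interface (Timestamp B).  All values share one unit, assumed
   # here to be microseconds.

   def latency_set(timestamp_a, timestamps_b):
       return {egress: b - timestamp_a for egress, b in timestamps_b.items()}

   def max_min_value(m):
       values = list(m.values())
       return max(values) - min(values)       # Result = MAX(M) - MIN(M)

   # Example with hypothetical measurements (microseconds):
   m = latency_set(1000000, {"E0": 1000083, "E1": 1000095, "E2": 1000110})
   spread = max_min_value(m)                  # 27 microseconds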
886 6. Overhead 888 This section presents methodology relating to the characterization 889 of the overhead delays associated with explicit operations found in 890 multicast environments. 892 6.1. Group Join Delay 894 Objective: 896 To determine the time duration it takes a DUT/SUT to start 897 forwarding multicast frames from the time a successful IGMP group 898 membership report has been issued to the DUT/SUT. 900 Procedure: 902 The Multicast Group Join Delay measurement may be influenced by the 903 state of the Multicast Forwarding Database of the DUT/SUT. 904 The states of the MFDB may be described as follows: 906 o State 0, where the MFDB does not contain the specified 907 multicast group address. In this state, the delay measurement 908 includes the time the DUT/SUT requires to add the address to 909 the MFDB and begin forwarding. Delay measured from State 0 910 provides information about how the DUT/SUT is able to add new 911 addresses into the MFDB. 913 o State 1, where the MFDB does contain the specified multicast 914 group address. In this state, the delay measurement includes 915 the time the DUT/SUT requires to update the MFDB with the 916 newly joined node and begin forwarding to the new node 917 plus packet replication time. Delay measured from State 1 918 provides information about how well the DUT/SUT is able to 919 update the MFDB for new nodes while transmitting packets to 920 other nodes for the same IP multicast address. Examples 921 include adding a new user to an event that is being promoted 922 via multicast packets. 924 The methodology for the Multicast Group Join Delay measurement 925 provides two alternate methods, based on the state of the MFDB, to 926 measure the delay metric. The methods MAY be used independently or 927 in conjunction to provide meaningful insight into the DUT/SUT's 928 ability to manage the MFDB. 930 Users MAY elect to use either method to determine the Multicast 931 Group Join Delay; however, the collection method MUST be specified 932 as part of the reporting format. 934 In order to minimize the variation in delay calculations as well as 935 minimize burden on the DUT/SUT, the test SHOULD be performed with 936 one multicast group. In addition, all destination test ports MUST 937 join the specified multicast group offered to the ingress interface 938 of the DUT/SUT. 940 Method A: 942 Method A assumes that the Multicast Forwarding Database of 943 the DUT/SUT does not contain or has not learned the specified 944 multicast group address; specifically, the MFDB MUST be in State 0. 945 In this scenario, the metric represents the time the DUT/SUT takes 946 to add the multicast address to the MFDB and begin forwarding the 947 multicast packet. Only one ingress and one egress MUST be used to 948 determine this metric. 950 Prior to sending any IGMP Group Membership Reports used to 951 calculate the Multicast Group Join Delay, it MUST be verified 952 through externally observable means that the destination test port 953 is not currently a member of the specified multicast group. In 954 addition, it MUST be verified through externally observable means 955 that the MFDB of the DUT/SUT does not contain the specified 956 multicast address. 958 Method B: 960 Method B assumes that the MFDB of the DUT/SUT does contain the 961 specified multicast group address; specifically, the MFDB MUST be 962 in State 1. In this scenario, the metric represents the time the 963 DUT/SUT takes to update the MFDB with the additional nodes and 964 their corresponding interfaces and to begin forwarding the 965 multicast packet. One or more egress ports MAY be used to 966 determine this metric. 968 Prior to sending any IGMP Group Membership Reports used to 969 calculate the Group Join Delay, it MUST be verified through 970 externally observable means that the MFDB contains the specified 971 multicast group address. A single un-instrumented test port MUST 972 be used to join the specified multicast group address prior to 973 sending any test traffic. This port will be used only for ensuring 974 that the MFDB has been populated with the specified multicast group 975 address and can successfully forward traffic to the un-instrumented 976 port. 978 Join Delay Calculation 980 Once verification is complete, multicast traffic for the specified 981 multicast group address MUST be offered to the ingress interface 982 prior to the DUT/SUT receiving any IGMP Group Membership Report 983 messages. It is RECOMMENDED to offer traffic at the measured 984 aggregated multicast throughput rate (Section 4.3). 986 After the multicast traffic has been started, the destination test 987 port (See Figure 1) MUST send one IGMP Group Membership Report for 988 the specified multicast group. 990 The join delay is the difference in time between when the IGMP Group 991 Membership Report is sent (timestamp A) and when the first frame of the 992 multicast group is forwarded to a receiving egress interface 993 (timestamp B).
995 Group Join delay time = timestamp B - timestamp A 997 Timestamp A MUST be the time the last bit of the IGMP group 998 membership report is sent from the destination test port; timestamp 999 B MUST be the time the first bit of the first valid multicast frame 1000 is forwarded on the egress interface of the DUT/SUT. 1002 Reporting Format: 1004 The following configuration parameters MUST be reflected in the 1005 test report: 1007 o Frame size(s) 1008 o Number of tested egress interfaces on the DUT/SUT 1009 o IGMP version 1010 o Total number of multicast groups 1011 o Offered load to ingress interface 1012 o Method used to measure the join delay metric 1014 The following results MUST be reflected in the test report: 1016 o The group join delay time in microseconds per egress 1017 interface(s) 1019 The Group Join Delay results for each test MAY be reported in the 1020 form of a table, with a row for each of the tested frame sizes per 1021 the recommendations in section 3.1.3. Each row or iteration MAY 1022 specify the group join delay time per egress interface for that 1023 iteration. 1025 6.2. Group Leave Delay 1027 Objective: 1029 To determine the time duration it takes a DUT/SUT to cease 1030 forwarding multicast frames after a corresponding IGMP Leave Group 1031 message has been successfully offered to the DUT/SUT. 1033 Procedure: 1035 In order to minimize the variation in delay calculations as well as 1036 minimize burden on the DUT/SUT, the test SHOULD be performed with 1037 one multicast group. In addition, all destination test ports MUST 1038 join the specified multicast group offered to the ingress interface 1039 of the DUT/SUT. 1041 Prior to sending any IGMP Leave Group messages used to calculate 1042 the group leave delay, it MUST be verified through externally 1043 observable means that the destination test ports are currently 1044 members of the specified multicast group. If any of the egress 1045 interfaces do not forward validation multicast frames, then the test 1046 is invalid. 1048 Once verification is complete, multicast traffic for the specified 1049 multicast group address MUST be offered to the ingress interface 1050 prior to receipt or processing of any IGMP Leave Group messages. 1051 It is RECOMMENDED to offer traffic at the measured aggregated 1052 multicast throughput rate (Section 4.3). 1054 After the multicast traffic has been started, each destination test 1055 port (See Figure 1) MUST send one IGMP Leave Group message for the 1056 specified multicast group. 1058 The leave delay is the difference in time between when the IGMP Leave 1059 Group message is sent (timestamp A) and when the last frame of the 1060 multicast group is forwarded to a receiving egress interface 1061 (timestamp B). 1063 Group Leave delay time = timestamp B - timestamp A 1065 Timestamp A MUST be the time the last bit of the IGMP Leave Group 1066 message is sent from the destination test port; timestamp B MUST be 1067 the time the last bit of the last valid multicast frame is 1068 forwarded on the egress interface of the DUT/SUT.
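The sketch below is a non-normative illustration of the control messages and the delay arithmetic used in Sections 6.1 and 6.2. It builds the IGMPv2 Membership Report and Leave Group messages a destination test port would send; IP and link-layer framing (Router Alert option, TTL of 1, destination addressing) is left to the test apparatus, and the example group address is arbitrary.

   # Non-normative sketch for Sections 6.1 and 6.2: build the IGMPv2
   # messages sent by a destination test port and compute the delay once
   # the timestamps have been captured (microseconds assumed).

   import socket
   import struct

   def inet_checksum(data):
       # 16-bit one's complement of the one's complement sum (RFC 2236).
       if len(data) % 2:
           data += b"\x00"
       total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
       total = (total >> 16) + (total & 0xFFFF)
       total += total >> 16
       return ~total & 0xFFFF

   def igmpv2_message(msg_type, group, max_resp_time=0):
       # msg_type 0x16 = Version 2 Membership Report, 0x17 = Leave Group.
       addr = socket.inet_aton(group)
       body = struct.pack("!BBH4s", msg_type, max_resp_time, 0, addr)
       return struct.pack("!BBH4s", msg_type, max_resp_time,
                          inet_checksum(body), addr)

   report = igmpv2_message(0x16, "225.0.0.1")    # join (Section 6.1)
   leave = igmpv2_message(0x17, "225.0.0.1")     # leave (Section 6.2)

   def group_delay(timestamp_a, timestamp_b):
       # Join or leave delay per the formulas above: timestamp B minus
       # timestamp A, reported per tested egress interface.
       return timestamp_b - timestamp_a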
1070 Reporting Format: 1072 The following configuration parameters MUST be reflected in the 1073 test report: 1075 o Frame size(s) 1076 o Number of tested egress interfaces on the DUT/SUT 1077 o IGMP version 1078 o Total number of multicast groups 1079 o Offered load to ingress interface 1081 The following results MUST be reflected in the test report: 1083 o The group leave delay time in microseconds per egress 1084 interface(s) 1086 The Group Leave Delay results for each test MAY be reported in the 1087 form of a table, with a row for each of the tested frame sizes per 1088 the recommendations in section 3.1.3. Each row or iteration MAY 1089 specify the group leave delay time per egress interface for that 1090 iteration. 1092 7. Capacity 1094 This section offers a procedure relating to the identification of 1095 multicast group limits of a DUT/SUT. 1097 7.1. Multicast Group Capacity 1099 Objective: 1101 To determine the maximum number of multicast groups a DUT/SUT can 1102 support while maintaining the ability to forward multicast frames 1103 to all multicast groups registered to that DUT/SUT. 1105 Procedure: 1107 One or more destination test ports of the DUT/SUT will join an initial 1108 number of multicast groups. 1110 After a minimum delay as measured by section 6.1, the source test 1111 ports MUST transmit to each group at a specified offered load. 1113 If at least one frame for each multicast group is forwarded 1114 properly by the DUT/SUT on each participating egress interface, the 1115 iteration is said to pass at the current capacity. 1117 For each successful iteration, each destination test port will join 1118 an additional user-defined number of multicast groups and the test 1119 repeats. The test stops iterating when one or more of the egress 1120 interfaces fails to forward traffic on one or more of the 1121 configured multicast groups. 1123 Once the iteration fails, the last successful iteration is the 1124 stated Maximum Group Capacity result. 1126 Reporting Format: 1128 The following configuration parameters MUST be reflected in the 1129 test report: 1131 o Frame size(s) 1132 o Number of tested egress interfaces on the DUT/SUT 1133 o IGMP version 1134 o Offered load 1136 The following results MUST be reflected in the test report: 1138 o The total number of multicast group addresses that were 1139 successfully forwarded through the DUT/SUT 1141 The Multicast Group Capacity results for each test SHOULD be 1142 reported in the form of a table, with a row for each of the tested 1143 frame sizes per the recommendations in section 3.1.3. Each row or 1144 iteration SHOULD specify the number of multicast groups joined per 1145 destination interface, number of frames transmitted and number of 1146 frames received for that iteration.
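As a non-normative illustration of the capacity procedure above, the sketch below iterates the group count until forwarding fails; the join_groups() and offer_traffic_and_verify() helpers and the step size are assumptions of the example, standing in for test-apparatus-specific operations.

   # Non-normative sketch of the Multicast Group Capacity iteration
   # (Section 7.1).  join_groups(n) joins n groups on every destination
   # test port and waits at least the Group Join Delay of Section 6.1;
   # offer_traffic_and_verify(n) offers traffic to all n groups at the
   # specified load and returns True only if at least one frame per group
   # was forwarded on every participating egress interface.

   def multicast_group_capacity(join_groups, offer_traffic_and_verify,
                                initial_groups=10, step=10):
       capacity = 0
       groups = initial_groups
       while True:
           join_groups(groups)
           if not offer_traffic_and_verify(groups):
               return capacity          # last successful iteration
           capacity = groups            # this iteration passed
           groups += step               # join an additional set and repeat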
1148 8. Interaction 1150 Network forwarding devices are generally required to provide more 1151 functionality than just the forwarding of traffic. Moreover, 1152 network-forwarding devices may be asked to provide those functions 1153 in a variety of environments. This section offers procedures to 1154 assist in the characterization of DUT/SUT behavior in consideration 1155 of potentially interacting factors. 1157 8.1. Forwarding Burdened Multicast Latency 1159 Objective: 1161 To produce a set of multicast latency measurements from a single 1162 multicast ingress interface of a DUT/SUT through multiple egress 1163 multicast interfaces of that same DUT/SUT as provided for by the 1164 metric "Multicast Latency" in RFC 2432 [Du98] while forwarding 1165 meshed unicast traffic. 1167 Procedure: 1169 The Multicast Latency metrics can be influenced by forcing the 1170 DUT/SUT to perform extra processing of packets while multicast 1171 class traffic is being forwarded for latency measurements. 1173 The Forwarding Burdened Multicast Latency test MUST follow the 1174 described setup for the Multicast Latency test in Section 5.1. In 1175 addition, another set of test ports MUST be used to burden the 1176 DUT/SUT (burdening ports). The burdening ports will be used to 1177 transmit unicast class traffic to the DUT/SUT in a fully meshed 1178 traffic distribution as described in RFC 2285 [Ma98]. The DUT/SUT 1179 MUST learn the appropriate unicast addresses, and this learning MUST 1180 be verified through some externally observable method. 1182 Perform a baseline measurement of Multicast Latency as described in 1183 Section 5.1. After the baseline measurement is obtained, start 1184 transmitting the unicast class traffic at a user-specified offered 1185 load on the set of burdening ports and rerun the Multicast Latency 1186 test. The offered load to the ingress port MUST be the same as was 1187 used in the baseline measurement. 1189 Reporting Format: 1191 Similar to Section 5.1, the following configuration parameters MUST 1192 be reflected in the test report: 1194 o Frame size(s) 1195 o Number of tested egress interfaces on the DUT/SUT 1196 o Test duration 1197 o IGMP version 1198 o Offered load to ingress interface 1199 o Total number of multicast groups 1200 o Offered load to burdening ports 1201 o Total number of burdening ports 1203 The following results MUST be reflected in the test report: 1205 o The set of all latencies related to the tested ingress and 1206 each tested egress DUT/SUT interface for both the baseline 1207 and burdened response. 1209 The time units of the presented latency MUST be uniform and with 1210 sufficient precision for the medium or media being tested. 1212 The latency results for each test SHOULD be reported in the form of 1213 a table, with a row for each of the tested frame sizes per the 1214 recommended frame sizes in section 3.1.3, and SHOULD preserve the 1215 relationship of latency to ingress/egress interface(s) to assist in 1216 trending across multiple trials. 1218 8.2. Forwarding Burdened Group Join Delay 1220 Objective: 1222 To determine the time duration it takes a DUT/SUT to start 1223 forwarding multicast frames from the time a successful IGMP Group 1224 Membership Report has been issued to the DUT/SUT while 1225 forwarding meshed unicast traffic. 1227 Procedure: 1229 The Forwarding Burdened Group Join Delay test MUST follow the 1230 described setup for the Group Join Delay test in Section 6.1. In 1231 addition, another set of test ports MUST be used to burden the 1232 DUT/SUT (burdening ports). The burdening ports will be used to 1233 transmit unicast class traffic to the DUT/SUT in a fully meshed 1234 traffic pattern as described in RFC 2285 [Ma98]. The DUT/SUT MUST 1235 learn the appropriate unicast addresses, and this learning MUST be 1236 verified through some externally observable method. 1238 Perform a baseline measurement of Group Join Delay as described in 1239 Section 6.1.
After the baseline measurement is obtained, start 1240 transmitting the unicast class traffic at a user-specified offered 1241 load on the set of burdening ports and rerun the Group Join Delay 1242 test. The offered load to the ingress port MUST be the same as was 1243 used in the baseline measurement. 1245 Reporting Format: 1247 Similar to Section 6.1, the following configuration parameters MUST 1248 be reflected in the test report: 1250 o Frame size(s) 1251 o Number of tested egress interfaces on the DUT/SUT 1252 o IGMP version 1253 o Offered load to ingress interface 1254 o Total number of multicast groups 1255 o Offered load to burdening ports 1256 o Total number of burdening ports 1257 o Method used to measure the join delay metric 1259 The following results MUST be reflected in the test report: 1261 o The group join delay time in microseconds per egress 1262 interface(s) for both the baseline and burdened response. 1264 The Group Join Delay results for each test MAY be reported in the 1265 form of a table, with a row for each of the tested frame sizes per 1266 the recommendations in section 3.1.3. Each row or iteration MAY 1267 specify the group join delay time per egress interface, number of 1268 frames transmitted and number of frames received for that 1269 iteration. 1271 9. Security Considerations 1273 As this document is solely for the purpose of providing metric 1274 methodology and describes neither a protocol nor a protocol's 1275 implementation, there are no security considerations associated 1276 with this document. 1278 10. Acknowledgements 1280 The Benchmarking Methodology Working Group of the IETF and 1281 particularly Kevin Dubray, Juniper Networks, are to be thanked for 1282 the many suggestions they collectively made to help complete this 1283 document. 1285 11. Contributions 1287 The authors would like to acknowledge the following individuals for 1288 their help and participation in the compilation of this document: 1289 Hardev Soor, Ixia, and Ralph Daniels, Spirent Communications, both 1290 of whom made significant contributions to the earlier versions of this 1291 document. In addition, the authors would like to acknowledge the 1292 members of the task team who helped bring this document to 1293 fruition: Michele Bustos, Tony De La Rosa, David Newman and Jerry 1294 Perser. 1296 12. References 1298 Normative References 1300 [Br91] Bradner, S., "Benchmarking Terminology for Network 1301 Interconnection Devices", RFC 1242, July 1991. 1303 [Br96] Bradner, S., and J. McQuaid, "Benchmarking Methodology for 1304 Network Interconnect Devices", RFC 2544, March 1999. 1306 [Br97] Bradner, S., "Key words for use in RFCs to Indicate Requirement 1307 Levels", RFC 2119, March 1997. 1309 [Du98] Dubray, K., "Terminology for IP Multicast Benchmarking", RFC 1310 2432, October 1998. 1312 [Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching 1313 Devices", RFC 2285, February 1998. 1315 Informative References 1317 [Ca02] Cain, B., et al., "Internet Group Management Protocol, Version 1318 3", RFC 3376, October 2002. 1320 [De89] Deering, S., "Host Extensions for IP Multicasting", STD 5, RFC 1321 1112, August 1989. 1323 [Fe97] Fenner, W., "Internet Group Management Protocol, Version 2", 1324 RFC 2236, November 1997. 1326 [Hu95] Huitema, C., "Routing in the Internet", Prentice-Hall, 1995. 1328 [Ka98] Kosiur, D., "IP Multicasting: the Complete Guide to 1329 Interactive Corporate Networks", John Wiley & Sons, Inc, 1998. 1331 [Mt98] Maufer, T., "Deploying IP Multicast in the Enterprise",
1332 Prentice-Hall, 1998. 1334 13. Author's Addresses 1336 Debra Stopp 1337 Ixia 1338 26601 W. Agoura Rd. 1339 Calabasas, CA 91302 1340 USA 1342 Phone: + 1 818 871 1800 1343 EMail: debby@ixiacom.com 1345 Brooks Hickman 1346 Spirent Communications 1347 26750 Agoura Rd. 1348 Calabasas, CA 91302 1349 USA 1351 Phone: + 1 818 676 2412 1352 EMail: brooks.hickman@spirentcom.com 1354 14. Full Copyright Statement 1356 Copyright (C) The Internet Society (2003). All Rights Reserved. 1357 This document and translations of it may be copied and furnished to 1358 others, and derivative works that comment on or otherwise explain 1359 it or assist in its implementation may be prepared, copied, 1360 published and distributed, in whole or in part, without restriction 1361 of any kind, provided that the above copyright notice and this 1362 paragraph are included on all such copies and derivative works. 1363 However, this document itself may not be modified in any way, such 1364 as by removing the copyright notice or references to the Internet 1365 Society or other Internet organizations, except as needed for the 1366 purpose of developing Internet standards in which case the 1367 procedures for copyrights defined in the Internet Standards process 1368 must be followed, or as required to translate it into languages other than English.