Network Working Group                                     Debra Stopp
INTERNET-DRAFT                                                   Ixia
Expires in: August 2003                                Brooks Hickman
                                               Spirent Communications
                                                        February 2003

             Methodology for IP Multicast Benchmarking

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2003).  All Rights Reserved.

Abstract

   The purpose of this document is to describe methodology specific
   to the benchmarking of multicast IP forwarding devices.  It builds
   upon the tenets set forth in RFC 2544, RFC 2432 and other IETF
   Benchmarking Methodology Working Group (BMWG) efforts.  This
   document seeks to extend these efforts to the multicast paradigm.

   The BMWG produces two major classes of documents: Benchmarking
   Terminology documents and Benchmarking Methodology documents.  The
   Terminology documents present the benchmarks and other related
   terms.
   The Methodology documents define the procedures required to
   collect the benchmarks cited in the corresponding Terminology
   documents.

Table of Contents

   1.  INTRODUCTION
   2.  KEY WORDS TO REFLECT REQUIREMENTS
   3.  TEST SET UP
   3.1.  Test Considerations
   3.1.1.  IGMP Support
   3.1.2.  Group Addresses
   3.1.3.  Frame Sizes
   3.1.4.  TTL
   3.1.5.  Trial Duration
   4.  FORWARDING AND THROUGHPUT
   4.1.  Mixed Class Throughput
   4.2.  Scaled Group Forwarding Matrix
   4.3.  Aggregated Multicast Throughput
   4.4.  Encapsulation/Decapsulation (Tunneling) Throughput
   4.4.1.  Encapsulation Throughput
   4.4.2.  Decapsulation Throughput
   4.4.3.  Re-encapsulation Throughput
   5.  FORWARDING LATENCY
   5.1.  Multicast Latency
   5.2.  Min/Max Multicast Latency
   6.  OVERHEAD
   6.1.  Group Join Delay
   6.2.  Group Leave Delay
   7.  CAPACITY
   7.1.  Multicast Group Capacity
   8.  INTERACTION
   8.1.  Forwarding Burdened Multicast Latency
   8.2.  Forwarding Burdened Group Join Delay
   9.  SECURITY CONSIDERATIONS
   10.  ACKNOWLEDGEMENTS
   11.  CONTRIBUTIONS
   12.  REFERENCES
   13.  AUTHORS' ADDRESSES
   14.  FULL COPYRIGHT STATEMENT

1. Introduction

   This document defines tests for measuring and reporting the
   forwarding, latency and IGMP group membership characteristics of
   devices that support IP multicast routing protocols.  The results
   of these tests will provide the user with meaningful data on
   multicast performance.

   A previous document, "Terminology for IP Multicast Benchmarking"
   (RFC 2432), defined many of the terms that are used in this
   document.  The terminology document should be consulted before
   attempting to make use of this document.

   This methodology will focus on one source to many destinations,
   although many of the tests described may be extended to use
   multiple source to multiple destination topologies.

2. Key Words to Reflect Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119.
   RFC 2119 defines the use of these key words to help make the
   intent of standards track documents as clear as possible.  While
   this document uses these keywords, this document is not a
   standards track document.

3. Test set up

   The set of methodologies presented in this document is for single
   ingress, multiple egress scenarios as exemplified by Figures 1 and
   2.  Methodologies for multiple ingress and multiple egress
   scenarios are beyond the scope of this document.

   Figure 1 shows a typical setup for an IP multicast test, with one
   source to multiple destinations.

                +------------+        +--------------+
                |            |        | destination  |
   +--------+   |   Egress(-)-------->|     test     |
   | source |   |            |        |   port(E1)   |
   |  test  |-->(|)Ingress   |        +--------------+
   |  port  |   |            |        +--------------+
   +--------+   |   Egress(-)-------->| destination  |
                |            |        |     test     |
                |            |        |   port(E2)   |
                |    DUT     |        +--------------+
                |            |             . . .
                |            |        +--------------+
                |            |        | destination  |
                |   Egress(-)-------->|     test     |
                |            |        |   port(En)   |
                +------------+        +--------------+

                              Figure 1
                              ---------

   If the multicast metrics are to be taken across multiple devices
   forming a System Under Test (SUT), then test frames are offered to
   a single ingress interface on a device of the SUT, subsequently
   forwarded across the SUT topology, and finally forwarded to the
   test apparatus' frame-receiving components by the test egress
   interface(s) of devices in the SUT.  Figure 2 offers an example
   SUT test topology.  If a SUT is tested, the test topology and all
   relevant configuration details MUST be disclosed with the
   corresponding test results.

              *-----------------------------------------*
              |                                         |
   +--------+ |                +----------------+       | +--------+
   |        | | +------------+ |DUT B Egress E0(-)------|>|        |
   |        | | |DUT A       |>|                |       | |        |
   | Test   | | |            | |       Egress E1(-)-----|>| Test   |
   | App.   |-|>(-)Ingress, I| +----------------+       | | App.   |
   | Traffic| | |            | +----------------+       | | Traffic|
   | Src.   | | |            |>|DUT C Egress E2(-)------|>| Dest.  |
   |        | | +------------+ |                |       | |        |
   |        | |                |       Egress En(-)-----|>|        |
   +--------+ |                +----------------+       | +--------+
              |                                         |
              *------------------SUT--------------------*

                              Figure 2
                              ---------

   Generally, the destination test ports first join the desired
   number of multicast groups by sending IGMP Group Report messages
   to the DUT/SUT.  To verify that all destination test ports
   successfully joined the appropriate groups, the source port MUST
   transmit IP multicast frames destined for these groups.  The
   destination test ports MAY send IGMP Leave Group messages after
   the transmission of IP multicast frames to clear the IGMP table of
   the DUT/SUT.

   In addition, test equipment MUST validate the correct and proper
   forwarding actions of the devices it tests in order to ensure the
   receipt of the frames that are involved in the test.

3.1. Test Considerations

   The methodology assumes a uniform medium topology.  Issues
   regarding mixed transmission media, such as speed mismatch, header
   differences, etc., are not specifically addressed.  Flow control,
   QoS and other non-essential traffic or traffic-affecting
   mechanisms affecting the variable under test MUST be disabled.
   Modifications to the collection procedures might need to be made
   to accommodate the transmission media actually tested.  These
   accommodations MUST be presented with the test results.

   An actual flow of test traffic may be required to prime related
   mechanisms (e.g., process RPF events, build device caches, etc.)
   to optimally forward subsequent traffic.  Therefore, before an
   initial, measured forwarding test trial, the test apparatus MUST
   generate test traffic to the DUT/SUT utilizing the same addressing
   characteristics that will subsequently be used to measure the
   DUT/SUT response.  The test monitor should ensure the correct
   forwarding of traffic by the DUT/SUT.
   The priming action need only be repeated to keep the associated
   information current.

3.1.1. IGMP Support

   All of the ingress and egress interfaces MAY support any version
   of IGMP.  The IGMP version on the ingress interface MUST be the
   same version of IGMP that is being tested on the egress
   interfaces.

   Each of the ingress and egress interfaces SHOULD be able to
   respond to IGMP queries during the test.

   Each of the ingress and egress interfaces SHOULD also send IGMP
   Leave Group messages (when running IGMP version 2 or later) after
   each test.

3.1.2. Group Addresses

   It is intended that the collection of benchmarks prescribed in
   this document be executed in an isolated lab environment.  That is
   to say, the test traffic offered to the tested devices MUST NOT
   traverse a live internet, intranet, or other production network.

   Assuming the above, there is no restriction on the use of
   multicast addresses to compose the test traffic other than those
   assignments imposed by IANA.  The IANA assignments MUST be
   observed for operational consistency.  For multicast address
   assignments see:

   http://www.iana.org/assignments/multicast-addresses

   Address selection does not need to be restricted to
   Administratively Scoped IP Multicast addresses.

3.1.3. Frame Sizes

   Each test SHOULD be run with different multicast frame sizes.  For
   Ethernet, the recommended sizes are 64, 128, 256, 512, 1024, 1280,
   and 1518 byte frames.

   Other link layer technologies MAY be used.  The minimum and
   maximum frame lengths of the link layer technology in use SHOULD
   be tested.

   When testing with different frame sizes, the DUT/SUT configuration
   MUST remain the same.

3.1.4. TTL

   The data plane test traffic should have a TTL value large enough
   to traverse the DUT/SUT.

   The TTL in IGMP control plane messages is in compliance with the
   version of IGMP in use.

3.1.5. Trial Duration

   The duration of the test portion of each trial SHOULD be at least
   30 seconds.  This parameter MUST be included as part of the
   results reporting for each methodology.

4. Forwarding and Throughput

   This section contains the description of the tests that are
   related to the characterization of the frame forwarding of a
   DUT/SUT in a multicast environment.  Some metrics extend the
   concept of throughput presented in RFC 1242.  Forwarding Rate is
   cited in RFC 2285.

4.1. Mixed Class Throughput

   Objective:

   To determine the throughput of a DUT/SUT when both unicast class
   frames and multicast class frames are offered simultaneously to a
   fixed number of interfaces as defined in RFC 2432.

   Procedure:

   Multicast and unicast traffic are mixed together in the same
   aggregated traffic stream in order to simulate the non-homogenous
   networking environment.

   The following events MUST occur before offering test traffic:

   o  All DUT/SUT egress interfaces configured to receive multicast
      traffic MUST join all configured multicast groups;
   o  The DUT/SUT MUST learn the appropriate unicast addresses; and
   o  Group membership and unicast address learning MUST be verified
      through some externally observable method.

   The intended load [Ma98] SHOULD be configured as alternating
   multicast frames and unicast frames offered to a single ingress
   interface in a 50-50 ratio.  The unicast frames MUST be configured
   to transmit in a round-robin fashion to all of the egress
   interfaces.  The multicast frames MUST be configured to transmit
   to all of the egress interfaces.

   Mixed class throughput measurement is defined in RFC 2432 [Du98].
   A search algorithm MUST be utilized to determine the throughput
   for both unicast class and multicast class traffic in a mixed
   class environment.
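   The alternating mixed-class stream described above can be sketched
   as follows.  This is a minimal illustration, not a normative
   procedure: the frame tuples, port names and group addresses are
   hypothetical placeholders for whatever representation a real
   traffic generator uses.

```python
# Sketch of the mixed-class offered load: alternating multicast and
# unicast frames in a 50-50 ratio, with unicast destinations rotating
# round-robin over the egress interfaces.  All names are illustrative.
from itertools import cycle

def mixed_class_stream(n_frames, egress_ports, group_addrs):
    """Yield (traffic_class, destination) tuples for the offered load."""
    unicast_dst = cycle(egress_ports)    # round-robin over egress interfaces
    multicast_dst = cycle(group_addrs)   # each group is exercised in turn
    for i in range(n_frames):
        if i % 2 == 0:                   # even slots carry multicast class
            yield ("multicast", next(multicast_dst))
        else:                            # odd slots carry unicast class
            yield ("unicast", next(unicast_dst))

stream = list(mixed_class_stream(6, ["E1", "E2"],
                                 ["224.0.2.1", "224.0.2.2"]))
```

   With six frames, two egress ports and two groups, the stream
   alternates class on every frame and keeps the 50-50 ratio exactly.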
   Reporting Format:

   The following configuration parameters MUST be reflected in the
   results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups
   o  Traffic distribution for unicast and multicast traffic classes
   o  The ratio of multicast to unicast traffic, which MUST be
      declared

   The following results MUST be reflected in the results specific to
   this methodology:

   o  Mixed Class Throughput as defined in RFC 2432 [Du98],
      including: Throughput per unicast and multicast traffic
      classes.

   The Mixed Class Throughput results for each test SHOULD be
   reported in the form of a table with a row for each of the tested
   frame sizes per the recommendations in section 3.1.3.  Each row
   SHOULD specify the intended load, number of multicast frames
   offered, number of unicast frames offered and measured throughput
   per class.

4.2. Scaled Group Forwarding Matrix

   Objective:

   To determine Forwarding Rate as a function of tested multicast
   groups for a fixed number of tested DUT/SUT ports.

   Procedure:

   This is an iterative procedure.  The destination test port(s) MUST
   join an initial number of multicast groups on the first iteration.
   All DUT/SUT destination test port(s) configured to receive
   multicast traffic MUST join all configured multicast groups.  The
   recommended number of groups to join on the first iteration is 10
   groups.  Multicast traffic is subsequently transmitted to all
   groups joined on this iteration.

   The number of multicast groups joined by each destination test
   port is then incremented, or scaled, by an additional number of
   multicast groups.  The recommended granularity of additional
   groups to join per iteration is 10, although the tester MAY choose
   a finer granularity.  Multicast traffic is subsequently
   transmitted to all groups joined during this iteration.

   The total number of multicast groups joined MUST NOT exceed the
   capacity of the DUT/SUT.  Both Group Join Delay and Group Capacity
   results MUST be known prior to running this test.

   Reporting Format:

   The following configuration parameters MUST be reflected in the
   results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version

   The following results MUST be reflected in the results specific to
   this methodology:

   o  The total number of multicast groups joined for that iteration
   o  Total number of frames transmitted
   o  Total number of frames received
   o  Offered load
   o  Forwarding rate determined for that iteration

   The Scaled Group Forwarding results for each test SHOULD be
   reported in the form of a table with a row representing each
   iteration of the test.  Each row or iteration SHOULD specify the
   total number of groups joined for that iteration, total number of
   frames transmitted, total number of frames received and the
   aggregate forwarding rate determined for that iteration.

4.3. Aggregated Multicast Throughput

   Objective:

   To determine the maximum rate at which none of the offered frames
   to be forwarded through N destination interfaces of the same
   multicast groups are dropped.

   Procedure:

   Offer multicast traffic at an initial fixed offered load to a
   fixed set of interfaces with a fixed number of groups at a fixed
   frame length for a fixed duration of time.  All destination test
   ports MUST join all specified multicast groups.

   If any frame loss is detected, the offered load is decreased and
   the sender will transmit again.  An iterative search algorithm
   MUST be utilized to determine the maximum offered frame rate with
   zero frame loss.
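   One common realization of the iterative search required above is a
   binary search over offered load, halving the interval toward the
   highest zero-loss rate.  This is only a sketch under stated
   assumptions: `run_trial` is a hypothetical hook standing in for a
   real test-equipment API that offers traffic at a given load and
   reports the frames lost in that trial.

```python
# Binary search for the highest zero-loss offered load (frames/s).
# run_trial(load) is a hypothetical callback: it offers traffic at
# `load` for one trial and returns the number of frames lost.
def zero_loss_throughput(run_trial, max_rate, resolution=1.0):
    """Return the highest offered load found with zero frame loss."""
    lo, hi = 0.0, float(max_rate)
    best = 0.0
    while hi - lo > resolution:
        load = (lo + hi) / 2.0
        if run_trial(load) == 0:     # no loss: search higher loads
            best, lo = load, load
        else:                        # loss observed: back off
            hi = load
    return best

# Example with a simulated DUT that starts dropping frames above
# 7000 frames/s (the threshold is a made-up illustration):
simulated_dut = lambda load: 0 if load <= 7000.0 else 1
rate = zero_loss_throughput(simulated_dut, max_rate=10000.0)
```

   The `resolution` parameter bounds how finely the search converges;
   a real benchmark would choose it relative to the medium's frame
   rate granularity and repeat trials to confirm the result.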
   Each iteration will involve varying the offered load of the
   multicast traffic, while keeping the set of interfaces, number of
   multicast groups, frame length and test duration fixed, until the
   maximum rate at which none of the offered frames are dropped is
   determined.

   Parameters to be measured MUST include the offered load at which
   no frame loss occurred.

   Reporting Format:

   The following configuration parameters MUST be reflected in the
   results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups

   The following results MUST be reflected in the results specific to
   this methodology:

   o  Aggregated Multicast Throughput as defined in RFC 2432 [Du98]

   The Aggregated Multicast Throughput results SHOULD be reported in
   the format of a table with a row for each of the tested frame
   sizes per the recommendations in section 3.1.3.  Each row or
   iteration SHOULD specify offered load, total number of offered
   frames and the measured Aggregated Multicast Throughput.

4.4. Encapsulation/Decapsulation (Tunneling) Throughput

   This sub-section provides the description of tests that help in
   obtaining throughput measurements when a DUT/SUT or a set of DUTs
   are acting as tunnel endpoints.

4.4.1. Encapsulation Throughput

   Objective:

   To determine the maximum rate at which frames offered to one
   ingress interface of a DUT/SUT are encapsulated and correctly
   forwarded on one or more egress interfaces of the DUT/SUT without
   loss.

   Procedure:

      Source               DUT/SUT               Destination
     Test Port                                   Test Port(s)
    +---------+          +-----------+           +---------+
    |         |          |           |           |         |
    |         |          |     Egress|--(Tunnel)-->|       |
    |         |          |           |           |         |
    |         |--------->|Ingress    |           |         |
    |         |          |           |           |         |
    |         |          |     Egress|--(Tunnel)-->|       |
    |         |          |           |           |         |
    +---------+          +-----------+           +---------+

                              Figure 3
                              ---------

   Figure 3 shows the setup for testing the encapsulation throughput
   of the DUT/SUT.  One or more tunnels are created between each
   egress interface of the DUT/SUT and a destination test port.
   Non-encapsulated multicast traffic will then be offered by the
   source test port, encapsulated by the DUT/SUT and forwarded to the
   destination test port(s).

   The DUT/SUT SHOULD be configured such that the traffic across each
   egress interface will consist of either:

   a) A single tunnel encapsulating one or more multicast address
      groups OR
   b) Multiple tunnels, each encapsulating one or more multicast
      address groups.

   The number of multicast groups per tunnel MUST be the same when
   the DUT/SUT is configured in a multiple tunnel configuration.  In
   addition, it is RECOMMENDED to test with the same number of
   tunnels on each egress interface.  All destination test ports MUST
   join all multicast group addresses offered by the source test
   port.  Each egress interface MUST be configured with the same MTU.

   A search algorithm MUST be utilized to determine the encapsulation
   throughput as defined in [Du98].
   Reporting Format:

   The following configuration parameters MUST be reflected in the
   results specific to this methodology:

   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups
   o  MTU size of DUT/SUT interfaces

   The following results MUST be reflected in the results specific to
   this methodology:

   o  Measured Encapsulated Throughput as defined in RFC 2432 [Du98]
   o  Encapsulated frame size
   o  Originating un-encapsulated frame size
   o  Number of tunnels
   o  Number of multicast groups per tunnel

   The Encapsulated Throughput results SHOULD be reported in the form
   of a table and, specific to this test, there SHOULD be rows for
   each originating un-encapsulated frame size.  Each row or
   iteration SHOULD specify the offered load, encapsulation method,
   encapsulated frame size, total number of offered frames, and the
   encapsulation throughput.

4.4.2. Decapsulation Throughput

   Objective:

   To determine the maximum rate at which frames offered to one
   ingress interface of a DUT/SUT are decapsulated and correctly
   forwarded by the DUT/SUT on one or more egress interfaces without
   loss.

   Procedure:

      Source               DUT/SUT               Destination
     Test Port                                   Test Port(s)
    +---------+          +-----------+           +---------+
    |         |          |           |           |         |
    |         |          |     Egress|---------->|         |
    |         |          |           |           |         |
    |         |--(Tunnel)-->|Ingress |           |         |
    |         |          |           |           |         |
    |         |          |     Egress|---------->|         |
    |         |          |           |           |         |
    +---------+          +-----------+           +---------+

                              Figure 4
                              ---------

   Figure 4 shows the setup for testing the decapsulation throughput
   of the DUT/SUT.  One or more tunnels are created between the
   source test port and the DUT/SUT.  Encapsulated multicast traffic
   will then be offered by the source test port, decapsulated by the
   DUT/SUT and forwarded to the destination test port(s).
   The DUT/SUT SHOULD be configured such that the traffic across each
   egress interface will consist of either:

   a) A single tunnel encapsulating one or more multicast address
      groups OR
   b) Multiple tunnels, each encapsulating one or more multicast
      address groups.

   The number of multicast groups per tunnel MUST be the same when
   the DUT/SUT is configured in a multiple tunnel configuration.  All
   destination test ports MUST join all multicast group addresses
   offered by the source test port.  Each egress interface MUST be
   configured with the same MTU.

   A search algorithm MUST be utilized to determine the decapsulation
   throughput as defined in [Du98].

   Reporting Format:

   The following configuration parameters MUST be reflected in the
   results specific to this methodology:

   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups
   o  MTU size of DUT/SUT interfaces

   The following results MUST be reflected in the results specific to
   this methodology:

   o  Measured Decapsulated Throughput as defined in RFC 2432 [Du98]
   o  Originating encapsulation format
   o  Decapsulated frame size
   o  Originating encapsulated frame size
   o  Number of tunnels
   o  Number of multicast groups per tunnel

   The Decapsulated Throughput results SHOULD be reported in the
   format of a table and, specific to this test, there SHOULD be rows
   for each originating encapsulated frame size.  Each row or
   iteration SHOULD specify the offered load, decapsulated frame
   size, total number of offered frames and the decapsulation
   throughput.

4.4.3. Re-encapsulation Throughput

   Objective:

   To determine the maximum rate at which frames of one encapsulated
   format offered to one ingress interface of a DUT/SUT are converted
   to another encapsulated format and correctly forwarded by the
   DUT/SUT to one or more egress interfaces without loss.

   Procedure:

      Source               DUT/SUT              Destination
     Test Port                                  Test Port(s)
    +---------+          +---------+           +---------+
    |         |          |         |           |         |
    |         |          |   Egress|-(Tunnel)->|         |
    |         |          |         |           |         |
    |         |-(Tunnel)->|Ingress |           |         |
    |         |          |         |           |         |
    |         |          |   Egress|-(Tunnel)->|         |
    |         |          |         |           |         |
    +---------+          +---------+           +---------+

                             Figure 5
                             ---------

   Figure 5 shows the setup for testing the re-encapsulation
   throughput of the DUT/SUT.  The source test port will offer
   encapsulated traffic of one type to the DUT/SUT, which has been
   configured to re-encapsulate the offered frames using a different
   encapsulation format.  The DUT/SUT will then forward the
   re-encapsulated frames to the destination test port(s).

   The DUT/SUT SHOULD be configured such that the traffic across each
   egress interface will consist of either:

   a) A single tunnel encapsulating one or more multicast address
      groups OR
   b) Multiple tunnels, each encapsulating one or more multicast
      address groups.

   The number of multicast groups per tunnel MUST be the same when
   the DUT/SUT is configured in a multiple tunnel configuration.

   In addition, the DUT/SUT SHOULD be configured such that the number
   of tunnels on the ingress and each egress interface are the same.
   All destination test ports MUST join all multicast group addresses
   offered by the source test port.  Each egress interface MUST be
   configured with the same MTU.

   A search algorithm MUST be utilized to determine the
   re-encapsulation throughput as defined in [Du98].
   Reporting Format:

   The following configuration parameters MUST be reflected in the
   results specific to this methodology:

   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups
   o  MTU size of DUT/SUT interfaces

   The following results MUST be reflected in the results specific to
   this methodology:

   o  Measured Re-encapsulated Throughput as defined in RFC 2432
      [Du98]
   o  Originating encapsulation format
   o  Re-encapsulated frame size
   o  Originating encapsulated frame size
   o  Number of tunnels
   o  Number of multicast groups per tunnel

   The Re-encapsulated Throughput results SHOULD be reported in the
   format of a table and, specific to this test, there SHOULD be rows
   for each originating encapsulated frame size.  Each row or
   iteration SHOULD specify the offered load, re-encapsulated frame
   size, total number of offered frames and the re-encapsulation
   throughput.

5. Forwarding Latency

   This section presents methodologies relating to the
   characterization of the forwarding latency of a DUT/SUT in a
   multicast environment.  It extends the concept of latency
   characterization presented in RFC 2544.

   To lessen the effect of frame buffering in the DUT/SUT, the
   latency tests MUST be run at the measured multicast throughput
   level of the DUT; multicast latency at other offered loads is
   optional.

   Lastly, RFC 1242 and RFC 2544 draw a distinction between device
   types: "store and forward" and "bit-forwarding."  Each type
   impacts how latency is collected and subsequently presented.  See
   the related RFCs for more information.  In practice, much of the
   test equipment will collect the latency measurement for one type
   or the other, and, if needed, mathematically derive the reported
   value by the addition or subtraction of values accounting for
   medium propagation delay of the frame, bit times to the timestamp
   trigger within the frame, etc.

5.1. Multicast Latency

   Objective:

   To produce a set of multicast latency measurements from a single,
   multicast ingress interface of a DUT/SUT through multiple, egress
   multicast interfaces of that same DUT/SUT as provided for by the
   metric "Multicast Latency" in RFC 2432.

   The Procedures highlighted below attempt to draw from the
   collection methodology for latency in RFC 2544 to the degree
   possible.  The methodology addresses two topological scenarios:
   one for a single device (DUT) characterization; a second scenario
   is presented for multiple device (SUT) characterization.

   Procedure:

   If the test trial is to characterize latency across a single
   Device Under Test (DUT), an example test topology might take the
   form of Figure 1 in section 3.  That is, a single DUT with one
   ingress interface receiving the multicast test traffic from the
   frame-transmitting component of the test apparatus and n egress
   interfaces on the same DUT forwarding the multicast test traffic
   back to the frame-receiving component of the test apparatus.  Note
   that n reflects the number of TESTED egress interfaces on the DUT
   actually expected to forward the test traffic (as opposed to
   configured but untested, non-forwarding interfaces, for example).

   If the multicast latencies are to be taken across multiple devices
   forming a System Under Test (SUT), an example test topology might
   take the form of Figure 2 in section 3.

   The trial duration SHOULD be 120 seconds to be consistent with RFC
   2544.
The nature of the latency measurement, "store and forward" or "bit
forwarding," MUST be associated with the related test trial(s) and
disclosed in the results report.

End-to-end reachability of the test traffic path MUST be verified
prior to the engagement of a test trial.  This implies that
subsequent measurements are intended to characterize the latency
across the tested device's or devices' normal traffic forwarding
path (e.g., faster, hardware-based forwarding engines), as opposed
to a non-standard traffic processing path (e.g., slower,
software-based exception handlers).  If the test trial is to be
executed with the intent of characterizing a non-optimal forwarding
condition, then a description of the exception processing conditions
being characterized MUST be included with the trial's results.

A test traffic stream is presented to the DUT.  It is RECOMMENDED to
offer traffic at the measured aggregated multicast throughput rate
(Section 4.3).  At the mid-point of the trial's duration, the test
apparatus MUST inject a uniquely identifiable ("tagged") frame into
the test traffic frames being presented.  This tagged frame will be
the basis for the latency measurements.  By "uniquely identifiable,"
it is meant that the test apparatus MUST be able to discern the
"tagged" frame from the other frames comprising the test traffic
set.  A frame generation timestamp, Timestamp A, reflecting the
completion of the transmission of the tagged frame by the test
apparatus, MUST be determined.

The test apparatus then monitors frames from the DUT's tested egress
interface(s) for the expected tagged frame(s) until the cessation of
traffic generation at the end of the configured trial duration.

The test apparatus MUST record the time of the successful detection
of a tagged frame from a tested egress interface with a timestamp,
Timestamp B.
A set of Timestamp B values MUST be collected for all tested egress
interfaces of the DUT/SUT.  See RFC 1242 [Br91] for additional
discussion regarding store and forward devices and bit forwarding
devices.

A trial MUST be considered INVALID should any of the following
conditions occur in the collection of the trial data:

   o  Forwarded test frames directed to improper destinations.
   o  Unexpected differences between Intended Load and Offered Load,
      or unexpected differences between Offered Load and the
      resulting Forwarding Rate(s) on the DUT/SUT egress ports.
   o  Forwarded test frames improperly formed or frame header fields
      improperly manipulated.
   o  Failure to forward required tagged frame(s) on all expected
      egress interfaces.
   o  Reception of a tagged frame by the test apparatus outside the
      configured test duration interval or 5 seconds, whichever is
      greater.

Data from invalid trials SHOULD be considered inconclusive.  Data
from invalid trials MUST NOT form the basis of comparison.

The set of latency measurements, M, composed of each latency
measurement taken from every ingress/tested egress interface
pairing, MUST be determined from a valid test trial:

   M = { (Timestamp B(E0) - Timestamp A),
         (Timestamp B(E1) - Timestamp A), ...
         (Timestamp B(En) - Timestamp A) }

where (E0 ... En) represents the range of all tested egress
interfaces and Timestamp B represents a tagged frame detection event
for a given DUT/SUT tested egress interface.

A more continuous profile MAY be built from a series of individual
measurements.
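As an illustrative sketch (not part of the methodology), the set M
can be assembled from a valid trial's timestamps as follows.  The
interface labels and timestamp values are hypothetical, and the time
unit (microseconds here) is whatever the test apparatus reports:

```python
# Hypothetical sketch: assembling the latency set M from one valid
# trial.  timestamp_a is the generation timestamp of the tagged
# frame; each entry of timestamps_b is its detection time on a
# tested egress interface.  Labels and values are illustrative only.

def latency_set(timestamp_a, timestamps_b):
    """Return M: per-egress-interface latency, Timestamp B - Timestamp A."""
    return {egress: t_b - timestamp_a
            for egress, t_b in timestamps_b.items()}

timestamp_a = 1_000_000                      # tagged frame sent (us)
timestamps_b = {"E0": 1_000_120,             # detection per egress (us)
                "E1": 1_000_135,
                "E2": 1_000_128}

M = latency_set(timestamp_a, timestamps_b)
print(M)  # {'E0': 120, 'E1': 135, 'E2': 128}
```

A per-interface mapping (rather than a bare set) preserves the
ingress/egress relationship that the reporting format asks to be
retained across trials.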
Reporting Format:

The following configuration parameters MUST be reflected in the
results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Offered load
   o  Total number of multicast groups

The following results MUST be reflected in the results specific to
this methodology:

   o  The time units of the presented latency MUST be uniform and of
      sufficient precision for the medium or media being tested.
   o  Specifically, when reporting the results of a valid test
      trial, the set of all latencies related to the tested ingress
      interface and each tested egress DUT/SUT interface MUST be
      presented.

The latency results for each test SHOULD be reported in the form of
a table, with a row for each of the tested frame sizes per the
recommended frame sizes in Section 3.1.3, and SHOULD preserve the
relationship of latency to ingress/egress interface(s) to assist in
trending across multiple trials.

5.2. Min/Max Multicast Latency

Objective:

To determine the difference between the maximum latency measurement
and the minimum latency measurement from a collected set of
latencies produced by the Multicast Latency benchmark.

Procedure:

Collect a set of multicast latency measurements over a single test
duration, as prescribed in Section 5.1.  This will produce a set of
multicast latencies, M, where M is composed of the individual
forwarding latencies between DUT frame ingress and DUT frame egress
port pairs, e.g.:

   M = { L(I,E1), L(I,E2), ..., L(I,En) }

where L is the latency between a tested ingress interface, I, of the
DUT and Ex, a specific, tested multicast egress interface of the
DUT.  E1 through En are unique egress interfaces on the DUT.
From the collected multicast latency measurements in set M, identify
MAX(M), where MAX is a function that yields the largest latency
value from set M.

Identify MIN(M), where MIN is a function that yields the smallest
latency value from set M.

The Min/Max value is determined from the following formula:

   Result = MAX(M) - MIN(M)

A more continuous profile MAY be built from a series of individual
measurements.

Reporting Format:

The following configuration parameters MUST be reflected in the
results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Offered load
   o  Total number of multicast groups

The following results MUST be reflected in the results specific to
this methodology:

   o  The Min/Max value, represented as a single numerical value in
      time units consistent with the corresponding latency
      measurements.
   o  Specifically, when reporting the results of a valid test
      trial, the set of all latencies related to the tested ingress
      interface MUST be reported.

The time units of the presented latency MUST be uniform and of
sufficient precision for the medium or media being tested.  The
latency results for each test SHOULD be reported in the form of a
table, with a row for each of the tested frame sizes per the
recommendations in Section 3.1.3, and SHOULD preserve the
relationship of latency to ingress/egress interface(s) to assist in
trending across multiple trials.

6. Overhead

This section presents methodology relating to the characterization
of the overhead delays associated with explicit operations found in
multicast environments.

6.1. Group Join Delay

Objective:

To determine the time duration it takes a DUT/SUT to start
forwarding multicast frames from the time a successful IGMP Group
Membership Report has been issued to the DUT/SUT.

Procedure:

Prior to sending any IGMP Group Membership Reports used to calculate
the group join delay, it MUST be verified through externally
observable means that the destination test ports are not currently
members of any of the specified multicast groups.  If any of the
egress interfaces forward multicast frames, the test is not valid.

Once verification is complete, multicast traffic for all relevant
multicast group addresses MUST be offered to the ingress interface
prior to receipt or processing of any IGMP Group Membership Report
messages.  It is RECOMMENDED to offer traffic at the measured
aggregated multicast throughput rate (Section 4.3).

After the multicast traffic has been started, each destination test
port (see Figure 1) SHOULD send one IGMP Group Membership Report
with one or more (IGMPv3) multicast group(s) specified.  All
destination test ports MUST join all multicast groups offered on the
ingress interface of the DUT/SUT.  The test MUST be performed with
one multicast group and SHOULD be performed with multiple groups.

The join delay is the difference in time between when the IGMP Group
Membership Report is sent (Timestamp A) and when the first frame of
the multicast group is forwarded to a receiving egress interface
(Timestamp B):

   Group Join Delay = Timestamp B - Timestamp A

Timestamp A MUST be the time the last bit of the IGMP Group
Membership Report is sent from the destination test port; Timestamp
B MUST be the time the first bit of the first valid multicast frame
is forwarded on the egress interface of the DUT/SUT.
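The join delay computation can be sketched as below.  This is an
illustrative example only, not part of the methodology; the group
addresses, port labels, and timestamps are hypothetical:

```python
# Hypothetical sketch of the Group Join Delay computation.
# timestamp_a: per destination test port, the time the last bit of
#   the IGMP Group Membership Report left that port.
# timestamp_b: per (group, port), the time the first bit of the
#   first valid multicast frame for that group was forwarded on
#   that port.  Values are illustrative (microseconds).

def group_join_delays(timestamp_a, timestamp_b):
    """Return join delay (B - A) per (multicast group, egress port)."""
    return {(group, port): t_b - timestamp_a[port]
            for (group, port), t_b in timestamp_b.items()}

timestamp_a = {"P1": 500, "P2": 500}
timestamp_b = {("239.0.0.1", "P1"): 740,
               ("239.0.0.1", "P2"): 755,
               ("239.0.0.2", "P1"): 810}

delays = group_join_delays(timestamp_a, timestamp_b)
print(delays)
# {('239.0.0.1', 'P1'): 240, ('239.0.0.1', 'P2'): 255,
#  ('239.0.0.2', 'P1'): 310}
```

Keying the result by (group, port) matches the reporting format,
which asks for the join delay per multicast group per destination
interface.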
Reporting Format:

The following configuration parameters MUST be reflected in the
results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups

The following results MUST be reflected in the results specific to
this methodology:

   o  The group join delay time per multicast group address
   o  The group join delay time per egress interface(s)

The Group Join Delay results for each test SHOULD be reported in the
form of a table, with a row for each of the tested frame sizes per
the recommendations in Section 3.1.3.  Each row or iteration SHOULD
specify the group join delay time for each multicast group per
destination interface, the number of frames transmitted, and the
number of frames received for that iteration.

6.2. Group Leave Delay

Objective:

To determine the time duration it takes a DUT/SUT to cease
forwarding multicast frames after a corresponding IGMP Leave Group
message has been successfully offered to the DUT/SUT.

Procedure:

Prior to sending any IGMP Leave Group messages used to calculate the
group leave delay, it MUST be verified through externally observable
means that the destination test ports are currently members of all
the specified multicast groups.  If any of the destination test
ports do not receive multicast frames, the test is not valid.

Once verification is complete, multicast traffic for all relevant
multicast group addresses MUST be offered to the ingress interface
prior to receipt or processing of any IGMP Leave Group messages.  It
is RECOMMENDED to offer traffic at the measured aggregated multicast
throughput rate (Section 4.3).
After the multicast traffic has been started, each destination test
port (see Figure 1) MUST send one IGMP Leave Group message for each
multicast group specified.  All destination test ports MUST leave
all relevant multicast groups offered on the ingress interface of
the DUT/SUT.  The test MUST be performed with one multicast group
and SHOULD be performed with multiple groups.

The leave delay is the difference in time between when the IGMP
Leave Group message is sent (Timestamp A) and when the last frame of
the multicast group is forwarded to a receiving egress interface
(Timestamp B):

   Group Leave Delay = Timestamp B - Timestamp A

Timestamp A MUST be the time the last bit of the IGMP Leave Group
message is sent from the destination test port; Timestamp B MUST be
the time the last bit of the last valid multicast frame is forwarded
on the egress interface of the DUT/SUT.

Reporting Format:

The following configuration parameters MUST be reflected in the
results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups

The following results MUST be reflected in the results specific to
this methodology:

   o  The group leave delay time per multicast group address
   o  The group leave delay time per egress interface(s)

The Group Leave Delay results for each test SHOULD be reported in
the form of a table, with a row for each of the tested frame sizes
per the recommendations in Section 3.1.3.  Each row or iteration
SHOULD specify the group leave delay time for each multicast group
per destination interface, the number of frames transmitted, and the
number of frames received for that iteration.

7. Capacity

This section offers procedures relating to the identification of
multicast group limits of a DUT/SUT.

7.1. Multicast Group Capacity

Objective:

To determine the maximum number of multicast groups a DUT/SUT can
support while maintaining the ability to forward multicast frames to
all multicast groups registered to that DUT/SUT.

Procedure:

One or more egress interfaces of the DUT/SUT join an initial number
of multicast groups.

After a delay, as determined by Section 6.1, the ingress interface
MUST transmit to each group at a specified offered load.

If at least one frame for each multicast group is forwarded properly
by the DUT/SUT to each participating egress interface, the iteration
is said to pass at the current capacity.

If the iteration passes, a user-defined increment of additional
groups is joined on each egress interface, and the iteration is run
again at the new group level and resultant capacity.

Once an iteration fails, the capacity of the last passing iteration
is the stated Maximum Group Capacity result.

Reporting Format:

The following configuration parameters MUST be reflected in the
results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Offered load

The following results MUST be reflected in the results specific to
this methodology:

   o  The total number of multicast group addresses that were
      successfully forwarded through the DUT/SUT

The Multicast Group Capacity results for each test SHOULD be
reported in the form of a table, with a row for each of the tested
frame sizes per the recommendations in Section 3.1.3.  Each row or
iteration SHOULD specify the number of multicast groups joined per
destination interface, the number of frames transmitted, and the
number of frames received for that iteration.

8. Interaction

Network forwarding devices are generally required to provide more
functionality than just the forwarding of traffic.  Moreover,
network forwarding devices may be asked to provide those functions
in a variety of environments.  This section offers procedures to
assist in the characterization of DUT/SUT behavior in consideration
of potentially interacting factors.

8.1. Forwarding Burdened Multicast Latency

Objective:

To produce a set of multicast latency measurements from a single
multicast ingress interface of a DUT/SUT through multiple egress
multicast interfaces of that same DUT/SUT, as provided for by the
metric "Multicast Latency" in RFC 2432 [Du98], while under the
influence of a traffic forwarding requirement.

Procedure:

The Multicast Latency metrics can be influenced by forcing the
DUT/SUT to perform extra processing of packets while multicast class
traffic is being forwarded for latency measurements.

The Forwarding Burdened Multicast Latency test MUST follow the setup
described in Section 5.1.

Perform a baseline measurement of latency as described in Section
5.1.  After the baseline measurement is obtained, the test is
repeated with the ingress interface offering an additional set of
user-specified multicast group addresses that have not been joined
by the destination test port(s).  The offered load MUST be the same
as was used in the baseline measurement.

By sending such multicast class traffic, the DUT/SUT may perform a
lookup on the frames that may affect the processing of traffic
destined for the egress interface(s).
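The comparison between the baseline and burdened trials can be
sketched as below.  This is illustrative only; the per-interface
delta is not itself a metric defined by this document, and all
labels and values are hypothetical:

```python
# Hypothetical sketch: comparing burdened latencies (Section 8.1)
# against baseline latencies (Section 5.1), per egress interface.
# Values are illustrative (microseconds).

def burden_effect(baseline, burdened):
    """Per-egress increase in latency under the extra, unjoined traffic."""
    return {egress: burdened[egress] - baseline[egress]
            for egress in baseline}

baseline = {"E0": 120, "E1": 135}   # latencies from the baseline trial
burdened = {"E0": 131, "E1": 150}   # latencies from the burdened trial

print(burden_effect(baseline, burdened))  # {'E0': 11, 'E1': 15}
```

Because the offered load MUST be identical in both trials, any
per-interface difference can be attributed to the burden traffic
rather than to a change in load.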
Reporting Format:

Similar to Section 5.1, the following configuration parameters MUST
be reflected in the results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups in the baseline setup
   o  Total number of additional multicast groups used to burden the
      setup

The following results MUST be reflected in the results specific to
this methodology:

   o  The time units of the presented latency MUST be uniform and of
      sufficient precision for the medium or media being tested.
   o  Specifically, when reporting the results of a valid test
      trial, the set of all latencies related to the tested ingress
      interface and each tested egress DUT/SUT interface MUST be
      presented.
   o  Reported results from the baseline measurement (Section 5.1).

The latency results for each test SHOULD be reported in the form of
a table, with a row for each of the tested frame sizes per the
recommended frame sizes in Section 3.1.3, and SHOULD preserve the
relationship of latency to ingress/egress interface(s) to assist in
trending across multiple trials.

8.2. Forwarding Burdened Group Join Delay

Objective:

To determine the time duration it takes a DUT/SUT to start
forwarding multicast frames from the time a successful IGMP Group
Membership Report has been issued to the DUT/SUT while under the
influence of a traffic forwarding requirement.

Procedure:

The Group Join Delay metrics can be influenced by forcing the
DUT/SUT to perform extra processing of packets while attempting to
update and maintain the IP multicast address forwarding table.

The Forwarding Burdened Group Join Delay test MUST follow the setup
described in Section 6.1.

Perform a baseline measurement of group join delay as described in
Section 6.1.
After the baseline measurement is obtained, the test is repeated
with the ingress interface offering an additional set of
user-specified multicast group addresses that have not been joined
by the destination test port(s).  The offered load MUST be the same
as was used in the baseline measurement.

By sending such multicast class traffic, the DUT/SUT may perform a
lookup on the frames that may affect the processing of the IGMP
Group Membership Report messages.

Reporting Format:

Similar to Section 6.1, the following configuration parameters MUST
be reflected in the results specific to this methodology:

   o  Frame size(s)
   o  Number of tested egress interfaces on the DUT/SUT
   o  Test duration
   o  IGMP version
   o  Total number of multicast groups in the baseline setup
   o  Total number of additional multicast groups used to burden the
      setup

The following results MUST be reflected in the results specific to
this methodology:

   o  The group join delay time per multicast group address
   o  The group join delay time per egress interface(s)
   o  Reported results from the baseline measurement (Section 6.1)

The Group Join Delay results for each test SHOULD be reported in the
form of a table, with a row for each of the tested frame sizes per
the recommendations in Section 3.1.3.  Each row or iteration SHOULD
specify the group join delay time for each multicast group per
destination interface, the number of frames transmitted, and the
number of frames received for that iteration.

9. Security Considerations

As this document is solely for the purpose of providing metric
methodology and describes neither a protocol nor a protocol's
implementation, there are no security considerations associated with
this document.

10. Acknowledgements

The Benchmarking Methodology Working Group of the IETF, and
particularly Kevin Dubray of Juniper Networks, are to be thanked for
the many suggestions they collectively made to help complete this
document.

11. Contributions

The authors would like to acknowledge the following individuals for
their help with, and participation in, the compilation of this
document: Hardev Soor, Ixia, and Ralph Daniels, Spirent
Communications, both of whom made significant contributions to
earlier versions of this document.  In addition, the authors would
like to acknowledge the members of the task team who helped bring
this document to fruition: Michele Bustos, Tony De La Rosa, David
Newman and Jerry Perser.

12. References

Normative References

[Br91] Bradner, S., "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, July 1991.

[Br96] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

[Br97] Bradner, S., "Key words for use in RFCs to Indicate
       Requirement Levels", BCP 14, RFC 2119, March 1997.

[Du98] Dubray, K., "Terminology for IP Multicast Benchmarking",
       RFC 2432, October 1998.

[Ma98] Mandeville, R., "Benchmarking Terminology for LAN Switching
       Devices", RFC 2285, February 1998.

Informative References

[Ca02] Cain, B., et al., "Internet Group Management Protocol,
       Version 3", RFC 3376, October 2002.

[De89] Deering, S., "Host Extensions for IP Multicasting", STD 5,
       RFC 1112, August 1989.

[Fe97] Fenner, W., "Internet Group Management Protocol, Version 2",
       RFC 2236, November 1997.

[Hu95] Huitema, C., "Routing in the Internet", Prentice-Hall, 1995.

[Ka98] Kosiur, D., "IP Multicasting: The Complete Guide to
       Interactive Corporate Networks", John Wiley & Sons, 1998.

[Mt98] Maufer, T., "Deploying IP Multicast in the Enterprise",
       Prentice-Hall, 1998.

13. Authors' Addresses

Debra Stopp
Ixia
26601 W. Agoura Rd.
Calabasas, CA 91302
USA

Phone: +1 818 871 1800
EMail: debby@ixiacom.com

Brooks Hickman
Spirent Communications
26750 Agoura Rd.
Calabasas, CA 91302
USA

Phone: +1 818 676 2412
EMail: brooks.hickman@spirentcom.com

14. Full Copyright Statement

Copyright (C) The Internet Society (2003).  All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph
are included on all such copies and derivative works.  However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.