Network Working Group                                     Scott Poretsky
INTERNET-DRAFT                                        Reef Point Systems
Expires in: December 2006                                  Richard Watts
                                                           Cisco Systems
                                                               June 2006

               Methodology for Benchmarking Network-layer
                       Traffic Control Mechanisms

Intellectual Property Rights (IPR) statement:
   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract

   This document describes the methodology for benchmarking devices
   that implement traffic control based on classification criteria such
   as the diff-serv code point (DSCP).  The methodology is applied to
   measurements made on the data plane to evaluate the performance of
   the traffic control mechanisms.  The methodology permits the
   specific traffic control mechanisms and configuration commands to
   vary between DUTs.  It provides procedures, using existing
   terminology, to benchmark DUT performance for traffic control
   mechanisms applied to physical and logical interfaces with the DSCP
   as the classification criterion.  This includes scenarios in which
   Forwarding Congestion occurs due to interface congestion.
Poretsky, Watts                                                 [Page 1]

INTERNET-DRAFT         Methodology for Benchmarking            June 2006
                  Network-layer Traffic Control Mechanisms

Table of Contents

   1. Introduction ...............................................2
   2. Existing Definitions .......................................2
   3. Test Setup .................................................3
      3.1 Test Topologies ........................................3
      3.2 Test Considerations ....................................4
      3.3 Reporting Format .......................................5
   4. Test Cases .................................................6
      4.1 Undifferentiated Response ..............................6
      4.2 Egress Traffic Control Baseline Performance ............6
      4.3 Egress Traffic Control Performance with Forwarding
          Congestion .............................................7
      4.4 Undifferentiated Response with Logical Interfaces ......8
      4.5 Baseline for Traffic Control Mechanisms on Logical
          Interfaces .............................................8
      4.6 Traffic Control Mechanisms on Logical Interfaces with
          Forwarding Congestion ..................................9
   5. IANA Considerations ........................................10
   6. Security Considerations ....................................10
   7. Acknowledgements ...........................................10
   8. References .................................................11
   9. Authors' Addresses .........................................12
   Full Copyright Statement ......................................13

1. Introduction

   This document describes the methodology for benchmarking devices
   that implement traffic control based on diff-serv code point (DSCP)
   criteria.  The methodology is applied to measurements made on the
   data plane to evaluate the performance of the traffic control
   mechanisms.  The methodology permits the specific traffic control
   mechanisms and configuration commands to vary between Devices Under
   Test (DUTs).  This methodology provides procedures, using existing
   terminology, to benchmark DUT performance for traffic control
   mechanisms with the DSCP as the classification criterion.
   This includes scenarios in which Forwarding Congestion occurs at
   physical or logical interfaces, due either to the Forwarding
   Capacity of the interface or to DUT configuration of traffic control
   mechanisms, such as rate limiting.  The methodology uses much of the
   terminology defined in [Pp06].

2. Existing Definitions

   For the sake of clarity and continuity, this document adopts the
   template for definitions set out in Section 2 of RFC 1242 [Br91].
   Definitions are indexed and grouped together in sections for ease of
   reference.  See [Pp06] for benchmarking terminology.

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [Br97].  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.
   While this document uses these key words, it is not a standards
   track document.

3. Test Setup

3.1 Test Topologies

   Figure 1 shows the logical test topology for benchmarking
   performance when Forwarding Congestion does not exist on the
   physical or logical egress interface.  This topology is to be used
   when benchmarking the Undifferentiated Response and Traffic Control
   without Forwarding Congestion.

   Figure 2 shows the logical test topology for benchmarking
   performance when Forwarding Congestion does exist on the physical or
   logical egress interface.  This topology is to be used when
   benchmarking Traffic Control with Forwarding Congestion.  The
   Forwarding Congestion is produced by offering load to two ingress
   interfaces on the DUT destined for the same single egress interface.
   The aggregate of the ingress offered load MUST exceed the Forwarding
   Capacity of the egress interface to produce Forwarding Congestion.
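   The congestion condition above can be expressed as a simple check.
   The following Python sketch is illustrative only; the pps figures
   are assumed example values, not taken from this document:

   ```python
   def produces_forwarding_congestion(ingress_loads_pps, egress_capacity_pps):
       # Forwarding Congestion requires the aggregate ingress offered
       # load to exceed the Forwarding Capacity of the egress interface.
       return sum(ingress_loads_pps) > egress_capacity_pps

   # Two ingress interfaces, each offered 75% of a hypothetical
   # 1,000,000 pps egress Forwarding Capacity: the aggregate is 150%,
   # so the Figure 2 topology produces Forwarding Congestion.
   print(produces_forwarding_congestion([750000, 750000], 1000000))
   ```

   The same check fails if the aggregate ingress load stays at or below
   the egress Forwarding Capacity, in which case the Figure 1 topology
   applies instead.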
                    Expected Vector
                          |
                          \/
      ---------        Offered Vector         ---------
      |       |<------------------------------|       |
      |       |                               |       |
      |  DUT  |                               | Tester|
      |       |                               |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       |        Output Vector          |       |
      ---------                               ---------

               Figure 1. Logical Test Topology for
             Benchmarking Without Forwarding Congestion


                    Expected Vector
                          |
                          \/
      ---------        Offered Vector         ---------
      |       |<------------------------------|       |
      |       |    Ingress Interfaces 1,2     |       |
      |       |<------------------------------|       |
      |  DUT  |                               | Tester|
      |       |                               |       |
      |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
      |       |        Output Vector          |       |
      ---------                               ---------

               Figure 2. Logical Test Topology for
              Benchmarking With Forwarding Congestion

3.2 Test Considerations

3.2.1 Routing Configuration

   Routing protocols SHOULD NOT be used.  All routing decisions SHOULD
   be made based upon pre-configured static routes.

3.2.2 Interface Types

   All test cases in this methodology document may be executed with any
   interface type.  All interfaces MUST be of the same media and
   Throughput [Br91][Br99] for each test case.

3.2.3 Offered Vector

   The Offered Vector MUST be configured on the Tester as follows:

   a. The Offered Load MUST be the Forwarding Capacity of the DUT at a
      fixed packet size.

   b. The Forwarding Capacity MUST be measured at the egress interface
      of the DUT.

   c. Each test case MUST be executed using a single, selectable packet
      size.  Packet Size is measured in bytes and includes the IP
      header and payload.  If IPsec packets are used, the packet size
      also includes the IPsec headers.  Packet Size MUST be equal to or
      less than the interface MTU so that there is no fragmentation.

   d. It is RECOMMENDED that the number of flows used be 1000, 10000,
      and/or 100000.  A flow MUST be identified by its DSCP, IP Source
      Address, and IP Destination Address.

   e. It is RECOMMENDED that the number of DSCPs used be 1, 2, 3, 4, 6,
      8, 16, and/or 64.
   When the number of DSCPs is 1, the Undifferentiated Response is
   benchmarked.  The actual values of the DSCPs used are selectable.

3.2.4 Test Duration

   It is RECOMMENDED that the Test Duration for each test case include
   a minimum of 10 minutes of Offered Load and Output Vector
   measurement.

3.2.5 Expected Vector

   The Expected Vector is configured on the DUT.  The Traffic Control
   mechanisms and specific configuration commands may vary between
   DUTs.  Test cases may be repeated with variations of the Expected
   Vector to produce a larger set of benchmark results.

3.3 Reporting Format

   For each test case, it is RECOMMENDED that the following reporting
   format be completed:

   PARAMETERS                          UNITS
   ----------                          -----

   Interfaces
   ----------
   Link Type                           Physical or Logical
   Number of Logical Interfaces        interfaces

   Offered Vector
   --------------
   Offered Load                        pps
   Number of DSCPs                     {1..64}
   Codepoint Set                       {0..63, 0..63, ..., x}
   Number of Flows                     {1000, 10000, 100000}
   Number of Flows per DSCP            Number of Flows/Number of DSCPs
   Packet Size                         bytes

   Undifferentiated Response (Number of DSCPs = 1)
   -----------------------------------------------
   Forwarding Capacity                 pps
   Packet Loss                         packets
   Forwarding Delay
      Minimum                          msec
      Maximum                          msec
      Average                          msec
   Jitter
      Average                          msec
      Peak-to-Peak                     msec
   Out-of-Order Packets                packets
   Duplicate Packets                   packets

   Expected Vector {for DSCP=n} (as configured on DUT)
   ---------------------------------------------------
   Forwarding Capacity                 pps
   Packet Loss                         packets
   Forwarding Delay
      Minimum                          msec
      Maximum                          msec
      Average                          msec

   Output Vector {for DSCP=n}
   --------------------------
   Forwarding Capacity                 pps
   Packet Loss                         packets
   Forwarding Delay
      Minimum                          msec
      Maximum                          msec
      Average                          msec
   Jitter
      Average                          msec
      Peak-to-Peak                     msec
   Out-of-Order Packets                packets
   Duplicate Packets                   packets
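   The Offered Vector flow set of Section 3.2.3 (flows identified by
   DSCP, IP Source Address, and IP Destination Address, spread across a
   Codepoint Set) can be sketched as follows.  The address ranges and
   function name are illustrative assumptions, not part of the
   methodology:

   ```python
   import itertools
   import ipaddress

   def build_flows(num_flows, codepoint_set,
                   src_base="10.0.0.0", dst_base="192.0.2.0"):
       """Generate num_flows (DSCP, IP SA, IP DA) tuples, spreading the
       flows evenly across the Codepoint Set.  Address bases are
       example values only."""
       src = int(ipaddress.IPv4Address(src_base))
       dst = int(ipaddress.IPv4Address(dst_base))
       dscps = itertools.cycle(codepoint_set)
       return [(next(dscps),
                str(ipaddress.IPv4Address(src + i)),
                str(ipaddress.IPv4Address(dst + i)))
               for i in range(num_flows)]

   # 1000 flows over a 2-codepoint set: "Number of Flows per DSCP" in
   # the Section 3.3 report is Number of Flows / Number of DSCPs = 500.
   flows = build_flows(1000, codepoint_set=[0, 46])
   print(len(flows), len({f[0] for f in flows}))
   ```

   Repeating with 10000 and 100000 flows, and with larger Codepoint
   Sets, reproduces the RECOMMENDED parameter combinations.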
4. Test Cases

4.1 Undifferentiated Response

   Purpose:  To establish the baseline performance of the DUT.

   Procedure:
   1. Configure the DUT with the Expected Vector.
   2. Configure the Tester for the Offered Vector.  The number of DSCPs
      MUST equal 1, and the RECOMMENDED DSCP value is 0 (Best Effort).
      Use 1000 Flows identified by IP SA/DA.  All flows have the same
      DSCP value.
   3. Using the Test Topology in Figure 1, source the Offered Load from
      the Tester to the DUT.
   4. Measure and record the Output Vector.
   5. Maintain the offered load for a minimum of 10 minutes to observe
      possible variations in measurements.
   6. Repeat steps 2 through 5 with 10000 and 100000 Flows.

   Expected Results:  The Forwarding Vector equals the Offered Load.
   There is no packet loss and there are no out-of-order packets.

4.2 Egress Traffic Control Baseline Performance

   Purpose:  To benchmark the Output Vectors for a Codepoint Set
   without Forwarding Congestion.

   Procedure:
   1. Configure the DUT with an Expected Vector for each DSCP in the
      Codepoint Set.
   2. Configure the Tester for the Offered Vector.  The number of DSCPs
      MUST be 2 or more.  Any DSCP values can be used.  Use 1000 Flows
      identified by IP SA/DA and DSCP value.
   3. Using the Test Topology in Figure 1, source the Offered Load from
      the Tester to the DUT.
   4. Measure and record the Output Vector for each DSCP in the
      Codepoint Set.
   5. Maintain the offered load for a minimum of 10 minutes to observe
      possible variations in measurements.
   6. Repeat steps 2 through 5 with 10000 and 100000 Flows.
   7. Increment the number of DSCPs used and repeat steps 1 through 6.

   Expected Results:  The Forwarding Vector equals the Offered Load.
   There is no packet loss and there are no out-of-order packets.
   Output Vectors match the Expected Vectors for each DSCP in the
   Codepoint Set.

4.3 Egress Traffic Control Performance with Forwarding Congestion

   Purpose:  To benchmark the Output Vectors for a Codepoint Set with
   Forwarding Congestion.
   Procedure:
   1. Configure the DUT with an Expected Vector for each DSCP in the
      Codepoint Set.
   2. Configure the Tester for the Offered Vector.  The number of DSCPs
      MUST be 2 or more.  Any DSCP values can be used.  Use 1000 Flows
      identified by IP SA/DA and DSCP value.  The Offered Load MUST
      exceed the Forwarding Capacity of the single egress link by 25%,
      using 2 ingress links.
   3. Using the Test Topology in Figure 2, source the Offered Load from
      the Tester to the DUT.  The aggregate of the ingress offered load
      MUST exceed the Forwarding Capacity of the egress link to produce
      Forwarding Congestion.
   4. Measure and record the Output Vector for each DSCP in the
      Codepoint Set.
   5. Maintain the offered load for a minimum of 10 minutes to observe
      possible variations in measurements.
   6. Repeat steps 2 through 5 with 10000 and 100000 Flows.
   7. Increment the offered load by 25%, up to a maximum of 200%.
   8. Increment the number of DSCPs used and repeat steps 1 through 6.

   Expected Results:  The Forwarding Vector equals the Expected Vector.
   There is packet loss due to congestion, but there are no
   out-of-order packets.  Output Vectors match the Expected Vectors for
   each DSCP in the Codepoint Set.

4.4 Undifferentiated Response with Logical Interfaces

   Purpose:  To establish the baseline performance of the DUT with
   logical interfaces, such as VLANs, without Forwarding Congestion.

   Procedure:
   1. Configure the egress physical interface so that it has multiple
      logical interfaces.  The number of interfaces MUST be recorded.
   2. Configure the DUT with an Expected Vector on each logical
      interface so that the Expected Forwarding Vector equals the
      Forwarding Capacity of the logical interface for each DSCP in the
      Codepoint Set.
   3. Configure the Tester for the Offered Vector.  The number of DSCPs
      MUST equal 1, and the RECOMMENDED DSCP value is 0 (Best Effort).
      Use 1000 Flows identified by IP SA/DA.  All flows have the same
      DSCP value.
   4. Using the Test Topology in Figure 1, source the Offered Load from
      the Tester to the DUT.
   5. Measure and record the Output Vector for each logical interface.
   6. Maintain the offered load for a minimum of 10 minutes to observe
      possible variations in measurements.
   7. Repeat steps 2 through 6 with 10000 and 100000 Flows.

   Expected Results:  The Forwarding Vector equals the Expected Vector,
   which also equals the Offered Load.  There is no packet loss and
   there are no out-of-order packets.

4.5 Baseline for Traffic Control Mechanisms on Logical Interfaces

   Purpose:  To benchmark the Output Vectors for a Codepoint Set at
   logical interfaces of a single physical egress link, for each DSCP,
   when there is no Forwarding Congestion.

   Procedure:
   1. Configure the egress physical interface so that it has multiple
      logical interfaces.  The number of interfaces MUST be recorded.
   2. Configure the DUT with Expected Vectors on each logical interface
      so that the Expected Forwarding Vector for each DSCP in the
      Codepoint Set is less than the Forwarding Capacity of the logical
      interface.
   3. Configure the Tester for the Offered Vector.  The number of DSCPs
      MUST be 2 or more.  Any DSCP values can be used and MUST be
      recorded.  Use 1000 Flows identified by IP SA/DA and DSCP value.
   4. Using the Test Topology in Figure 1, source the Offered Load from
      the Tester to the DUT.
   5. Measure and record the Output Vector for each DSCP in the
      Codepoint Set.
   6. Maintain the offered load for a minimum of 10 minutes to observe
      possible variations in measurements.
   7. Repeat steps 2 through 6 with 10000 and 100000 Flows.
   8. Increment the number of DSCPs used and repeat steps 1 through 7.

   Expected Results:  Output Vectors equal the Expected Vectors.  There
   is no packet loss and there are no out-of-order packets.  Output
   Vectors match the Expected Vectors for each DSCP in the Codepoint
   Set on each logical interface.
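   The test cases with Forwarding Congestion (Sections 4.3 and 4.6)
   start the Offered Load 25% above the relevant Forwarding Capacity
   and increment it by 25% up to a 200% maximum.  A small helper for
   that sweep, with an illustrative capacity value:

   ```python
   def congestion_load_steps(capacity_pps, start_pct=125, stop_pct=200,
                             step_pct=25):
       # Offered-load sweep: 125%, 150%, 175%, and 200% of the
       # Forwarding Capacity of the congested egress interface.
       return [capacity_pps * pct // 100
               for pct in range(start_pct, stop_pct + 1, step_pct)]

   # For a hypothetical 1,000,000 pps Forwarding Capacity.
   print(congestion_load_steps(1000000))
   ```

   Each step of the sweep is then held for the 10-minute minimum Test
   Duration while the Output Vector is measured.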
4.6 Traffic Control Mechanisms on Logical Interfaces with Forwarding
    Congestion

   Purpose:  To benchmark the Output Vectors for a Codepoint Set at
   logical interfaces of a single physical link, for each DSCP, when
   there is Forwarding Congestion.

   Procedure:
   1. Configure the egress physical interface so that it has multiple
      logical interfaces.  The number of interfaces MUST be recorded.
   2. Configure the DUT with Expected Vectors on each logical interface
      so that the Expected Forwarding Vector for each DSCP in the
      Codepoint Set is less than the Forwarding Capacity of the logical
      interface.
   3. Configure the Tester for the Offered Vector.  The number of DSCPs
      MUST be 2 or more.  Any DSCP values can be used.  Use 1000 Flows
      identified by IP SA/DA and DSCP value.  The Offered Load MUST
      exceed the Forwarding Capacity of the logical interface by 25%.
   4. Using the Test Topology in Figure 2, source the Offered Load from
      the Tester to the DUT.  The ingress offered load MUST exceed the
      reduced interface bandwidth of each egress logical interface to
      produce Forwarding Congestion.
   5. Measure and record the Output Vector for each DSCP in the
      Codepoint Set.
   6. Maintain the offered load for a minimum of 10 minutes to observe
      possible variations in measurements.
   7. Repeat steps 2 through 6 with 10000 and 100000 Flows.
   8. Increment the offered load by 25%, up to a maximum of 200%.
   9. Increment the number of DSCPs used and repeat steps 1 through 6.

   Expected Results:  The Forwarding Vector equals the Expected Vector.
   There is packet loss due to drops from congestion, but there are no
   out-of-order packets.  Output Vectors match the Expected Vectors for
   each DSCP in the Codepoint Set.

5. IANA Considerations

   This document requires no IANA considerations.
6. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to production networks.

   Packets with unintended and/or unauthorized DSCP or IP precedence
   values may present security issues.  Determining the security
   consequences of such packets is out of scope for this document.

7. Acknowledgements

8. References

8.1 Normative References

   [Br91]  Bradner, S., "Benchmarking Terminology for Network
           Interconnection Devices", RFC 1242, July 1991.

   [Br97]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

   [Br98]  Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering,
           S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G.,
           Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, S.,
           Wroclawski, J., and L. Zhang, "Recommendations on Queue
           Management and Congestion Avoidance in the Internet",
           RFC 2309, April 1998.

   [Ma98]  Mandeville, R., "Benchmarking Terminology for LAN Switching
           Devices", RFC 2285, July 1998.

   [Ni98]  Nichols, K., Blake, S., Baker, F., and D. Black,
           "Definition of the Differentiated Services Field (DS Field)
           in the IPv4 and IPv6 Headers", RFC 2474, December 1998.

   [Pp06]  Perser, J., Poretsky, S., Erramilli, S., and S. Khurana,
           "Terminology for Benchmarking Network-layer Traffic Control
           Mechanisms", draft-ietf-bmwg-dsmterm-12, work in progress,
           2006.

8.2 Informative References

   [Bl98]  Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z.,
           and W. Weiss, "An Architecture for Differentiated
           Services", RFC 2475, December 1998.

   [Br99]  Bradner, S. and J. McQuaid,
           "Benchmarking Methodology for Network Interconnect
           Devices", RFC 2544, March 1999.

   [Fl93]  Floyd, S. and V. Jacobson, "Random Early Detection Gateways
           for Congestion Avoidance", IEEE/ACM Transactions on
           Networking, V.1 N.4, August 1993, pp. 397-413,
           ftp://ftp.ee.lbl.gov/papers/early.pdf.

   [Ja99]  Jacobson, V., Nichols, K., and K. Poduri, "An Expedited
           Forwarding PHB", RFC 2598, June 1999.

   [Ma91]  Mankin, A. and K. Ramakrishnan, "Gateway Congestion Control
           Survey", RFC 1254, August 1991.

   [Sc96]  Schulzrinne, H., Casner, S., Frederick, R., and V.
           Jacobson, "RTP: A Transport Protocol for Real-Time
           Applications", RFC 1889, January 1996.

9. Authors' Addresses

   Scott Poretsky
   Reef Point Systems
   8 New England Executive Park
   Burlington, MA 01803
   USA
   Phone: +1 508 439 9008
   EMail: sporetsky@reefpoint.com

   Richard Watts
   Cisco Systems
   200 Longwater Avenue
   Reading RG2 6GB
   United Kingdom
   Phone: +44 208 824 8139
   EMail: riwatts@cisco.com

Full Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE
   INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.