Network Working Group                                    J. Karthik, Ed.
Internet-Draft                                             Cisco Systems
Intended status: Informational                           R. Papneja, Ed.
Expires: August 22, 2008                                         Isocore
                                                              M. Nanduri
                                                     Tata Communications
                                                       February 19, 2008

              Methodology for Benchmarking LDP Data Plane Convergence

               draft-karthik-bmwg-ldp-convergence-meth-02

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 22, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2008).

Abstract

   This document describes the methodology, including procedure and
   network setup, for benchmarking Label Distribution Protocol (LDP)
   [RFC5036] convergence.  The proposed methodology is to be used for
   benchmarking LDP convergence independent of the underlying IGP used
   (OSPF or ISIS) and of the LDP operating modes.  The terms used in
   this document are defined in a companion document [LDP-Term].

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Existing definitions . . . . . . . . . . . . . . . . . . . . .  3
   3.  Benchmarking Considerations  . . . . . . . . . . . . . . . . .  4
     3.1.  Convergence Events . . . . . . . . . . . . . . . . . . . .  4
     3.2.  Failure Detection  . . . . . . . . . . . . . . . . . . . .  5
     3.3.  Use of Data Traffic for LDP Convergence  . . . . . . . . .  5
     3.4.  Selection of IGP . . . . . . . . . . . . . . . . . . . . .  5
     3.5.  LDP FEC Scaling  . . . . . . . . . . . . . . . . . . . . .  5
     3.6.  Timers . . . . . . . . . . . . . . . . . . . . . . . . . .  5
     3.7.  BGP Configuration  . . . . . . . . . . . . . . . . . . . .  6
     3.8.  Traffic generation . . . . . . . . . . . . . . . . . . . .  6
     3.9.  Test Payload . . . . . . . . . . . . . . . . . . . . . . .  7
   4.  Test Setup . . . . . . . . . . . . . . . . . . . . . . . . . .  7
   5.  Test Methodology . . . . . . . . . . . . . . . . . . . . . . .  7
     5.1.  Objective  . . . . . . . . . . . . . . . . . . . . . . . .  8
     5.2.  Test Setup . . . . . . . . . . . . . . . . . . . . . . . .  8
     5.3.  Test Configuration . . . . . . . . . . . . . . . . . . . .  8
     5.4.  Procedure  . . . . . . . . . . . . . . . . . . . . . . . .  8
   6.  Reporting Format . . . . . . . . . . . . . . . . . . . . . . .  8
   7.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . .  9
   8.  Security Considerations  . . . . . . . . . . . . . . . . . . .  9
   9.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . .  9
   10. References . . . . . . . . . . . . . . . . . . . . . . . . . .  9
     10.1. Normative References . . . . . . . . . . . . . . . . . . .  9
     10.2. Informative References . . . . . . . . . . . . . . . . . . 10
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 10
   Intellectual Property and Copyright Statements . . . . . . . . . . 11

1.  Introduction

   Results of several recent surveys indicate that LDP is becoming one
   of the key enablers of a large number of MPLS-based services such as
   Layer 2 and Layer 3 VPNs [RFC5037] [RFC5038].  Given the revenue
   that these services generate for service providers, it is imperative
   that recovery of these services from failures be quick enough to be
   unnoticeable to the end user.  This is ensured when implementations
   can guarantee very short convergence times for any planned or
   unplanned failure.  Given the criticality of network convergence,
   service providers consider convergence a key metric when evaluating
   router architectures and LDP implementations.  End customers monitor
   service level agreements based on total packets lost in a given time
   frame; hence, convergence becomes a direct measure of reliability
   and quality.

   This document describes the methodology for benchmarking LDP data
   plane convergence.  An accompanying document describes the
   terminology related to LDP data plane convergence benchmarking
   [LDP-Term].  The primary motivation for this work is the increased
   focus on minimizing convergence time for LDP as an alternative to
   other solutions such as MPLS Fast Reroute (i.e., protection
   techniques using RSVP-TE extensions).  The procedures outlined here
   are transparent to the advertisement type (Downstream on Demand vs.
   Downstream Unsolicited), the Label Retention mode in use, and the
   Label Distribution Control mode, and hence can be used with all of
   these types.

   The test cases defined in this document consider black-box tests
   that emulate the network events causing route convergence events.
   This is similar to the approach defined in
   [I-D.ietf-bmwg-igp-dataplane-conv-app].  The methodology (and
   terminology) for benchmarking LDP FEC convergence is independent of
   the link-state IGP in use, such as ISIS or OSPF.  These
   methodologies apply to IPv4 and IPv6 traffic as well as IPv4 and
   IPv6 IGPs.

   Future versions of this document will include ECMP benchmarks, LDP
   targeted peers and correlated failure scenarios.

2.  Existing definitions

   For the sake of clarity and continuity, this document adopts the
   template for definitions set out in Section 2 of RFC 1242.
   Definitions are indexed and grouped together in sections for ease of
   reference.

   This document uses commonly used MPLS terminology as much as
   possible, some of which is defined in [LDP-Term], but will extend
   the terminology when necessary.  It is assumed that the reader is
   familiar with the concepts introduced in [LDP-Term], as they will
   not be repeated here.

3.  Benchmarking Considerations

   This section discusses the fundamentals of LDP data plane convergence
   benchmarking:

   o  Network events that cause rerouting

   o  Failure detections

   o  Data traffic

   o  Traffic generation

   o  IGP Selection

3.1.  Convergence Events

   FEC reinstallation by LDP is triggered by link or node failures
   downstream of the DUT (Device Under Test) that impact the network
   stability:

   o  Interface Shutdown on DUT side with POS Alarm

   o  Interface Shutdown on remote side with POS Alarm

   o  Interface Shutdown on DUT side with BFD

   o  Interface Shutdown on remote side with BFD

   o  Fiber Pull on DUT side

   o  Fiber Pull on remote side

   o  Online Insertion and Removal (OIR) of line cards on DUT side

   o  Online Insertion and Removal (OIR) on remote side

   o  Downstream node failure

   o  New peer coming up

   o  New link coming up

   o  Soft Failures (LDP Hello Timers expiring)

   o  BFD detecting failures in the forwarding plane

3.2.  Failure Detection

   Local failures can be detected via SONET failure detection on a
   directly connected LSR.  Failure detection may vary with the type of
   alarm - LOS, AIS, or RDI.  Failures on Ethernet links such as
   Gigabit Ethernet sometimes rely upon a Layer 3 signaling indication
   for failure.  L3 failures could also be detected using BFD.

3.3.  Use of Data Traffic for LDP Convergence

   Customers of service providers use packet loss as the metric for
   failover time.  Packet loss is an externally observable event having
   direct impact on customers' application performance.  LDP
   convergence benchmarking aims at measuring traffic loss to determine
   the down time when a convergence event occurs.

3.4.  Selection of IGP

   The LDP convergence methodology presented here is independent of the
   type of underlying IGP used.

3.5.  LDP FEC Scaling

   The number of installed LDP FECs will impact the measured LDP
   convergence time for the entire LDP FEC table.  To obtain results
   similar to those that would be observed in an operational network,
   it is recommended that the number of installed routes closely
   approximate that of the routers in the real operational network.
   The number of IGP areas or levels may not impact the LDP convergence
   time; however, it does impact the performance of IGP route
   convergence.

3.6.  Timers

   There are some timers that will impact the measured LDP convergence
   time.  While the default timers may be suitable in most cases, it is
   recommended that the following timers be configured to the minimum
   value prior to beginning execution of the test cases:

        Timer                                   Recommended Value
        -----                                   -----------------
        Link Failure Indication Delay           <10 milliseconds
        IGP Hello Timer                         1 second
        LDP Hello Timer                         1 second
        LDP Hold Timer                          3 seconds
        IGP Dead-Interval                       3 seconds
        LSA Generation Delay                    0
        LSA Flood Packet Pacing                 0
        LSA Retransmission Packet Pacing        0
        SPF Delay                               0

                     Figure 1: Timers Recommendation
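   The recommendations of Figure 1 can be captured programmatically for
   pre-test validation.  The sketch below is purely illustrative and
   not part of the methodology; the key names are hypothetical and do
   not correspond to any router CLI, and all values are expressed in
   milliseconds under that assumption.

```python
# Illustrative sketch only: encode the Figure 1 recommendations (in
# milliseconds) and flag configured timers that exceed them.  The key
# names are hypothetical, not taken from any router implementation.
RECOMMENDED_MS = {
    "link_failure_indication_delay": 10,   # "<10 milliseconds"
    "igp_hello": 1000,
    "ldp_hello": 1000,
    "ldp_hold": 3000,
    "igp_dead_interval": 3000,
    "lsa_generation_delay": 0,
    "lsa_flood_packet_pacing": 0,
    "lsa_retransmission_packet_pacing": 0,
    "spf_delay": 0,
}

def timers_above_recommendation(configured_ms):
    """Return the subset of configured timers above the recommendation."""
    return {name: value for name, value in configured_ms.items()
            if value > RECOMMENDED_MS.get(name, value)}
```

   For example, a configured LDP Hello timer of 5 seconds (5000 ms)
   would be flagged, while an SPF delay of 0 would not.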

3.7.  BGP Configuration

   The observed LDP convergence numbers could be different if BGP routes
   are installed, and will further worsen if any failure event imposed
   to measure the LDP convergence causes BGP routes to flap.  Installed
   BGP routes consume not only memory but also CPU cycles when routes
   need to reconverge.  Hence, the tester could do one of the
   following:

   1.  Have the BGP routes and LDP FECs on different paths, thereby
       ensuring that flaps in the LDP path would not affect the BGP
       routes.

   2.  If the observations show a significant difference due to BGP
       convergence, rerun the test with no BGP routes.

3.8.  Traffic generation

   It is suggested that at least 3 traffic streams be configured using
   a traffic generator.  In order to monitor the DUT performance for
   recovery times, a set of route prefixes should be advertised before
   traffic is sent.  The traffic should be configured to be sent to
   these routes.

   A typical example would be configuring the traffic generator to send
   traffic to the first and last of the advertised routes.  Also, in
   order to have a good understanding of the performance behavior, one
   may choose to send traffic to the route lying at the middle of the
   advertised routes.  For example, if 100 routes are advertised, the
   user should send traffic to route prefix number 1, route prefix
   number 50, and to the last route prefix advertised, which is 100 in
   this example.

   If the traffic generator is capable of sending traffic to multiple
   prefixes without losing granularity, traffic could be generated to a
   larger number of prefixes than the recommended 3.
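   The first/middle/last stream selection described above can be
   sketched as follows; this is an illustration of the example in the
   text, not a normative procedure, and the function name is an
   assumption of this sketch.

```python
# Illustrative sketch: pick the first, middle, and last of N advertised
# route prefixes as traffic-stream destinations (Section 3.8 example).
def stream_targets(num_routes):
    """Return 1-based prefix indices for the three suggested streams."""
    if num_routes < 3:
        # Fewer than 3 routes: target every advertised prefix.
        return list(range(1, num_routes + 1))
    return [1, (num_routes + 1) // 2, num_routes]
```

   For the 100-route example in the text, this yields prefixes 1, 50,
   and 100.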

3.9.  Test Payload

   This memo does not explicitly discuss the LDP applications, such as
   Layer 3 VPN, mVPN, or Layer 2 VPN (VPLS, AToM), that could be used
   as the payload of LDP.  The authors of this memo do not believe that
   the convergence of LDP is dependent on the application, and
   verification of this statement is beyond the scope of this document.

4.  Test Setup

   Topologies to be used for benchmarking the LDP Convergence:

4.1.  Topology for Single NextHop FECs (Link Failure with parallel
      links)

                       --------   A  --------
                   TG-|Ingress |----| Egress |-TA
                      |  DUT   |----|  Node  |
                       --------   B  --------

                     A - Preferred egress interface
                     B - Next-best egress interface
                     TA - Traffic Analyzer
                     TG - Traffic Generator

       Figure 2: Topology for Single NextHop FECs (Link Failure)

4.2.  Topology for Multi NextHop FECs (Link and Node Failure)

                              --------
                     --------| Midpt  |---------
                    |        | Node 2 |         |
                    | B       --------          |
                    |                           |
                --------      --------      ---------
            TG-|Ingress |----| Midpt  |----| Egress  |-TA
               |  DUT   |  A | Node 1 |    |  Node   |
                --------      --------      ---------

              A - Preferred egress interface
              B - Next-best egress interface
              TA - Traffic Analyzer
              TG - Traffic Generator

        Figure 3: Topology for Multi NextHop FECs (Node Failure)

5.  Test Methodology

   The procedure described here can apply to all the convergence
   benchmarking cases.

5.1.  Objective

   To benchmark the LDP Data Plane Convergence time as seen on the DUT
   when a convergence event occurs that results in the current best FEC
   no longer being reachable.

5.2.  Test Setup

   Based on whether the 1-hop or multi-hop case is benchmarked, use the
   appropriate setup from the ones described in Section 4.

5.3.  Test Configuration

   1.  Configure LDP and any other necessary routing protocol
       configuration on the DUT and on the supporting devices.

   2.  Advertise FECs over parallel interfaces upstream to the DUT.

5.4.  Procedure

   1.  Verify that the DUT installs the FECs in the MPLS forwarding
       table.

   2.  Generate traffic destined to the FECs advertised by the egress.

   3.  Verify and ensure there is 0 traffic loss.

   4.  Trigger any choice of failure/convergence event as described in
       Section 3.1.

   5.  Verify that forwarding resumes over the next-best egress
       interface.

   6.  Stop the traffic stream and measure the traffic loss.

   7.  Convergence time is calculated as defined in Section 6,
       Reporting Format.

6.  Reporting Format

   For each test, it is recommended that the results be reported in the
   following format.
   Parameter                           Units

   IGP used for the test               ISIS-TE / OSPF-TE
   Interface types                     GigE, POS, ATM, etc.
   Packet sizes offered to the DUT     bytes
   IGP routes advertised               number of IGP routes

   Benchmarks

   1st Prefix's convergence time       milliseconds
   Mid Prefix's convergence time       milliseconds
   Last Prefix's convergence time      milliseconds

   The convergence time suggested above is calculated using the
   following formula:

      (number of packets dropped / offered rate in packets per second)
      * 1000 milliseconds
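   The packet-loss-based formula above can be sketched as a small
   helper; this is an illustration of the calculation only, and the
   function name is an assumption of this sketch.

```python
# Illustrative sketch of the Section 6 formula: convergence time in
# milliseconds, derived from the number of packets dropped and the
# offered traffic rate in packets per second.
def convergence_time_ms(packets_dropped, offered_rate_pps):
    """(packets dropped / offered rate in pps) * 1000 -> milliseconds."""
    return packets_dropped / offered_rate_pps * 1000.0
```

   For example, 5000 dropped packets at an offered rate of 100,000
   packets per second corresponds to a convergence time of 50.0 ms.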
7.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an
   RFC.

8.  Security Considerations

   The security considerations that apply to any active measurement of
   live networks are relevant here as well.  See [RFC4656].

9.  Acknowledgements

   We thank Bob Thomas for providing valuable comments to this document.
   We also thank Andrey Kiselev for his review and suggestions.

10.  References

10.1.  Normative References

   [I-D.ietf-bmwg-igp-dataplane-conv-app]
              Poretsky, S., "Considerations for Benchmarking Link-State
              IGP Data Plane Route Convergence",
              draft-ietf-bmwg-igp-dataplane-conv-app-14 (work in
              progress), November 2007.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4656]  Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M.
              Zekauskas, "A One-way Active Measurement Protocol
              (OWAMP)", RFC 4656, September 2006.

   [RFC5036]  Andersson, L., Minei, I., and B. Thomas, "LDP
              Specification", RFC 5036, October 2007.

   [RFC5037]  Andersson, L., Minei, I., and B. Thomas, "Experience with
              the Label Distribution Protocol (LDP)", RFC 5037,
              October 2007.

   [RFC5038]  Thomas, B. and L. Andersson, "The Label Distribution
              Protocol (LDP) Implementation Survey Results", RFC 5038,
              October 2007.

10.2.  Informative References

   [LDP-Term]
              Eriksson, T., et al., "Terminology for Benchmarking LDP
              Data Plane Convergence",
              draft-eriksson-ldp-convergence-term-04 (work in
              progress), February 2007.

Authors' Addresses

   Jay Karthik (editor)
   Cisco Systems
   300 Beaver Brook Road
   Boxborough, MA  01719
   USA

   Phone: +1 978 936 0533
   Fax:   +1 978 936 0000
   Email: jkarthik@cisco.com
   URI:   http://www.cisco.com

   Rajiv Papneja (editor)
   Isocore
   12359 Sunrise Valley Drive, STE 100
   Reston, VA  20190
   USA

   Phone: +1 703 860 9273
   Email: rpapneja@isocore.com

   URI:   www.isocore.com

   Mohan Nanduri
   Tata Communications
   12010 Sunset Hills Road, 4th Floor
   Reston, VA  20190
   USA

   Email: Mohan.Nanduri@tatacommunications.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on an
   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.


Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed to
   pertain to the implementation or use of the technology described in
   this document or the extent to which any license under such rights
   might or might not be available; nor does it represent that it has
   made any independent effort to identify any such rights.  Information
   on the procedures with respect to rights in RFC documents can be
   found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use of
   such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository at
   http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).