Network Working Group                                              X. Fu
Internet-Draft                                                       ZTE
Intended status: Standards Track                               V. Manral
Expires: March 15, 2012                            Hewlett-Packard Corp.
                                                                M. Betts
                                                                 Q. Wang
                                                                     ZTE
                                                              D. McDysan
                                                                A. Malis
                                                                 Verizon
                                                            S. Giacalone
                                                         Thomson Reuters
                                                                J. Drake
                                                        Juniper Networks
                                                      September 12, 2011

     Traffic Engineering architecture for latency and loss services
                               aware MPLS
               draft-fuxh-mpls-delay-loss-te-framework-01

Abstract

   With more and more enterprises using cloud based services, the
   distances between customers and service providers, and the
   applications themselves, are growing.  A lot of the current
   applications are designed to work across LANs and have various
   inherent assumptions.  For multiple applications such as High
   Performance Computing and Electronic Financial or trading markets,
   the response times are very critical, as is packet loss, while other
   applications require more throughput.

   [RFC3031] describes the architecture of MPLS based networks.  This
   draft extends the MPLS architecture to allow for latency, loss and
   jitter as traffic engineering metric properties.

   Note that the MPLS architecture for Multicast will be taken up in a
   future version of the draft.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 15, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Architecture requirements overview
     2.1.  Requirement for Composite Link
     2.2.  Requirement for Hierarchy LSP
   3.  End-to-End Latency Measurements
   4.  End-to-End Jitter Measurements
   5.  End-to-End Loss Measurements
   6.  Protocol Considerations
   7.  Restoration, Protection and Rerouting
   8.  Control Plane Implication
     8.1.  Implications for Routing
       8.1.1.  Implications for Signaling
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgements
   12. References
     12.1. Normative References
     12.2. Informative References
   Authors' Addresses

1.  Introduction

   In High Frequency trading for Electronic Financial markets,
   computers make decisions based on the Electronic Data received,
   without human intervention.  These trades now account for a majority
   of trading volumes and rely exclusively on ultra-low-latency direct
   market access.

   The current operation and maintenance mode of latency and packet
   loss measurement is high in cost and low in efficiency.  Latency and
   packet loss can only be measured after a connection has been
   established; if the measurement indicates that the latency SLA is
   not met, another path is computed, set up and measured.  This "trial
   and error" process is very inefficient.  To avoid this problem, a
   means of making an accurate prediction of latency and packet loss
   before a path is established is required.

   This document describes the requirements and control plane
   implications of communicating latency and packet loss as traffic
   engineering performance metrics in today's networks, which consist
   of potentially multiple layers of packet transport network and
   optical transport network, in order to meet the latency/loss SLA
   between a service provider and its customers.

   Extremely low latency and packet loss measurement mechanisms for
   MPLS LSP tunnels are sufficiently clear that they could be defined
   in [ietf-mpls-loss-delay].  They allow a node to measure latency and
   to monitor performance metrics for packet loss.  The measurements
   are, however, effective only after the LSP is created, and cannot be
   used by an MPLS path computation engine to define paths that have
   the desired latency.  This draft defines the architecture used, so
   that end-to-end tunnels can be set up based on latency, loss or
   jitter characteristics.

   End-to-end service optimization based on latency and packet loss is
   a key requirement for service providers.  This type of function will
   be adopted by their "premium" service customers, who are willing to
   pay for such a "premium" service.  Latency and loss information at
   the route level will help carriers' customers make their provider
   selection decisions.

2.  Architecture requirements overview

   The solution MUST provide a means to communicate latency, latency
   variation and packet loss of links and nodes as traffic engineering
   performance metrics into the IGP.

   Latency, latency variation and packet loss may be unstable; for
   example, if queueing latency were included, the IGP could become
   unstable.  The solution MUST provide a means to control the
   advertisement of latency and loss in IGP messages and to avoid
   instability when the latency, latency variation and packet loss
   values change.

   The path computation entity MUST have the capability to compute an
   end-to-end path with latency and packet loss constraints.  For
   example, it has the capability to compute a route with X amount of
   bandwidth, with less than Y ms of latency and a Z% packet loss
   limit, based on the latency and packet loss traffic engineering
   database.  It MUST also support path computation with combinations
   of routing constraints with pre-defined priorities, e.g., SRLG
   diversity, latency, loss and cost.
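   As an illustration of this path computation behaviour, the sketch
   below (Python; the link attributes and the exhaustive search are
   purely illustrative and not part of any specified protocol) filters
   candidate paths by available bandwidth and then applies the latency
   and loss limits, using the additive approximation for both metrics:

```python
import itertools

def feasible_paths(links, src, dst, bw, max_latency_ms, max_loss_pct):
    # links: {(a, b): {"bw": Mbps, "latency": ms, "loss": %, "cost": n}}
    # Exhaustive search keeps the sketch small; a real path computation
    # entity would run a constrained SPF over the TE database.
    nodes = {n for ab in links for n in ab}
    results = []
    for r in range(len(nodes) - 1):
        for mid in itertools.permutations(nodes - {src, dst}, r):
            hops = list(zip((src,) + mid, mid + (dst,)))
            if not all(h in links for h in hops):
                continue
            if any(links[h]["bw"] < bw for h in hops):
                continue                      # needs X amount of bandwidth
            latency = sum(links[h]["latency"] for h in hops)
            loss = sum(links[h]["loss"] for h in hops)  # additive approx.
            if latency <= max_latency_ms and loss <= max_loss_pct:
                cost = sum(links[h]["cost"] for h in hops)
                results.append((cost, latency, loss, [src, *mid, dst]))
    return sorted(results)

links = {
    ("A", "B"): {"bw": 100, "latency": 2.0, "loss": 0.01, "cost": 1},
    ("B", "D"): {"bw": 100, "latency": 2.5, "loss": 0.01, "cost": 1},
    ("A", "C"): {"bw": 100, "latency": 1.0, "loss": 0.50, "cost": 1},
    ("C", "D"): {"bw": 100, "latency": 1.0, "loss": 0.50, "cost": 1},
}
# A-C-D is faster but lossier; a 0.1% loss limit selects A-B-D instead.
best = feasible_paths(links, "A", "D", 50, 10.0, 0.1)[0]
```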

2.1.  Requirement for Composite Link

   One end-to-end LSP may traverse some Composite Links [CL-REQ].  Even
   if the transport technology (e.g., OTN) implementing the component
   links is identical, the latency and packet loss characteristics of
   the component links may differ.

   The solution MUST provide a means to indicate that a traffic flow
   should select a component link with minimum latency and/or packet
   loss, a maximum acceptable latency and/or packet loss value, and a
   maximum acceptable delay variation value, as specified by the
   protocol.  The endpoints of a Composite Link will take these
   parameters into account for component link selection or creation.
   The exact details for component links will be taken up separately
   and are not part of this document.

2.2.  Requirement for Hierarchy LSP

   One end-to-end LSP may traverse a server layer.  There will be some
   latency and packet loss constraint requirements for the segment
   route in the server layer.

   The solution MUST provide a means to indicate FA selection or FA-LSP
   creation with minimum latency and/or packet loss, a maximum
   acceptable latency and/or packet loss value, and a maximum
   acceptable delay variation value.  The boundary nodes of the FA-LSP
   will take these parameters into account for FA selection or FA-LSP
   creation.

3.  End-to-End Latency Measurements

   Procedures to measure latency and loss have been provided in ITU-T
   [Y.1731], [G.709] and [ietf-mpls-loss-delay].  The control plane is
   independent of the mechanism used, and different mechanisms can be
   used for measurement based on different standards.

   Latency on a path has two sources: node latency, which is caused by
   the node as a result of the processing time in each node, and link
   latency, as a result of packet/frame transit time between two
   neighbouring nodes or across a FA-LSP/Composite Link [CL-REQ].

   Latency or one-way delay is the time it takes for a packet within a
   stream to go from measurement point 1 to measurement point 2.  In
   some usages, latency is measured by sending a packet/frame that is
   returned to the sender; the round-trip time is then considered the
   latency of a bidirectional co-routed or associated LSP, while the
   one-way time is considered the latency of a unidirectional LSP.  The
   one-way latency may not be half of the round-trip latency when the
   transmit and receive directions of the path are of unequal lengths.

   The architecture uses the assumption that the sum of the latencies
   of the individual components approximately adds up to the average
   latency of an LSP.  Though using the sum may not be perfect, it
   gives a good approximation that can be used for Traffic Engineering
   (TE) purposes.

   The total latency of an LSP consists of the sum of the latency of
   each hop, as well as the latency of switching on a device, which may
   vary based on queuing and buffering.

   Hop latency can be measured by taking the latency measurement
   between the egress of one MPLS LSR and the ingress of the next-hop
   LSR.  This value may be constant for the most part, unless there is
   protection switching or other similar changes at a lower layer.

   The switching latency on a device can be measured internally, and
   multiple mechanisms and data structures to do the same have been
   defined.  [Add references to papers by Verghese, Kompella,
   Duffield.]  Though the mechanisms define how to do flow based
   measurements, the amount of information gathered in such a case may
   become too cumbersome for the Path Computation element to
   effectively use.

   An approximation of flow based measurement is the per-DSCP
   measurement from the ingress of one port to the egress of every
   other port in the device.

   Another approximation that can be used is per-interface DSCP based
   measurement, which can be an aggregate of the average measurements
   per interface.  The average can itself be calculated in ways that
   provide a closer approximation.

   For the purpose of this draft it is assumed that the node latency is
   a small factor of the total latency in the networks where this
   solution is deployed.  The node latency is hence ignored for the
   benefit of simplicity.

   The delay is measured in terms of 10's of nano-seconds.
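   A minimal sketch of the additive model (Python; the rounding
   granularity is illustrative): the end-to-end estimate is simply the
   sum of the advertised per-hop latencies, reported at a granularity
   of 10's of nano-seconds:

```python
def estimated_lsp_latency_ns(hop_latencies_ns, granularity_ns=10):
    # Sum the advertised per-hop (link) latencies; node latency is
    # ignored, as assumed in this draft.  The result is rounded up to
    # the reporting granularity (10's of nano-seconds by default).
    total = sum(hop_latencies_ns)
    return -(-total // granularity_ns) * granularity_ns

# Three hops measured at 123 ns, 457 ns and 3042 ns sum to 3622 ns,
# reported as 3630 ns.
estimate = estimated_lsp_latency_ns([123, 457, 3042])
```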

4.  End-to-End Jitter Measurements

   Jitter or Packet Delay Variation of a packet within a stream of
   packets is defined for a selected pair of packets in the stream
   going from measurement point 1 to measurement point 2.

   The architecture uses the assumption that the sum of the jitter of
   the individual components approximately adds up to the average
   jitter of an LSP.  Though using the sum may not be perfect, it gives
   a good approximation that can be used for Traffic Engineering (TE)
   purposes.

   There may be very little jitter on a link-hop basis.  The buffering
   and queuing within a device will lead to the more significant
   jitter.  Just like latency measurements, jitter measurements can be
   approximated as either per DSCP per port pair (ingress and egress)
   or as per DSCP per egress port.

   For the purpose of this draft it is assumed that the node jitter is
   a small factor of the total jitter in the networks where this
   solution is deployed.  The node jitter is hence ignored for the
   benefit of simplicity.

   The jitter is measured in terms of 10's of nano-seconds.
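   As a sketch of how per-pair delay variation can be derived from
   one-way delay samples (Python; pairing each packet with its
   predecessor is just one possible pair selection):

```python
def pdv_samples_ns(one_way_delays_ns):
    # Packet Delay Variation for selected packet pairs: here each
    # packet is paired with its predecessor in the stream, and the
    # sample is the difference of their one-way delays.
    return [abs(b - a) for a, b in zip(one_way_delays_ns,
                                       one_way_delays_ns[1:])]

def average_jitter_ns(one_way_delays_ns):
    # Report jitter as an average over the PDV samples.
    samples = pdv_samples_ns(one_way_delays_ns)
    return sum(samples) / len(samples)

# One-way delays of 1000, 1040 and 1010 ns give PDV samples of 40 and
# 30 ns, i.e. an average jitter of 35.0 ns.
jitter = average_jitter_ns([1000, 1040, 1010])
```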

5.  End-to-End Loss Measurements

   Loss or Packet Drop probability of a packet within a stream of
   packets is defined as the number of packets dropped within a given
   interval.  Packet/frame loss probability is expressed as a
   percentage: the number of service packets/frames not delivered
   divided by the total number of service frames during a time
   interval T.

   The architecture uses the assumption that the sum of the loss of the
   individual components approximately adds up to the average loss of
   an LSP.  Though using the sum may not be perfect, it gives a good
   approximation that can be used for Traffic Engineering (TE)
   purposes.

   There may be very little loss on a link-hop basis, except in the
   case of physical link issues.

   The buffering and queuing mechanisms within a device will decide
   which packet is to be dropped.  Just like latency and jitter
   measurements, the loss can best be approximated as either per DSCP
   per port pair (ingress and egress) or as per DSCP per egress port.

   The loss is measured in terms of the number of packets per million
   packets.
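   The additive approximation for loss can be compared against the
   exact composition for independent hops; the sketch below (Python,
   with illustrative per-hop figures) shows that the two agree closely
   for the small per-hop losses expected here:

```python
def lsp_loss_ppm(hop_loss_ppm):
    # Additive approximation used by the architecture: end-to-end loss
    # is the sum of per-hop losses, in packets per million.
    return sum(hop_loss_ppm)

def exact_loss_ppm(hop_loss_ppm):
    # Exact composition assuming independent drops: a packet is
    # delivered only if no hop drops it.
    survive = 1.0
    for p in hop_loss_ppm:
        survive *= 1.0 - p / 1_000_000
    return (1.0 - survive) * 1_000_000

hops = [100, 250, 50]          # per-hop loss in packets per million
approx = lsp_loss_ppm(hops)    # 400 ppm
exact = exact_loss_ppm(hops)   # ~399.96 ppm, so the sum is a good fit
```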

6.  Protocol Considerations

   The metrics above can be sent in IGP protocol packets [RFC3630].
   They can then be used by the Path Computation engine to derive paths
   with the desired path properties.

   As Link-state IGP information is flooded throughout an area,
   frequent changes can cause a lot of control traffic.  To prevent
   such flooding, data should only be flooded when the change crosses a
   certain configured maximum.

   A separate measurement should be done for an LSP when it is UP.
   Also, an LSP's path should only be recalculated when the end-to-end
   metrics change in such a way that they become worse than desired.
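   The advertisement decision can be sketched as follows (Python; the
   change threshold and hold-down interval are illustrative
   configuration knobs, not defined by any IGP):

```python
def should_flood(last_advertised_ns, measured_ns,
                 change_threshold_ns, min_interval_s,
                 last_flood_s, now_s):
    # Flood a new latency value only when the change crosses the
    # configured threshold AND the hold-down timer has expired, so
    # that an unstable metric cannot destabilize the link-state IGP.
    if now_s - last_flood_s < min_interval_s:
        return False                      # rate limit on advertisements
    return abs(measured_ns - last_advertised_ns) >= change_threshold_ns

# A 300 ns change against a 200 ns threshold is flooded once the 30 s
# hold-down has elapsed; a 100 ns change is suppressed, as is any
# change inside the hold-down interval.
flood_big = should_flood(1000, 1300, 200, 30, 0, 60)
flood_small = should_flood(1000, 1100, 200, 30, 0, 60)
flood_early = should_flood(1000, 1300, 200, 30, 50, 60)
```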

7.  Restoration, Protection and Rerouting

   Some customers may insist on having the ability to re-route if the
   latency and loss SLA is not being met.  If a "provisioned" end-to-
   end LSP's latency and/or loss could not meet the latency and loss
   agreement between the operator and its user, the solution SHOULD
   support pre-defined or dynamic re-routing to handle this case based
   on the local policy.

   If a "provisioned" end-to-end LSP's latency and/or loss performance
   is improved (i.e., beyond a configurable minimum value) because of
   some segment performance promotion, the solution SHOULD support
   re-routing to optimize the end-to-end latency and/or loss cost.

   The latency performance of a pre-defined protection or dynamically
   re-routed LSP MUST meet the latency SLA parameter.  The difference
   in latency value between the primary and the protection/restoration
   path SHOULD be zero.

   As a result of changes in the latency and loss of the LSP, the
   current LSP may be frequently switched to a new LSP with an
   appropriate latency and packet loss value.  In order to avoid this,
   the solution SHOULD trigger the switchover of the LSP according to a
   maximum acceptable change in latency and packet loss value.

8.  Control Plane Implication

8.1.  Implications for Routing

   The latency and packet loss performance metrics MUST be advertised
   to the path computation entity by the IGP (e.g., OSPF-TE or
   IS-IS-TE) to perform route computation and network planning based on
   the latency and packet loss SLA target.

   Latency, latency variation and packet loss values MUST be reported
   as average values which are calculated by the data plane.

   Latency and packet loss characteristics of these links and nodes may
   change dynamically.  In order to control IGP messaging and avoid
   instability when the latency, latency variation and packet loss
   values change, a threshold and a limit on the rate of change MUST be
   configured in the control plane.

   If any latency and packet loss values change by more than the
   threshold and the limit on the rate of change, then the latency and
   loss change of the link MUST be notified to the IGP again.  The
   receiving node determines whether the link affects any of the LSPs
   for which it is the ingress.  If there are such LSPs, it must
   determine whether those LSPs still meet their end-to-end performance
   objectives.

   A minimum value MUST be configured in the control plane.  If the
   link performance improves beyond the configurable minimum value, it
   must be re-advertised.  The receiving node determines whether a
   "provisioned" end-to-end LSP's latency and/or loss performance is
   improved because of some segment performance promotion.

   It is sometimes important for paths that desire low latency to avoid
   nodes that have a significant contribution to latency.  The control
   plane should report two components of the delay, "static" and
   "dynamic".  The dynamic component is caused by traffic loading and
   queuing, and SHOULD be reported as an approximate value.  The static
   component should be the fixed latency through the node without any
   queuing.  The link latency attribute should also take into account
   the latency of a node, i.e., the latency between the incoming port
   and the outgoing port of a network element.  Half of the fixed node
   latency can be added to each link.

8.1.1.  Implications for Signaling

   In order to assign the LSP to one of the component links with
   different latency and packet loss characteristics, the RSVP-TE
   message needs to carry an indication of the requested minimum
   latency and/or packet loss, the maximum acceptable latency and/or
   packet loss value, and the maximum acceptable delay variation value
   for the component link selection or creation.  The composite link
   will take these parameters into account when assigning the traffic
   of the LSP to a component link.

   One end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may
   traverse a FA-LSP of a server layer (e.g., OTN rings).  There will
   be some latency and packet loss constraint requirements for the
   segment route in the server layer.  So the RSVP-TE message needs to
   carry an indication of the requested minimum latency and/or packet
   loss, the maximum acceptable latency and/or packet loss value, and
   the maximum acceptable delay variation value.  The boundary nodes of
   the FA-LSP will take these parameters into account for FA selection
   or FA-LSP creation.

   RSVP-TE needs to be extended to accumulate (e.g., sum) the latency
   and packet loss information of the links and nodes along one LSP
   across multiple domains (e.g., Inter-AS, Inter-Area or Multi-Layer)
   so that a latency verification can be made at the end points.
   One-way and round-trip latency collection along the LSP by the
   signaling protocol can be supported.  The end points of the LSP can
   then verify whether the total amount of latency meets the agreement
   between the operator and its user.  When RSVP-TE signaling is used,
   the source can determine if the requirement is met much more rapidly
   than by performing the actual end-to-end measurement.

   Restoration, protection and equipment variations can impact the
   "provisioned" latency and packet loss (e.g., the latency and packet
   loss may increase).  For example, a restoration/provisioning action
   in the transport network that increases latency is seen by the
   packet network and observable by customers, possibly violating SLAs.
   The change of an end-to-end LSP's latency and packet loss
   performance MUST be known by the source and/or sink node, so that it
   can inform the higher layer network of a latency and packet loss
   change.  The latency or packet loss change of links and nodes will
   affect an end-to-end LSP's total amount of latency or packet loss.
   Applications can fail beyond an application-specific threshold, so
   some remedy mechanism could be used.

   Pre-defined protection or dynamic re-routing could be triggered to
   handle this case.  In the case of pre-defined protection, large
   amounts of redundant capacity may have a significant negative impact
   on the overall network cost.  A service provider may have many
   layers of pre-defined restoration for this purpose, but it has to
   duplicate restoration resources at significant cost.  The solution
   should provide some mechanisms to avoid the duplicate restoration
   and reduce the network cost.  Dynamic re-routing also faces the risk
   of resource limitation.  So the choice of mechanism MUST be based on
   the SLA or policy.  In the case where the latency SLA can not be met
   after a re-route is attempted, the control plane should report an
   alarm to the management plane.  It could also retry restoration a
   configurable number of times.
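   The hop-by-hop accumulation described earlier in this section can be
   sketched as follows (Python; the accumulation object and its field
   names are purely illustrative, not a defined RSVP-TE extension):

```python
def accumulate_along_path(hops, max_latency_ns, max_loss_ppm):
    # Each transit node adds its downstream link's contribution to a
    # (hypothetical) accumulation object carried with the signaling
    # messages; the end points can then verify the SLA without an
    # actual end-to-end measurement.
    acc = {"latency_ns": 0, "loss_ppm": 0}
    for hop in hops:                      # ingress -> egress order
        acc["latency_ns"] += hop["latency_ns"]
        acc["loss_ppm"] += hop["loss_ppm"]
    acc["meets_sla"] = (acc["latency_ns"] <= max_latency_ns
                        and acc["loss_ppm"] <= max_loss_ppm)
    return acc

# Two domains contributing 40 us and 75 us of latency meet a 200 us
# target, so the ingress can accept the LSP without a measurement.
acc = accumulate_along_path(
    [{"latency_ns": 40_000, "loss_ppm": 10},
     {"latency_ns": 75_000, "loss_ppm": 5}],
    max_latency_ns=200_000, max_loss_ppm=100)
```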

9.  IANA Considerations

   No new IANA considerations are raised by this document.

10.  Security Considerations

   This document raises no new security issues.  The use of control
   plane protocols for the signaling, routing, and path computation of
   latency and loss opens security threats through attacks on those
   protocols.  The control plane may be secured using the mechanisms
   defined for the protocols discussed ([RFC3473], [RFC4203],
   [RFC4205], [RFC4204], and [RFC5440]).  [GMPLS-SEC] provides an
   overview of security vulnerabilities and protection mechanisms for
   the GMPLS control plane.

11.  Acknowledgements

   TBD.

12.  References

12.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3031]  Rosen, E., Viswanathan, A., and R. Callon, "Multiprotocol
              Label Switching Architecture", RFC 3031, January 2001.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
              and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
              Tunnels", RFC 3209, December 2001.

   [RFC3473]  Berger, L., "Generalized Multi-Protocol Label Switching
              (GMPLS) Signaling Resource ReserVation Protocol-Traffic
              Engineering (RSVP-TE) Extensions", RFC 3473, January 2003.

   [RFC3477]  Kompella, K. and Y. Rekhter, "Signalling Unnumbered Links
              in Resource ReSerVation Protocol - Traffic Engineering
              (RSVP-TE)", RFC 3477, January 2003.

   [RFC3630]  Katz, D., Kompella, K., and D. Yeung, "Traffic Engineering
              (TE) Extensions to OSPF Version 2", RFC 3630,
              September 2003.

   [RFC4203]  Kompella, K. and Y. Rekhter, "OSPF Extensions in Support
              of Generalized Multi-Protocol Label Switching (GMPLS)",
              RFC 4203, October 2005.

12.2.  Informative References

   [CL-REQ]   Villamizar, C., "Requirements for MPLS Over a Composite
              Link", draft-ietf-rtgwg-cl-requirement-02.

   [G.709]    ITU-T Recommendation G.709, "Interfaces for the Optical
              Transport Network (OTN)", December 2009.

   [Y.1731]   ITU-T Recommendation Y.1731, "OAM functions and mechanisms
              for Ethernet based networks", Feb 2008.

   [ietf-mpls-loss-delay]
              Frost, D., "Packet Loss and Delay Measurement for MPLS
              Networks", draft-ietf-mpls-loss-delay-03.

Authors' Addresses

   Xihua Fu
   ZTE

   Email: fu.xihua@zte.com.cn

   Vishwas Manral
   Hewlett-Packard Corp.
   19111 Pruneridge Ave.
   Cupertino, CA  95014
   US

   Phone: 408-447-1497
   Email: vishwas.manral@hp.com

   Spencer Giacalone
   Thomson Reuters
   195 Broadway
   New York, NY  10007
   US

   Phone: 646-822-3000
   Email: spencer.giacalone@thomsonreuters.com

   Malcolm Betts
   ZTE

   Email: malcolm.betts@zte.com.cn

   Qilei Wang
   ZTE

   Email: wang.qilei@zte.com.cn

   Dave McDysan
   Verizon

   Email: dave.mcdysan@verizon.com

   Andrew Malis
   Verizon

   Email: andrew.g.malis@verizon.com

   Spencer Giacalone
   Thomson Reuters

   Email: spencer.giacalone@thomsonreuters.com

   John Drake
   Juniper Networks

   Email: jdrake@juniper.net