| < draft-fuxh-mpls-delay-loss-te-framework-00.txt | draft-fuxh-mpls-delay-loss-te-framework-01.txt > | |||
|---|---|---|---|---|
| Network Working Group X. Fu | Network Working Group X. Fu | |||
| Internet-Draft M. Betts | Internet-Draft ZTE | |||
| Intended status: Standards Track Q. Wang | Intended status: Standards Track V. Manral | |||
| Expires: January 27, 2012 ZTE | Expires: March 15, 2012 Hewlett-Packard Corp. | |||
| S. Giacalone | ||||
| Thomson Reuters | ||||
| M. Betts | ||||
| Q. Wang | ||||
| ZTE | ||||
| D. McDysan | D. McDysan | |||
| A. Malis | A. Malis | |||
| Verizon | Verizon | |||
| S. Giacalone | ||||
| Thomson Reuters | ||||
| J. Drake | J. Drake | |||
| Juniper Networks | Juniper Networks | |||
| July 26, 2011 | September 12, 2011 | |||
| Framework for latency and loss traffic engineering application | Traffic Engineering architecture for services aware MPLS | |||
| draft-fuxh-mpls-delay-loss-te-framework-00 | draft-fuxh-mpls-delay-loss-te-framework-01 | |||
| Abstract | Abstract | |||
| Latency and packet loss are requirements that must be met | With more and more enterprises using cloud-based services, the | |||
| according to the Service Level Agreement (SLA) / Network Performance | distances between the user and the applications are growing. A lot | |||
| Objective (NPO) between customers and service providers. Latency and | of the current applications are designed to work across LANs and | |||
| packet loss can be associated with different service levels. The user | have various inherent assumptions. For multiple applications such as | |||
| may select a private line provider based on the ability to meet a | High Performance Computing and Electronic Financial markets, the | |||
| latency and loss SLA. | response times are critical as is packet loss, while other | |||
| applications require more throughput. | ||||
| The key driver for latency and loss is stock/commodity trading | [RFC3031] describes the architecture of MPLS based networks. This | |||
| applications that use database mirroring. A few milliseconds and | draft extends the MPLS architecture to allow for latency, loss and | |||
| packet loss can impact a transaction. Financial or trading companies | jitter as properties. | |||
| are very focused on end-to-end private line latency | ||||
| optimizations that improve latency by 2-3 ms. Latency/loss and the | jitter as properties. | |||
| associated SLA is one of the key parameters that these "high value" | ||||
| customers use to select a private line provider. Other key | ||||
| applications like video gaming, conferencing and storage area | ||||
| networks require stringent latency, loss and bandwidth. | ||||
| This document describes requirements and control plane implications | Note: MPLS architecture for Multicast will be taken up in a future | |||
| for latency and packet loss as a traffic engineering performance | version of the draft. | |||
| metric in today's networks, which may consist of multiple | ||||
| layers of packet transport network and optical transport network, in | Requirements Language | |||
| order to meet the latency/loss SLA between a service provider and its | ||||
| customers. | The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", | |||
| "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this | ||||
| document are to be interpreted as described in [RFC2119]. | |||
| Status of this Memo | Status of this Memo | |||
| This Internet-Draft is submitted in full conformance with the | This Internet-Draft is submitted in full conformance with the | |||
| provisions of BCP 78 and BCP 79. | provisions of BCP 78 and BCP 79. | |||
| Internet-Drafts are working documents of the Internet Engineering | Internet-Drafts are working documents of the Internet Engineering | |||
| Task Force (IETF). Note that other groups may also distribute | Task Force (IETF). Note that other groups may also distribute | |||
| working documents as Internet-Drafts. The list of current Internet- | working documents as Internet-Drafts. The list of current Internet- | |||
| Drafts is at http://datatracker.ietf.org/drafts/current/. | Drafts is at http://datatracker.ietf.org/drafts/current/. | |||
| Internet-Drafts are draft documents valid for a maximum of six months | Internet-Drafts are draft documents valid for a maximum of six months | |||
| and may be updated, replaced, or obsoleted by other documents at any | and may be updated, replaced, or obsoleted by other documents at any | |||
| time. It is inappropriate to use Internet-Drafts as reference | time. It is inappropriate to use Internet-Drafts as reference | |||
| material or to cite them other than as "work in progress." | material or to cite them other than as "work in progress." | |||
| This Internet-Draft will expire on January 27, 2012. | This Internet-Draft will expire on March 15, 2012. | |||
| Copyright Notice | Copyright Notice | |||
| Copyright (c) 2011 IETF Trust and the persons identified as the | Copyright (c) 2011 IETF Trust and the persons identified as the | |||
| document authors. All rights reserved. | document authors. All rights reserved. | |||
| This document is subject to BCP 78 and the IETF Trust's Legal | This document is subject to BCP 78 and the IETF Trust's Legal | |||
| Provisions Relating to IETF Documents | Provisions Relating to IETF Documents | |||
| (http://trustee.ietf.org/license-info) in effect on the date of | (http://trustee.ietf.org/license-info) in effect on the date of | |||
| publication of this document. Please review these documents | publication of this document. Please review these documents | |||
| carefully, as they describe your rights and restrictions with respect | carefully, as they describe your rights and restrictions with respect | |||
| to this document. Code Components extracted from this document must | to this document. Code Components extracted from this document must | |||
| include Simplified BSD License text as described in Section 4.e of | include Simplified BSD License text as described in Section 4.e of | |||
| the Trust Legal Provisions and are provided without warranty as | the Trust Legal Provisions and are provided without warranty as | |||
| described in the Simplified BSD License. | described in the Simplified BSD License. | |||
| Table of Contents | Table of Contents | |||
| 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 | 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 | |||
| 1.1. Conventions Used in This Document . . . . . . . . . . . . 4 | 2. Architecture requirements overview . . . . . . . . . . . . . . 4 | |||
| 2. Latency and Loss Report . . . . . . . . . . . . . . . . . . . 4 | 2.1. Requirement for Composite Link . . . . . . . . . . . . . . 4 | |||
| 3. Requirements Identification . . . . . . . . . . . . . . . . . 5 | 2.2. Requirement for Hierarchy LSP . . . . . . . . . . . . . . 5 | |||
| 4. Control Plane Implication . . . . . . . . . . . . . . . . . . 7 | 3. End-to-End Latency Measurements . . . . . . . . . . . . . . . 5 | |||
| 5. Security Considerations . . . . . . . . . . . . . . . . . . . 9 | 4. End-to-End Jitter Measurements . . . . . . . . . . . . . . . . 6 | |||
| 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 9 | 5. End-to-End Loss Measurements . . . . . . . . . . . . . . . . . 7 | |||
| 7. References . . . . . . . . . . . . . . . . . . . . . . . . . . 9 | 6. Protocol Considerations . . . . . . . . . . . . . . . . . . . 7 | |||
| 7.1. Normative References . . . . . . . . . . . . . . . . . . . 9 | 7. Restoration, Protection and Rerouting . . . . . . . . . . . . 8 | |||
| 7.2. Informative References . . . . . . . . . . . . . . . . . . 10 | 8. Control Plane Implication . . . . . . . . . . . . . . . . . . 8 | |||
| 8.1. Implications for Routing . . . . . . . . . . . . . . . . . 8 | ||||
| 8.1.1. Implications for Signaling . . . . . . . . . . . . . . 9 | ||||
| 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 10 | ||||
| 10. Security Considerations . . . . . . . . . . . . . . . . . . . 10 | ||||
| 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 10 | ||||
| Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 10 | Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 10 | |||
| 1. Introduction | 1. Introduction | |||
| The current operation and maintenance practice for latency and packet | In High Frequency trading for Electronic Financial markets, computers | |||
| loss measurement is costly and inefficient. Latency and | make decisions based on the Electronic Data received, without human | |||
| packet loss can only be measured after the connection has been | intervention. These trades now account for a majority of the trading | |||
| established; if the measurement indicates that the latency SLA is not | volumes and rely exclusively on ultra-low-latency direct market | |||
| met, then another path is computed, set up and measured. This "trial | access. | |||
| and error" process is very inefficient. To avoid this problem, a | ||||
| means of making an accurate prediction of latency and packet loss | ||||
| before a path is established is required. | ||||
| This document describes the requirements and control plane | Extremely low latency measurements for MPLS LSP tunnels are defined | |||
| implications to communicate latency and packet loss as a traffic | in [draft-ietf-mpls-loss-delay]. They provide a mechanism to measure | |||
| engineering performance metric in today's networks, which may consist | and monitor performance metrics for packet loss, and one-way and two- | |||
| of multiple layers of packet transport network and | way delay, as well as related metrics like delay variation and | |||
| optical transport network, in order to meet the latency and packet | channel throughput. | |||
| loss SLA between a service provider and its customers. | ||||
| 1.1. Conventions Used in This Document | The measurements are, however, effective only after the LSP is | |||
| created and cannot be used by the MPLS path computation engine to | ||||
| compute paths that have the lowest latency. This draft defines the | ||||
| architecture used, so that end-to-end tunnels can be set up based on | ||||
| latency, loss or jitter characteristics. | ||||
| The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", | End-to-end service optimization based on latency and packet loss is a | |||
| "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this | key requirement for service providers. This type of function will be | |||
| document are to be interpreted as described in [RFC2119]. | adopted by their "premium" service customers, who are willing to pay | |||
| for this "premium" service. Latency and loss at the route level will | ||||
| help carriers' customers make their provider selection decisions. | ||||
| 2. Latency and Loss Report | 2. Architecture requirements overview | |||
| This section does not specify how latency or packet loss is | The solution MUST provide a means to communicate latency, latency | |||
| measured; measurement methods are provided in ITU-T [Y.1731], | variation and packet loss of links and nodes as a traffic engineering | |||
| [G.709] and [ietf-mpls-loss-delay]. Its purpose is to define what is | performance metric into the IGP. | |||
| reported sufficiently clearly that mechanisms can be defined to | ||||
| measure it, and so that independent implementations will report the | ||||
| same thing. If the control plane is to report latency | ||||
| and packet loss, it must be clear about what it is reporting. | ||||
| Packet/frame loss probability is expressed as a percentage: the | The path computation entity MUST have the capability to compute an | |||
| number of service packets/frames not delivered divided by the total | end-to-end path with latency and packet loss constraints. For | |||
| number of service frames during a time interval T. Loss is always | example, it has the capability to compute a route with X amount of | |||
| measured by sending a measurement packet or frame from a measurement | bandwidth with less than Y ms of latency and a Z% packet loss limit, | |||
| point to its reception, with the receiver sending back a response. | based on the latency and packet loss traffic engineering database. | |||
| It MUST also support path computation with a combination of routing | ||||
| constraints with pre-defined priorities, e.g., SRLG diversity, | ||||
| latency, loss and cost. | ||||
| Link latency is the time interval between the transmission of a | 2.1. Requirement for Composite Link | |||
| signal and its reception. Latency is always measured | ||||
| by sending a measurement packet or frame from a measurement point to | ||||
| its reception. In some usages, latency is measured by sending a | ||||
| packet/frame that is returned to the sender, and the round-trip time | ||||
| is considered the latency of a bidirectional co-routed or associated | ||||
| LSP. The one-way time is considered the latency of a unidirectional | ||||
| LSP. The one-way latency may not be half of the round-trip latency | ||||
| in the case that the transmit and receive directions of the path are | ||||
| of unequal lengths. | ||||
| The control plane should report two components of the delay, "static" | One end-to-end LSP may traverse some Composite Links [CL-REQ]. Even | |||
| and "dynamic". The dynamic component is caused by traffic loading. | if the transport technology (e.g., OTN) of the component links is | |||
| What is reported for the "dynamic" portion is an approximation. | identical, the latency and packet loss characteristics of the | |||
| component links may differ. | ||||
| Latency on a connection has two sources: node latency, caused by | The solution MUST provide a means to indicate that a traffic flow | |||
| processing time in each node, and link | should select a component link with minimum latency and/or packet | |||
| latency, a result of packet/frame transit time between two | loss, maximum acceptable latency and/or packet loss value and maximum | |||
| neighbouring nodes or a FA-LSP/Composite Link [CL-REQ]. The average | acceptable delay variation value as specified by protocol. The | |||
| latency of a node should be reported. It is simpler to add node | endpoints of the Composite Link will take these parameters into | |||
| latency to the link delay than to carry a separate parameter, and | account for component link selection or creation. The exact details | |||
| doing so does not hide any important information. Latency variation | for component links will be taken up separately and are not part of | |||
| is a parameter that indicates the variation range of the latency | this document. | |||
| value. Latency and latency variation values must be reported as | ||||
| average values calculated by the data plane. | ||||
| 3. Requirements Identification | 2.2. Requirement for Hierarchy LSP | |||
| End-to-end service optimization based on latency and packet loss is a | One end-to-end LSP may traverse a server layer. There will be some | |||
| key requirement for service providers. This type of function will be | latency and packet loss constraint requirements for the segment | |||
| adopted by their "premium" service customers, who are willing to pay | route in the server layer. | |||
| for this "premium" service. Latency and loss at the route level will | ||||
| help carriers' customers make their provider selection decisions. | ||||
| The following key requirements associated with latency and loss are | ||||
| identified. | ||||
| o REQ #1: The solution MUST provide a means to communicate latency, | The solution MUST provide a means to indicate FA selection or FA-LSP | |||
| latency variation and packet loss of links and nodes as a traffic | creation with minimum latency and/or packet loss, maximum acceptable | |||
| engineering performance metric into IGP. | latency and/or packet loss value and maximum acceptable delay | |||
| variation value. The boundary nodes of FA-LSP will take these | ||||
| parameters into account for FA selection or FA-LSP creation. | ||||
| o REQ #2: Latency, latency variation and packet loss may be | 3. End-to-End Latency Measurements | |||
| unstable; for example, if queueing latency were included, the IGP | ||||
| could become unstable. The solution MUST provide a means to | ||||
| control latency and loss IGP message advertisement and avoid | ||||
| instability when the latency, latency variation and packet loss | ||||
| values change. | ||||
| o REQ #3: The path computation entity MUST have the capability to | Procedures to measure latency and loss have been provided in ITU-T | |||
| compute an end-to-end path with latency and packet loss | [Y.1731], [G.709] and [ietf-mpls-loss-delay]. The control plane | |||
| constraints. For example, it has the capability to compute a route | is independent of the mechanism used, and different mechanisms can | |||
| with X amount of bandwidth with less than Y ms of latency and a Z% | be used for measurement based on different standards. | |||
| packet loss limit based on the latency and packet loss traffic | ||||
| engineering database. It MUST also support path computation | ||||
| with a combination of routing constraints with pre-defined | ||||
| priorities, e.g., SRLG diversity, latency, loss and cost. | ||||
| o REQ #4: One end-to-end LSP may traverse some Composite Links [CL- | Latency on a path has two sources: node latency, caused by | |||
| REQ]. Even if the transport technology (e.g., OTN) implementing | processing time in each node, and link latency, | |||
| the component links is identical, the latency and packet loss | a result of packet/frame transit time between two neighbouring | |||
| characteristics of the component links may differ. In order to | nodes or a FA-LSP/Composite Link [CL-REQ]. | |||
| assign the LSP to one of component links with different latency | ||||
| and packet loss characteristics, the solution SHOULD provide a | ||||
| means to indicate that a traffic flow should select a component | ||||
| link with minimum latency and/or packet loss, maximum acceptable | ||||
| latency and/or packet loss value and maximum acceptable delay | ||||
| variation value as specified by protocol. The endpoints of | ||||
| Composite Link will take these parameters into account for | ||||
| component link selection or creation. | ||||
| o REQ #5: One end-to-end LSP may traverse a server layer. There | Latency or one-way delay is the time it takes for a packet within a | |||
| will be some latency and packet loss constraint requirements for | stream to go from measurement point 1 to measurement point 2. | |||
| the segment route in the server layer. The solution SHALL provide a | ||||
| means to indicate FA selection or FA-LSP creation with minimum | ||||
| latency and/or packet loss, maximum acceptable latency and/or | ||||
| packet loss value and maximum acceptable delay variation value. | ||||
| The boundary nodes of FA-LSP will take these parameters into | ||||
| account for FA selection or FA-LSP creation. | ||||
| o REQ #6: The solution SHOULD provide a means to accumulate (e.g., | The architecture assumes that the sum of the latencies of the | |||
| sum) latency information of links and nodes along one LSP | individual components approximately adds up to the average latency of | |||
| across multi-domain (e.g., Inter-AS, Inter-Area or Multi-Layer) so | an LSP. Though using the sum may not be perfect, it gives a | |||
| that a latency validation decision can be made at the source | good approximation that can be used for Traffic Engineering (TE) | |||
| node. One-way and round-trip latency collection along the LSP by | purposes. | |||
| the signaling protocol and latency verification at the end of the | ||||
| LSP should be supported. The accumulation of the delay is | ||||
| "simple" for the static component, i.e., it is a linear addition; | ||||
| the dynamic/network loading component is more interesting and | ||||
| would involve some estimate of the "worst case". However, the | ||||
| method of deriving this worst case appears to be more in the scope | ||||
| of network operator policy than standards; i.e., the operator | ||||
| needs to decide, based on the SLAs offered, the required | ||||
| confidence level. | ||||
| o REQ #7: Some customers may insist on having the ability to re- | The total latency of an LSP consists of the sum of the latency of | |||
| route if the latency and loss SLA is not being met. If a | each LSP hop, as well as the average latency of switching on each | |||
| "provisioned" end-to-end LSP latency and/or loss cannot meet | device, which may vary based on queuing and buffering. | |||
| the latency and loss agreement between the operator and its user, | ||||
| the solution SHOULD support pre-defined or dynamic re-routing to | ||||
| handle this case based on local policy. The latency | ||||
| performance of a pre-defined protection or dynamic re-routing LSP | ||||
| MUST meet the latency SLA parameter. | ||||
| o REQ #8: If a "provisioned" end-to-end LSP latency and/or loss | Hop latency can be measured by taking the latency measurement | |||
| performance is improved because of an improvement in some | from the egress of one MPLS LSR to the ingress of the next-hop LSR. | |||
| segment's performance, the solution SHOULD support re-routing to | This value may be constant for the most part, unless there is | |||
| optimize the end-to-end latency and/or loss cost. | protection switching or other similar changes at a lower layer. | |||
| o REQ #9: As a result of changes in latency and loss on the LSP, | The switching latency on a device can be measured internally, and | |||
| the current LSP may be frequently switched to a new LSP with an | multiple mechanisms and data structures to do so have been | |||
| appropriate latency and packet loss value. In order to avoid | defined. [References to papers by Verghese, Kompella and Duffield | |||
| this, the solution SHOULD indicate the switchover of the LSP | to be added.] Though these mechanisms define how to do flow-based | |||
| according to a maximum acceptable change in latency and packet | measurements, the amount of information gathered in such a case may | |||
| loss values. | become too cumbersome for the Path Computation element to use | |||
| effectively. | ||||
| 4. Control Plane Implication | An approximation of flow-based measurement is the per-DSCP | |||
| measurement from the ingress of one port to the egress of every other | ||||
| port in the device. | ||||
| o The latency and packet loss performance metric MUST be advertised | Another approximation that can be used is per-interface DSCP-based | |||
| to the path computation entity by the IGP (e.g., OSPF-TE or | measurement, which can be an aggregate of the average measurements | |||
| IS-IS-TE) to perform route computation and network planning based | per interface. The average can itself be calculated in various | |||
| on the latency and packet loss SLA target. Latency, latency | ways, so as to provide a closer approximation. | |||
| variation and packet loss values MUST be reported as average | ||||
| values calculated by the data plane. Latency and packet loss | ||||
| characteristics of these links and nodes may change dynamically. | ||||
| In order to control IGP messaging and avoid instability when the | ||||
| latency, latency variation and packet loss values change, a | ||||
| threshold and a limit on the rate of change MUST be configured in | ||||
| the control plane. If any latency or packet loss value changes by | ||||
| more than the threshold, subject to the limit on the rate of | ||||
| change, then the change MUST be advertised to the IGP again. | ||||
| o The link latency attribute may also take into account the latency | For the purpose of this draft it is assumed that the node latency is | |||
| of a network element (node), i.e., the latency between the | a small factor of the total latency in the networks where this | |||
| incoming port and the outgoing port of a network element. If the | solution is deployed. The node latency is hence ignored for the | |||
| link attribute is to include node latency AND link latency, then | benefit of simplicity. | |||
| when the latency calculation is done for paths traversing links on | ||||
| the same node, the node latency can be subtracted out. | ||||
| o When a Composite Link [CL-REQ] is advertised into the IGP, the | The delay is measured in terms of tens of nanoseconds. | |||
| following considerations apply. | ||||
| * The latency and packet loss of a composite link may be the range | 4. End-to-End Jitter Measurements | |||
| (e.g., at least the minimum and maximum) of the latency values | ||||
| of all component links. It may also be the maximum latency | ||||
| value of all component links. In these cases, only partial | ||||
| information is transmitted in the IGP, so the path computation | ||||
| entity has insufficient information to determine whether a | ||||
| particular path can support its latency and packet loss | ||||
| requirements. This leads to signaling crankback. So the IGP | ||||
| may be extended to advertise the latency and packet loss of | ||||
| each component link within one Composite Link having an IGP | ||||
| adjacency. | ||||
| o One end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may | Jitter or Packet Delay Variation of a packet within a stream of | |||
| traverse a FA-LSP of the server layer (e.g., OTN rings). The | packets is defined for a selected pair of packets in the stream going | |||
| boundary nodes of the FA-LSP SHOULD be aware of the latency and | from measurement point 1 to measurement point 2. | |||
| packet loss information of this FA-LSP. | ||||
| * If the FA-LSP is able to form a routing adjacency and/or act as | The architecture assumes that the sum of the jitter of the | |||
| a TE link in the client network, the total latency and packet | individual components approximately adds up to the average jitter of | |||
| loss value of the FA-LSP can be used as an input to a | an LSP. Though using the sum may not be perfect, it gives a | |||
| transformation that results in a FA traffic engineering metric | good approximation that can be used for Traffic Engineering (TE) | |||
| advertised into the client layer routing instances. Note that | purposes. | |||
| this metric will include the latency and packet loss of the | ||||
| links and nodes that the trail traverses. | ||||
| * If the total latency and packet loss information of the FA-LSP | There may be very little jitter on a link-hop basis. | |||
| changes (e.g., due to a maintenance action or failure in OTN | ||||
| rings), the boundary node of the FA-LSP will receive the TE | ||||
| link information advertisement including the changed latency | ||||
| and packet loss values, and if the change exceeds the threshold | ||||
| and the limit on the rate of change, it will compute | ||||
| the total latency and packet loss value of the FA-LSP again. | ||||
| If the total latency and packet loss value of the FA-LSP | ||||
| changes, the client layer MUST also be notified about the | ||||
| latest value of the FA. The client layer can then decide if it | ||||
| will accept the increased latency and packet loss or request a | ||||
| new path that meets the latency and packet loss requirement. | ||||
| o Restoration, protection and equipment variations can impact | The buffering and queuing within a device will lead to jitter. | |||
| "provisioned" latency and packet loss (e.g., latency and packet | Just like latency measurements, jitter measurements can be | |||
| loss increase). A change in an end-to-end LSP's latency and | approximated as either per DSCP per port pair (ingress and egress) | |||
| packet loss performance MUST be known by the source and/or sink | or as per DSCP per egress port. | |||
| node, so that it can inform the higher layer network of a latency | ||||
| and packet loss change. The latency or packet loss change of | ||||
| links and nodes will affect an end-to-end LSP's total amount of | ||||
| latency or packet loss. Applications can fail beyond an | ||||
| application-specific threshold. Some remedy mechanism could be | ||||
| used. | ||||
| * Pre-defined protection or dynamic re-routing could be triggered | For the purpose of this draft it is assumed that the node latency is | |||
| to handle this case. In the case of pre-defined protection, | a small factor of the total latency in the networks where this | |||
| large amounts of redundant capacity may have a significant | solution is deployed. The node latency is hence ignored for the | |||
| negative impact on the overall network cost. Service providers | benefit of simplicity. | |||
| may have many layers of pre-defined restoration for this | ||||
| transfer, but they have to duplicate restoration resources at | ||||
| significant cost. The solution should provide mechanisms to | ||||
| avoid duplicate restoration and reduce the network cost. | ||||
| Dynamic re-routing also faces the risk of resource | ||||
| limitation. So the choice of mechanism MUST be based on SLA or | ||||
| policy. In the case where the latency SLA cannot be met after | ||||
| a re-route is attempted, the control plane should report an | ||||
| alarm to the management plane. It could also retry restoration | ||||
| a configurable number of times. | ||||
| 5. Security Considerations | The jitter is measured in terms of tens of nanoseconds. | |||
| The use of control plane protocols for signaling, routing, and path | 5. End-to-End Loss Measurements | |||
| computation of latency and loss opens security threats through | ||||
| attacks on those protocols. The control plane may be secured using | ||||
| the mechanisms defined for the protocols discussed. For further | ||||
| details of the specific security measures refer to the documents that | ||||
| define the protocols ([RFC3473], [RFC4203], [RFC4205], [RFC4204], and | ||||
| [RFC5440]). [GMPLS-SEC] provides an overview of security | ||||
| vulnerabilities and protection mechanisms for the GMPLS control | ||||
| plane. | ||||
| 6. IANA Considerations | Loss or Packet Drop probability of a packet within a stream of | |||
| packets is defined as the proportion of packets dropped within a | ||||
| given interval. | ||||
| This document makes no requests for IANA action. | The architecture assumes that the sum of the loss of the | |||
| individual components approximately adds up to the average loss of an | ||||
| LSP. Though using the sum may not be perfect, it gives a | ||||
| good approximation that can be used for Traffic Engineering (TE) | ||||
| purposes. | ||||
| 7. References | There may be very little loss on a link-hop basis, except in the | |||
| case of physical link issues. | ||||
| 7.1. Normative References | The buffering and queuing mechanisms within a device will decide | |||
| which packet is to be dropped. Just like latency and jitter | ||||
| measurements, the loss can best be approximated as either per DSCP | ||||
| per port pair (ingress and egress) or as per DSCP per egress port. | ||||
| The loss is measured in terms of the number of packets per million | ||||
| packets. | ||||
| 6. Protocol Considerations | ||||
| The metrics above can be carried in IGP protocol packets | ||||
| [RFC3630]. They can then be used by the Path Computation engine to | ||||
| derive paths with the desired path properties. | ||||
| As Link-state IGP information is flooded throughout an area, frequent | ||||
| changes can cause a lot of control traffic. To prevent such | ||||
| flooding, data should only be flooded when the change crosses a | ||||
| certain configured threshold. | ||||
| A separate measurement should be done for an LSP when it is UP. An | ||||
| LSP's path should only be recalculated when the end-to-end metric | ||||
| changes in a way that it exceeds the desired value. | ||||
7. Restoration, Protection and Rerouting

Some customers may insist on having the ability to re-route if the
latency and loss SLA is not being met.  If a "provisioned" end-to-end
LSP's latency and/or loss cannot meet the latency and loss agreement
between the operator and its user, the solution SHOULD support pre-
defined or dynamic re-routing to handle this case based on local
policy.
If a "provisioned" end-to-end LSP's latency and/or loss performance
improves (i.e., beyond a configurable minimum value) because of the
performance improvement of some segment, the solution SHOULD support
re-routing to optimize the end-to-end latency and/or loss cost.
The latency performance of a pre-defined protection or dynamically
re-routed LSP MUST meet the latency SLA parameter.  The difference in
latency between the primary and the protection/restoration path
SHOULD be zero.
As a result of changes in LSP latency and loss, the current LSP may
be frequently switched to a new LSP with an appropriate latency and
packet loss value.  In order to avoid this, the solution SHOULD
trigger the switchover of the LSP only according to a maximum
acceptable change in latency and packet loss value.
8. Control Plane Implications

8.1. Implications for Routing
The latency and packet loss performance metrics MUST be advertised to
the path computation entity by the IGP (e.g., OSPF-TE or IS-IS-TE) in
order to perform route computation and network planning based on the
latency and packet loss SLA targets.
Latency, latency variation and packet loss values MUST be reported as
average values calculated by the data plane.
Latency and packet loss characteristics of links and nodes may change
dynamically.  In order to control IGP messaging and avoid instability
when the latency, latency variation and packet loss values change, a
threshold and a limit on the rate of change MUST be configured in the
control plane.
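The combination of a change threshold with a limit on the rate of
change could look like the following sketch (the interval, threshold
and class shape are invented for illustration):

```python
# Sketch: combine a change threshold with a rate-of-change limit.  Even
# if the metric crosses the threshold, a new advertisement is held back
# until at least `min_interval` seconds since the previous one.

class RateLimitedAdvertiser:
    def __init__(self, threshold, min_interval):
        self.threshold = threshold         # minimum change to advertise
        self.min_interval = min_interval   # seconds between advertisements
        self.last_value = None
        self.last_time = None

    def should_advertise(self, value, now):
        if self.last_value is None:        # always advertise initially
            self.last_value, self.last_time = value, now
            return True
        if abs(value - self.last_value) <= self.threshold:
            return False                   # change too small
        if now - self.last_time < self.min_interval:
            return False                   # rate limit in effect
        self.last_value, self.last_time = value, now
        return True

adv = RateLimitedAdvertiser(threshold=10, min_interval=30)
events = [(0, 100), (5, 150), (40, 150), (80, 90)]
print([adv.should_advertise(v, t) for t, v in events])
# [True, False, True, True]
```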
If any latency or packet loss value changes by more than the
threshold, subject to the limit on the rate of change, the latency
and loss change of the link MUST be advertised to the IGP again.  The
receiving node determines whether the link affects any of the LSPs
for which it is the ingress.  If so, it must determine whether those
LSPs still meet their end-to-end performance objectives.
A minimum value MUST also be configured in the control plane.  If the
link performance improves beyond this configurable minimum value, the
link must be re-advertised.  The receiving node determines whether a
"provisioned" end-to-end LSP's latency and/or loss performance is
improved because of the performance improvement of some segment.
For paths that require low latency, it is sometimes important to
avoid nodes that contribute significantly to latency.  The control
plane should report two components of the delay, "static" and
"dynamic".  The "static" component is the fixed latency through the
node without any queuing.  The "dynamic" component is caused by
traffic loading and queuing and SHOULD be reported as an approximate
value.  The link latency attribute should also take into account the
latency of the node, i.e., the latency between the incoming port and
the outgoing port of a network element.  Half of the fixed node
latency can be added to each link.
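The attribution rule in the last sentence can be sketched as follows
(all numbers are hypothetical):

```python
# Sketch: a link's advertised latency attribute absorbs the fixed
# ("static") node latency by adding half of it at each end of the link,
# as the text suggests.  Dynamic (queuing) delay is reported separately.

def link_latency_attr(propagation_us, node_a_fixed_us, node_b_fixed_us):
    # Propagation delay of the link plus half the fixed transit latency
    # of each adjacent node.
    return propagation_us + node_a_fixed_us / 2 + node_b_fixed_us / 2

# 1000 us of fiber propagation, 20 us and 30 us fixed transit per node.
print(link_latency_attr(1000, 20, 30))   # 1025.0
```

Splitting the node latency this way means summing the link attributes
along a path counts each transit node's fixed latency exactly once.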
8.2. Implications for Signaling
In order to assign an LSP to one of several component links with
different latency and loss characteristics, the RSVP-TE message needs
to carry an indication of the requested minimum latency and/or packet
loss, the maximum acceptable latency and/or packet loss value, and
the maximum acceptable delay variation value for component link
selection or creation.  The composite link will take these parameters
into account when assigning the LSP's traffic to a component link.
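Component-link selection under these signaled bounds could be
sketched as below (the dictionary field names and values are invented
for illustration):

```python
# Sketch: pick a component link of a composite link that satisfies the
# signaled maximum acceptable latency, delay variation and loss.

def select_component(components, max_latency, max_jitter, max_loss_ppm):
    candidates = [c for c in components
                  if c["latency"] <= max_latency
                  and c["jitter"] <= max_jitter
                  and c["loss_ppm"] <= max_loss_ppm]
    # Prefer the lowest-latency link among those meeting the bounds.
    return min(candidates, key=lambda c: c["latency"]) if candidates else None

components = [
    {"name": "c1", "latency": 900, "jitter": 40, "loss_ppm": 10},
    {"name": "c2", "latency": 400, "jitter": 90, "loss_ppm": 5},
    {"name": "c3", "latency": 600, "jitter": 20, "loss_ppm": 2},
]
best = select_component(components, max_latency=800, max_jitter=50,
                        max_loss_ppm=20)
print(best["name"])   # c3 (c1 fails latency, c2 fails jitter)
```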
An end-to-end LSP (e.g., in an IP/MPLS or MPLS-TP network) may
traverse an FA-LSP of a server layer (e.g., OTN rings).  There will
be latency and packet loss constraints for the segment of the route
in the server layer.  The RSVP-TE message therefore needs to carry an
indication of the requested minimum latency and/or packet loss, the
maximum acceptable latency and/or packet loss value, and the maximum
acceptable delay variation value.  The boundary nodes of the FA-LSP
will take these parameters into account for FA selection or FA-LSP
creation.
RSVP-TE needs to be extended to accumulate (e.g., sum) the latency
information of links and nodes along an LSP across multiple domains
(e.g., Inter-AS, Inter-Area or Multi-Layer) so that a latency
verification can be made at the end points.  Collection of one-way
and round-trip latency along the LSP by the signaling protocol can be
supported, so that the end points of the LSP can verify whether the
total amount of latency meets the latency agreement between the
operator and its user.  When RSVP-TE signaling is used, the source
can determine whether the latency requirement is met much more
rapidly than by performing an actual end-to-end latency measurement.
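The accumulation at signaling time can be sketched as follows (the
per-hop latencies and the SLA value are hypothetical):

```python
# Sketch: accumulate per-hop one-way latency while an LSP is being
# signaled, so the endpoint can check the total against the SLA without
# waiting for an actual end-to-end measurement.

def accumulate_latency(hops_us):
    # hops_us: per-hop contributions (link propagation + node transit),
    # in microseconds, e.g. summed into an RSVP-TE object hop by hop.
    total = 0
    for hop in hops_us:
        total += hop
    return total

hops_us = [300, 120, 450, 80]      # hypothetical per-hop latencies
sla_us = 1000
total = accumulate_latency(hops_us)
print(total, total <= sla_us)      # 950 True
```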
Restoration, protection and equipment variations can impact the
"provisioned" latency and packet loss (e.g., increase them).  For
example, a restoration or provisioning action in the transport
network may increase the latency seen by the packet network, which is
observable by customers and may violate SLAs.  A change in an
end-to-end LSP's latency and packet loss performance MUST be known by
the source and/or sink node, so that it can inform the higher layer
network of the latency and packet loss change.  The latency or packet
loss change of links and nodes affects the end-to-end LSP's total
amount of latency or packet loss.  Applications can fail beyond an
application-specific threshold, so some remedy mechanism could be
used.
Pre-defined protection or dynamic re-routing could be triggered to
handle this case.  In the case of pre-defined protection, large
amounts of redundant capacity may have a significant negative impact
on the overall network cost.  A service provider may have many layers
of pre-defined restoration for this purpose, but would then have to
duplicate restoration resources at significant cost.  The solution
should provide mechanisms to avoid such duplicate restoration and
reduce the network cost.  Dynamic re-routing also faces the risk of
resource limitations.  The choice of mechanism MUST therefore be
based on the SLA or policy.  In the case where the latency SLA cannot
be met after a re-route is attempted, the control plane should report
an alarm to the management plane.  It could also retry restoration a
configurable number of times.
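The retry-then-alarm behavior described above might be sketched as
follows (`try_restore` is a hypothetical stand-in for the actual
restoration routine):

```python
# Sketch: attempt restoration a configurable number of times when the
# latency SLA is not met; report an alarm to the management plane if
# all attempts fail.  `try_restore` is a hypothetical stand-in.

def restore_with_retries(try_restore, max_attempts):
    for attempt in range(1, max_attempts + 1):
        if try_restore(attempt):
            return f"restored on attempt {attempt}"
    return "alarm: latency SLA unmet after re-route attempts"

# Example: restoration succeeds on the third try.
print(restore_with_retries(lambda n: n == 3, max_attempts=5))
# restored on attempt 3
```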
9. IANA Considerations

No new IANA considerations are raised by this document.
10. Security Considerations

This document raises no new security issues.

11. Acknowledgements

TBD.
12. References

12.1. Normative References
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
          and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
          Tunnels", RFC 3209, December 2001.

[RFC3473] Berger, L., "Generalized Multi-Protocol Label Switching
          (GMPLS) Signaling Resource ReserVation Protocol-Traffic
          D. Frost, "Packet Loss and Delay Measurement for MPLS
          Networks", draft-ietf-mpls-loss-delay-03.
Authors' Addresses

Xihua Fu
ZTE

Email: fu.xihua@zte.com.cn
Vishwas Manral
Hewlett-Packard Corp.
191111 Pruneridge Ave.
Cupertino, CA 95014
US

Phone: 408-447-1497
Email: vishwas.manral@hp.com
Spencer Giacalone
Thomson Reuters
195 Broadway
New York, NY 10007
US

Phone: 646-822-3000
Email: spencer.giacalone@thomsonreuters.com
Malcolm Betts
ZTE

Email: malcolm.betts@zte.com.cn

Qilei Wang
ZTE

Email: wang.qilei@zte.com.cn

Dave McDysan
Verizon

Email: dave.mcdysan@verizon.com

Andrew Malis
Verizon

Email: andrew.g.malis@verizon.com
John Drake
Juniper Networks

Email: jdrake@juniper.net