< draft-fuxh-mpls-delay-loss-te-framework-03.txt   draft-fuxh-mpls-delay-loss-te-framework-04.txt >
Network Working Group                                              X. Fu
Internet-Draft                                                       ZTE
Intended status: Standards Track                               V. Manral
Expires: September 10, 2012                        Hewlett-Packard Corp.
                                                              D. McDysan
                                                                A. Malis
                                                                 Verizon
                                                            S. Giacalone
                                                         Thomson Reuters
                                                                M. Betts
                                                                 Q. Wang
                                                                     ZTE
                                                                J. Drake
                                                        Juniper Networks
                                                           March 9, 2012
                 Traffic Engineering Framework for MPLS
              draft-fuxh-mpls-delay-loss-te-framework-04
Abstract

With more and more enterprises using cloud based services, the
distances between the users and the applications are growing. Many
current applications are designed to work across LANs and make
various inherent assumptions. For applications such as High
Performance Computing and electronic financial markets, response
times and packet loss are critical, while other applications require
more throughput.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 10, 2012.
Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents

1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.  Architecture requirements overview . . . . . . . . . . . . . . 4
  2.1.  Communicate Latency and Loss as TE Metric  . . . . . . . . 4
  2.2.  Requirement for Composite Link . . . . . . . . . . . . . . 5
  2.3.  Requirement for Hierarchy LSP  . . . . . . . . . . . . . . 5
  2.4.  Latency Accumulation and Verification  . . . . . . . . . . 5
  2.5.  Restoration, Protection and Rerouting  . . . . . . . . . . 6
3.  End-to-End Latency . . . . . . . . . . . . . . . . . . . . . . 7
4.  End-to-End Jitter  . . . . . . . . . . . . . . . . . . . . . . 8
5.  End-to-End Loss  . . . . . . . . . . . . . . . . . . . . . . . 8
6.  Protocol Considerations  . . . . . . . . . . . . . . . . . . . 9
7.  Control Plane Implication  . . . . . . . . . . . . . . . . . . 9
  7.1.  Implications for Routing . . . . . . . . . . . . . . . . . 9
  7.2.  Implications for Signaling . . . . . . . . . . . . . . . . 11
8.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 12
9.  Security Considerations  . . . . . . . . . . . . . . . . . . . 12
10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 12
11. References . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.1. Communicate Latency and Loss as TE Metric

The solution MUST provide a means to communicate the latency, latency
variation and packet loss of links and nodes into the IGP as traffic
engineering performance metrics.

Latency, latency variation and packet loss may be unstable; for
example, if queueing latency were included, the IGP could become
unstable. The solution MUST provide a means to control the rate of
latency and loss IGP advertisements and avoid instability when the
latency, latency variation and packet loss values change frequently.

In the case where it is known that either the changes are too
frequent or there is a preferred backup, the solution shall put the
node or the link into an unusable state for services requiring a
particular service capability. This unusable state is on a
capability basis and not a global basis. The condition for entering
the state is locally configured, and all routers in a domain should
have these criteria synchronized.
The path computation entity MUST have the capability to compute an
end-to-end path with latency and packet loss constraints. For
example, it must be able to compute a route with X amount of
bandwidth, less than Y ms of latency and less than Z% packet loss
based on the latency and packet loss traffic engineering database.
It MUST also support path computation with a combination of routing
constraints with pre-defined priorities, e.g., SRLG diversity,
latency, loss, jitter and cost. If the performance of a link exceeds
its configured maximum threshold, the path computation entity may not
select this kind of link even though the end-to-end performance
objective would still be met.
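The constrained path computation described above can be sketched as
follows. This is a minimal illustration, not a specified algorithm:
the TE database layout and the additive treatment of per-link loss
are assumptions for the example, with links pruned when they fail the
bandwidth constraint or exceed their own configured maximum latency.

```python
import heapq

def compute_path(graph, src, dst, min_bw, max_latency_ms, max_loss_pct):
    """Dijkstra on latency over links that satisfy the bandwidth
    constraint and whose latency is within the per-link maximum
    threshold; the end-to-end latency and loss budgets are then
    validated on the resulting path.

    graph: {node: [(neighbor, bw, latency_ms, loss_pct, link_max_ms)]}
    (a hypothetical in-memory TE database for illustration)"""
    heap = [(0.0, 0.0, src, [src])]
    seen = set()
    while heap:
        lat, loss, node, path = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            # Validate the end-to-end budgets at the destination.
            if lat <= max_latency_ms and loss <= max_loss_pct:
                return path, lat, loss
            return None
        for nbr, bw, l_ms, l_pct, link_max in graph.get(node, []):
            if bw < min_bw or l_ms > link_max:  # prune unusable links
                continue
            if nbr not in seen:
                heapq.heappush(heap, (lat + l_ms, loss + l_pct,
                                      nbr, path + [nbr]))
    return None
```

A real implementation would also rank candidate paths by the
pre-defined constraint priorities (SRLG diversity, jitter, cost).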
2.2. Requirement for Composite Link 2.2. Requirement for Composite Link
One end-to-end LSP may traverses some Composite Links [CL-REQ]. Even One end-to-end LSP may traverses some Composite Links [CL-REQ]. Even
if the transport technology (e.g., OTN) component links are if the transport technology (e.g., OTN) component links are
identical, the latency and packet loss characteristics of the identical, the latency and packet loss characteristics of the
component links may differ. component links may differ due to factors such as fiber distance and/
or fiber characteristics.
The solution MUST provide a means to indicate that a traffic flow The solution MUST provide a means to indicate that a traffic flow
should select a component link with minimum latency and/or packet should select a component link with minimum latency and/or packet
loss, maximum acceptable latency and/or packet loss value and maximum loss, maximum acceptable latency and/or packet loss value and maximum
acceptable delay variation value as specified by protocol. The acceptable delay variation value as specified by protocol. The
endpoints of Composite Link will take these parameters into account endpoints of Composite Link will take these parameters into account
for component link selection or creation. The exact details for for component link selection or creation. Details of how transient
component links will be taken up seperately and are not part of this respose is taken is specified in Section 4.1 [CL-REQ]. The exact
document. details for component links will be taken up separately and are not
part of this document.
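The component link selection at a Composite Link endpoint might look
like the sketch below. The dictionary keys and the two-step
filter-then-prefer logic are illustrative assumptions; the actual
selection procedure is defined in [CL-REQ], not here.

```python
def select_component_link(components, max_latency_us, max_loss_ppm,
                          max_jitter_us, prefer="latency"):
    """Pick a component link that meets the signalled maximum
    acceptable latency, loss and delay variation; among the
    candidates, prefer minimum latency (or minimum loss)."""
    candidates = [c for c in components
                  if c["latency_us"] <= max_latency_us
                  and c["loss_ppm"] <= max_loss_ppm
                  and c["jitter_us"] <= max_jitter_us]
    if not candidates:
        # No existing component link qualifies; the endpoint would
        # create a new component link or reject the flow.
        return None
    key = "latency_us" if prefer == "latency" else "loss_ppm"
    return min(candidates, key=lambda c: c[key])
```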
2.3. Requirement for Hierarchy LSP

Hierarchical LSPs may traverse server layer LSPs. For such LSPs
there may be latency and packet loss constraint requirements for the
segment in the server layer.

The solution MUST provide a means to indicate FA selection or FA-LSP
creation with minimum latency and/or packet loss, a maximum
acceptable latency and/or packet loss value, and a maximum acceptable
delay variation value. The boundary nodes of the FA-LSP will take
these parameters into account for FA selection or FA-LSP creation.
2.4. Latency Accumulation and Verification

The solution SHOULD provide a means to accumulate (e.g., sum) the
latency information of the links and nodes that an LSP traverses,
including across multiple domains (e.g., Inter-AS, Inter-Area or
Multi-Layer), so that the source node can validate whether the
desired maximum latency constraint can be satisfied for a packet
traversing the LSP. [Y.1541] provides details of how the latency
value is accumulated.

Both one-way and round-trip latency collection along the LSP by the
signaling protocol, and latency verification at the end of the LSP,
should be supported.

The accumulation of the delay is "simple" for the static component,
i.e., it is a linear addition. The dynamic/network loading component
is more interesting and would involve some estimate of the "worst
case". However, the method of deriving this worst case appears to be
more in the scope of network operator policy than standards, i.e.,
the operator needs to decide, based on the SLAs offered, the required
confidence level.
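The accumulation above can be illustrated as follows: the static
components add linearly, while the worst-case dynamic component is
estimated here by scaling the average queuing delay with an
operator-chosen confidence factor. The factor and the per-hop field
names are assumptions of this sketch, since the worst-case derivation
is a matter of operator policy.

```python
def accumulate_latency(hops, confidence_factor=2.0):
    """Linear sum of static per-hop latency plus a policy-driven
    worst-case estimate of the dynamic (queuing) component.
    hops: [{"static_us": ..., "avg_dynamic_us": ...}, ...]"""
    static_us = sum(h["static_us"] for h in hops)
    # Worst-case dynamic estimate: scale the average by a confidence
    # factor chosen by the operator based on the SLAs offered.
    dynamic_us = sum(h["avg_dynamic_us"] for h in hops) * confidence_factor
    return static_us + dynamic_us

def validate(hops, max_latency_us):
    """Source-node check that the maximum latency constraint holds."""
    return accumulate_latency(hops) <= max_latency_us
```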
2.5. Restoration, Protection and Rerouting
Some customers may insist on having the ability to re-route if the
latency and loss SLA is not being met. If a "provisioned" end-to-end
LSP's latency and/or loss could not meet the latency and loss
agreement between the operator and its user, the solution SHOULD
support pre-defined or dynamic re-routing (e.g., make-before-break)
to handle this case based on the local policy. If revertive
behaviour is supported, the original LSP must not be released and is
monitored by the control plane. When the end-to-end performance is
repaired, the service is restored to the original LSP.

The solution SHOULD support moving an end-to-end LSP away from any
link whose performance violates the configured threshold.

End-to-end measurements of the LSP also need to be performed in
addition to the link-by-link measurements. A threshold violation of
the end-to-end criteria as measured by the head end node should cause
rerouting of the LSP.

The anomalous path can be switched to a protection path or rerouted
to a new path when the end-to-end performance objectives can no
longer be met.
If a "provisioned" end-to-end LSP's latency and/or loss performance
improves (i.e., beyond a configurable minimum value), the solution
SHOULD support re-routing to optimize the end-to-end latency and/or
loss cost.

The latency performance of the pre-defined protection or dynamically
re-routed LSP MUST meet the latency SLA parameter.

Due to flapping conditions the latency and loss of an LSP may change,
which may cause the LSP to be frequently switched to a new path. In
order to avoid churn, the solution SHOULD limit the switchover of the
LSP according to a maximum acceptable change rate.
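One way to enforce a maximum acceptable change rate is a simple
sliding-window damper at the head end, sketched below. The window
and switch-count parameters are illustrative assumptions; the
framework only requires that some such rate limit exist.

```python
class SwitchoverDamper:
    """Permit at most `max_switches` re-routes per `window_s` seconds
    so that a flapping metric cannot churn the LSP."""
    def __init__(self, max_switches, window_s):
        self.max_switches = max_switches
        self.window_s = window_s
        self.history = []          # timestamps of recent switchovers

    def permit(self, now):
        # Drop switchover events that have aged out of the window.
        self.history = [t for t in self.history
                        if now - t < self.window_s]
        if len(self.history) < self.max_switches:
            self.history.append(now)
            return True
        return False               # damp: hold the LSP on its path
```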
3. End-to-End Latency

Procedures to measure latency and loss have been provided in ITU-T
[Y.1731], [G.709] and [ietf-mpls-loss-delay]. The control plane can
be independent of the mechanism used, and different mechanisms based
on different standards can be used for measurement.

Latency on a path has two sources: node latency, which is caused by
the processing time in each node, and link latency, which results
from the packet/frame transit time between two neighbouring nodes or
across an FA-LSP/Composite Link [CL-REQ].

Latency or one-way delay is the time it takes for a packet within a
stream to go from measurement point 1 to measurement point 2, as
defined in [Y.1540].
The architecture uses the assumption that the sum of the latencies of
the individual components approximately adds up to the average
latency of an LSP. Though using the sum may not be perfect, it gives
a good approximation that can be used for Traffic Engineering (TE)
purposes.

The total measured latency of an LSP consists of the sum of the
latency of each LSP hop, as well as the average latency of switching
on each device, which may vary based on queuing and buffering.
Hop latency can be measured by taking the latency measurement between
the egress of one MPLS LSR and the ingress of the next-hop LSR. This
value may be constant for the most part, unless there is protection
switching or other similar changes at a lower layer.

The switching latency on a device can be measured internally, and
multiple mechanisms and data structures to do so have been defined.
[Add references to papers by Verghese, Kompella, Duffield].
We also looked at other measurement granularities before deciding on
an interface based measurement. One approximation of the flow based
measurement is a per-DSCP measurement from the ingress of one port to
the egress of every other port in the device.

Another approximation that can be used is a per-interface, per-DSCP
measurement, which can be an aggregate of the average measurements
per interface. The average can itself be calculated in several ways,
so as to provide a closer approximation.
For the purpose of this draft it is assumed that the node latency is
a small factor of the total latency in the networks where this
solution is deployed. The node latency is hence ignored for the
benefit of simplicity in this solution.

The average link delay over a configurable interval should be
reported by the data plane in microseconds.
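The averaged, microsecond-granularity link delay reporting could be
realized with a sliding window over data-plane samples, as in this
sketch. The sample-count window standing in for the configurable
time interval is an assumption of the example.

```python
from collections import deque

class LinkDelayAverager:
    """Report the average one-way link delay, in microseconds, over a
    configurable window of data-plane samples (here a sample count
    approximates the configurable reporting interval)."""
    def __init__(self, window):
        self.samples = deque(maxlen=window)  # old samples age out

    def add_sample(self, delay_us):
        self.samples.append(delay_us)

    def average_us(self):
        if not self.samples:
            return None            # nothing measured yet
        return sum(self.samples) / len(self.samples)
```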
4. End-to-End Jitter

Jitter or Packet Delay Variation of a packet within a stream of
packets is defined for a selected pair of packets in the stream going
from measurement point 1 to measurement point 2.

This architecture uses the assumptions of [Y.1540] to approximately
calculate the accumulated jitter from the individual components.
Though this may not be perfect, it gives a good approximation that
can be used for Traffic Engineering (TE) purposes.
The buffering and queuing within a device will lead to jitter. Just
like latency measurements, jitter measurements can be approximated as
either per DSCP per port pair (ingress and egress) or per DSCP per
egress port; however, such measurements have been left out for the
sake of simplicity of the solution.
For the purpose of this draft it is assumed that the node latency is
a small factor of the total latency in the networks where this
solution is deployed. The node latency is hence ignored for the
benefit of simplicity.

The jitter is measured in microseconds.
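The pairwise delay-variation definition above can be sketched as
follows: jitter is the delay difference between selected packet pairs
of the stream between the two measurement points. Using consecutive
packets as the selected pairs is an assumption of this example.

```python
def packet_delay_variation(delays_us):
    """Delay variation between consecutive packets of a stream
    between measurement point 1 and measurement point 2, in
    microseconds (consecutive pairing is illustrative)."""
    return [abs(b - a) for a, b in zip(delays_us, delays_us[1:])]

def avg_jitter_us(delays_us):
    """Average jitter of the stream, reported in microseconds."""
    pdv = packet_delay_variation(delays_us)
    return sum(pdv) / len(pdv) if pdv else 0.0
```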
5. End-to-End Loss

Loss or packet drop probability of a packet within a stream of
packets is defined as the number of packets dropped within a given
interval.

This architecture uses the assumptions of [Y.1540] to approximately
calculate the accumulated loss from the individual components.
Though using the accumulated metrics may not be perfect, it gives a
good approximation that can be used for Traffic Engineering (TE)
purposes.
The buffering and queuing mechanisms within a device will decide
which packet is to be dropped. Just like latency and jitter
measurements, the loss can best be approximated as either per DSCP
per port pair (ingress and egress) or per DSCP per egress port.
However, such mechanisms are not used in this solution, to keep the
solution simple.

The loss is measured in terms of the number of packets lost per
million packets.
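The end-to-end loss accumulation can be illustrated numerically. For
independent hops the exact composition is 1 - prod(1 - p_i); for the
small per-hop loss rates measured in packets per million, the simple
sum the framework relies on is a very close approximation. The
function below is purely illustrative.

```python
def path_loss_ppm(link_loss_ppm):
    """Return (exact, approximate) end-to-end loss in packets per
    million: exact composition assuming independent hops, versus the
    linear-sum approximation used for TE purposes."""
    survive = 1.0
    for ppm in link_loss_ppm:
        survive *= (1.0 - ppm / 1e6)   # probability the packet survives
    exact_ppm = (1.0 - survive) * 1e6
    approx_ppm = sum(link_loss_ppm)    # good for small per-hop loss
    return exact_ppm, approx_ppm
```

For two hops losing 100 and 200 ppm, the sum gives 300 ppm while the
exact value is 299.98 ppm, which illustrates why the approximation is
acceptable.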
6. Protocol Considerations

The metrics above can be carried in IGP protocol packets as defined
in RFC 3630. They can then be used by the source node or the Path
Computation engine to decide paths with the desired path properties.

As link-state IGP information is flooded throughout an area, frequent
changes can cause a lot of control traffic. To prevent such
flooding, data should only be flooded when it crosses a certain
configured maximum.

A separate measurement should be done for an LSP when it is up.
Also, an LSP's path should only be recalculated when the end-to-end
metrics change in a way that exceeds the desired values.
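A threshold-and-rate-limited flooding decision might look like the
sketch below. The percentage threshold and minimum interval are
hypothetical configuration knobs; the document only requires that
flooding be suppressed until a configured maximum is crossed.

```python
class MetricFlooder:
    """Flood a TE metric update only when the change since the last
    advertised value crosses a configured threshold, and no more
    often than a minimum interval (both illustrative knobs)."""
    def __init__(self, threshold_pct, min_interval_s):
        self.threshold_pct = threshold_pct
        self.min_interval_s = min_interval_s
        self.last_value = None
        self.last_time = None

    def should_flood(self, value, now):
        if self.last_value is None:
            self.last_value, self.last_time = value, now
            return True                       # first advertisement
        if now - self.last_time < self.min_interval_s:
            return False                      # rate limit
        change = abs(value - self.last_value) / self.last_value * 100.0
        if change >= self.threshold_pct:
            self.last_value, self.last_time = value, now
            return True                       # significant change
        return False                          # below threshold
```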
7. Control Plane Implication

7.1. Implications for Routing

The latency and packet loss performance metrics MUST be advertised to
the path computation entity by the IGP (OSPF-TE, OSPFv3-TE or
IS-IS-TE) to perform route computation and network planning based on
the latency and packet loss SLA targets.

Latency, latency variation and packet loss values MUST be reported as
average values calculated from data plane measurements.

Latency and packet loss characteristics of links and nodes may change
dynamically. In order to control IGP messaging and avoid instability
when the latency, latency variation and packet loss values change, a
threshold and a limit on the rate of change MUST be configured in the
IGP control plane.
Latency and packet loss value changes need to be updated and flooded
in the IGP control messages only when there are significant changes
in the values. When the head-end node determines that an IGP update
affects an LSP for which it is the ingress, it recalculates the LSP.

A target value MUST be configured in the control plane for each link.
If the link performance improves beyond the configurable target
value, it must be re-advertised. The receiving node determines
whether a "provisioned" end-to-end LSP's latency and/or loss
performance is improved.
It is sometimes important for paths that desire low latency to avoid
nodes that contribute significantly to latency. The control plane
should report two components of the delay, "static" and "dynamic".
The dynamic component is always caused by traffic loading and
queuing; this "dynamic" portion SHOULD be reported as an approximate
value. The static component should be the fixed latency through the
node without any queuing. The link latency attribute should also
take into account the latency of the node, i.e., the latency between
the incoming port and the outgoing port of a network element. Half
of the fixed node latency can be added to each link.
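The static/dynamic split and the half-node-latency attribution can be
written down as a small helper. The field names are assumptions of
this sketch; the rule itself (half of each endpoint node's fixed
latency folded into the link's static component) follows the text
above.

```python
def advertised_link_latency_us(link_static_us, link_dynamic_us,
                               node_fixed_us_a, node_fixed_us_b):
    """Link latency attribute for advertisement: the static component
    absorbs half of each endpoint node's fixed (no-queuing) latency,
    while the dynamic (queuing) component is reported separately as
    an approximate value."""
    static_us = (link_static_us
                 + node_fixed_us_a / 2.0
                 + node_fixed_us_b / 2.0)
    dynamic_us = link_dynamic_us   # approximate, traffic dependent
    return {"static_us": static_us, "dynamic_us": dynamic_us}
```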
When a Composite Link [CL-REQ] is advertised into the IGP, the
following considerations apply.

o  One option is that the latency and packet loss of the composite
   link may be the range (e.g., at least minimum and maximum) of the
   latency values of all component links. It may also be the maximum
   or average latency value of all component links. In both cases,
   only partial information is transmitted in the IGP, so the path
   computation entity has insufficient information to determine
This html diff was produced by rfcdiff 1.48. The latest version is available from http://tools.ietf.org/tools/rfcdiff/