Routing Working Group                                  IJ. Wijnands, Ed.
Internet-Draft                                               L. De Ghein
Intended status: Standards Track                                   Cisco
Expires: January 9, 2014                                  G. Enyedi, Ed.
                                                              A. Csaszar
                                                             J. Tantsura
                                                                Ericsson
                                                            July 8, 2013


          Tree Notification to Improve Multicast Fast Reroute
                  draft-wijnands-rtgwg-mcast-frr-tn-01
Abstract

This draft proposes dataplane triggered Tree Notifications to support
multicast fast reroute for PIM and mLDP.  These Tree Notifications
are initiated by the node detecting the failure and sent to a Repair
Node downstream.  A Repair Node is a node that has a pre-built backup
path that can circumvent the failure.  Using this mechanism, a Repair
Node has the ability to learn about non-local failures quickly
without having to wait for the IGP to converge.  This draft also
covers an optional method to avoid bandwidth usage on the pre-built
backup path.
Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 9, 2014.
Copyright Notice

Copyright (c) 2013 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
This document may contain material from IETF Documents or IETF
Contributions published or made publicly available before November
10, 2008. The person(s) controlling the copyright in some of this
material may not have granted the IETF Trust the right to allow
modifications of such material outside the IETF Standards Process.
Without obtaining an adequate license from the person(s) controlling
the copyright in such materials, this document may not be modified
outside the IETF Standards Process, and derivative works of it may
not be created outside the IETF Standards Process, except to format
it for publication as an RFC or to translate it into languages other
than English.
Table of Contents

1. Terminology and Definitions
2. Introduction
3. Improving non-local failures
   3.1. Downstream Tree Notifications
   3.2. DTN processing logic
   3.3. Repair Node discovery
      3.3.1. Repair Node Information item
   3.4. Reduce the bandwidth consumption in networks with fast
        failover response times
      3.4.1. Joining a secondary tree in blocking mode
      3.4.2. Upstream Tree Notifications
   3.5. MRT/MCI-Only Mode
   3.6. TN Authentication
   3.7. The TN Packet
      3.7.1. TN Packet Format
         3.7.1.1. TN TimeStamp TLV Format
         3.7.1.2. TN Signature TLV Format
4. PIM Specific TN Components
   4.1. RNI item in PIM Join Message
   4.2. Tree Information Item
   4.3. Incremental deployment
5. mLDP Specific TN Components
   5.1. RNI item in mLDP Label Mapping
   5.2. Tree Information Item
6. Acknowledgements
7. IANA Considerations
8. Security Considerations
9. References
   9.1. Normative References
   9.2. Informative References
Authors' Addresses
1. Terminology and Definitions

MoFRR : Multicast only Fast Re-Route.

LFA : Loop Free Alternate.

mLDP : Multi-point Label Distribution Protocol.

PIM : Protocol Independent Multicast.
MCE : MultiCast Egress, the last node where the multicast stream
exits the current transport technology (MPLS-mLDP or IP-PIM) domain
or administrative domain.  This may be the router attached to a
multicast receiver.

MCI : MultiCast Ingress, the node where the multicast stream enters
the current transport technology (MPLS-mLDP or IP-PIM) domain.  This
may be the router attached to the multicast source.
DTN : Downstream Tree Notification.

UTN : Upstream Tree Notification.

TN : Tree Notification, Upstream or Downstream.

JM : Join Message, the message used to join a multicast tree, i.e. to
build up the tree.  In PIM, this is a JOIN message, while in mLDP
this corresponds to a Label Mapping message.

MRT : Maximally Redundant Trees.

Repair Node : A node that has a pre-built backup path that can
circumvent a failure.  It performs a dual-join to the tree through
two different UMHs and is therefore sometimes also called the
dual-joining node or merging node (it merges the secondary and
primary tree).

RNI : The Repair Node Information, an item carried in the JM that
holds the repair information necessary to send a TN to the Repair
Node.

Branching Node : A node (i) which is considered as being on the
primary tree by its immediate UMH and (ii) which has at least one OIF
on the secondary tree installed for a multicast tree.
2. Introduction

Both [I-D.karan-mofrr] and [I-D.atlas-rtgwg-mrt-mc-arch] describe
"live-live" multicast protection, where a node joins a tree via
different candidate upstream multicast hops (UMH).  With MoFRR the
list of candidate UMHs can come from either ECMP or Loop Free
Alternate (LFA) paths towards the MultiCast Ingress node (MCI).  With
MRT, the candidate UMHs are determined by looking up the MCI in two
different (Red and Blue) topologies.  In either case, the multicast
traffic is simultaneously received over different paths/topologies
for the same tree.  The node 'dual-joining' the tree needs a
mechanism to prevent duplicate packets from being forwarded to the
end user.  For that reason a node 'dual-joining' the tree only
accepts packets from one of the UMHs at a time.  Which UMH is
preferred is a local decision that can be based on IGP reachability,
link status, BFD, traffic flow monitoring, etc.
Should the node detect a local failure on the primary UMH, the node
has an instantly available secondary UMH that it can switch to,
simply by unblocking the secondary UMH.  The dual-joining node is
also called the Repair Node in the following.

This draft attempts to improve these solutions by:

o Improving fail-over time and the reliability of failure detection
for non-local failures; and

o Reducing the bandwidth consumption in a network with fast failover
response times.
3. Improving non-local failures

If a failure is not local and happens further upstream, the dual-
joining node needs a fast mechanism (i) to detect the upstream
failure and (ii) to learn that other upstream nodes cannot circumvent
the failure.  Existing methods based on traffic monitoring are
limited in scope and work best with a steady state packet flow.
Therefore, we propose a method which can trigger the unblocking of
the secondary UMH independently of the packet flow.
Figure 1 shows an example.  Consider that, e.g., node A goes down.
Nodes C, D and E cannot detect that locally, so they need to resort
to other means.  After detecting the failure, node C should not
change to its secondary UMH (node J) as it won't help for the failure
of A.  Node D, on the other hand, will have to unblock its secondary
UMH (node I).  Yet again, with MoFRR, node E should not unblock its
secondary UMH (node K): (i) this won't help in resolving the failure
of node A, and (ii) one of its upstream nodes (node D in this case)
will be able to restore the stream with a fail-over action.

3.1. Downstream Tree Notifications
When a node detects a local failure of its primary UMH, it MUST
originate a Downstream Tree Notification (DTN) to all the Repair
Nodes directly below it in the multicast tree.  The method of
discovering such nodes is described in Section 3.3.  When a Repair
Node receives a DTN containing the primary UMH of the node, it MUST
switch to the secondary UMH.
DTN packets are sent via unicast to the Repair Node.  The packet may
be forwarded using any transport that is available (MPLS or IP) to
reach the destination.  The IP precedence in the IP header should
have a value of 6 (Internetwork Control).  The EXP field (Traffic
Class field) in the MPLS header should have a value of 6.  The DTN
packets are identified by a well-known port number (to be allocated).
Using a well-known port number it is easy for the Repair Node to
identify the DTN packet and invoke the procedures as described in
this draft.  We are proposing to allocate different port numbers for
PIM and mLDP since it will be easier to dispatch the packet to the
right process dealing with this request.
When a router detects a local failure, it should send out the DTN
packet to the Repair Node as fast as possible.  The sooner the Repair
Node gets the packet, the sooner the traffic can be restored.  It is
recommended that the DTN packet is pre-created and originated from
the data plane.  The same is true for receiving the DTN packet on the
Repair Node: the faster it can be processed, the faster the traffic
is restored.  For both forwarding and processing the DTN, control-
plane interaction SHOULD be avoided to get the best failover results.
3.2. DTN processing logic

When a DTN packet is received on the Repair Node, it must determine
which tree and UMH the notification is for.  The information encoded
in the DTN is specific to the type of tree being used, i.e. PIM vs.
mLDP; see Section 4 and Section 5 for the details on the specific
encoding.  Once the Repair Node has determined the tree and the UMH,
the following rules are used for processing the DTN:

1. If the UMH encoded in the DTN packet is the primary UMH in the
tree, the secondary UMH MUST become the new primary UMH and the old
primary MUST become the secondary.

2. If the UMH encoded in the DTN packet is the secondary UMH in the
tree, no action needs to be taken.

3. If a DTN notification has been received on both the primary and
secondary UMH in the tree, a new DTN notification MUST be originated
to the Repair Node(s) downstream from this node.
In order for the Repair Node to determine that a DTN notification was
received for both the primary and secondary UMH, it must store the
fact that a DTN was received for a particular UMH.
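The three rules, together with the stored per-UMH state, can be
summarized in a short sketch.  This is illustrative only; the class
and field names are hypothetical and not part of the protocol.

```python
# Illustrative sketch of the Section 3.2 DTN processing rules.
class TreeState:
    def __init__(self, primary_umh, secondary_umh):
        self.primary_umh = primary_umh
        self.secondary_umh = secondary_umh
        self.dtn_seen = set()   # UMHs for which a DTN was already received

    def process_dtn(self, umh):
        """Apply rules 1-3.  Returns True when a new DTN must be
        originated to the downstream Repair Node(s) (rule 3)."""
        self.dtn_seen.add(umh)
        # Rule 3: DTNs seen for both UMHs -> this node cannot repair.
        if {self.primary_umh, self.secondary_umh} <= self.dtn_seen:
            return True
        # Rule 1: DTN for the primary UMH -> the secondary becomes primary.
        if umh == self.primary_umh:
            self.primary_umh, self.secondary_umh = (
                self.secondary_umh, self.primary_umh)
        # Rule 2: DTN for the secondary UMH only -> no action.
        return False
```

Applied to Figure 1: node C receiving DTNs from both B and J would
return True and would, in turn, notify its downstream Repair Nodes.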
Consider the example in Figure 1 below.  MCI is the root of a tree
that includes the nodes as follows (based on the primary UMH):

      ->F->G->H->I
   MCI
      ->A->B->C->D->E

Nodes C, D and E are candidate Repair Nodes.
-- Primary UMH
++ Secondary UMH

                 +-+   +-+   +-+   +-+
                 |F|+++|G|+++|H|+++|I|
                 +-+   +-+   +-+   +-+
                  +                 +
                  +                 +
          +---+  +-+   +-+   +-+   +-+   +-+
Source ---|MCI|--|A|---|B|---|C|---|D|---|E|--- Receiver
          +---+  +-+   +-+   +-+   +-+   +-+
                         +   +       +   +
                          + +         + +
                          +-+         +-+
                          |J|         |K|
                          +-+         +-+

Figure 1: Remote failure example
Suppose that the link between node A and B fails.  B is directly
connected and will detect the failure locally.  In this case, node B
is the only node that detects the failure and will originate a DTN to
its downstream Repair Node C.  Node C will receive the DTN for the
UMH that is its primary UMH.  Following rule 1 (Section 3.2), node C
will make the backup UMH the new primary.  No further action is
needed because C has repaired the tree via node J.  Note that J would
not have sent a DTN to node C because J is not directly connected to
the failing link.
Suppose that node A fails.  B and J are directly connected and detect
the failure locally.  A DTN packet is triggered to the first
downstream Repair Node of A, which is node C.  Node C is an unusable
Repair Node because it will receive a DTN for both the primary UMH
(from B) and the secondary UMH (from J).  Following rule 3
(Section 3.2), C can't repair the tree and must send a new DTN packet
towards the Repair Nodes of C, which are D, on the primary path, and
E, on the secondary path.
Suppose that the link between A and the MCI fails.  Node A is
directly connected to the failure and will trigger a DTN packet to
its downstream Repair Node(s).  In this case, node A has learned
about the downstream Repair Node C twice: via the primary UMH (via
node B) and via the secondary UMH (via node J).  Node A will
therefore send a DTN packet including both the primary and secondary
UMH to node C (see Section 3.7 for details on the encoding).
Following rule 3 (Section 3.2), C can't repair the tree and must send
a new DTN packet towards the Repair Nodes of C, which are D, on the
primary path, and E, on the secondary path.
The DTN packet that D received from C will match against the primary
UMH; following rule 1, D will activate the backup path to I.  The DTN
packet that E received from C will match against the backup UMH;
following rule 2, no action is taken.  In this example one can see
that we recovered from the failure because node D started accepting
the data packets from node I and is forwarding them to node E.
3.3. Repair Node discovery

In the example of Figure 1 we wrote that nodes C, D and E are the
Repair Nodes.  How does a node determine that it is a Repair Node?
The rule is straightforward: a node that is enabled to join two UMHs,
one active and the other in backup ([I-D.karan-mofrr]), is a Repair
Node on the tree.  A Repair Node has the ability to repair the tree
for the nodes upstream from it.  In order for the Repair Node to get
notified of upstream failures (i.e. via a DTN), the nodes upstream
from the Repair Node need to learn about it.
3.3.1. Repair Node Information item

A Repair Node MUST advertise its own address (either a router ID or
any directly connected address) and a UMH identifier to the nodes
upstream on the tree.  This address and UMH are part of the RNI
(Repair Node Information) item that is included in the JM.  The RNI
is carried hop by hop in the JM upstream.  If a node along the path
is not a Repair Node, it will save the RNI and forward it further
upstream.  If the node is a Repair Node, it will save the RNI and
include its own RNI in the JM sent further upstream.  If a Repair
Node changes one of its UMHs, it needs to trigger a new RNI to its
upstream node(s) to notify them of the changed UMH.  If an RNI is
received and it does not match the saved RNI, the new RNI overrides
the old RNI and triggers a JM with the new RNI to the upstream
node(s).  An RNI includes protocol-specific information on how to
identify the tree and UMH; for that reason it is documented in the
protocol-specific sections, Section 4 and Section 5.
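The hop-by-hop RNI handling can be illustrated as follows.  This is a
simplified, non-normative sketch: the RNI is modelled as a bare
(address, UMH) pair and the function name is hypothetical, whereas the
real item carries protocol-specific tree information.

```python
# Illustrative sketch of RNI handling when a JM is received.
def rni_to_forward(received_rni, is_repair_node, own_address, own_umh, saved):
    """Save the received RNI and return the RNI to include in the JM
    sent further upstream."""
    if saved.get("rni") != received_rni:
        saved["rni"] = received_rni       # a new RNI overrides the old one
    if is_repair_node:
        # A Repair Node substitutes its own RNI in the upstream JM.
        return (own_address, own_umh)
    return received_rni                   # transit node: forward unchanged
```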
The Repair Node MAY include additional information in the RNI for
reasons of security and robustness; please see Section 3.6 and
Section 3.7.1.
3.4. Reduce the bandwidth consumption in networks with fast failover
response times

In some networks, such as aggregation networks, bandwidth is scarcer
than, e.g., in core networks.  Live-live multicast protection results
in more bandwidth consumption in the network as it continuously pulls
traffic on both trees.  In such networks it is relevant whether the
capacity serving backup purposes can be used, most of the time, by
best-effort or even by lower-than-best-effort traffic.
+---+      +-+   +-+
|MCI|~~~~~~|A|---|B|
+---+      +-+   +-+
            \\   //
             \\ //
              +-+
              |C|
              +-+

Nodes A and B have receivers.  Double lines show bandwidth
consumption that is superfluous when there is no failure in the
network.

Figure 2: Example for secondary segments occupying bandwidth in MoFRR
In live-standby mode the aim is that the secondary tree is not
forwarding multicast traffic as long as there is no failure.  In
order to achieve such a "live-standby" multicast protection, the
following requirements must be met:
o Upstream nodes block their OIF when they are part of a standby
tree.

o If all of the OIFs of the node are marked as blocking, the node
joins the tree in blocking mode further upstream.

o A procedure so that the upstream node can quickly unblock its OIF
and start forwarding.
In a single topology environment (MoFRR), the repair node sends the 3.4.1. Joining a secondary tree in blocking mode
secondary backup JM through a second UMH of its choice. From that
UMH on, the backup JM is routed towards the source as if it was a
regular JM. In every node, the backup JM MUST be processed
identically to a regular JM (including adding a new entry to the OIF
list), but, in addition, the added OIF MUST be marked with "blocked"
flag. Traffic MUST NOT be forwarded through this interface for this
multicast tree while in blocked status.
If a node receives a primary JM after receiving a secondary JM from The JM sent to the secondary UMH includes an identifier to indicate
the same neighbor, the node MUST reset the corresponding OIF entry to the upstream node MUST not forward packets down this branch of the
"unblocked" state. Furthermore, the primary JM MUST be sent further tree. The identifier is TBD. The mechanism to join a secondary path
upwards if the node had no other "unblocked" OIFs, i.e., if the node is identical to what the MRT and MoFRR drafts describe, i.e. a Repair
has not received a primary JM from any other neighbor for the given Node simply sends a secondary JM through another UMH (on another
multicast tree. topology, in case of MRT). If a node receives a JM without a
blocking identifier for an OIF that previously was in blocking mode,
the blocking mode is reset and the node stats forwarding out of this
interface. If this node joined the tree in blocking mode further
upstream, a new JM MUST be originated to reset the blocking state
further upstream.
3.4.2.  Upstream Tree Notifications

   In order to make an upstream node start forwarding on the backup
   path quickly after a failure was detected on the primary UMH, an
   Upstream Tree Notification (UTN) is sent to the upstream node on
   the backup UMH.  The failure on the primary UMH may be local or
   detected using a DTN.  The UTN received by the upstream node should
   be processed in the data plane and resets the blocking state of the
   OIF.  If this node also joined the tree in blocking mode upstream,
   a UTN has to be forwarded further upstream.  This procedure is
   repeated until a node is reached that is not in blocking mode, or
   the MCI is reached.
   When the upstream node resets the blocking mode in the data plane,
   the control plane will still have the blocking mode set.  In order
   for the control plane to get in sync with the data plane, the node
   that originated the UTN MUST also trigger a JM without blocking
   mode.

   The upstream node receiving the UTN must be able to identify the
   tree the notification is sent for, as well as the downstream
   interface it applies to.  This information is encoded in the same
   RNI item that is used for DTN packets.  For details please see the
   protocol specific sections Section 4 and Section 5.

   Like DTN packets, UTN packets are sent via unicast to the upstream
   node.
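   The hop-by-hop UTN propagation can be sketched as follows.  This is
   a non-normative toy model with invented names; the control-plane JM
   resynchronisation is omitted for brevity:

```python
class Node:
    """Minimal model of a node holding blocked OIFs for one tree."""

    def __init__(self, name, blocked_upstream=False):
        self.name = name
        self.blocked_oifs = set()   # OIFs installed in blocking mode
        self.blocked_upstream = blocked_upstream
        self.upstream = None        # (neighbour Node, its OIF towards us)

    def receive_utn(self, oif):
        """Unblock the identified OIF in the 'data plane', then repeat
        upstream until a non-blocking node or the MCI is reached."""
        events = []
        if oif in self.blocked_oifs:
            self.blocked_oifs.discard(oif)
            events.append((self.name, "unblock", oif))
        if self.blocked_upstream and self.upstream is not None:
            nbr, nbr_oif = self.upstream
            events += nbr.receive_utn(nbr_oif)
        return events
```

   A two-hop example: a node that joined in blocking mode forwards the
   UTN, so both its own OIF and the MCI's OIF become unblocked.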
3.5.  MRT/MCI-Only Mode

   If each node in the network supports UTN and also all nodes support
   MRT, the nodes may work in "MRT/MCI-only" mode.

   In MRT/MCI-only mode, there is one single Repair Node for all
   failures, the MCI.  Other nodes MUST NOT consider themselves as
   Repair Nodes.  MRT ensures the necessary maximally disjoint
   secondary tree up to the MCI, on a second topology.  Only the MCI
   MUST keep its OIFs corresponding to the secondary tree blocked.
   Similarly, only MCEs MUST keep their secondary backup IIFs blocked.
   Any other nodes MUST NOT block their (secondary) IIFs or OIFs.

   In MRT/MCI-only mode, the UTNP MUST be forwarded directly to the
   MCI.  This mode enables that a node detecting a downstream failure
   of the primary tree MAY send a UTNP upstream towards the source/MCI
   on the primary tree.

   If a UTNP is received by the MCI on the secondary topology in "MRT/
   MCI-only" mode, the MCI MUST unblock the OIF where the UTNP was
   received.  This activates a whole sub-tree of the secondary tree.

   If a UTNP is received by the MCI on the primary topology in "MRT/
   MCI-only" mode, the MCI gets no information on which leg to
   activate on the secondary tree, so it MUST activate (unblock) all
   secondary legs.
3.6.  TN Authentication

   If a malicious attacker can reproduce the TN packet format, unwanted
   reconvergence can be triggered.  In order to avoid such attacks, a
   TN packet MAY contain a digital signature.  Authentication is
   optional; it can be enabled or disabled in the network.  If,
   however, security is enabled, all the nodes must share the same
   secret key, which they get either by configuration or from the
   multicast routing protocol.  Moreover, for protection against
   replay attacks, each TN packet must contain a sequence number.

   The sequence numbers in the network are not necessarily
   synchronised; instead, each node can have its own.  Sequence
   numbers can be generated arbitrarily, even as random values; the
   only requirement is to create a new sequence number each time a
   reconvergence was triggered due to a TN (i.e. the sequence number
   was used).

   The originator of a DTN packet MUST use the sequence number of the
   Repair Node to create a TN Signature TLV (see Section 3.7.1.2).
   For a UTN packet the sender MUST use its own sequence number, which
   it sent previously to its UMH.  The destination in this case must
   check validity based on the sequence number of the sender.

   A sequence number is learned from the JM and is part of the RNI.
   It is the responsibility of the multicast routing protocol to
   protect the JM against malicious change.
3.7. The TN Packet
3.7.1. TN Packet Format
   A Tree Notification is an IPv4 or IPv6 UDP packet with the
   following format.
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  Version Nr   |        Address Family         |     Type      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                         Originator ID                         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                        Sequence Number                        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |        TreeInfo Count         |         TreeInfo size         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                       TreeInfo item - 1                       |
   ~                               .                               ~
   |                       TreeInfo item - n                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      TN option TLVs  ...                      |
   .                                                               .
   .                                                               .
   .                                                               .
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   Version number:  This is a 1 octet field encoding the version
      number, currently 0.

   Address Family:  This is a 2 octet field encoding a value from
      ADDRESS FAMILY NUMBERS in [RFC3232] that encodes the address
      family for the Root Address of the tree.

   Type:  This is a 1 octet field encoding the message type, currently
      two are defined;

      Type 0:  Downstream Tree Notification.

      Type 1:  Upstream Tree Notification.

   Originator ID:  4 octet long unique ID of the originator.  This can
      be a loopback IPv4 address if there is one, or can be set by the
      operator.

   Sequence Number:  Number unique for each failure case.  It is
      recommended to start at 0 and to increase by 1 each time a new
      TN is originated.  The sequence number may differ at each node,
      thus the sender and the receiver must know the same sequence
      number.

   TreeInfo count:  2 octet field encoding the number of TreeInfo
      items included.

   TreeInfo size:  2 octet field encoding the number of octets used to
      encode each of the TreeInfo items following.

   TreeInfo item:  The encoding of this field is protocol specific,
      see Section 4 and Section 5.

   TN option TLVs:  TLVs (Type-Length-Value tuples) describing
      additional options for TN packets.
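   As an informal illustration of the fixed header layout above, the
   following non-normative sketch packs and parses the header plus
   fixed-size TreeInfo items (field widths taken from the diagram):

```python
import struct

TN_TYPE_DTN = 0  # Downstream Tree Notification
TN_TYPE_UTN = 1  # Upstream Tree Notification
AF_IPV4 = 1      # ADDRESS FAMILY NUMBERS value for IPv4

def pack_tn_header(msg_type, originator_id, seq, tree_items,
                   version=0, afi=AF_IPV4):
    """Pack the 16-octet TN header followed by the TreeInfo items.
    All items must share one size, advertised in 'TreeInfo size'."""
    if tree_items:
        sizes = {len(t) for t in tree_items}
        assert len(sizes) == 1, "all TreeInfo items share one size"
        item_size = sizes.pop()
    else:
        item_size = 0
    hdr = struct.pack("!BHBIIHH", version, afi, msg_type,
                      originator_id, seq, len(tree_items), item_size)
    return hdr + b"".join(tree_items)

def parse_tn_header(data):
    """Inverse of pack_tn_header; returns a dict of the fields."""
    version, afi, msg_type, orig, seq, count, size = \
        struct.unpack("!BHBIIHH", data[:16])
    items = [data[16 + i * size:16 + (i + 1) * size]
             for i in range(count)]
    return dict(version=version, afi=afi, type=msg_type,
                originator=orig, seq=seq, items=items)
```

   A round trip through these two helpers reproduces the original
   field values.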
   The TLVs have the following format.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             Type              |            Length             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                             Value                             |
   .                                                               .
   .                                                               .
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Type:  This is a 2 octet field encoding the type number of the TLV.

   Length:  This is a 2 octet field encoding the length of the Value
      in octets.

   Value:  String of Length octets, to be interpreted as specified by
      the Type field.
3.7.1.1.  TN TimeStamp TLV Format
   The TimeStamp is an optional TLV that MAY be included when the TN
   is originated; it has the following format.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Type = 0            |          Length = 8           |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                   TimeStamp Sent (seconds)                    |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                 TimeStamp Sent (microseconds)                 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   TimeStamp:  The TimeStamp is the time-of-day (in seconds and
      microseconds, according to the sender's clock) in NTP format
      [NTP] when the Tree Notification is sent.
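   Encoding this TLV from a host clock is mostly a matter of shifting
   the Unix epoch to the NTP era (1900-01-01), which differs by
   2208988800 seconds.  A non-normative sketch:

```python
import struct
import time

# Seconds between the NTP era (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_OFFSET = 2208988800

def pack_timestamp_tlv(unix_time=None):
    """Build the TimeStamp TLV (Type=0, Length=8) from a Unix time."""
    if unix_time is None:
        unix_time = time.time()
    secs = (int(unix_time) + NTP_UNIX_OFFSET) & 0xFFFFFFFF
    usecs = int(round((unix_time % 1) * 1_000_000)) % 1_000_000
    return struct.pack("!HHII", 0, 8, secs, usecs)
```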
3.7.1.2.  TN Signature TLV Format

   The TN Signature is an optional TLV, which protects the whole TNP
   (including other TLVs) against attacks; thus it must be the last
   TLV if present.  The signature is a SHA-512 hash value.  The input
   of the hash function is as follows:

   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   | Complete packet content without signature TLV |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                  Secret key                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Signature input:  The input of the hash function is the packet
      extended with the TN secret key.

   The build up of the TLV is as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |           Type = 1            |          Length = 64          |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                     Hash function result                      |
   .                                                               .
   .                                                               .
   .                                                               .
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Signature:  SHA-512 signature protecting TN packets.
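   The hash construction described above (packet content concatenated
   with the shared secret, then SHA-512) can be sketched as follows.
   This is a non-normative illustration of the TLV mechanics, not a
   reference implementation:

```python
import hashlib
import struct

SIG_TLV_LEN = 4 + 64  # 2-octet Type + 2-octet Length + SHA-512 digest

def sign_tn(packet_without_sig, secret_key):
    """Append the TN Signature TLV (Type=1, Length=64): SHA-512 over
    the packet content followed by the shared secret key."""
    digest = hashlib.sha512(packet_without_sig + secret_key).digest()
    return packet_without_sig + struct.pack("!HH", 1, 64) + digest

def verify_tn(packet, secret_key):
    """Check that the trailing Signature TLV matches the recomputed
    hash over the rest of the packet."""
    if len(packet) < SIG_TLV_LEN:
        return False
    body, tlv = packet[:-SIG_TLV_LEN], packet[-SIG_TLV_LEN:]
    tlv_type, tlv_len = struct.unpack("!HH", tlv[:4])
    if (tlv_type, tlv_len) != (1, 64):
        return False
    return tlv[4:] == hashlib.sha512(body + secret_key).digest()
```

   Note that a plain hash-with-appended-key construction is shown here
   only because that is what the TLV definition above specifies.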
4.  PIM Specific TN Components

   In this section we document the PIM specific data structures and
   procedures (where they differ from the generic procedures defined
   in this document).  As described in this document, TN packets are
   UDP/IP packets sent via unicast to their destination.  The UDP port
   number for PIM is set to the (to be) assigned IANA port number for
   PIM-TN.

4.1.  RNI item in PIM Join Message

   As described previously, PIM must insert the RNI when sending a PIM
   join to its UMH.  The RNI includes its router ID, sequence number
   and UMH Identifier.  The UMH-ID can be a locally unique identifier
   since it has only local significance on the Repair Node.  A good ID
   to use would be the IP address of the interface associated with the
   UMH the PIM join is sent to.  The RNI is carried in the PIM Join as
   a new PIM Attribute following [RFC5384].  The PIM RNI attribute has
   the following format for IPv4.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |F|E|   Type    |    Length     |        Sequence number        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Repair Node address                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                            UMH-ID                             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   F  Forward if not understood.

   E  End of Attributes, following [RFC5384].

   Type:  This 6 bit field should be assigned by IANA for TN specific
      JOIN messages.

   Length:  Length = 10 octets.

   Sequence number:  2 octet long field, describing the sequence
      number of the sending Repair Node.

   Repair Node address:  The router ID of the Repair Node, in IPv4
      address format.

   UMH-ID:  This is a 4 octet field encoding the UMH identifier.  This
      is the IPv4 address of the interface associated with the UMH the
      PIM join is sent to.

                Figure 3: PIM IPv4 RNI attribute TLV

   The PIM RNI attribute has the following format for IPv6.
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |F|E|   Type    |    Length     |        Sequence number        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      Repair Node address                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   ~                            UMH-ID                             ~
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   F  Forward if not understood.

   E  End of Attributes, following [RFC5384].

   Type:  This 6 bit field should be assigned by IANA for TN specific
      JOIN messages.

   Length:  Length = 16 octets.

   Sequence number:  2 octet long field, describing the sequence
      number of the sending Repair Node.

   Repair Node address:  The router ID of the Repair Node, in IPv4
      address format.

   UMH-ID:  This is a 16 octet field encoding the UMH identifier.
      This is the IPv6 address of the interface associated with the
      UMH the PIM join is sent to.

                Figure 4: PIM IPv6 RNI attribute TLV
4.2.  Tree Information Item

   A TN packet contains one or more TreeInfo items that allow a Merge
   Node to identify which tree(s) and interface(s) are affected by the
   TN.  The same encoding is used for DTN and UTN packets.  The PIM
   TreeInfo items are defined for IPv4 and IPv6.  Which version is to
   be included in the TN packet depends on the Address Family in the
   TN packet.  The UMH-ID included in the DTN MUST be taken from the
   RNI that was signalled for that tree.  The UMH-ID for UTN packets
   is the PIM neighbor address for that tree.  The TreeInfo item has
   the following format:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      IPv4 Source address                      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                      IPv4 group address                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                            UMH-ID                             |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Source Address:  This is a 4 octet field encoding the IPv4 source
      address of the tree.  A source address of 0.0.0.0 means that
      this TN relates to a (*,G) tree.

   Group Address:  This is a 4 octet field encoding the IPv4 group
      address of the tree.

   UMH-ID:  This is a 4 octet field encoding the UMH identifier.

                  Figure 5: PIM IPv4 TreeInfo item
    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   ~                      IPv6 Source address                      ~
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   ~                       IPv6 group address                      ~
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   ~                            UMH-ID                             ~
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Source Address:  This is a 16 octet field encoding the IPv6 source
      address of the tree.  A source address of 0:0:0:0:0:0:0:0 means
      that this TN relates to a (*,G) tree.

   Group Address:  This is a 16 octet field encoding the IPv6 group
      address of the tree.

   UMH-ID:  This is a 16 octet field encoding the UMH identifier.

                  Figure 6: PIM IPv6 TreeInfo item
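   Packing a PIM TreeInfo item is three fixed-width address fields in
   sequence.  A non-normative sketch for the IPv4 variant:

```python
import socket

def pim_treeinfo_v4(source, group, umh_id):
    """Pack a PIM IPv4 TreeInfo item: source, group and UMH-ID, 4
    octets each.  Use source "0.0.0.0" for a (*,G) tree."""
    return b"".join(socket.inet_aton(a) for a in (source, group, umh_id))
```

   The IPv6 variant is identical except that each field is 16 octets
   (e.g. via socket.inet_pton with AF_INET6).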
4.3. Incremental deployment
   Joins with an RNI can be forwarded through legacy nodes if the
   Transitive Attribute (see [RFC5384]) has the F bit set to 1.  It is
   up to the network operator to determine this.  The DTN
   functionality can be deployed incrementally as long as the node
   detecting the failure and the Repair Nodes support it.
5. mLDP Specific TN Components
   In this section we document the mLDP specific data structures and
   procedures (where they differ from the generic procedures defined
   in this document).  As described in this document, TN packets are
   UDP/IP packets sent via unicast to their destination.  The UDP port
   number for mLDP is set to the (to be) assigned IANA port number for
   mLDP-TN.
5.1. RNI item in mLDP Label Mapping
   The RNI item for mLDP is encoded in an LDP MP Status TLV as
   documented in [RFC6388] section 5.  A new LDP MP Status Value
   Element is created for this purpose, called the RNI Status.  The
   RNI Status includes the router ID, sequence number and UMH
   Identifier.  The UMH-ID can be a locally unique identifier since it
   has only local significance on the Repair Node.  For mLDP the value
   that MUST be used is the Local Label associated with the UMH the
   mLDP Label Mapping is sent to.  The RNI Status is carried in Label
   Mapping messages and has the following format.
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| RNI | Length | Seq. Number .
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
. | IPv4 Repair Node address .
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
. | UMH-ID reserved | UMH-ID Label .
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
. |
+-+-+-+-+-+-+-+-+
   RNI Type:  This is a 1 octet field assigned by IANA for RNI Status
      Value Element Types.

   Length:  This is a 2 octet field, describing the length of the
      Value; Length = 10 octets.

   Sequence number:  2 octet long field, describing the sequence
      number of the sending Repair Node.

   IPv4 Repair Node address:  The IPv4 address of the Repair Node.

   UMH-ID reserved:  12 bit field, reserved.

   UMH-ID Label:  This is a 20 bit field encoding a Label as the UMH
      identifier.
Figure 7: mLDP RNI Status Value Element
5.2. Tree Information Item
   A TN packet contains one or more TreeInfo items that allow a Merge
   Node to identify which tree(s) and interface(s) are affected by the
   TN.  The same encoding is used for DTN and UTN packets.  Following
   [RFC6388], mLDP will assign a unique Label to each upstream node
   per MP-LSP.  This label identifies the UMH AND the LSP.  Since a
   label is used to identify the UMH and LSP, there is no need to
   define an IPv4 and IPv6 specific encoding.  The Label included in
   the DTN MUST be taken from the RNI that was signalled for that
   tree.  The Label for UTN packets is the Local Label that was
   allocated for that tree.  The TreeInfo item has the following
   format:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Reserved | UMH-Label |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   Reserved:  This is a 12 bit field, set to zero on sending, ignored
      when received.

   UMH-Label:  This is a 20 bit field encoding the MPLS Label of the
      UMH.
Figure 8: mLDP TreeInfo item
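   Packing the 12 reserved bits and the 20 bit label into one 32 bit
   word can be sketched as follows (non-normative):

```python
import struct

def mldp_treeinfo(umh_label):
    """Pack an mLDP TreeInfo item: 12 reserved bits (zero) followed by
    a 20 bit UMH label, together one 32 bit word."""
    assert 0 <= umh_label < (1 << 20), "MPLS labels are 20 bits"
    return struct.pack("!I", umh_label & 0xFFFFF)

def parse_mldp_treeinfo(item):
    """Extract the UMH label; the low 20 bits carry the label."""
    (word,) = struct.unpack("!I", item)
    return word & 0xFFFFF
```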
6. Acknowledgements
The authors would like to thank Stefan Olofsson, Javed Asghar and
Greg Sheperd for their comments on the draft.
7. IANA Considerations
   IANA is requested to allocate UDP port numbers to TN messages.  One
   port number for TN in IP/PIM context, and another one for MPLS/mLDP
   context.  The separation of UDP port numbers between IP and MPLS is
   requested to prevent problems when a PIM multicast tree is
   transported partly through an mLDP multicast tree.
   IANA is requested to allocate a value from the "PIM Join Attribute
   Types" registry to allow routers to advertise their Tree
   Notification capability.

   IANA is also requested to allocate a value from the "PIM Join
   Attribute Types" registry to carry extra information for TN's join
   command.

   A new IANA registry, "TN Option TLVs", is needed to describe the
   types of TLVs carrying extra options for TN messages.
8. Security Considerations
   Three types of security problems can be foreseen by the authors:

   o  Handling illegally injected TN packets

   o  Handling replay attacks (re-injecting previous TN messages)

   o  TN messages propagating outside an operator's domain
   Illegal TN packets can be detected with authentication checks.
   Providing authentication for TN messages is described in
   Section 3.6.
   Prevention of replay attacks requires authentication in combination
   with sequence numbering, which is also described in Section 3.6.
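The receiver-side check implied above can be sketched as follows. This is an illustrative assumption, not part of the specification: a receiver tracks the highest sequence number accepted per authenticated sender and silently drops anything not strictly newer.

```python
# Hedged sketch of replay protection for authenticated TN messages.
# The draft does not fix the counter width or state model; a simple
# per-sender high-water mark is assumed here for illustration.
class TnReplayFilter:
    def __init__(self):
        self._last_seq = {}  # sender id -> highest accepted sequence

    def accept(self, sender, seq):
        """Return True if the message is fresh, False if replayed."""
        last = self._last_seq.get(sender)
        if last is not None and seq <= last:
            return False  # replayed (or duplicate) TN message
        self._last_seq[sender] = seq
        return True
```

This check is only meaningful after the message's authentication has been verified; otherwise an attacker could advance the counter with forged packets.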
   Preventing TN messages that travel inline with data packets from
   leaving the operator's domain MUST be handled by nodes egressing
   the operator's domain.  Solutions for IP and MPLS are described in
   Sections 4 and 5, respectively.
9. References

9.1. Normative References
   [I-D.karan-mofrr]
              Karan, A., Filsfils, C., Farinacci, D., Decraene, B.,
              Leymann, N., and W. Henderickx, "Multicast only Fast Re-
              Route", draft-karan-mofrr-02 (work in progress),
              March 2012.
   [RFC5384]  Boers, A., Wijnands, I., and E. Rosen, "The Protocol
              Independent Multicast (PIM) Join Attribute Format",
              RFC 5384, November 2008.

   [RFC6388]  Wijnands, IJ., Minei, I., Kompella, K., and B. Thomas,
              "Label Distribution Protocol Extensions for Point-to-
              Multipoint and Multipoint-to-Multipoint Label Switched
              Paths", RFC 6388, November 2011.

9.2. Informative References
   [I-D.atlas-rtgwg-mrt-mc-arch]
              Atlas, A., Kebler, R., Wijnands, I., Csaszar, A., and G.
              Enyedi, "An Architecture for Multicast Protection Using
              Maximally Redundant Trees",
              draft-atlas-rtgwg-mrt-mc-arch-01 (work in progress),
              February 2013.
[I-D.liu-pim-single-stream-multicast-frr]
Liu, H., Zheng, L., Bai, T., and Y. Yu, "Single Stream
Multicast Fast ReRoute (SMFRR) Method",
draft-liu-pim-single-stream-multicast-frr-01 (work in
progress), October 2010.
Authors' Addresses

   IJsbrand Wijnands (editor)
   Cisco
   De kleetlaan 6a
   Diegem, 1831
   Belgium

   Phone:
   Email: ice@cisco.com

   Luc De Ghein
Cisco
De kleetlaan 6a
Diegem, 1831
Belgium
Phone:
Email: ldeghein@cisco.com
Gabor Sandor Enyedi (editor)
Ericsson
Konyves Kalman Krt 11/B
Budapest, 1097
Hungary
Phone:
Email: Gabor.Sandor.Enyedi@ericsson.com
Andras Csaszar
   Ericsson
   Konyves Kalman Krt 11/B
   Budapest, 1097
   Hungary

   Phone:
   Email: Andras.Csaszar@ericsson.com

   Jeff Tantsura
   Ericsson