Network Working Group                                   I. Minei (Editor)
Internet-Draft                                                K. Kompella
Expires: December 27, 2006                               Juniper Networks
                                                      I. Wijnands (Editor)
                                                                 B. Thomas
                                                       Cisco Systems, Inc.
                                                             June 25, 2006

    Label Distribution Protocol Extensions for Point-to-Multipoint and
             Multipoint-to-Multipoint Label Switched Paths
                       draft-ietf-mpls-ldp-p2mp-01

Status of this Memo

By submitting this Internet-Draft, each author represents that any applicable patent or other IPR claims of which he or she is aware have been or will be disclosed, and any of which he or she becomes aware will be disclosed, in accordance with Section 6 of BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups.  Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on December 27, 2006.

Copyright Notice

Copyright (C) The Internet Society (2006).

Abstract

This document describes extensions to the Label Distribution Protocol (LDP) for the setup of point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) Label Switched Paths (LSPs) in Multi-Protocol Label Switching (MPLS) networks.  The solution relies on LDP without requiring a multicast routing protocol in the network.  Protocol elements and procedures for this solution are described for building such LSPs in a receiver-initiated manner.  There can be various applications for P2MP/MP2MP LSPs, for example IP multicast or support for multicast in BGP/MPLS L3VPNs.  Specification of how such applications can use an LDP-signaled P2MP/MP2MP LSP is outside the scope of this document.

Table of Contents

   1.  Introduction
     1.1.  Conventions used in this document
     1.2.  Terminology
   2.  Setting up P2MP LSPs with LDP
     2.1.  The P2MP FEC Element
     2.2.  The LDP MP Opaque Value Element
       2.2.1.  The Generic LSP Identifier
     2.3.  Using the P2MP FEC Element
       2.3.1.  Label Map
       2.3.2.  Label Withdraw
   3.  Shared Trees
   4.  Setting up MP2MP LSPs with LDP
     4.1.  The MP2MP downstream and upstream FEC elements.
     4.2.  Using the MP2MP FEC elements
       4.2.1.  MP2MP Label Map upstream and downstream
       4.2.2.  MP2MP Label Withdraw
   5.  mLDP wildcard FECs
     5.1.  Label Request Message
     5.2.  Label Withdraw Message
     5.3.  Label Release Message
   6.  Upstream label allocation on Ethernet networks
   7.  Root node redundancy for MP2MP LSPs
     7.1.  Root node redundancy procedure
   8.  Make before break
     8.1.  Protocol event
   9.  Security Considerations
   10. IANA considerations
   11. Acknowledgments
   12. Contributing authors
   13. References
     13.1. Normative References
     13.2. Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements
1. Introduction

LDP is described in [1].  It defines mechanisms for setting up point-to-point (P2P) and multipoint-to-point (MP2P) LSPs in the network.  This document describes extensions to LDP for setting up point-to-multipoint (P2MP) and multipoint-to-multipoint (MP2MP) LSPs.  These are collectively referred to as multipoint LSPs (MP LSPs).  A P2MP LSP allows traffic from a single root (or ingress) node to be delivered to a number of leaf (or egress) nodes.  An MP2MP LSP allows traffic from multiple ingress nodes to be delivered to multiple egress nodes.  Only a single copy of the packet will be sent on any link traversed by the MP LSP (see note at end of Section 2.3.1).  This is accomplished without the use of a multicast protocol in the network.  There can be several MP LSPs rooted at a given ingress node, each with its own identifier.

The solution assumes that the leaf nodes of the MP LSP know the root node and identifier of the MP LSP to which they belong.  The mechanisms for the distribution of this information are outside the scope of this document.  The specification of how an application can use an MP LSP signaled by LDP is also outside the scope of this document.

Interested readers may also wish to peruse the requirements draft [4] and other documents [8] and [9].

1.1. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [2].

1.2. Terminology

The following terminology is taken from [4].

P2P LSP:  An LSP that has one Ingress LSR and one Egress LSR.

P2MP LSP:  An LSP that has one Ingress LSR and one or more Egress LSRs.

MP2P LSP:  An LSP that has one or more Ingress LSRs and one unique Egress LSR.

MP2MP LSP:  An LSP that connects a set of leaf nodes, acting indifferently as ingress or egress.
MP LSP:  A multipoint LSP, either a P2MP or an MP2MP LSP.

Ingress LSR:  Source of the P2MP LSP, also referred to as root node.

Egress LSR:  One of potentially many destinations of an LSP, also referred to as leaf node in the case of P2MP and MP2MP LSPs.

Transit LSR:  An LSR that has one or more directly connected downstream LSRs.

Bud LSR:  An LSR that is an egress but also has one or more directly connected downstream LSRs.

2. Setting up P2MP LSPs with LDP

A P2MP LSP consists of a single root node, zero or more transit nodes and one or more leaf nodes.  Leaf nodes initiate P2MP LSP setup and tear-down.  Leaf nodes also install forwarding state to deliver the traffic received on a P2MP LSP to wherever it needs to go; how this is done is outside the scope of this document.  Transit nodes install MPLS forwarding state and propagate the P2MP LSP setup (and tear-down) toward the root.  The root node installs forwarding state to map traffic into the P2MP LSP; how the root node determines which traffic should go over the P2MP LSP is outside the scope of this document.

For the setup of a P2MP LSP with LDP, we define one new protocol entity, the P2MP FEC Element, to be used in the FEC TLV.  The description of the P2MP FEC Element follows.

2.1. The P2MP FEC Element

The P2MP FEC Element consists of the address of the root of the P2MP LSP and an opaque value.  The opaque value consists of one or more LDP MP Opaque Value Elements.  The opaque value is unique within the context of the root node.  The combination of (Root Node Address, Opaque Value) uniquely identifies a P2MP LSP within the MPLS network.

The P2MP FEC Element is encoded as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |P2MP Type (TBD)|        Address Family         | Address Length|
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                       Root Node Address                       |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |         Opaque Length         |         Opaque Value ...      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               +
   ~                                                               ~
   |                                                               |
   |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Type:  The type of the P2MP FEC Element is to be assigned by IANA, such that the U-bit is set (=1) and the F-bit is clear (=0).  This ensures that an LSR that cannot process the P2MP FEC Element silently ignores it.

Address Family:  Two-octet quantity containing a value from ADDRESS FAMILY NUMBERS in [3] that encodes the address family for the Root LSR Address.

Address Length:  Length of the Root LSR Address in octets.

Root Node Address:  A host address encoded according to the Address Family field.

Opaque Length:  The length of the Opaque Value, in octets.

Opaque Value:  One or more MP Opaque Value Elements, uniquely identifying the P2MP LSP in the context of the Root Node.  This is described in the next section.

If the Address Family is IPv4, the Address Length MUST be 4; if the Address Family is IPv6, the Address Length MUST be 16.  No other Address Lengths are defined at present.

If the Address Length does not match the defined length for the Address Family, the receiver SHOULD abort processing the message containing the FEC Element, and send an "Unknown FEC" Notification message to its LDP peer signaling an error.

If a FEC TLV contains a P2MP FEC Element, the P2MP FEC Element MUST be the only FEC Element in the FEC TLV.

A P2MP FEC with the Root Node Address octets filled with zeros and Opaque Length set to 0 is a wildcard P2MP FEC for all P2MP FECs of matching root node address family.
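The encoding above maps onto a few fixed-size fields followed by the opaque value.  The following Python sketch illustrates it for an IPv4 root; the element type code is a placeholder (the real value is TBD by IANA), and the opaque value is passed in as already-encoded MP Opaque Value Elements (Section 2.2).

   # Illustrative encoder for the P2MP FEC Element described above.
   # P2MP_FEC_TYPE is a placeholder; the actual code point is TBD by IANA.
   import socket
   import struct

   P2MP_FEC_TYPE = 0x06   # placeholder value for illustration only
   AF_IPV4 = 1            # IANA Address Family Numbers value for IPv4

   def encode_p2mp_fec_element(root_addr: str, opaque_value: bytes) -> bytes:
       """Encode a P2MP FEC Element with an IPv4 Root Node Address."""
       root = socket.inet_aton(root_addr)              # 4-octet root address
       header = struct.pack("!BHB", P2MP_FEC_TYPE, AF_IPV4, len(root))
       return header + root + struct.pack("!H", len(opaque_value)) + opaque_value

   # Example: FEC element for root 192.0.2.1 whose opaque value is a
   # Generic LSP Identifier (Section 2.2.1): type 1, length 4, value 42.
   generic_id = struct.pack("!BHI", 1, 4, 42)
   fec = encode_p2mp_fec_element("192.0.2.1", generic_id)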
2.2. The LDP MP Opaque Value Element

The LDP MP Opaque Value Element is used in the P2MP and MP2MP FEC Elements defined in subsequent sections.  It carries information that is meaningful to leaf (and bud) LSRs, but need not be interpreted by non-leaf LSRs.

The LDP MP Opaque Value Element is encoded as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |  Type (TBD)   |            Length             |   Value ...   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+               |
   ~                                                               ~
   |                                                               |
   |                               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Type:  The type of the LDP MP Opaque Value Element is to be assigned by IANA.

Length:  The length of the Value field, in octets.

Value:  String of Length octets, to be interpreted as specified by the Type field.

2.2.1. The Generic LSP Identifier

The generic LSP identifier is a type of Opaque Value Element encoded as follows:

Type:  1 (to be assigned by IANA)

Length:  4

Value:  A 32-bit integer, unique in the context of the root, as identified by the root's address.

This type of Opaque Value Element is recommended when the mapping of traffic to LSPs is non-algorithmic, and done by means outside LDP.

2.3. Using the P2MP FEC Element

This section defines the rules for the processing and propagation of the P2MP FEC Element.  The following notation is used in the processing rules:

1.  P2MP FEC Element <X, Y>: a FEC Element with Root Node Address X and Opaque Value Y.

2.  P2MP Label Map <X, Y, L>: a Label Map message with a FEC TLV with a single P2MP FEC Element <X, Y> and Label TLV with label L.

3.  P2MP Label Withdraw <X, Y, L>: a Label Withdraw message with a FEC TLV with a single P2MP FEC Element <X, Y> and Label TLV with label L.

4.  P2MP LSP <X, Y> (or simply <X, Y>): a P2MP LSP with Root Node Address X and Opaque Value Y.

5.  The notation L' -> {<I1, L1>, <I2, L2>, ..., <In, Ln>} on LSR X means that on receiving a packet with label L', X makes n copies of the packet.  For copy i of the packet, X swaps L' with Li and sends it out over interface Ii.

The procedures below are organized by the role which the node plays in the P2MP LSP.  Node Z knows that it is a leaf node by a discovery process which is outside the scope of this document.  During the course of protocol operation, the root node recognizes its role because it owns the Root Node Address.  A transit node is any node (other than the root node) that receives a P2MP Label Map message (i.e., one that has leaf nodes downstream of it).

Note that a transit node (and indeed the root node) may also be a leaf node.

2.3.1. Label Map

The following lists procedures for generating and processing P2MP Label Map messages for nodes that participate in a P2MP LSP.  An LSR should apply those procedures that apply to it, based on its role in the P2MP LSP.

For the approach described here we use downstream assigned labels.  On Ethernet networks this may be less optimal, see Section 6.
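The forwarding-state notation above (item 5) corresponds to a simple replication entry: an incoming label mapped to a set of (outgoing interface, outgoing label) branches.  The Python sketch below is only a toy model of that state; a real LSR programs the equivalent in its data plane.

   # Toy model of the replication state L' -> {<I1, L1>, ..., <In, Ln>}.
   from dataclasses import dataclass, field
   from typing import Dict, Set, Tuple

   @dataclass
   class P2mpForwardingEntry:
       incoming_label: int
       # Each branch is an (outgoing interface, outgoing label) pair.
       branches: Set[Tuple[str, int]] = field(default_factory=set)

       def forward(self, packet: bytes) -> Dict[str, Tuple[int, bytes]]:
           """Replicate a packet received with incoming_label: one copy per
           branch, each carrying that branch's outgoing label."""
           return {ifname: (label, packet) for ifname, label in self.branches}

Adding a branch when a P2MP Label Map <X, Y, L> is received over interface I then corresponds to entry.branches.add((I, L)), as used in the procedures below.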
2.3.1.1. Determining one's 'upstream LSR'

A node Z that is part of P2MP LSP <X, Y> determines the LDP peer U which lies on the best path from Z to the root node X.  If there is more than one such LDP peer, only one of them is picked.  U is Z's "Upstream LSR" for <X, Y>.

When there are several candidate upstream LSRs, the LSR MAY select one upstream LSR using the following procedure:

1.  The candidate upstream LSRs are numbered from lower to higher IP address.

2.  The following hash is performed: H = (Sum Opaque value) modulo N, where N is the number of candidate upstream LSRs.

3.  The selected upstream LSR U is the LSR that has the number H.

This allows for load balancing of a set of LSPs among a set of candidate upstream LSRs, while ensuring that on a LAN interface a single upstream LSR is selected.
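A minimal Python sketch of this optional selection rule follows.  It reads "Sum Opaque value" as the sum of the opaque value's octets, which is one possible interpretation, and the candidate addresses are purely illustrative.

   # Illustrative upstream LSR selection per the optional hash above.
   from ipaddress import ip_address

   def select_upstream_lsr(candidates: list[str], opaque_value: bytes) -> str:
       """Order candidates from lower to higher IP address, then pick the
       one indexed by H = (sum of opaque value octets) modulo N."""
       ordered = sorted(candidates, key=ip_address)
       h = sum(opaque_value) % len(ordered)
       return ordered[h]

   # Example: two candidate upstream LSRs on a LAN.
   print(select_upstream_lsr(["192.0.2.2", "192.0.2.1"], bytes([0, 0, 0, 7])))

Because a given LSP (same opaque value) always hashes to the same candidate, all downstream nodes on a LAN pick the same upstream LSR for that LSP, while different LSPs can be spread over different candidates.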
2.3.1.2. Leaf Operation

A leaf node Z of P2MP LSP <X, Y> determines its upstream LSR U for <X, Y> as per Section 2.3.1.1, allocates a label L, and sends a P2MP Label Map <X, Y, L> to U.

2.3.1.3. Transit Node operation

Suppose a transit node Z receives a P2MP Label Map <X, Y, L> over interface I.  Z checks whether it already has state for <X, Y>.  If not, Z allocates a label L', and installs state to swap L' with L over interface I.  Z also determines its upstream LSR U for <X, Y> as per Section 2.3.1.1, and sends a P2MP Label Map <X, Y, L'> to U.

If Z already has state for <X, Y>, then Z does not send a Label Map message for P2MP LSP <X, Y>.  All that Z needs to do in this case is update its forwarding state.  Assuming its old forwarding state was L' -> {<I1, L1>, <I2, L2>, ..., <In, Ln>}, its new forwarding state becomes L' -> {<I1, L1>, <I2, L2>, ..., <In, Ln>, <I, L>}.

2.3.1.4. Root Node Operation

Suppose the root node Z receives a P2MP Label Map <X, Y, L> over interface I.  Z checks whether it already has forwarding state for <X, Y>.  If not, Z creates forwarding state to push label L onto the traffic that Z wants to forward over the P2MP LSP (how this traffic is determined is outside the scope of this document).

If Z already has forwarding state for <X, Y>, then Z adds "push label L, send over interface I" to the nexthop.

2.3.2. Label Withdraw

The following lists procedures for generating and processing P2MP Label Withdraw messages for nodes that participate in a P2MP LSP.  An LSR should apply those procedures that apply to it, based on its role in the P2MP LSP.

2.3.2.1. Leaf Operation

If a leaf node Z discovers (by means outside the scope of this document) that it is no longer a leaf of the P2MP LSP, it SHOULD send a Label Withdraw <X, Y, L> to its upstream LSR U for <X, Y>, where L is the label it had previously advertised to U for <X, Y>.

2.3.2.2. Transit Node Operation

If a transit node Z receives a Label Withdraw message <X, Y, L> from a node W, it deletes label L from its forwarding state, and sends a Label Release message with label L to W.

If deleting L from Z's forwarding state for P2MP LSP <X, Y> results in no state remaining for <X, Y>, then Z propagates the Label Withdraw to its upstream LSR for <X, Y>.

2.3.2.3. Root Node Operation

The procedure when the root node of a P2MP LSP receives a Label Withdraw message is the same as for transit nodes, except that it would not propagate the Label Withdraw upstream (as it has no upstream).

2.3.2.4. Upstream LSR change

If, for a given node Z participating in a P2MP LSP <X, Y>, the upstream LSR changes, say from U to U', then Z MUST update its forwarding state by deleting the state for label L, allocating a new label, L', for <X, Y>, and installing the forwarding state for L'.  In addition Z MUST send a Label Map <X, Y, L'> to U' and send a Label Withdraw <X, Y, L> to U.

3. Shared Trees

The mechanism described above shows how to build a tree with a single root and multiple leaves, i.e., a P2MP LSP.  One can use essentially the same mechanism to build Shared Trees with LDP.  A Shared Tree can be used by a group of routers that want to multicast traffic among themselves, i.e., each node is both a root node (when it sources traffic) and a leaf node (when any other member of the group sources traffic).  A Shared Tree offers similar functionality to an MP2MP LSP, but the underlying multicasting mechanism uses a P2MP LSP.  One example where a Shared Tree is useful is video-conferencing.  Another is Virtual Private LAN Service (VPLS) [7], where for some types of traffic, each device participating in a VPLS must send packets to every other device in that VPLS.

One way to build a Shared Tree is to build an LDP P2MP LSP rooted at a common point, the Shared Root (SR), and whose leaves are all the members of the group.  Each member of the Shared Tree unicasts traffic to the SR (using, for example, the MP2P LSP created by the unicast LDP FEC advertised by the SR); the SR then splices this traffic into the LDP P2MP LSP.  The SR may be (but need not be) a member of the multicast group.

A major advantage of this approach is that no further protocol mechanisms beyond the ones already described are needed to set up a Shared Tree.  Furthermore, a Shared Tree is very efficient in terms of the multicast state in the network, and is reasonably efficient in terms of the bandwidth required to send traffic.

A property of this approach is that a sender will receive its own packets as part of the multicast; thus a sender must be prepared to recognize and discard packets that it itself has sent.  For a number of applications (for example, VPLS), this requirement is easy to meet.  Another consideration is the various techniques that can be used to splice unicast LDP MP2P LSPs to the LDP P2MP LSP; these will be described in a later revision.

4. Setting up MP2MP LSPs with LDP

An MP2MP LSP is much like a P2MP LSP in that it consists of a single root node, zero or more transit nodes and one or more leaf LSRs acting equally as Ingress or Egress LSR.  A leaf node participates in the setup of an MP2MP LSP by establishing both a downstream LSP, which is much like a P2MP LSP from the root, and an upstream LSP which is used to send traffic toward the root and other leaf nodes.  Transit nodes support the setup by propagating the upstream and downstream LSP setup toward the root and installing the necessary MPLS forwarding state.  The transmission of packets from the root node of an MP2MP LSP to the receivers is identical to that for a P2MP LSP.  Traffic from a leaf node follows the upstream LSP toward the root node and branches downward along the downstream LSP as required to reach other leaf nodes.  Mapping traffic to the MP2MP LSP may happen at any leaf node.  How that mapping is established is outside the scope of this document.

Because of how an MP2MP LSP is built, a leaf LSR that is sending packets on the MP2MP LSP does not receive its own packets.  There is also no additional mechanism needed on the root or transit LSR to match upstream traffic to the downstream forwarding state.  Packets that are forwarded over an MP2MP LSP will not traverse a link more than once, with the exception of LAN links, which are discussed in Section 4.2.1.

For the setup of an MP2MP LSP with LDP we define two new protocol entities, the MP2MP downstream FEC Element and the MP2MP upstream FEC Element.  Both elements are used in the FEC TLV.  The description of the MP2MP FEC Elements follows.

4.1. The MP2MP downstream and upstream FEC elements.

The structure, encoding and error handling for the MP2MP downstream and upstream FEC Elements are the same as for the P2MP FEC Element described in Section 2.1.  The difference is that two new FEC types are used: MP2MP downstream type (TBD) and MP2MP upstream type (TBD).

If a FEC TLV contains an MP2MP FEC Element, the MP2MP FEC Element MUST be the only FEC Element in the FEC TLV.

An MP2MP Downstream FEC with the Root Node Address octets filled with zeros and Opaque Length set to 0 is a wildcard MP2MP Downstream FEC for all MP2MP Downstream FECs of matching root node address family.  Similarly, an MP2MP Upstream FEC with the Root Node Address octets filled with zeros and Opaque Length set to 0 is a wildcard MP2MP Upstream FEC for all MP2MP Upstream FECs of matching root node address family.
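Since the MP2MP FEC Elements reuse the P2MP wire format with new type codes, and the wildcard form is simply the same element with a zeroed root address and zero Opaque Length, they can be sketched along the same lines as the example following Section 2.1.  The type values and the IPv4 address-family number below are placeholders for illustration only.

   # Illustrative MP2MP downstream/upstream FEC Elements and wildcards.
   import socket
   import struct

   MP2MP_DOWN_TYPE = 0x07   # placeholder: actual value to be assigned by IANA
   MP2MP_UP_TYPE = 0x08     # placeholder: actual value to be assigned by IANA
   AF_IPV4 = 1              # IANA Address Family Numbers value for IPv4

   def encode_mp2mp_fec_element(fec_type: int, root_addr: str,
                                opaque: bytes) -> bytes:
       """Same layout as the P2MP FEC Element; only the type differs."""
       root = socket.inet_aton(root_addr)
       return (struct.pack("!BHB", fec_type, AF_IPV4, len(root)) + root +
               struct.pack("!H", len(opaque)) + opaque)

   # Wildcard form: Root Node Address all zeros and Opaque Length zero.
   wildcard_downstream = encode_mp2mp_fec_element(MP2MP_DOWN_TYPE, "0.0.0.0", b"")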
4.2. Using the MP2MP FEC elements

This section defines the rules for the processing and propagation of the MP2MP FEC Elements.  The following notation is used in the processing rules:

1.  MP2MP downstream LSP <X, Y> (or simply downstream <X, Y>): an MP2MP LSP downstream path with root node address X and opaque value Y.

2.  MP2MP upstream LSP <X, Y, D> (or simply upstream <X, Y, D>): an MP2MP LSP upstream path for downstream node D with root node address X and opaque value Y.

3.  MP2MP downstream FEC Element <X, Y>: a FEC Element with root node address X and opaque value Y used for a downstream MP2MP LSP.

4.  MP2MP upstream FEC Element <X, Y>: a FEC Element with root node address X and opaque value Y used for an upstream MP2MP LSP.

5.  MP2MP Label Map downstream <X, Y, L>: a Label Map message with a FEC TLV with a single MP2MP downstream FEC Element <X, Y> and Label TLV with label L.

6.  MP2MP Label Map upstream <X, Y, Lu>: a Label Map message with a FEC TLV with a single MP2MP upstream FEC Element <X, Y> and Label TLV with label Lu.

7.  MP2MP Label Withdraw downstream <X, Y, L>: a Label Withdraw message with a FEC TLV with a single MP2MP downstream FEC Element <X, Y> and Label TLV with label L.

8.  MP2MP Label Withdraw upstream <X, Y, Lu>: a Label Withdraw message with a FEC TLV with a single MP2MP upstream FEC Element <X, Y> and Label TLV with label Lu.

The procedures below are organized by the role which the node plays in the MP2MP LSP.  Node Z knows that it is a leaf node by a discovery process which is outside the scope of this document.  During the course of the protocol operation, the root node recognizes its role because it owns the root node address.  A transit node is any node (other than the root node) that receives an MP2MP Label Map message (i.e., one that has leaf nodes downstream of it).

Note that a transit node (and indeed the root node) may also be a leaf node, and the root node does not have to be an ingress LSR or a leaf of the MP2MP LSP.

4.2.1. MP2MP Label Map upstream and downstream

The following lists procedures for generating and processing MP2MP Label Map messages for nodes that participate in an MP2MP LSP.  An LSR should apply those procedures that apply to it, based on its role in the MP2MP LSP.

For the approach described here, if there are several receivers for an MP2MP LSP on a LAN, packets are replicated over the LAN.  This may not be optimal; optimizing this case is for further study, see [5].

4.2.1.1. Determining one's upstream MP2MP LSR

Determining the upstream LDP peer U for an MP2MP LSP <X, Y> follows the procedure for a P2MP LSP described in Section 2.3.1.1.

4.2.1.2. Determining one's downstream MP2MP LSR

An LDP peer U which receives an MP2MP Label Map downstream from an LDP peer D will treat D as a downstream MP2MP LSR.

4.2.1.3. MP2MP leaf node operation

A leaf node Z of an MP2MP LSP <X, Y> determines its upstream LSR U for <X, Y> as per Section 4.2.1.1, allocates a label L, and sends an MP2MP Label Map downstream <X, Y, L> to U.

Leaf node Z expects an MP2MP Label Map upstream <X, Y, Lu> from node U in response to the MP2MP Label Map downstream it sent to node U.  Z checks whether it already has forwarding state for upstream <X, Y>.  If not, Z creates forwarding state to push label Lu onto the traffic that Z wants to forward over the MP2MP LSP.  How it determines what traffic to forward on this MP2MP LSP is outside the scope of this document.

4.2.1.4. MP2MP transit node operation

When node Z receives an MP2MP Label Map downstream <X, Y, L> over interface I from node D, it checks whether it has forwarding state for downstream <X, Y>.  If not, Z allocates a label L' and installs downstream forwarding state to swap label L' with label L over interface I.  Z also determines its upstream LSR U for <X, Y> as per Section 4.2.1.1, and sends an MP2MP Label Map downstream <X, Y, L'> to U.

If Z already has forwarding state for downstream <X, Y>, all that Z needs to do is update its forwarding state.  Assuming its old forwarding state was L' -> {<I1, L1>, <I2, L2>, ..., <In, Ln>}, its new forwarding state becomes L' -> {<I1, L1>, <I2, L2>, ..., <In, Ln>, <I, L>}.

Node Z checks whether it already has forwarding state upstream <X, Y, D>.  If it does, then no further action needs to happen.  If it does not, it allocates a label Lu and creates a new label swap for Lu from the label swap(s) of the forwarding state downstream <X, Y>, omitting the swap on interface I for node D.  This allows upstream traffic to follow the MP2MP tree down to other node(s), except the node from which Z received the MP2MP Label Map downstream <X, Y, L>.  Node Z determines the downstream MP2MP LSR as per Section 4.2.1.2, and sends an MP2MP Label Map upstream <X, Y, Lu> to node D.

Transit node Z will also receive an MP2MP Label Map upstream <X, Y, Lu> in response to the MP2MP Label Map downstream sent to node U over interface Iu.  Node Z will add label swap Lu over interface Iu to the forwarding state upstream <X, Y, D>.  This allows packets to go up the tree towards the root node.
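The derivation of upstream state in the transit node procedure above boils down to copying the downstream replication branches while omitting the branch toward the node D from which the downstream Label Map was received.  A small Python sketch, with illustrative interface names:

   # Sketch of the upstream-state derivation in Section 4.2.1.4.
   from typing import Set, Tuple

   Branch = Tuple[str, int]   # (outgoing interface, outgoing label)

   def derive_upstream_branches(downstream_branches: Set[Branch],
                                interface_to_d: str) -> Set[Branch]:
       """Copy the downstream branches for <X, Y>, omitting the swap on the
       interface leading to downstream node D."""
       return {b for b in downstream_branches if b[0] != interface_to_d}

   # Example: downstream state reaches D over "if1" and another leaf over
   # "if2"; traffic from D is replicated only toward the other leaf (plus,
   # once learned, the swap toward the upstream LSR U).
   downstream = {("if1", 30), ("if2", 31)}
   print(derive_upstream_branches(downstream, "if1"))   # {("if2", 31)}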
4.2.1.5. MP2MP root node operation

4.2.1.5.1. Root node is also a leaf

Suppose root/leaf node Z receives an MP2MP Label Map downstream <X, Y, L> over interface I from node D.  Z checks whether it already has forwarding state downstream <X, Y>.  If not, Z creates forwarding state for downstream <X, Y> to push label L on traffic that Z wants to forward down the MP2MP LSP.  How it determines what traffic to forward on this MP2MP LSP is outside the scope of this document.  If Z already has forwarding state for downstream <X, Y>, then Z will add the label push for L over interface I to it.

Node Z checks if it has forwarding state for upstream <X, Y, D>.  If not, Z allocates a label Lu and creates upstream forwarding state to push Lu with the label push(es) from the forwarding state downstream <X, Y>, except the push on interface I for node D.  This allows upstream traffic to go down the MP2MP LSP to other node(s), except the node from which the traffic was received.  Node Z determines the downstream MP2MP LSR as per Section 4.2.1.2, and sends an MP2MP Label Map upstream <X, Y, Lu> to node D.  Since Z is the root of the tree, Z will not send an MP2MP downstream map and will not receive an MP2MP upstream map.

4.2.1.5.2. Root node is not a leaf

Suppose the root node Z receives an MP2MP Label Map downstream <X, Y, L> over interface I from node D.  Z checks whether it already has forwarding state for downstream <X, Y>.  If not, Z creates downstream forwarding state and installs an outgoing label L over interface I.  If Z already has forwarding state for downstream <X, Y>, then Z will add label L over interface I to the existing state.

Node Z checks if it has forwarding state for upstream <X, Y, D>.  If not, Z allocates a label Lu and creates forwarding state to swap Lu with the label swap(s) from the forwarding state downstream <X, Y>, except the swap for node D.  This allows upstream traffic to go down the MP2MP LSP to other node(s), except the node it was received from.  Root node Z determines the downstream MP2MP LSR D as per Section 4.2.1.2, and sends an MP2MP Label Map upstream <X, Y, Lu> to it.  Since Z is the root of the tree, Z will not send an MP2MP downstream map and will not receive an MP2MP upstream map.

4.2.2. MP2MP Label Withdraw

The following lists procedures for generating and processing MP2MP Label Withdraw messages for nodes that participate in an MP2MP LSP.  An LSR should apply those procedures that apply to it, based on its role in the MP2MP LSP.

4.2.2.1. MP2MP leaf operation

If a leaf node Z discovers (by means outside the scope of this document) that it is no longer a leaf of the MP2MP LSP, it SHOULD send a downstream Label Withdraw <X, Y, L> to its upstream LSR U for <X, Y>, where L is the label it had previously advertised to U for <X, Y>.

Leaf node Z expects the upstream router U to respond by sending a downstream label release for L and an upstream Label Withdraw <X, Y, Lu> to remove Lu from the upstream state.  Node Z will remove label Lu from its upstream state and send a label release message with label Lu to U.

4.2.2.2. MP2MP transit node operation

If a transit node Z receives a downstream Label Withdraw message <X, Y, L> from node D, it deletes label L from its forwarding state downstream <X, Y> and from all its upstream states for <X, Y>.  Node Z sends a label release message with label L to D.  Since node D is no longer part of the downstream forwarding state, Z cleans up the forwarding state upstream <X, Y, D> and sends an upstream Label Withdraw <X, Y, Lu> to D.

If deleting L from Z's forwarding state for downstream <X, Y> results in no state remaining for <X, Y>, then Z propagates the Label Withdraw to its upstream node U for <X, Y>.
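The transit node reaction to a downstream Label Withdraw can be summarized as: drop D's branch, acknowledge with a Label Release, tear down the upstream state built for D, and, if nothing is left, propagate the withdraw toward the root.  The following self-contained toy walks through that sequence; the dictionaries and print statements stand in for real state and LDP messages and are not an actual LDP API.

   # Toy walk-through of Section 4.2.2.2 (names are illustrative only).
   def handle_downstream_withdraw(downstream, upstream, fec, node_d, label_l):
       """downstream: {fec: {node: label}}; upstream: {(fec, node): label_lu}."""
       # Delete D's label from the downstream state and acknowledge it.
       downstream[fec].pop(node_d, None)
       print(f"send Label Release <{fec}, {label_l}> to {node_d}")

       # D no longer needs an upstream path: withdraw Lu and drop that state.
       label_lu = upstream.pop((fec, node_d), None)
       if label_lu is not None:
           print(f"send upstream Label Withdraw <{fec}, {label_lu}> to {node_d}")

       # With no downstream state left, propagate the withdraw toward the root.
       if not downstream[fec]:
           print(f"propagate downstream Label Withdraw for {fec} to upstream LSR")

   fec = ("192.0.2.1", 7)                    # (root address, opaque value)
   downstream = {fec: {"D": 30}}
   upstream = {(fec, "D"): 40}
   handle_downstream_withdraw(downstream, upstream, fec, "D", 30)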
4.2.2.3. MP2MP root node operation

The procedure when the root node of an MP2MP LSP receives a label withdraw message is the same as for transit nodes, except that the root node would not propagate the Label Withdraw upstream (as it has no upstream).

4.2.2.4. MP2MP Upstream LSR change

The procedure for changing the upstream LSR is the same as documented in Section 2.3.2.4, except it is applied to MP2MP FECs, using the procedures described in Section 4.2.1 through Section 4.2.2.3.

5. mLDP wildcard FECs

The P2MP, MP2MP Downstream and MP2MP Upstream FECs each have a Wildcard form; these are the mLDP Wildcard FECs.  P2MP and MP2MP Downstream Wildcard FECs may appear only in Label Request, Label Withdraw and Label Release messages.  MP2MP Upstream Wildcard FECs may appear only in Label Withdraw and Label Release messages.  A Label TLV MUST NOT be present in messages containing an mLDP Wildcard FEC.

5.1. Label Request Message

Use of a Label Request Message is defined only for the Wildcard versions of the P2MP and MP2MP Downstream FECs.  An LSR sends a Label Request Message containing an mLDP Wildcard FEC to request the readvertisement of all FECs of the specified address family.  The procedures defined above for the various mLDP FEC types are then reapplied individually to any received Label Mapping advertisements.

5.2. Label Withdraw Message

An LSR sends a Label Withdraw Message containing an mLDP Wildcard FEC when it wants to withdraw all the P2MP, MP2MP Upstream or MP2MP Downstream FECs it previously advertised.  The same procedures defined for the non-Wildcard case apply, except that instead of sending individual Label Release messages, only a single mLDP Wildcard Label Release message is sent to acknowledge completion of an mLDP Wildcard withdraw.

5.3. Label Release Message

An LSR sends an mLDP Wildcard Label Release Message to acknowledge the completion of processing for an mLDP Wildcard withdraw.  An LSR may also send an unsolicited Wildcard Label Release to indicate to the LDP neighbor that all previously sent label mappings are to be released.  An LSR receiving a Label Release Message for a Wildcard FEC MUST release all labels it assigned to this LSR for the given FEC type and remove them from forwarding use.
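The rules above amount to a small table of which messages may carry each wildcard FEC type, plus the constraint that a wildcard FEC never carries a Label TLV.  A sketch of a validity check, with illustrative names:

   # Which LDP messages may carry each mLDP Wildcard FEC (Section 5).
   WILDCARD_USAGE = {
       "P2MP":             {"Label Request", "Label Withdraw", "Label Release"},
       "MP2MP Downstream": {"Label Request", "Label Withdraw", "Label Release"},
       "MP2MP Upstream":   {"Label Withdraw", "Label Release"},
   }

   def wildcard_fec_allowed(fec_type: str, message: str, has_label_tlv: bool) -> bool:
       """A wildcard FEC is valid only in the listed messages and never
       together with a Label TLV."""
       return message in WILDCARD_USAGE.get(fec_type, set()) and not has_label_tlv

   print(wildcard_fec_allowed("MP2MP Upstream", "Label Request", False))  # False
   print(wildcard_fec_allowed("P2MP", "Label Withdraw", False))           # True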
6. Upstream label allocation on Ethernet networks

On Ethernet networks the upstream LSR will send a copy of the packet to each receiver individually.  If there is more than one receiver on the Ethernet, this does not take full advantage of the multi-access capability of the network.  Bandwidth consumption on the Ethernet and replication overhead on the upstream LSR may be optimized by using upstream label allocation [5].  Procedures for distributing upstream labels using LDP are documented in [6].

7. Root node redundancy for MP2MP LSPs

MP2MP leaf nodes must use the same root node to set up the MP2MP LSP.  Otherwise the MP2MP LSP will be partitioned and traffic sourced by some leaves will not be received by others.  Having a single root node for an MP2MP LSP is a single point of failure, which is not desirable.  A fast and efficient mechanism to recover from a root node failure is needed.

7.1. Root node redundancy procedure

It is likely that the root node for an MP2MP LSP is defined statically.  The root node address may be configured on each leaf statically or learned using a dynamic protocol.  How MP2MP leaves learn about the root node is outside the scope of this document.  An MP2MP LSP is uniquely identified by an opaque value and the root node address.  Suppose that for the same opaque value we define two root node addresses and build a tree to each root using that opaque value.  Effectively these will be treated as different MP2MP LSPs in the network.  Since all leaves have set up an MP2MP LSP to each of the root nodes for this opaque value, a sending leaf may pick either of the two MP2MP LSPs to forward a packet on.  The leaf nodes will receive the packet on one of the MP2MP LSPs; the client of the MP2MP LSP does not care on which MP2MP LSP the packet was received, as long as they are for the same opaque value.  The sending leaf MUST forward a packet on only one MP2MP LSP at a given point in time.  The receiving leaves are unable to discard duplicate packets because they accept packets on both LSPs.  Using these two MP2MP LSPs, redundancy can be implemented with the following procedures.

A sending leaf selects a single root node out of the available roots for a given opaque value.  A good strategy MAY be to look at the unicast routing table and select the root that is closest in terms of the unicast metric.  As soon as the address of the active root disappears from the unicast routing table (or becomes less attractive) due to a root node or link failure, a new best root address can be selected and forwarding switched to it directly.  If multiple root nodes have the same unicast metric, the highest root node address MAY be selected, or per-session load balancing MAY be done over the root nodes.

All leaves participating in an MP2MP LSP MUST join all the available root nodes for a given opaque value.  Since a sending leaf may pick any of the MP2MP LSPs, every leaf must be prepared to receive on any of them.

The advantage of pre-building multiple MP2MP LSPs for a single opaque value is that convergence after a root node failure is as fast as the unicast routing protocol is able to notify us.  There is no need for an additional protocol to advertise to the leaf nodes which root node is the active root.  The root selection is a local leaf policy that does not need to be coordinated with other leaves.  The disadvantage is that more label resources are used, depending on how many root nodes are defined.
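A small Python sketch of the leaf-local root selection policy described above; the metric lookup is an illustrative stand-in for the unicast routing table, and the tie-break by highest address is the optional rule mentioned in the text.

   # Illustrative root selection for MP2MP root node redundancy (Section 7.1).
   from ipaddress import ip_address

   def select_active_root(candidate_roots: list[str], metric_of) -> str:
       """candidate_roots: the roots configured for one opaque value.
       metric_of(root): unicast metric to the root, or None if unreachable."""
       reachable = [r for r in candidate_roots if metric_of(r) is not None]
       if not reachable:
           raise ValueError("no reachable root for this opaque value")
       # Lowest metric wins; among equal metrics the highest address MAY win.
       return min(reachable, key=lambda r: (metric_of(r), -int(ip_address(r))))

   # Example with a static table standing in for the unicast routing table.
   metrics = {"192.0.2.1": 20, "192.0.2.2": 20}
   print(select_active_root(["192.0.2.1", "192.0.2.2"], metrics.get))  # 192.0.2.2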
8. Make before break

An upstream LSR is chosen based on the best path to reach the root of the MP LSP.  When the best path to reach the root changes, the node needs to choose a new upstream LSR.  Section 2.3.2.4 and Section 4.2.2.4 describe these procedures.  When the best path to the root changes, the LSP may be broken and packet forwarding interrupted; in that case the node needs to converge to the new upstream LSR as soon as possible.  There are also scenarios where the best path changes but the LSP is still forwarding packets; this happens when links come up or routing metrics change.  In that case the node would like to build the new LSP before it breaks the old LSP, to minimize the traffic interruption.  The approach described below deals with both scenarios and does not require LDP to know which of the events above caused the upstream router to change.  The approach below is an optional extension to the MP LSP building procedures described in this draft.

8.1. Protocol event

An approach is to use additional signaling in LDP.  Suppose a downstream LSR-D is changing to a new upstream LSR-U for FEC-A; this LSR-U may already be forwarding packets for FEC-A.  Based on the existence of state for FEC-A, LSR-U will send a notification to LSR-D to initiate the switchover.  The assumption is that if the upstream LSR-U has state for FEC-A and has received a notification from its own upstream router, then it is forwarding packets for FEC-A and can send a notification back to initiate the switchover.  In effect, there is an explicit notification to tell an LSR that it has become part of the tree identified by FEC-A.  LSR-D can be in three different states:

1.  There is no state for FEC-A.

2.  State for FEC-A has just been created and is waiting for a notification.

3.  State for FEC-A exists and a notification has been received.

Suppose LSR-D sends a label mapping for FEC-A to LSR-U.  LSR-U must only reply with a notification to LSR-D if it is in state 3 as described above.  If LSR-U is in state 1 or 2, it should remember that it has received a label mapping from LSR-D which is waiting for a notification.  As soon as LSR-U receives a notification from its upstream LSR, it can move to state 3 and trigger notifications to the downstream LSRs that requested them.  More details will be added in the next revision of the draft.
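The three states and the deferred notifications can be modeled compactly.  The sketch below is self-contained Python in which the enum values, class and callback names are illustrative, and the notify callback stands in for an LDP notification message.

   # Toy model of the three-state switchover logic in Section 8.1.
   from enum import Enum, auto

   class FecState(Enum):
       NO_STATE = auto()          # state 1: no state for FEC-A
       AWAITING_NOTIFY = auto()   # state 2: state created, awaiting notification
       NOTIFIED = auto()          # state 3: state exists, notification received

   class McastFec:
       def __init__(self):
           self.state = FecState.NO_STATE
           self.pending_downstream = set()   # LSRs waiting for our notification

       def on_label_mapping(self, downstream_lsr, notify):
           """Downstream LSR-D advertised a label mapping for this FEC."""
           if self.state is FecState.NOTIFIED:
               notify(downstream_lsr)        # D may switch over immediately
           else:
               self.state = FecState.AWAITING_NOTIFY
               self.pending_downstream.add(downstream_lsr)

       def on_upstream_notification(self, notify):
           """Our upstream confirmed we are part of the tree for this FEC."""
           self.state = FecState.NOTIFIED
           for lsr in self.pending_downstream:
               notify(lsr)                   # trigger the switchover downstream
           self.pending_downstream.clear()

   fec_a = McastFec()
   fec_a.on_label_mapping("LSR-D", lambda lsr: print("notify", lsr))
   fec_a.on_upstream_notification(lambda lsr: print("notify", lsr))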
9. Security Considerations

The same security considerations apply as for the base LDP specification, as described in [1].

10. IANA considerations

This document creates a new name space (the LDP MP Opaque Value Element type) that is to be managed by IANA.  Also, this document requires the allocation of three new LDP FEC Element types: the P2MP type, the MP2MP-up type and the MP2MP-down type.

11. Acknowledgments

The authors would like to thank the following individuals for their review and contribution: Nischal Sheth, Yakov Rekhter, Rahul Aggarwal, Arjen Boers, Eric Rosen, Nidhi Bhaskar, Toerless Eckert and George Swallow.

12. Contributing authors

Below is a list of the contributing authors in alphabetical order:

   Shane Amante
   Level 3 Communications, LLC
   1025 Eldorado Blvd
   Broomfield, CO 80021
   US
   Email: Shane.Amante@Level3.com

   Luyuan Fang
   AT&T
   200 Laurel Avenue, Room C2-3B35
   Middletown, NJ 07748
   US
   Email: luyuanfang@att.com

   Hitoshi Fukuda
   NTT Communications Corporation
   1-1-6, Uchisaiwai-cho, Chiyoda-ku
   Tokyo 100-8019,
   Japan
   Email: hitoshi.fukuda@ntt.com

   Yuji Kamite
   NTT Communications Corporation
   Tokyo Opera City Tower
   3-20-2 Nishi Shinjuku, Shinjuku-ku,
   Tokyo 163-1421,
   Japan
   Email: y.kamite@ntt.com

   Kireeti Kompella
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA 94089
   US
   Email: kireeti@juniper.net

   Ina Minei
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA 94089
   US
   Email: ina@juniper.net

   Jean-Louis Le Roux
   France Telecom
   2, avenue Pierre-Marzin
   Lannion, Cedex 22307
   France
   Email: jeanlouis.leroux@francetelecom.com

   Bob Thomas
   Cisco Systems, Inc.
   300 Beaver Brook Road
   Boxborough, MA, 01719
   E-mail: rhthomas@cisco.com

   Lei Wang
   Telenor
   Snaroyveien 30
   Fornebu 1331
   Norway
   Email: lei.wang@telenor.com

   IJsbrand Wijnands
   Cisco Systems, Inc.
   De kleetlaan 6a
   1831 Diegem
   Belgium
   E-mail: ice@cisco.com

13. References
13.1. Normative References

   [1]  Andersson, L., Doolan, P., Feldman, N., Fredette, A., and B. Thomas, "LDP Specification", RFC 3036, January 2001.

   [2]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

   [3]  Reynolds, J. and J. Postel, "Assigned Numbers", RFC 1700, October 1994.

   [4]  Roux, J., "Requirements for point-to-multipoint extensions to the Label Distribution Protocol", draft-leroux-mpls-mp-ldp-reqs-01 (work in progress), July 2005.

   [5]  Aggarwal, R., "MPLS Upstream Label Assignment and Context Specific Label Space", draft-raggarwa-mpls-upstream-label-01 (work in progress), October 2005.

   [6]  Aggarwal, R. and J. Roux, "MPLS Upstream Label Assignment for RSVP-TE and LDP", draft-raggarwa-mpls-rsvp-ldp-upstream-00 (work in progress), July 2005.

13.2. Informative References

   [7]  Andersson, L. and E. Rosen, "Framework for Layer 2 Virtual Private Networks (L2VPNs)", draft-ietf-l2vpn-l2-framework-05 (work in progress), June 2004.

   [8]  Aggarwal, R., "Extensions to RSVP-TE for Point to Multipoint TE LSPs", draft-ietf-mpls-rsvp-te-p2mp-02 (work in progress), July 2005.

   [9]  Rosen, E. and R. Aggarwal, "Multicast in MPLS/BGP IP VPNs", draft-ietf-l3vpn-2547bis-mcast-00 (work in progress), June 2005.

Authors' Addresses

   Ina Minei
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA 94089
   US
   Email: ina@juniper.net

   Kireeti Kompella
   Juniper Networks
   1194 N. Mathilda Ave.
   Sunnyvale, CA 94089
   US
   Email: kireeti@juniper.net

   IJsbrand Wijnands
   Cisco Systems, Inc.
   De kleetlaan 6a
   Diegem 1831
   Belgium
   Email: ice@cisco.com

   Bob Thomas
   Cisco Systems, Inc.
   300 Beaver Brook Road
   Boxborough 01719
   US
   Email: rhthomas@cisco.com

Intellectual Property Statement

The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights.  Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.

Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.

The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard.  Please address the information to the IETF at ietf-ipr@ietf.org.
Disclaimer of Validity

This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Copyright Statement

Copyright (C) The Internet Society (2006).  This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.

Acknowledgment

Funding for the RFC Editor function is currently provided by the Internet Society.