L3VPN Working Group                                      Maria Napierala
Internet Draft                                                      AT&T
Intended Status: Proposed Standard
Expires: October 10, 2013                                  Eric C. Rosen
                                                        IJsbrand Wijnands
                                                      Cisco Systems, Inc.

                                                          April 10, 2013

  A Simple Method for Segmenting Multicast Tunnels for Multicast VPNs

                  draft-rosen-l3vpn-mvpn-segments-05.txt

Abstract

To provide Multicast VPN (MVPN) service, Service Providers (SPs) need to instantiate multicast tunnels (known as "P-tunnels") that enable the Provider Edge (PE) routers of a given VPN to transmit multicast packets to each other. Some SPs organize their networks in a hierarchical manner, with the PE routers in "edge areas", and the edge areas connected to each other via a "core area". A P-tunnel that connects PE routers in different edge areas can be thought of as having three segments: a segment through one edge area, a segment through the core area, and a segment through the second edge area. It is desirable to preserve the independence of the core area by allowing it to use a different tunneling technology than that used in the edge areas. However, it is not desirable for the core area Border Routers (BRs) to participate in the MVPN-specific signaling, or even to have any knowledge of which MVPNs are in the edge areas that attach to it. This document specifies a simple method for segmenting MVPN P-tunnels at the BRs, subject to these constraints.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Copyright and License Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1        Introduction
   1.1      MVPN through Core and Edge Areas
   1.2      Terminology
   2        Procedures
   2.1      Choosing the Upstream BR
   2.2      When the P-tunnel uses mLDP in the Edge Areas
   2.3      When the P-tunnel uses PIM in the Edge Areas
   2.3.1    Source-Specific Trees
   2.3.2    Shared Trees when the BR is not the RP
   2.3.3    Shared Trees when each BR is an RP
   3        Aggregation Strategies
   4        Preventing Aggregation
   4.1      mLDP P-Tunnels
   4.2      PIM P-Tunnels
   5        IANA Considerations
   6        Security Considerations
   7        Acknowledgments
   8        Authors' Addresses
   9        Normative References
   10       Informational References

1. Introduction

1.1. MVPN through Core and Edge Areas

Consider a Service Provider (SP) network that consists of a number of "edge areas", along with a "core area". Each edge area contains some number of Provider Edge (PE) routers. At the boundary of the "core area" are "Border Routers" (BRs). Each BR has a number of "edge-facing" interfaces that lead into edge areas, and a number of "core-facing" interfaces that lead to the core. Any data packet that needs to travel from one edge area to another must traverse the core (going through at least one BR) to do so.
We assume that some set of PEs are offering Multicast Virtual Private Network (MVPN) service according to [MVPN]. This requires the PEs of a given VPN to create one or more multicast tunnels through the SP network ("P-tunnels"). Each such tunnel will have a single "root PE" and a number of "leaf PEs". Any P-tunnel that has a leaf PE that is in a different edge area than the root PE will have to traverse the core area, passing through one or more BRs.

When a P-tunnel traverses one or more BRs, one of them is the ingress BR and one is the egress BR. The procedures of this document are applicable whenever the egress BR for a particular P-tunnel can determine the identity of the ingress BR for that P-tunnel. This will be the case if (though not necessarily only if) at least one of the following two conditions holds:

 - The BRs use BGP to distribute to each other the routes to the PE routers. (We do not assume that the core area is in a different Autonomous System (AS) than the edge areas; this may or may not be the case.)

 - The BRs are fully meshed by a set of point-to-point RSVP-TE tunnels.

That is, the procedures of this document are applicable whenever the procedures of [TMLDP] section 1.3 ("Targeted mLDP and the Upstream LSR") can be applied. Even if neither of the two conditions above holds, there may be other methods that an egress BR can use to determine the identity of the ingress BR. This document does not place any restrictions on the method used.

There are a number of different multicast tunneling technologies that the PEs can use for setting up the P-tunnels. The tunneling technology need not be the same in the core area as in the edge area, and the tunneling technology need not be the same in both edge areas. In this document, we focus on two particular edge area technologies: PIM [PIM-SM] and Multipoint LDP (mLDP) [mLDP]. Generalization to the use of other P-tunnel technologies in the edge areas is straightforward. It is even possible to use unicast replication, rather than a true multicast tunneling technology. Furthermore, if the core area does use multicast tunnels, it can aggregate a number of P-tunnels from the edge areas into a single tunnel through the core. In this case, a set of P-tunnels is aggregated upon entry into the core, and deaggregated upon exit from the core.

Let us consider the following topology:

   PE1 --- Edge Area 1 --- BR1 ---------- BR3 --- Edge Area 3 --- PE3
               |                   |
   PE2 --------|                   |
                     Core Area
                                   |
                                   |----- BR4 --- Edge Area 4 --- PE4

Suppose there is some MVPN to which PE1, ..., PE4 are attached. Suppose that there are two P-tunnels for that MVPN. Tunnel T1 is rooted at PE1 and has PE3 as a leaf. Tunnel T2 is rooted at PE2 and has both PE3 and PE4 as leaf nodes.

If no segmentation is done, the creation of tunnel T1 begins at PE3 with PIM or mLDP signaling. This signaling passes through Edge Area 3 to BR3, and passes through the core to BR1. "Passing through the core" means passing through each intermediate core router on the path from BR3 to BR1. The signaling then passes through Edge Area 1 to PE1.

If segmentation is done, the creation of tunnel T1 begins in exactly the same way, at PE3, and passes through Edge Area 3 to BR3 in exactly the same manner. However, the signaling through the core is different. No signaling passes through the intermediate nodes in the core area. Rather, BR3 does mLDP signaling over a Targeted LDP Session to BR1 [TMLDP]. BR3 passes enough information in this Targeted mLDP signaling to enable BR1 to reinitiate the necessary PIM or mLDP signaling in Edge Area 1. Within the core area, the Border Routers decide what sort of tunneling technology to use, and the core tunneling technology is transparent to the PEs and to any other systems in the edge areas.
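As a non-normative illustration of the information that BR3 must pass to BR1, the following Python sketch models the mLDP P2MP FEC (root address plus opaque value) being re-advertised, unchanged, over the targeted session; the class and function names are hypothetical and are not defined by this document or by [TMLDP].

   # Non-normative sketch.  The FEC identifying the P-tunnel (root address
   # plus opaque value) is what BR3 re-advertises to BR1 over the targeted
   # session; only the label is re-allocated, since labels are local to a
   # session.  All names here are illustrative.

   from dataclasses import dataclass


   @dataclass(frozen=True)
   class P2mpFec:
       root_address: str    # the P-tunnel's root PE, e.g. "PE1"
       opaque_value: bytes  # value chosen by the root; identifies the tunnel


   @dataclass
   class LabelMapping:
       fec: P2mpFec
       label: int           # label allocated by the advertising LSR


   def reoriginate_over_targeted_session(edge_mapping, allocate_label):
       """BR3 keeps the FEC unchanged so that BR1 can re-create the
       edge-area signaling toward PE1; a fresh label is allocated for the
       targeted session to BR1."""
       return LabelMapping(fec=edge_mapping.fec, label=allocate_label())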
The BRs may, for example, decide to use Unicast Replication. In this case, when BR1 receives, from Edge Area 1, a packet traveling on the Area 1 segment of tunnel T1, it encapsulates it and unicasts it to BR3. The encapsulation carries enough information to inform BR3 that the packet needs to be put on the Area 3 segment of tunnel T1. If BR1 receives, from Edge Area 1, a packet traveling on the Area 1 segment of T2, BR1 makes two copies of the packet, encapsulates each, and then unicasts one copy to BR3 and one copy to BR4. Based on information carried in the encapsulation, BR3 and BR4 each know that the packet must be forwarded on the segment of T2 in their respective Edge Areas.

Alternatively, the BRs may decide to aggregate T1 and T2 into a single core multicast tunnel. Suppose, for example, that there is an RSVP-TE P2MP LSP whose head-end is BR1, and that has BR3 and BR4 as leaf nodes. Then when BR1 receives, from Edge Area 1, a packet traveling on tunnel T1, it encapsulates it and sends it through this core tunnel. When BR1 receives, from Edge Area 1, a packet traveling on P-tunnel T2, it again encapsulates it and sends it through this same core tunnel. Both packets will be received by BR3 and BR4. The packet traveling on P-tunnel T2 will be forwarded by BR3 and BR4 on the Edge Area 3 and Edge Area 4 segments of T2 respectively. The packet traveling on P-tunnel T1 will be forwarded by BR3 on the Edge Area 3 segment of T1. However, BR4 will drop that packet, because there is no Edge Area 4 segment of T1. Naturally, the encapsulation used by the head-end of the core tunnel must enable the leaf nodes of the core tunnel to determine whether a given packet is traveling on a P-tunnel that passes through that leaf node, and if so, which P-tunnel.
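The following non-normative Python sketch illustrates this per-packet decision at a leaf BR: the demultiplexing key carried by the core encapsulation (shown here simply as a P-tunnel identifier) selects the local edge-area segment, and a packet belonging to a P-tunnel that has no segment in the BR's edge area is dropped. The class and attribute names are illustrative only.

   # Non-normative sketch; "ptunnel_id" stands in for whatever
   # demultiplexing information the core encapsulation actually carries
   # (for example, an upstream-assigned label).

   class LeafBr:
       def __init__(self, name):
           self.name = name
           # (core tunnel, P-tunnel id) -> local edge-area segment
           self.segments = {}

       def add_segment(self, core_tunnel, ptunnel_id, edge_segment):
           self.segments[(core_tunnel, ptunnel_id)] = edge_segment

       def receive_from_core(self, core_tunnel, ptunnel_id, packet):
           segment = self.segments.get((core_tunnel, ptunnel_id))
           if segment is None:
               return None              # e.g., BR4 dropping a packet of T1
           return (segment, packet)     # forward on the edge-area segment


   # Matching the example above: BR4 has a segment of T2 but none of T1.
   br4 = LeafBr("BR4")
   br4.add_segment("core-tunnel-from-BR1", "T2", "EdgeArea4-segment-of-T2")
   assert br4.receive_from_core("core-tunnel-from-BR1", "T1", b"pkt") is None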
It is worth noting that in the following topology:

   PE1 --- Edge Area 1 --- BR1 --------- BR3 --- Edge Area 3 --- PE3
            |     |               |
   PE2 -----|     |               |
                  |--- BR2 --- Core
                                  |
                                  |---- BR4 --- Edge Area 4 --- PE4
                                  |
                                  |---- BR5 --- Edge Area 5 --- PE5

it is possible to combine unicast replication by the PEs with unicast replication by the BRs. For instance, if PE2 has a multicast packet to be sent to PE4 and PE5, it is possible for PE2 to unicast one copy of the packet to BR2 and then for BR2 to unicast a copy to BR4 and a copy to BR5. This can be done if the PEs have Targeted LDP sessions to the BRs, and the BRs have Targeted LDP sessions to each other. This is a straightforward generalization of the procedures described in section 2.1 below.

1.2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document, when and only when appearing in all capital letters, are to be interpreted as described in [RFC2119].

2. Procedures

The Border Routers of the core area must know that they are core area Border Routers. This is known either by provisioning, or by some other method that is outside the scope of this document.

2.1. Choosing the Upstream BR

It is assumed that the Border Routers have routes to the PE routers. It is also assumed that the procedures of [TMLDP] section 1.3 ("Targeted mLDP and the Upstream LSR") can be applied. Suppose, for example, that the path from a given PE, PE2, to a second PE, PE1, enters the core through BR1 and exits the core through BR2. Then BR1 is to consider BR2 to be the "upstream BR" on its path to PE1. The procedures used to select the "upstream BR" for a particular PE may be one of the following:

 - If BR1 finds that its route to PE1 is a BGP-distributed route whose next hop is another BR, say BR2, then BR2 may be considered to be the "upstream BR".

 - If the core contains a full mesh of RSVP-TE P2P tunnels among the BRs, and if BR1's "next hop interface" to PE1 is a tunnel leading to BR2, then BR2 may be considered to be the "upstream BR".

Other methods of finding the "upstream BR" MAY be used; the decision to use a particular method belongs to the SP. However, if a given BR cannot determine the core area's exit point on the path to a given PE, then the procedures of this document are not applicable.
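As a non-normative illustration, the Python sketch below applies the two selection methods listed above in turn; the lookup tables (bgp_routes, next_hop_interface, te_tunnel_tail) are hypothetical abstractions of a BR's routing state and are not defined by this document.

   # Non-normative sketch of upstream BR selection at a border router.

   def upstream_br_for(pe, bgp_routes, border_routers,
                       next_hop_interface, te_tunnel_tail):
       """bgp_routes: PE address -> BGP next hop
       next_hop_interface: PE address -> outgoing interface
       te_tunnel_tail: interface -> tail end of an RSVP-TE P2P tunnel
       border_routers: the set of routers known to be core-area BRs"""

       # Method 1: the route to the PE is a BGP-distributed route whose
       # next hop is another BR.
       next_hop = bgp_routes.get(pe)
       if next_hop in border_routers:
           return next_hop

       # Method 2: the next-hop interface toward the PE is an RSVP-TE P2P
       # tunnel whose tail end is another BR.
       interface = next_hop_interface.get(pe)
       tail = te_tunnel_tail.get(interface)
       if tail in border_routers:
           return tail

       # No upstream BR can be determined; the procedures of this document
       # are then not applicable for this PE.
       return None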
2.2. When the P-tunnel uses mLDP in the Edge Areas

The creation of an mLDP P-tunnel begins at a PE router, call it PE2, that is one of the egress nodes of the P-tunnel. In order to use mLDP to set up the P-tunnel, PE2 must specify the PE that is the root of the P-tunnel. We will call the root PE1. If it is necessary for the given P-tunnel to pass through a particular BR, say BR1, then BR1 will receive an LDP Label Mapping Message for that P-tunnel. This message will be received on an LDP session over one of BR1's edge-facing interfaces. This message specifies an identifier for the P-tunnel and also specifies the P-tunnel's root node, PE1.

BR1 then looks up its path to PE1. If BR1's path to PE1 is via one of BR1's edge-facing interfaces, the procedures of this document do not apply. Otherwise, BR1 must find the "upstream BR".

If BR2 is selected as the upstream BR towards PE1, then BR1 sets up a Targeted LDP session to BR2. (If such a Targeted LDP session to BR2 already exists, a new session is not set up; rather the existing one is used.) BR1 then executes the procedures specified in [TMLDP].

In order to execute the procedures specified in [TMLDP], a downstream BR must know whether the technique for carrying a P-tunnel in the core is unicast replication or whether it is multicast tunneling. This is known either by provisioning, or by some method that is outside the scope of this document. If multicast tunneling is used in the core, the upstream BR must know what kind of tunnel is to be used. If aggregation of multiple P-tunnels into a single core tunnel is used, all the BRs must support MPLS upstream-assigned labels.

In the above procedure, PE2 does not necessarily have any a priori knowledge of the identity of the BR. In networks where PE2 does have such a priori knowledge, PE2 could send an mLDP Label Mapping message containing a "Recursive Forwarding Equivalence Class (FEC)", as described in [LDP-RECURS]. The recursive FEC would explicitly identify PE1 as the root of the "outer" tree and BR1 as the root of the "inner" tree. If BR1 receives such a message, it follows the procedures of [LDP-RECURS] to obtain a new root. If its route to the new root is a BGP route whose next hop is another BR, the procedures of [TMLDP] are followed.

2.3. When the P-tunnel uses PIM in the Edge Areas

The creation of a PIM P-tunnel begins when a PE router sends a PIM Join message specifying either (*,G) or (S,G). We first consider the case where the P-tunnel is a source-specific multicast tree. Then we consider two different options that may be used when the P-tunnel is a PIM shared tree. The first option may be used by a BR which is not itself the Rendezvous Point (RP) for the shared tree. The second option is useful if all the BRs function as RPs for the shared trees within their attached edge areas.

2.3.1. Source-Specific Trees

In this case, a BR, say BR1, will receive a Join(S,G) over one of its edge-facing interfaces. BR1 looks up its path to S, and determines the "upstream BR" on that path. BR1 sets up a Targeted mLDP session to the "upstream BR" (or uses an existing Targeted mLDP session to it). BR1 then uses the encoding specified in [LDP-INBAND] to derive an LDP multipoint FEC from the (S,G). From this point on, the procedures of [TMLDP] are used, precisely as described in the previous section.

2.3.2. Shared Trees when the BR is not the RP

If the PIM P-tunnel is a shared tree (*,G), and if the PEs are configured so that they never switch to source-specific trees for G, then a similar procedure can be used. BR1 looks up its route to the RP [PIM-SM] of the shared tree, and determines the "upstream BR" for that P-tunnel. BR1 uses a Targeted mLDP session to the upstream BR, and uses the encoding specified in [LDP-INBAND-SHARED] to derive an LDP multipoint FEC from the (*,G). Then the procedures of [TMLDP] are used.

2.3.3. Shared Trees when each BR is an RP

If G is a non-SSM PIM group address, and if there are sources for G in a particular edge area, it is possible to configure the BRs of that area to function as RPs for G. However, each such BR then needs to discover all the other BRs that are also functioning as RPs for G. This can be done by having the BRs originate and receive "Source Active BGP A-D routes". The procedures for generating and receiving these routes, and the mLDP procedures for setting up P2MP LSPs based on these routes, are described in [LDP-INBAND-SHARED]. However, the LDP signaling described therein would take place over Targeted LDP sessions.

Each such Source Active A-D route refers to a particular group G. It is RECOMMENDED that each such Source Active A-D route carry an IPv4 Address specific Route Target or an IPv6 Address specific Route Target (as appropriate) [RFC4360, RFC5701], with the address G in the "global administrator" field. This allows BRs that have no interest in group G to filter out the Source Active A-D routes that are about G.

3. Aggregation Strategies

If it is desired to aggregate multiple P-tunnels into a single core area multicast tunnel, the choice of which P-tunnels to map into which core area multicast tunnels is made by the upstream Border Router. The procedures for performing the aggregation are described in [TMLDP].

When the P-tunnels are MP-LSPs, it is RECOMMENDED that an upstream Border Router be able to aggregate all P-tunnels whose FECs begin with the same bit string (i.e., aggregate the set of FECs that are identical under a mask, where the mask consists of a sequence of ones followed by a sequence of zeroes). If the PEs have been configured to use MP FECs with type 2 opaque values (as defined in [MLDP-OV]), this technique allows all MP-LSPs of a given MVPN to be aggregated together.
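As a non-normative illustration of the recommendation above, the following Python sketch groups FECs (treated simply as byte strings) that are identical under a mask consisting of a sequence of ones followed by a sequence of zeroes; the function names are illustrative only.

   # Non-normative sketch: all FECs in one returned group agree on their
   # first prefix_len bits and may therefore be mapped into the same core
   # tunnel under the recommended strategy.

   from collections import defaultdict


   def fec_prefix(fec, prefix_len):
       """Return the first prefix_len bits of the FEC (the portion selected
       by a mask of prefix_len ones followed by zeroes)."""
       nbytes, rem = divmod(prefix_len, 8)
       prefix = bytearray(fec[:nbytes + (1 if rem else 0)])
       if rem:
           prefix[-1] &= (0xFF << (8 - rem)) & 0xFF
       return bytes(prefix)


   def group_for_aggregation(fecs, prefix_len):
       groups = defaultdict(list)
       for fec in fecs:
           groups[fec_prefix(fec, prefix_len)].append(fec)
       return dict(groups)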
Selection of an aggregation strategy is outside the scope of this document.

Note that PIM P-Tunnels and mLDP P-Tunnels may be aggregated in the same core tunnel.

4. Preventing Aggregation

It may be desirable to prevent certain P-tunnels from being aggregated into a single core multicast tunnel. This is done by configuration at the PEs. The PEs convey this information to the BRs by including certain TLVs in the mLDP or PIM messages used to set up the P-tunnels.

4.1. mLDP P-Tunnels

When a P-tunnel is instantiated as an MP-LSP, and the PEs of that P-tunnel have been configured to disallow aggregation of that P-tunnel, the PEs indicate this fact in their mLDP signaling. When the PEs send a label mapping message that includes the corresponding FEC element, the PEs will also include an LDP MP Status TLV [mLDP] that carries the "Do Not Aggregate" status code (to be assigned by IANA). This TLV MUST be passed along with the FEC element by any upstream LSR that sends a label mapping message or label request message containing the FEC element. If an mLDP node receives label mapping messages for a given FEC from more than one downstream neighbor, and some of those messages have the "Do Not Aggregate" status code while others do not, the "Do Not Aggregate" status code MUST be passed upstream.

When a BR receives a label mapping message for an MP FEC element and an MP Status TLV containing the "Do Not Aggregate" status code, the BR knows that the MP-LSP corresponding to the FEC element SHOULD NOT be aggregated into a core multicast tunnel.

If some PEs are configured to disallow aggregation for a given P-tunnel, but others are not, the results are unpredictable. For a given P-tunnel, the upstream BR MAY make its decision to aggregate or not based on the first mLDP label mapping message it sees for that P-tunnel.

4.2. PIM P-Tunnels

When a P-tunnel is instantiated as a PIM multicast tree, and the PEs of that P-tunnel have been configured to disallow aggregation of that P-tunnel, the PEs indicate this fact in their PIM signaling. When the PEs send a PIM Join message for the corresponding (S,G) or (*,G), the PEs will include the "Do Not Aggregate" PIM Join Attribute. This is a PIM Join Attribute as specified in [PIM-JA]. The value of the Attr_Type field of this Join Attribute is to be assigned by IANA. The Length field of this Join Attribute is set to 0, and the F bit is set to 1. This attribute MUST be passed upstream in the PIM Join messages for the given (S,G) or (*,G). If a PIM node receives Join(S,G) or Join(*,G) from more than one downstream neighbor, and some of those Joins have the "Do Not Aggregate" Join Attribute while others do not, the attribute MUST be passed upstream. The conflict resolution procedure in [PIM-JA] is not used.
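The merge rule stated in Sections 4.1 and 4.2 is the same for mLDP and PIM: the indication is "sticky", i.e., it is propagated upstream if any downstream neighbor supplied it. The following non-normative Python sketch makes this explicit; the data structures are illustrative only.

   # Non-normative sketch.  downstream_states maps each downstream neighbor
   # that joined a given (S,G), (*,G), or MP FEC to whether its Join or
   # Label Mapping carried the "Do Not Aggregate" indication.

   def pass_do_not_aggregate_upstream(downstream_states):
       return any(downstream_states.values())


   # Example: one downstream neighbor set the indication, another did not;
   # the message sent upstream still carries it.
   assert pass_do_not_aggregate_upstream({"PE-a": True, "PE-b": False})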
When a BR receives a PIM Join message containing the "Do Not Aggregate" Join Attribute, the BR knows that the corresponding multicast distribution tree SHOULD NOT be aggregated into a core multicast tunnel.

If some PEs are configured to disallow aggregation for a given P-tunnel, but others are not, the results are unpredictable. For a given P-tunnel, the upstream BR MAY make its decision to aggregate or not based on the first PIM Join it sees for that P-tunnel.

If a BR uses the procedures of [LDP-INBAND] to map a PIM tree into an MP-LSP, and if the PIM tree has been set up with the "Do Not Aggregate" Join Attribute, the corresponding LDP messages SHOULD carry the "Do Not Aggregate" status code. If a BR using the procedures of [LDP-INBAND] needs to map an MP-LSP to a PIM tree, and the corresponding LDP messages carry the "Do Not Aggregate" status code, the corresponding PIM messages SHOULD carry the "Do Not Aggregate" PIM Join Attribute.

5. IANA Considerations

[mLDP] creates a registry known as "LDP MP Status Value Element Types". This document requests IANA to assign a value from this registry for "Do Not Aggregate".

[PIM-JA] creates a registry known as "PIM Join Attributes Types". This document requests IANA to assign a value from this registry for "Do Not Aggregate".

6. Security Considerations

This document raises no new security considerations beyond those discussed in [LDP], [LDP-UP], and [RFC5331].

7. Acknowledgments

The authors wish to thank Don Heidrich for his contribution to this work. Thanks to Eric Rosenberg for his comments and review.

8. Authors' Addresses

Maria Napierala
AT&T Labs
200 Laurel Avenue, Middletown, NJ 07748
E-mail: mnapierala@att.com

Eric C. Rosen
Cisco Systems, Inc.
1414 Massachusetts Avenue
Boxborough, MA 01719
E-mail: erosen@cisco.com

IJsbrand Wijnands
Cisco Systems, Inc.
De kleetlaan 6a, Diegem 1831
Belgium
E-mail: ice@cisco.com
9. Normative References

[LDP] Loa Andersson, Ina Minei, Bob Thomas (editors), "LDP Specification", RFC 5036, October 2007.

[LDP-INBAND] IJsbrand Wijnands, Toerless Eckert, Maria Napierala, Nicolai Leymann, "Multipoint LDP In-Band Signaling for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths", RFC 6826, January 2013.

[LDP-RECURS] IJsbrand Wijnands, Eric Rosen, Maria Napierala, Nicolai Leymann, "Using Multipoint LDP When the Backbone Has No Route to the Root", RFC 6512, February 2012.

[LDP-UP] Rahul Aggarwal, Jean-Louis Le Roux, "MPLS Upstream Label Assignment for LDP", RFC 6389, November 2011.

[mLDP] IJsbrand Wijnands, Ina Minei, Kireeti Kompella, Bob Thomas, "Label Distribution Protocol Extensions for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths", RFC 6388, November 2011.

[MVPN] Eric Rosen, Rahul Aggarwal (editors), "Multicast in MPLS/BGP IP VPNs", RFC 6513, February 2012.

[PIM-JA] Arjen Boers, IJsbrand Wijnands, Eric Rosen, "The PIM Join Attribute Format", RFC 5384, November 2008.

[PIM-SM] Fenner, Handley, Holbrook, Kouvelas, "Protocol Independent Multicast - Sparse Mode (PIM-SM)", RFC 4601, August 2006.

[RFC2119] Bradner, "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC5331] Rahul Aggarwal, Yakov Rekhter, Eric Rosen, "MPLS Upstream Label Assignment and Context-Specific Label Space", RFC 5331, August 2008.

[TMLDP] Maria Napierala, Eric Rosen, IJsbrand Wijnands, "Using LDP Multipoint Extensions on Targeted LDP Sessions", draft-ietf-mpls-targeted-mldp-01.txt, January 2013.

10. Informational References

[LDP-INBAND-SHARED] Yakov Rekhter, Rahul Aggarwal, Nicolai Leymann, "Carrying PIM-SM in ASM mode Trees over P2MP mLDP LSPs", draft-rekhter-mpls-pim-sm-over-mldp-02.txt, January 2013.

[MLDP-OV] Sandeep Bishnoi, Pranjal Kumar Dutta, IJsbrand Wijnands, "LDP Multipoint Opaque Value Element Types", draft-bishnoi-mpls-mldp-opaque-types-01.txt, October 2009.

[RFC4360] Srihari R. Sangli, Dan Tappan, Yakov Rekhter, "BGP Extended Communities Attribute", RFC 4360, February 2006.

[RFC5701] Yakov Rekhter, "IPv6 Address Specific BGP Extended Community Attribute", RFC 5701, November 2009.