2 Network Working Group Eric C. Rosen (Editor)
3 Internet Draft Cisco Systems, Inc.
4 Expiration Date: December 2006
5 Rahul Aggarwal (Editor)
6 Juniper Networks

8 June 2006

10 Multicast in MPLS/BGP IP VPNs

12 draft-ietf-l3vpn-2547bis-mcast-02.txt

14 Status of this Memo

16 By submitting this Internet-Draft, each author represents that any 17 applicable patent or other IPR claims of which he or she is aware 18 have been or will be disclosed, and any of which he or she becomes 19 aware will be disclosed, in accordance with Section 6 of BCP 79.

21 Internet-Drafts are working documents of the Internet Engineering 22 Task Force (IETF), its areas, and its working groups. Note that 23 other groups may also distribute working documents as Internet- 24 Drafts.

26 Internet-Drafts are draft documents valid for a maximum of six months 27 and may be updated, replaced, or obsoleted by other documents at any 28 time. It is inappropriate to use Internet-Drafts as reference 29 material or to cite them other than as "work in progress."

31 The list of current Internet-Drafts can be accessed at 32 http://www.ietf.org/ietf/1id-abstracts.txt.

34 The list of Internet-Draft Shadow Directories can be accessed at 35 http://www.ietf.org/shadow.html.

37 Abstract

39 In order for IP multicast traffic within a BGP/MPLS IP VPN (Virtual 40 Private Network) to travel from one VPN site to another, special 41 protocols and procedures must be implemented by the VPN Service 42 Provider. These protocols and procedures are specified in this 43 document.

45 Table of Contents

47 1 Specification of requirements ...................... 4
48 2 Introduction ....................................... 4
49 2.1 Optimality vs Scalability .......................... 5
50 2.1.1 Multicast Distribution Trees ....................... 7
51 2.1.2 Ingress Replication through Unicast Tunnels ........ 8
52 2.2 Overview ........................................... 8
53 2.2.1 Multicast Routing Adjacencies ...................... 8
54 2.2.2 MVPN Definition .................................... 8
55 2.2.3 Auto-Discovery ..................................... 9
56 2.2.4 PE-PE Multicast Routing Information ................ 10
57 2.2.5 PE-PE Multicast Data Transmission .................. 11
58 2.2.6 Inter-AS MVPNs ..................................... 11
59 2.2.7 Optional Deployment Models ......................... 12
60 3 Concepts and Framework ............................. 12
61 3.1 PE-CE Multicast Routing ............................ 12
62 3.2 P-Multicast Service Interfaces (PMSIs) ............. 13
63 3.2.1 Inclusive and Selective PMSIs ...................... 14
64 3.2.2 Tunnels Instantiating PMSIs ........................ 15
65 3.3 Use of PMSIs for Carrying Multicast Data ........... 17
66 3.3.1 MVPNs with Default MI-PMSIs ........................ 18
67 3.3.2 When MI-PMSIs are Required ......................... 18
68 3.3.3 MVPNs That Do Not Use MI-PMSIs ..................... 18
69 4 BGP-Based Autodiscovery of MVPN Membership ......... 19
70 5 PE-PE Transmission of C-Multicast Routing .......... 21
71 5.1 RPF Information for Unicast VPN-IP Routes .......... 21
72 5.2 PIM Peering ........................................ 23
73 5.2.1 Full Per-MVPN PIM Peering Across a MI-PMSI ......... 23
74 5.2.2 Lightweight PIM Peering Across a MI-PMSI ........... 23
75 5.2.3 Unicasting of PIM C-Join/Prune Messages ............ 24
76 5.2.4 Details of Per-MVPN PIM Peering over MI-PMSI ....... 24
77 5.2.4.1 PIM C-Instance Control Packets ..................... 25
78 5.2.4.2 PIM C-instance RPF Determination ................... 25
79 5.3 Use of BGP for Carrying C-Multicast Routing ........ 26
80 5.3.1 Sending BGP Updates ................................ 27
81 5.3.2 Explicit Tracking .................................. 29
82 5.3.3 Withdrawing BGP Updates ............................ 29
83 6 I-PMSI Instantiation ............................... 29
84 6.1 MVPN Membership and Egress PE Auto-Discovery ....... 30
85 6.1.1 Auto-Discovery for Ingress Replication ............. 30
86 6.1.2 Auto-Discovery for P-Multicast Trees ............... 30
87 6.2 C-Multicast Routing Information Exchange ........... 31
88 6.3 Aggregation ........................................ 31
89 6.3.1 Aggregate Tree Leaf Discovery ...................... 31
90 6.3.2 Aggregation Methodology ............................ 32
91 6.3.3 Encapsulation of the Aggregate Tree ................ 33
92 6.3.4 Demultiplexing C-multicast traffic ................. 33
93 6.4 Mapping Received Packets to MVPNs .................. 34
94 6.4.1 Unicast Tunnels .................................... 34
95 6.4.2 Non-Aggregated P-Multicast Trees ................... 35
96 6.4.3 Aggregate P-Multicast Trees ........................ 36
97 6.5 I-PMSI Instantiation Using Ingress Replication ..... 36
98 6.6 Establishing P-Multicast Trees ..................... 37
99 6.7 RSVP-TE P2MP LSPs .................................. 38
100 6.7.1 P2MP TE LSP Tunnel - MVPN Mapping .................. 38
101 6.7.2 Demultiplexing C-Multicast Data Packets ............ 39
102 7 Optimizing Multicast Distribution via S-PMSIs ...... 39
103 7.1 S-PMSI Instantiation Using Ingress Replication ..... 40
104 7.2 Protocol for Switching to S-PMSIs .................. 40
105 7.2.1 A UDP-based Protocol for Switching to S-PMSIs ...... 40
106 7.2.1.1 Binding a Stream to an S-PMSI ...................... 41
107 7.2.1.2 Packet Formats and Constants ....................... 42
108 7.2.2 A BGP-based Protocol for Switching to S-PMSIs ...... 42
109 7.2.2.1 Advertising C-(S, G) Binding to a S-PMSI using BGP . 42
110 7.2.2.2 Explicit Tracking .................................. 43
111 7.2.2.3 Switching to S-PMSI ................................ 44
112 7.3 Aggregation ........................................ 44
113 7.4 Instantiating the S-PMSI with a PIM Tree ........... 44
114 7.5 Instantiating S-PMSIs using RSVP-TE P2MP Tunnels ... 45
115 8 Inter-AS Procedures ................................ 45
116 8.1 Non-Segmented Inter-AS Tunnels ..................... 46
117 8.1.1 Inter-AS MVPN Auto-Discovery ....................... 47
118 8.1.2 Inter-AS MVPN Routing Information Exchange ......... 47
119 8.1.3 Inter-AS I-PMSI .................................... 47
120 8.1.4 Inter-AS S-PMSI .................................... 49
121 8.2 Segmented Inter-AS Tunnels ......................... 49
122 8.2.1 Inter-AS MVPN Auto-Discovery Tunnels ............... 49
123 8.2.1.1 Originating Inter-AS MVPN A-D Information .......... 49
124 8.2.1.2 Propagating Inter-AS MVPN A-D Information .......... 50
125 8.2.1.2.1 Inter-AS Auto-Discovery Route received via EBGP .... 51
126 8.2.1.2.2 Leaf Auto-Discovery Route received via EBGP ........ 52
127 8.2.1.2.3 Inter-AS Auto-Discovery Route received via IBGP .... 52
128 8.2.2 Inter-AS MVPN Routing Information Exchange ......... 53
129 8.2.3 Inter-AS I-PMSI .................................... 54
130 8.2.3.1 Support for Unicast VPN Inter-AS Methods ........... 54
131 8.2.4 Inter-AS S-PMSI .................................... 55
132 9 Duplicate Packet Detection and Single Forwarder PE . 56
133 10 Deployment Models .................................. 59
134 10.1 Co-locating C-RPs on a PE .......................... 59
135 10.1.1 Initial Configuration .............................. 59
136 10.1.2 Anycast RP Based on Propagating Active Sources ..... 60
137 10.1.2.1 Receiver(s) Within a Site .......................... 60
138 10.1.2.2 Source Within a Site ............................... 60
139 10.1.2.3 Receiver Switching from Shared to Source Tree ...... 61
140 10.2 Using MSDP between a PE and a Local C-RP ........... 61
141 11 Encapsulations ..................................... 62
142 11.1 Encapsulations for Single PMSI per Tunnel .......... 62
143 11.1.1 Encapsulation in GRE ............................... 62
144 11.1.2 Encapsulation in IP ................................ 64
145 11.1.3 Encapsulation in MPLS .............................. 64
146 11.2 Encapsulations for Multiple PMSIs per Tunnel ....... 65
147 11.2.1 Encapsulation in GRE ............................... 65
148 11.2.2 Encapsulation in IP ................................ 65
149 11.3 Encapsulations for Unicasting PIM Control Messages . 66
150 11.4 General Considerations for IP and GRE Encaps ....... 66
151 11.4.1 MTU ................................................ 66
152 11.4.2 TTL ................................................ 66
153 11.4.3 Differentiated Services ............................ 67
154 11.4.4 Avoiding Conflict with Internet Multicast .......... 67
155 12 Security Considerations ............................ 67
156 13 IANA Considerations ................................ 67
157 14 Other Authors ...................................... 67
158 15 Other Contributors ................................. 67
159 16 Authors' Addresses ................................. 68
160 17 Normative References ............................... 69
161 18 Informative References ............................. 70
162 19 Full Copyright Statement ........................... 71
163 20 Intellectual Property .............................. 71

165 1. Specification of requirements

167 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 168 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 169 document are to be interpreted as described in [RFC2119].

171 2. Introduction

173 [RFC4364] specifies the set of procedures which a Service Provider 174 (SP) must implement in order to provide a particular kind of VPN 175 service ("BGP/MPLS IP VPN") for its customers. The service described 176 therein allows IP unicast packets to travel from one customer site to 177 another, but it does not provide a way for IP multicast traffic to 178 travel from one customer site to another.

180 This document extends the service defined in [RFC4364] so that it 181 also includes the capability of handling IP multicast traffic. This 182 requires a number of different protocols to work together.
The 183 document provides a framework describing how the various protocols 184 fit together, and also provides detailed specification of some of the 185 protocols. The detailed specification of some of the other 186 protocols is found in pre-existing documents or in companion 187 documents. 189 2.1. Optimality vs Scalability 191 In a "BGP/MPLS IP VPN" [RFC4364], unicast routing of VPN packets is 192 achieved without the need to keep any per-VPN state in the core of 193 the SP's network (the "P routers"). Routing information from a 194 particular VPN is maintained only by the Provider Edge routers (the 195 "PE routers", or "PEs") that attach directly to sites of that VPN. 196 Customer data travels through the P routers in tunnels from one PE to 197 another (usually MPLS Label Switched Paths, LSPs), so to support the 198 VPN service the P routers only need to have routes to the PE routers. 199 The PE-to-PE routing is optimal, but the amount of associated state 200 in the P routers depends only on the number of PEs, not on the number 201 of VPNs. 203 However, in order to provide optimal multicast routing for a 204 particular multicast flow, the P routers through which that flow 205 travels have to hold state which is specific to that flow. 206 Scalability would be poor if the amount of state in the P routers 207 were proportional to the number of multicast flows in the VPNs. 208 Therefore, when supporting multicast service for a BGP/MPLS IP VPN, 209 the optimality of the multicast routing must be traded off against 210 the scalability of the P routers. We explain this below in more 211 detail. 213 If a particular VPN is transmitting "native" multicast traffic over 214 the backbone, we refer to it as an "MVPN". By "native" multicast 215 traffic, we mean packets that a CE sends to a PE, such that the IP 216 destination address of the packets is a multicast group address, or 217 the packets are multicast control packets addressed to the PE router 218 itself, or the packets are IP multicast data packets encapsulated in 219 MPLS. 221 We say that the backbone multicast routing for a particular multicast 222 group in a particular VPN is "optimal" if and only if all of the 223 following conditions hold: 225 - When a PE router receives a multicast data packet of that group 226 from a CE router, it transmits the packet in such a way that the 227 packet is received by every other PE router which is on the path 228 to a receiver of that group; 230 - The packet is not received by any other PEs; 232 - While in the backbone, no more than one copy of the packet ever 233 traverses any link. 235 - While in the backbone, if bandwidth usage is to be optimized, the 236 packet traverses minimum cost trees rather than shortest path 237 trees. 239 Optimal routing for a particular multicast group requires that the 240 backbone maintain one or more source-trees which are specific to that 241 flow. Each such tree requires that state be maintained in all the P 242 routers that are in the tree. 244 This would potentially require an unbounded amount of state in the P 245 routers, since the SP has no control of the number of multicast 246 groups in the VPNs that it supports. Nor does the SP have any control 247 over the number of transmitters in each group, nor of the 248 distribution of the receivers. 
250 The procedures defined in this document allow an SP to provide 251 multicast VPN service without requiring the amount of state 252 maintained by the P routers to be proportional to the number of 253 multicast data flows in the VPNs. The amount of state is traded off 254 against the optimality of the multicast routing. Enough flexibility 255 is provided so that a given SP can make his own tradeoffs between 256 scalability and optimality. An SP can even allow some multicast 257 groups in some VPNs to receive optimal routing, while others do not. 258 Of course, the cost of this flexibility is an increase in the number 259 of options provided by the protocols. 261 The basic technique for providing scalability is to aggregate a 262 number of customer multicast flows onto a single multicast 263 distribution tree through the P routers. A number of aggregation 264 methods are supported. 266 The procedures defined in this document also accommodate the SP that 267 does not want to build multicast distribution trees in his backbone 268 at all; the ingress PE can replicate each multicast data packet and 269 then unicast each replica through a tunnel to each egress PE that 270 needs to receive the data. 272 2.1.1. Multicast Distribution Trees 274 This document supports the use of a single multicast distribution 275 tree in the backbone to carry all the multicast traffic from a 276 specified set of one or more MVPNs. Such a tree is referred to as an 277 "Inclusive Tree". An Inclusive Tree which carries the traffic of more 278 than one MVPN is an "Aggregate Inclusive Tree". An Inclusive Tree 279 contains, as its members, all the PEs that attach to any of the MVPNs 280 using the tree. 282 With this option, even if each tree supports only one MVPN, the upper 283 bound on the amount of state maintained by the P routers is 284 proportional to the number of VPNs supported, rather than to the 285 number of multicast flows in those VPNs. If the trees are 286 unidirectional, it would be more accurate to say that the state is 287 proportional to the product of the number of VPNs and the average 288 number of PEs per VPN. The amount of state maintained by the P 289 routers can be further reduced by aggregating more MVPNs onto a 290 single tree. If each such tree supports a set of MVPNs, (call it an 291 "MVPN aggregation set"), the state maintained by the P routers is 292 proportional to the product of the number of MVPN aggregation sets 293 and the average number of PEs per MVPN. Thus the state does not grow 294 linearly with the number of MVPNs. 296 However, as data from many multicast groups is aggregated together 297 onto a single "Inclusive Tree", it is likely that some PEs will 298 receive multicast data for which they have no need, i.e., some degree 299 of optimality has been sacrificed. 301 This document also provides procedures which enable a single 302 multicast distribution tree in the backbone to be used to carry 303 traffic belonging only to a specified set of one or more multicast 304 groups, from one or more MVPNs. Such a tree is referred to as a 305 "Selective Tree" and more specifically as an "Aggregate Selective 306 Tree" when the multicast groups belong to different MVPNs. By 307 default, traffic from most multicast groups could be carried by an 308 Inclusive Tree, while traffic from, e.g., high bandwidth groups could 309 be carried in one of the "Selective Trees". 
When setting up the 310 Selective Trees, one should include only those PEs which need to 311 receive multicast data from one or more of the groups assigned to the 312 tree. This provides more optimal routing than can be obtained by 313 using only Inclusive Trees, though it requires additional state in 314 the P routers.

316 2.1.2. Ingress Replication through Unicast Tunnels

318 This document also provides procedures for carrying MVPN data traffic 319 through unicast tunnels from the ingress PE to each of the egress 320 PEs. The ingress PE replicates the multicast data packet received 321 from a CE and sends it to each of the egress PEs using the unicast 322 tunnels. This requires no multicast routing state in the P routers 323 at all, but it puts the entire replication load on the ingress PE 324 router, and makes no attempt to optimize the multicast routing.

326 2.2. Overview

328 2.2.1. Multicast Routing Adjacencies

330 In BGP/MPLS IP VPNs [RFC4364], each CE ("Customer Edge") router is a 331 unicast routing adjacency of a PE router, but CE routers at different 332 sites do not become unicast routing adjacencies of each other. This 333 important characteristic is retained for multicast routing -- a CE 334 router becomes a multicast routing adjacency of a PE router, but CE 335 routers at different sites do not become multicast routing 336 adjacencies of each other.

338 The multicast routing protocol on the PE-CE link is presumed to be 339 PIM. Sparse Mode, Dense Mode, Single Source Mode, and 340 Bidirectional Mode are all supported. A CE router exchanges "ordinary" 341 PIM control messages with the PE router to which it is attached.

343 The PEs attaching to a particular MVPN then have to exchange the 344 multicast routing information with each other. Two basic methods for 345 doing this are defined: (1) PE-PE PIM, and (2) BGP. In the former 346 case, the PEs need to be multicast routing adjacencies of each other. 347 In the latter case, they do not. For example, each PE may be a BGP 348 adjacency of a Route Reflector (RR), and not of any other PEs.

350 To support the "Carrier's Carrier" model of [RFC4364], mLDP or BGP 351 can be used on the PE-CE interface. This will be described in 352 subsequent versions of this document.

354 2.2.2. MVPN Definition

356 An MVPN is defined by two sets of sites, Sender Sites set and 357 Receiver Sites set, with the following properties:

359 - Hosts within the Sender Sites set could originate multicast 360 traffic for receivers in the Receiver Sites set.

362 - Receivers not in the Receiver Sites set should not be able to 363 receive this traffic.

365 - Hosts within the Receiver Sites set could receive multicast 366 traffic originated by any host in the Sender Sites set.

368 - Hosts within the Receiver Sites set should not be able to 369 receive multicast traffic originated by any host that is not in 370 the Sender Sites set.

372 A site could be both in the Sender Sites set and Receiver Sites set, 373 which implies that hosts within such a site could both originate and 374 receive multicast traffic. An extreme case is when the Sender Sites 375 set is the same as the Receiver Sites set, in which case all sites 376 could originate and receive multicast traffic from each other.

378 Sites within a given MVPN may be either within the same organization or in 379 different organizations, which implies that an MVPN can be either an 380 Intranet or an Extranet.

382 A given site may be in more than one MVPN, which implies that MVPNs 383 may overlap.
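The set-based definition above can be restated in a short, non-normative sketch. The class and names below are purely illustrative (they are not part of this specification); the sketch only re-expresses the Sender Sites set / Receiver Sites set rules in executable form.

   from dataclasses import dataclass, field

   @dataclass
   class MVPN:
       """Illustrative model: an MVPN as a Sender Sites set and a Receiver Sites set."""
       name: str
       sender_sites: set = field(default_factory=set)
       receiver_sites: set = field(default_factory=set)

       def may_deliver(self, src_site: str, dst_site: str) -> bool:
           # Traffic originated in a Sender Site may be received only in Receiver Sites.
           return src_site in self.sender_sites and dst_site in self.receiver_sites

   # "Intranet" case: the two sets are identical, so every site may send and receive.
   intranet = MVPN("blue", sender_sites={"A", "B", "C"}, receiver_sites={"A", "B", "C"})

   # "Extranet"-style case: a site of one organization sends, sites of another receive.
   extranet = MVPN("green", sender_sites={"X"}, receiver_sites={"B", "D"})

   assert intranet.may_deliver("A", "C")
   assert extranet.may_deliver("X", "D")
   assert not extranet.may_deliver("B", "D")   # "B" is not in the Sender Sites set

   # Site "B" appears in both MVPNs, illustrating that MVPNs may overlap.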
385 Not all sites of a given MVPN have to be connected to the same 386 service provider, which implies that an MVPN can span multiple 387 service providers. 389 Another way to look at MVPN is to say that an MVPN is defined by a 390 set of administrative policies. Such policies determine both Sender 391 Sites set and Receiver Site set. Such policies are established by 392 MVPN customers, but implemented/realized by MVPN Service Providers 393 using the existing BGP/MPLS VPN mechanisms, such as Route Targets, 394 with extensions, as necessary. 396 2.2.3. Auto-Discovery 398 In order for the PE routers attaching to a given MVPN to exchange 399 MVPN control information with each other, each one needs to discover 400 all the other PEs that attach to the same MVPN. (Strictly speaking, 401 a PE in the receiver sites set need only discover the other PEs in 402 the sender sites set and a PE in the sender sites set need only 403 discover the other PEs in the receiver sites set.) This is referred 404 to as "MVPN Auto-Discovery". 406 This document discusses two ways of providing MVPN autodiscovery: 408 - BGP can be used for discovering and maintaining MVPN membership. 409 The PE routers advertise their MVPN membership to other PE 410 routers using BGP. A PE is considered to be a "member" of a 411 particular MVPN if it contains a VRF (Virtual Routing and 412 Forwarding table, see [RFC4364]) which is configured to contain 413 the multicast routing information of that MVPN. This auto- 414 discovery option does not make any assumptions about the methods 415 used for transmitting MVPN multicast data packets through the 416 backbone. 418 - If it is known that the multicast data packets of a particular 419 MVPN are to be transmitted (at least, by default) through a non- 420 aggregated Inclusive Tree which is to be set up by PIM-SM or 421 PIM-Bidir, and if the PEs attaching to that MVPN are configured 422 with the group address corresponding to that tree, then the PEs 423 can auto-discover each other simply by joining the tree and then 424 multicasting PIM Hellos over the tree. 426 2.2.4. PE-PE Multicast Routing Information 428 The BGP/MPLS IP VPN [RFC4364] specification requires a PE to maintain 429 at most one BGP peering with every other PE in the network. This 430 peering is used to exchange VPN routing information. The use of Route 431 Reflectors further reduces the number of BGP adjacencies maintained 432 by a PE to exchange VPN routing information with other PEs. This 433 document describes various options for exchanging MVPN control 434 information between PE routers based on the use of PIM or BGP. These 435 options have different overheads with respect to the number of 436 routing adjacencies that a PE router needs to maintain to exchange 437 MVPN control information with other PE routers. Some of these options 438 allow the retention of the unicast BGP/MPLS VPN model letting a PE 439 maintain at most one routing adjacency with other PE routers to 440 exchange MVPN control information. 442 The solution in [RFC4364] uses BGP to exchange VPN routing 443 information between PE routers. This document describes various 444 solutions for exchanging MVPN control information. One option is the 445 use of BGP, providing reliable transport. Another option is the use 446 of the currently existing, "soft state" PIM standard [PIM-SM]. 448 2.2.5. 
PE-PE Multicast Data Transmission

450 Like [RFC4364], this document decouples the procedures for exchanging 451 routing information from the procedures for transmitting data 452 traffic. Hence, a variety of transport technologies may be used in the 453 backbone. For inclusive trees, these transport technologies include 454 unicast PE-PE tunnels (using MPLS or IP/GRE encapsulation), multicast 455 distribution trees created by PIM-SSM, PIM-SM, or PIM-Bidir (using 456 IP/GRE encapsulation), point-to-multipoint LSPs created by RSVP-TE or 457 mLDP, and multipoint-to-multipoint LSPs created by mLDP. (However, 458 techniques for aggregating the traffic of multiple MVPNs onto a 459 single multipoint-to-multipoint LSP or onto a single bidirectional 460 multicast distribution tree are for further study.) For selective 461 trees, only unicast PE-PE tunnels (using MPLS or IP/GRE 462 encapsulation) and unidirectional single-source trees are supported, 463 and the supported tree creation protocols are PIM-SSM (using IP/GRE 464 encapsulation), RSVP-TE, and mLDP.

466 In order to aggregate traffic from multiple MVPNs onto a single 467 multicast distribution tree, it is necessary to have a mechanism to 468 enable the egresses of the tree to demultiplex the multicast traffic 469 received over the tree and to associate each received packet with a 470 particular MVPN. This document specifies a mechanism whereby 471 upstream label assignment [MPLS-UPSTREAM-LABEL] is used by the root 472 of the tree to assign a label to each flow. This label is used by 473 the receivers to perform the demultiplexing. This document also 474 describes procedures based on BGP that are used by the root of an 475 Aggregate Tree to advertise the Inclusive and/or Selective binding 476 and the demultiplexing information to the leaves of the tree.

478 This document also describes the data plane encapsulations for 479 supporting the various SP multicast transport options.

481 This document assumes that when SP multicast trees are used, traffic 482 for a particular multicast group is transmitted by a particular PE on 483 only one SP multicast tree. The use of multiple SP multicast trees 484 for transmitting traffic belonging to a particular multicast group is 485 for further study.

487 2.2.6. Inter-AS MVPNs

489 [RFC4364] describes different options for supporting Inter-AS 490 BGP/MPLS unicast VPNs. This document describes how Inter-AS MVPNs can 491 be supported for each of the unicast BGP/MPLS VPN Inter-AS options. 492 This document also specifies a model where Inter-AS MVPN service can 493 be offered without requiring a single SP multicast tree to span 494 multiple ASes. In this model, an inter-AS multicast tree consists of 495 a number of "segments", one per AS, which are stitched together at AS 496 boundary points. These are known as "segmented inter-AS trees". Each 497 segment of a segmented inter-AS tree may use a different multicast 498 transport technology.

500 It is also possible to support Inter-AS MVPNs with non-segmented 501 source trees that extend across AS boundaries.

503 2.2.7. Optional Deployment Models

505 The document also discusses an optional MVPN deployment model in 506 which PEs take on all or part of the role of a PIM RP (Rendezvous 507 Point). The necessary protocol extensions to support this are 508 defined.

510 3. Concepts and Framework

512 3.1. PE-CE Multicast Routing

514 Support of multicast in BGP/MPLS IP VPNs is modeled closely after 515 support of unicast in BGP/MPLS IP VPNs.
That is, a multicast routing 516 protocol will be run on the PE-CE interfaces, such that PE and CE are 517 multicast routing adjacencies on that interface. CEs at different 518 sites do not become multicast routing adjacencies of each other. 520 If a PE attaches to n VPNs for which multicast support is provided 521 (i.e., to n "MVPNs"), the PE will run n independent instances of a 522 multicast routing protocol. We will refer to these multicast routing 523 instances as "VPN-specific multicast routing instances", or more 524 briefly as "multicast C-instances". The notion of a "VRF" ("Virtual 525 Routing and Forwarding Table"), defined in [RFC4364], is extended to 526 include multicast routing entries as well as unicast routing entries. 527 Each multicast routing entry is thus associated with a particular 528 VRF. 530 Whether a particular VRF belongs to an MVPN or not is determined by 531 configuration. 533 In this document, we will not attempt to provide support for every 534 possible multicast routing protocol that could possibly run on the 535 PE-CE link. Rather, we consider multicast C-instances only for the 536 following multicast routing protocols: 538 - PIM Sparse Mode (PIM-SM) 540 - PIM Single Source Mode (PIM-SSM) 542 - PIM Bidirectional Mode (PIM-Bidir) 544 - PIM Dense Mode (PIM-DM) 546 In order to support the "Carrier's Carrier" model of [RFC4364], mLDP 547 or BGP will also be supported on the PE-CE interface; however, this 548 is not described in this revision. 550 As the document only supports PIM-based C-instances, we will 551 generally use the term "PIM C-instances" to refer to the multicast 552 C-instances. 554 A PE router may also be running a "provider-wide" instance of PIM, (a 555 "PIM P-instance"), in which it has a PIM adjacency with, e.g., each 556 of its IGP neighbors (i.e., with P routers), but NOT with any CE 557 routers, and not with other PE routers (unless another PE router 558 happens to be an IGP adjacency). In this case, P routers would also 559 run the P-instance of PIM, but NOT a C-instance. If there is a PIM 560 P-instance, it may or may not have a role to play in support of VPN 561 multicast; this is discussed in later sections. However, in no case 562 will the PIM P-instance contain VPN-specific multicast routing 563 information. 565 In order to help clarify when we are speaking of the PIM P-instance 566 and when we are speaking of a PIM C-instance, we will also apply the 567 prefixes "P-" and "C-" respectively to control messages, addresses, 568 etc. Thus a P-Join would be a PIM Join which is processed by the PIM 569 P-instance, and a C-Join would be a PIM Join which is processed by a 570 C-instance. A P-group address would be a group address in the SP's 571 address space, and a C-group address would be a group address in a 572 VPN's address space. 574 3.2. P-Multicast Service Interfaces (PMSIs) 576 Multicast data packets received by a PE over a PE-CE interface must 577 be forwarded to one or more of the other PEs in the same MVPN for 578 delivery to one or more other CEs. 580 We define the notion of a "P-Multicast Service Interface" (PMSI). If 581 a particular MVPN is supported by a particular set of PE routers, 582 then there will be a PMSI connecting those PE routers. 
A PMSI is a 583 conceptual "overlay" on the P network with the following property: a 584 PE in a given MVPN can give a packet to the PMSI, and the packet will 585 be delivered to some or all of the other PEs in the MVPN, such that 586 any PE receiving such a packet will be able to tell which MVPN the 587 packet belongs to. 589 As we discuss below, a PMSI may be instantiated by a number of 590 different transport mechanisms, depending on the particular 591 requirements of the MVPN and of the SP. We will refer to these 592 transport mechanisms as "tunnels". 594 For each MVPN, there are one or more PMSIs that are used for 595 transmitting the MVPN's multicast data from one PE to others. We 596 will use the term "PMSI" such that a single PMSI belongs to a single 597 MVPN. However, the transport mechanism which is used to instantiate 598 a PMSI may allow a single "tunnel" to carry the data of multiple 599 PMSIs. 601 In this document we make a clear distinction between the multicast 602 service (the PMSI) and its instantiation. This allows us to separate 603 the discussion of different services from the discussion of different 604 instantiations of each service. The term "tunnel" is used to refer 605 only to the transport mechanism that instantiates a service. 607 [This is a significant change from previous drafts on the topic of 608 MVPN, which have used the term "Multicast Tunnel" to refer both to 609 the multicast service (what we call here the PMSI) and to its 610 instantiation.] 612 3.2.1. Inclusive and Selective PMSIs 614 We will distinguish between three different kinds of PMSI: 616 - "Multidirectional Inclusive" PMSI (MI-PMSI) 618 A Multidirectional Inclusive PMSI is one which enables ANY PE 619 attaching to a particular MVPN to transmit a message such that it 620 will be received by EVERY other PE attaching to that MVPN. 622 There is at most one MI-PMSI per MVPN. (Though the tunnel which 623 instantiates an MI-PMSI may actually carry the data of more than 624 one PMSI.) 626 An MI-PMSI can be thought of as an overlay broadcast network 627 connecting the set of PEs supporting a particular MVPN. 629 [The "Default MDTs" of rosen-08 provide the transport service of 630 MI-PMSIs, in this terminology.] 632 - "Unidirectional Inclusive" PMSI (UI-PMSI) 634 A Unidirectional Inclusive PMSI is one which enables a particular 635 PE, attached to a particular MVPN, to transmit a message such 636 that it will be received by all the other PEs attaching to that 637 MVPN. There is at most one UI-PMSI per PE per MVPN, though the 638 "tunnel" which instantiates a UI-PMSI may in fact carry the data 639 of more than one PMSI. 641 - "Selective" PMSI (S-PMSI). 643 A Selective PMSI is one which provides a mechanism wherein a 644 particular PE in an MVPN can multicast messages so that they will 645 be received by a subset of the other PEs of that MVPN. There may 646 be an arbitrary number of S-PMSIs per PE per MVPN. Again, the 647 "tunnel" which instantiates a given S-PMSI may carry data from 648 multiple S-PMSIs. 650 [The "Data MDTs" of earlier drafts provide the transport service 651 of "Selective PMSIs" in the terminology of this draft.] 653 We will see in later sections the role played by these different 654 kinds of PMSI. We will use the term "I-PMSI" when we are not 655 distinguishing between "MI-PMSIs" and "UI-PMSIs". 657 3.2.2. Tunnels Instantiating PMSIs 659 A number of different tunnel setup techniques can be used to create 660 the tunnels that instantiate the PMSIs. 
Among these are the following (see also the non-normative sketch at the end of this section):

662 - PIM

664 A PMSI can be instantiated as (a set of) Multicast Distribution 665 Trees created by the PIM P-instance ("P-trees").

667 PIM-SSM, PIM-Bidir, or PIM-SM can be used to create P-trees. 668 (PIM-DM is not supported for this purpose.)

670 A single MI-PMSI can be instantiated by a single shared P-tree, 671 or by a number of source P-trees (one for each PE of the MI- 672 PMSI). P-trees may be shared by multiple MVPNs (i.e., a given 673 P-tree may be the instantiation of multiple PMSIs), as long as 674 the encapsulation provides some means of demultiplexing the data 675 traffic by MVPN.

677 Selective PMSIs are most naturally instantiated by source P-trees 678 created by PIM-SSM, since by definition only one 679 PE is the source of the multicast data on a Selective PMSI.

681 [The "Default MDTs" of [rosen-08] are MI-PMSIs instantiated as 682 PIM trees. The "data MDTs" of [rosen-08] are S-PMSIs 683 instantiated as PIM trees.]

685 - mLDP

687 A PMSI may be instantiated as one or more mLDP Point-to- 688 Multipoint (P2MP) LSPs, or as an mLDP Multipoint-to-Multipoint (MP2MP) 689 LSP. A Selective PMSI or a Unidirectional Inclusive PMSI would 690 be instantiated as a single mLDP P2MP LSP, whereas a 691 Multidirectional Inclusive PMSI could be instantiated either as a 692 set of such LSPs (one for each PE in the MVPN) or as a single 693 MP2MP LSP.

695 mLDP P2MP LSPs can be shared across multiple MVPNs.

697 - RSVP-TE

699 A PMSI may be instantiated as one or more RSVP-TE Point-to- 700 Multipoint (P2MP) LSPs. A Selective PMSI or a Unidirectional 701 Inclusive PMSI would be instantiated as a single RSVP-TE P2MP 702 LSP, whereas a Multidirectional Inclusive PMSI would be 703 instantiated as a set of such LSPs, one for each PE in the MVPN. 704 RSVP-TE P2MP LSPs can be shared across multiple MVPNs.

706 - A Mesh of Unicast Tunnels.

708 If a PMSI is implemented as a mesh of unicast tunnels, a PE 709 wishing to transmit a packet through the PMSI would replicate the 710 packet, and send a copy to each of the other PEs.

712 An MI-PMSI for a given MVPN can be instantiated as a full mesh of 713 unicast tunnels among that MVPN's PEs. A UI-PMSI or an S-PMSI 714 can be instantiated as a partial mesh.

716 - Unicast Tunnels to the Root of a P-Tree.

718 Any type of PMSI can be instantiated through a method in which 719 there is a single P-tree (created, for example, via PIM-SSM or 720 via RSVP-TE), and a PE transmits a packet to the PMSI by sending 721 it in a unicast tunnel to the root of that P-tree. All PEs in 722 the given MVPN would need to be leaves of the tree.

724 When this instantiation method is used, the transmitter of the 725 multicast data may receive its own data back. Methods for 726 avoiding this are for further study.

728 It can be seen that each method of implementing PMSIs has its own 729 area of applicability. This specification therefore allows for the 730 use of any of these methods. At first glance, this may seem like an 731 overabundance of options. However, the history of multicast 732 development and deployment should make it clear that there is no one 733 option which is always acceptable. The use of segmented inter-AS 734 trees does allow each SP to select the option which it finds most 735 applicable in its own environment, without forcing any other SP to 736 choose that same option.

738 Specifying the conditions under which a particular tree building 739 method is applicable is outside the scope of this document.
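The following non-normative sketch (with purely hypothetical type and field names) summarizes the distinction drawn in sections 3.2.1 and 3.2.2: the PMSI is a per-MVPN service, the tunnel is whatever transport instantiates it, and one tunnel may carry several PMSIs provided the encapsulation carries a per-PMSI demultiplexing value. Actual signaling and encodings are specified elsewhere in this document and in [MVPN-BGP].

   from dataclasses import dataclass
   from enum import Enum, auto
   from typing import Dict, Optional, Tuple

   class PmsiKind(Enum):
       MI_PMSI = auto()        # Multidirectional Inclusive
       UI_PMSI = auto()        # Unidirectional Inclusive
       S_PMSI = auto()         # Selective

   class TunnelKind(Enum):
       PIM_SM_TREE = auto()
       PIM_SSM_TREE = auto()
       PIM_BIDIR_TREE = auto()
       MLDP_P2MP_LSP = auto()
       MLDP_MP2MP_LSP = auto()
       RSVP_TE_P2MP_LSP = auto()
       UNICAST_MESH = auto()   # mesh of unicast tunnels (ingress replication)

   @dataclass(frozen=True)
   class Tunnel:
       kind: TunnelKind
       tunnel_id: str          # e.g., a P-group address or an LSP identifier

   @dataclass(frozen=True)
   class Pmsi:
       kind: PmsiKind
       mvpn: str

   class PmsiBindings:
       """One tunnel may instantiate several PMSIs; a demux value tells them apart."""

       def __init__(self) -> None:
           self._bindings: Dict[Tuple[Tunnel, Optional[int]], Pmsi] = {}

       def bind(self, pmsi: Pmsi, tunnel: Tunnel, demux: Optional[int] = None) -> None:
           self._bindings[(tunnel, demux)] = pmsi

       def pmsi_for(self, tunnel: Tunnel, demux: Optional[int] = None) -> Optional[Pmsi]:
           # An egress PE uses the tunnel (plus the demux value, when the tunnel is
           # shared by several MVPNs) to decide which PMSI a received packet belongs to.
           return self._bindings.get((tunnel, demux))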
741 3.3. Use of PMSIs for Carrying Multicast Data

743 Each PE supporting a particular MVPN must have a way of discovering:

745 - The set of other PEs in its AS that are attached to sites of that 746 MVPN, and the set of other ASes that have PEs attached to sites 747 of that MVPN. However, if segmented inter-AS trees are not used 748 (see section 8.2), then each PE needs to know the entire set of 749 PEs attached to sites of that MVPN.

751 - If segmented inter-AS trees are to be used, the set of border 752 routers in its AS that support inter-AS connectivity for that 753 MVPN

755 - If the MVPN is configured to use a default MI-PMSI, the 756 information needed to set up and to use the tunnels instantiating 757 the default MI-PMSI,

759 - For each other PE, whether the PE supports Aggregate Trees for 760 the MVPN, and if so, the demultiplexing information which must be 761 provided so that the other PE can determine whether a packet 762 which it received on an aggregate tree belongs to this MVPN.

764 In some cases this information is provided by means of the BGP-based 765 auto-discovery procedures detailed in section 4. In other cases, 766 this information is provided after discovery is complete, by means of 767 procedures defined in section 6.1.2. In either case, the information 768 which is provided must be sufficient to enable the PMSI to be bound 769 to the identified tunnel, to enable the tunnel to be created if it 770 does not already exist, and to enable the different PMSIs which may 771 travel on the same tunnel to be properly demultiplexed.

773 3.3.1. MVPNs with Default MI-PMSIs

775 If an MVPN uses an MI-PMSI, then the MI-PMSI for that MVPN will be 776 created as soon as the necessary information has been obtained. 777 Creating a PMSI means creating the tunnel which carries it (unless 778 that tunnel already exists), as well as binding the PMSI to the 779 tunnel. The MI-PMSI for that MVPN is then used as the default method 780 of transmitting multicast data packets for that MVPN. In effect, all 781 the multicast streams for the MVPN are, by default, aggregated onto 782 the MI-PMSI.

784 If a particular multicast stream from a particular source PE has 785 certain characteristics, it can be desirable to migrate it from the 786 MI-PMSI to an S-PMSI. Procedures for migrating a stream from an MI- 787 PMSI to an S-PMSI are discussed in section 7.

789 3.3.2. When MI-PMSIs are Required

791 MI-PMSIs are required under the following conditions:

793 - The MVPN is using PIM-DM, or some other protocol (such as BSR) 794 which relies upon flooding. Only with an MI-PMSI can the C-data 795 (or C-control-packets) received from any CE be flooded to all 796 PEs.

798 - If the procedure for carrying C-multicast routes from PE to PE 799 involves the multicasting of P-PIM control messages among the PEs 800 (see sections 5.2.1, 5.2.2, and 5.2.4).

802 3.3.3. MVPNs That Do Not Use MI-PMSIs

804 If a particular MVPN does not use a default MI-PMSI, then its 805 multicast data may be sent by default on a UI-PMSI.

807 It is also possible to send all the multicast data on an S-PMSI, 808 omitting any usage of I-PMSIs. This prevents PEs from receiving data 809 which they don't need, at the cost of requiring additional tunnels. 810 However, in order to cost-effectively instantiate S-PMSIs with 811 Aggregate P-trees, it is necessary for the transmitting PE to know 812 which PEs need to receive which multicast streams. This is known as 813 "explicit tracking", and the procedures to enable explicit tracking 814 may themselves impose a cost. This is further discussed in section 815 7.2.2.2.
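As a non-normative illustration of the default forwarding choices described in sections 3.3.1 and 3.3.3 (all object and attribute names below are hypothetical), a transmitting PE's choice of PMSI for a given C-(S,G) can be summarized as:

   def select_pmsi(mvpn, c_source, c_group):
       """Illustrative choice of the PMSI on which a C-(S,G) packet is sent."""
       # A stream that has been migrated to a Selective PMSI (section 7) uses it.
       s_pmsi = mvpn.s_pmsi_bindings.get((c_source, c_group))
       if s_pmsi is not None:
           return s_pmsi
       # Otherwise all streams are, by default, aggregated onto the MI-PMSI,
       # if the MVPN has one (section 3.3.1) ...
       if mvpn.default_mi_pmsi is not None:
           return mvpn.default_mi_pmsi
       # ... and an MVPN with no MI-PMSI sends by default on a UI-PMSI (section 3.3.3).
       return mvpn.ui_pmsi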
817 4. BGP-Based Autodiscovery of MVPN Membership

819 BGP-based autodiscovery is done by means of a new address family, the 820 MCAST-VPN address family. (This address family also has other uses, 821 as will be seen later.) Any PE which attaches to an MVPN must issue 822 a BGP update message containing an NLRI in this address family, along 823 with a specific set of attributes. In this document, we specify the 824 information which must be contained in these BGP updates in order to 825 provide auto-discovery. The encoding details, along with the 826 complete set of detailed procedures, are specified in a separate 827 document [MVPN-BGP].

829 This section specifies the intra-AS BGP-based autodiscovery 830 procedures. When segmented inter-AS trees are used, additional 831 procedures are needed, as specified in section 8. Further detail may 832 be found in [MVPN-BGP]. (When segmented inter-AS trees are not used, 833 the inter-AS procedures are almost identical to the intra-AS 834 procedures.)

836 BGP-based autodiscovery uses a particular kind of MCAST-VPN route 837 known as an "auto-discovery route", or "A-D route".

839 An "intra-AS A-D route" is a particular kind of A-D route that is 840 never distributed outside its AS of origin. Intra-AS A-D routes are 841 originated by the PEs that are (directly) connected to the site(s) of 842 a given MVPN.

844 For the purpose of auto-discovery, each PE attached to a site in a 845 given MVPN must originate an intra-AS auto-discovery route. The NLRI 846 of that route must contain the following information:

848 - The route type (i.e., intra-AS A-D route)

850 - IP address of the originating PE

852 - An RD configured locally for the MVPN. This is an RD which can 853 be prepended to that IP address to form a globally unique VPN-IP 854 address of the PE.

856 The A-D route must also carry the following attributes:

858 - One or more Route Target attributes. If any other PE has one of 859 these Route Targets configured for import into a VRF, it treats 860 the advertising PE as a member in the MVPN to which the VRF 861 belongs. This allows each PE to discover the PEs that belong to a 862 given MVPN. More specifically, it allows a PE in the receiver 863 sites set to discover the PEs in the sender sites set of the MVPN 864 and the PEs in the sender sites set of the MVPN to discover the 865 PEs in the receiver sites set of the MVPN. The PEs in the 866 receiver sites set would be configured to import the Route 867 Targets advertised in the BGP Auto-Discovery routes by PEs in the 868 sender sites set. The PEs in the sender sites set would be 869 configured to import the Route Targets advertised in the BGP 870 Auto-Discovery routes by PEs in the receiver sites set.

872 - PMSI tunnel attribute. This attribute is present if and only if 873 a default MI-PMSI is to be used for the MVPN.
It contains the 874 following information:

876 whether the MI-PMSI is instantiated by

878 + A PIM-Bidir tree,

880 + a set of PIM-SSM trees,

882 + a set of PIM-SM trees

884 + a set of RSVP-TE point-to-multipoint LSPs

886 + a set of mLDP point-to-multipoint LSPs

888 + an mLDP multipoint-to-multipoint LSP

890 + a set of unicast tunnels

892 + a set of unicast tunnels to the root of a shared tree (in 893 this case the root must be identified)

895 * If the PE wishes to set up a default tunnel to instantiate the 896 I-PMSI, a unique identifier for the tunnel used to 897 instantiate the I-PMSI.

899 All the PEs attaching to a given MVPN (within a given AS) 900 must have been configured with the same PMSI tunnel attribute 901 for that MVPN. They are also expected to know the 902 encapsulation to use.

904 Note that a default tunnel can be identified at discovery 905 time only if the tunnel already exists (e.g., it was 906 constructed by means of configuration), or if it can be 907 constructed without each PE knowing the identities of all 908 the others (e.g., it is constructed by a receiver-initiated 909 join technique such as PIM or mLDP).

911 In other cases, a default tunnel cannot be identified until 912 the PE has discovered one or more of the other PEs. This 913 will be the case, for example, if the tunnel is an RSVP-TE 914 P2MP LSP, which must be set up from the head end. In these 915 cases, a PE will first send an A-D route without a tunnel 916 identifier, and then will send another one with a tunnel 917 identifier after discovering one or more of the other PEs.

919 * Whether the tunnel used to instantiate the I-PMSI for this 920 MVPN is aggregating I-PMSIs from multiple MVPNs. This will 921 affect the encapsulation used. If aggregation is to be used, 922 a demultiplexor value to be carried by packets for this 923 particular MVPN must also be specified. The demultiplexing 924 mechanism and signaling procedures are described in section 925 6.

926 Further details of the use of this information are provided in 927 subsequent sections.
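The exact encoding of intra-AS A-D routes is specified in [MVPN-BGP]. Purely as a non-normative summary of the information enumerated in this section, an intra-AS A-D route can be pictured as follows (field names and example values are illustrative only):

   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class PmsiTunnelInfo:
       """Carried only if a default MI-PMSI is to be used for the MVPN."""
       tunnel_type: str                   # e.g., "PIM-SSM", "PIM-Bidir", "RSVP-TE P2MP", "mLDP P2MP"
       tunnel_id: Optional[str] = None    # may be absent until other PEs have been discovered
       aggregation: bool = False          # is the tunnel shared by several MVPNs?
       demux_value: Optional[int] = None  # required if aggregation is used (see section 6)

   @dataclass
   class IntraAsADRoute:
       # NLRI contents:
       route_type: str                    # "intra-AS A-D route"
       originating_pe: str                # IP address of the originating PE
       rd: str                            # RD configured locally for the MVPN
       # Attributes carried with the route:
       route_targets: List[str]           # importing PEs treat the sender as a member of the MVPN
       pmsi_tunnel: Optional[PmsiTunnelInfo]

   # Example with documentation-only values:
   ad_route = IntraAsADRoute(
       route_type="intra-AS A-D route",
       originating_pe="192.0.2.1",
       rd="65000:1",
       route_targets=["65000:100"],
       pmsi_tunnel=PmsiTunnelInfo(tunnel_type="PIM-SSM", tunnel_id="232.1.1.1"),
   )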
929 5. PE-PE Transmission of C-Multicast Routing

931 As a PE attached to a given MVPN receives C-Join/Prune messages from 932 its CEs in that MVPN, it must convey the information contained in 933 those messages to other PEs that are attached to the same MVPN.

935 There are several different methods for doing this. As these methods 936 are not interoperable, the method to be used for a particular MVPN 937 must either be configured, or discovered as part of the BGP-based 938 auto-discovery process.

940 5.1. RPF Information for Unicast VPN-IP Routes

942 When a PE receives a C-Join/Prune message from a CE, the message 943 identifies a particular multicast flow as belonging either to a source 944 tree (S,G) or to a shared tree (*,G). We use the term C-source to 945 refer to S, in the case of a source tree, or to the Rendezvous Point 946 (RP) for G, in the case of (*,G). The PE needs to find the "upstream 947 multicast hop" for the (S,G) or (*,G) flow, and it does this by 948 looking up the C-source in the unicast VRF associated with the PE-CE 949 interface over which the C-Join/Prune was received. To facilitate 950 this, all unicast VPN-IP routes from an MVPN will carry RPF 951 information, which identifies the PE that originated the route, as 952 well as identifying the Autonomous System containing that PE. This 953 information is consulted when a PE does an "RPF lookup" of the C- 954 source as part of processing the C-Join/Prune messages. This RPF 955 information contains the following:

957 - Source AS Extended Community

959 To support MVPN, a PE that originates a (unicast) route to VPN- 960 IPv4 addresses MUST include in the BGP Update message that 961 carries this route the Source AS extended community, except if it 962 is known a priori that none of these addresses will act as 963 multicast sources and/or RP, in which case the (unicast) route 964 need not carry the Source AS extended community. The Global 965 Administrator field of this community MUST be set to the 966 autonomous system number of the PE. The Local Administrator field 967 of this community SHOULD be set to 0. This community is described 968 further in [MVPN-BGP].

970 - Route Import Extended Community

972 To support MVPN, in addition to the import/export Route Target(s) 973 used by the unicast routing, each VRF on a PE MUST have an import 974 Route Target that is unique to this VRF, except if it is known a 975 priori that none of the (local) MVPN sites associated with the 976 VRF contain multicast source(s) and/or RP, in which case the VRF 977 need not have this import Route Target. This Route Target MUST be 978 IP address specific, and is constructed as follows:

980 + The Global Administrator field of the Route Target MUST be set to 981 an IP address of the PE. This address MUST be a routable IP 982 address. This address MAY be common for all the VRFs on the PE 983 (e.g., this address may be the PE's loopback address).

985 + The Local Administrator field of the Route Target associated with 986 a given VRF contains a 2-octet number that uniquely 987 identifies that VRF within the PE that contains the VRF 988 (procedures for assigning such numbers are purely local to the 989 PE, and outside the scope of this document).

991 A PE that originates a (unicast) route to VPN-IPv4 addresses MUST 992 include in the BGP Update message that carries this route the Route 993 Import extended community that has the value of this Route Target, 994 except if it is known a priori that none of these addresses will act 995 as multicast sources and/or RP, in which case the (unicast) route 996 need not carry the Route Import extended community.

998 The Route Import Extended Community is described further in [MVPN- 999 BGP].
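As a non-normative illustration of the construction rules above (the actual community formats and type codes are specified in [MVPN-BGP]; all names below are hypothetical), the RPF information attached to a VPN-IPv4 route exported from a multicast-capable VRF can be modeled as:

   from dataclasses import dataclass

   @dataclass(frozen=True)
   class SourceAsCommunity:
       global_administrator: int          # MUST be the autonomous system number of the PE
       local_administrator: int = 0       # SHOULD be set to 0

   @dataclass(frozen=True)
   class RouteImportRT:
       global_administrator: str          # MUST be a routable IP address of the PE (e.g., a loopback)
       local_administrator: int           # 2-octet value uniquely identifying the VRF within the PE

   def rpf_information(pe_asn: int, pe_address: str, vrf_id: int):
       """RPF information a PE would attach to VPN-IPv4 routes exported from a given VRF."""
       assert 0 <= vrf_id <= 0xFFFF       # the Local Administrator field is 2 octets
       return (SourceAsCommunity(global_administrator=pe_asn),
               RouteImportRT(global_administrator=pe_address, local_administrator=vrf_id))

   # Example with documentation-only values:
   source_as, route_import = rpf_information(65000, "192.0.2.1", 7)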
1032 PIM is a "soft-state" protocol, in which reliability is assured 1033 through frequent retransmissions (refresh) of control messages. 1034 This too can begin to impose a large overhead on the PE routers 1035 as the number of MVPNs grows. 1037 The first of these disadvantages is easily remedied. The reason for 1038 the periodic PIM Hellos is to ensure that each PIM speaker on a LAN 1039 knows who all the other PIM speakers on the LAN are. However, in the 1040 context of MVPN, PEs in a given MVPN can learn the identities of all 1041 the other PEs in the MVPN by means of the BGP-based auto-discovery 1042 procedure of section 4. In that case, the periodic Hellos would 1043 serve no function, and could simply be eliminated. (Of course, this 1044 does imply a change to the standard PIM procedures.) 1046 When Hellos are suppressed, we may speak of "lightweight PIM 1047 peering". 1049 The periodic refresh of the C-Join/Prunes is not as simple to 1050 eliminate. The L3VPN WG has asked the PIM WG to specify "refresh 1051 reduction" procedures for PIM, so as to eliminate the need for the 1052 periodic refreshes. If and when such procedures have been specified, 1053 it will be very useful to incorporate them, so as to make the 1054 lightweight PIM peering procedures even more lightweight. 1056 5.2.3. Unicasting of PIM C-Join/Prune Messages 1058 PIM does not require that the C-Join/Prune messages which a PE 1059 receives from a CE to be multicast to all the other PEs; it allows 1060 them to be unicast to a single PE, the one which is upstream on the 1061 path to the root of the multicast tree mentioned in the Join/Prune 1062 message. Note that when the C-Join/Prune messages are unicast, there 1063 is no such thing as "join suppression". Therefore PIM Refresh 1064 Reduction may be considered to be a pre-requisite for the procedure 1065 of unicasting the C-Join/Prune messages. 1067 When the C-Join/Prunes are unicast, they are not transmitted on a 1068 PMSI at all. Note that the procedure of unicasting the C-Join/Prunes 1069 is different than the procedure of transmitting the C-Join/Prunes on 1070 an MI-PMSI which is instantiated as a mesh of unicast tunnels. 1072 If there are multiple PEs that can be used to reach a given C-source, 1073 procedures described in section 9 MUST be used to ensue that, at 1074 least within a single AS, all PEs choose the same PE to reach the C- 1075 source. 1077 5.2.4. Details of Per-MVPN PIM Peering over MI-PMSI 1079 In this section, we assume that inter-AS MVPNs will be supported by 1080 means of non-segmented inter-AS trees. Support for segmented inter- 1081 AS trees with PIM peering is for further study. 1083 When an MVPN uses an MI-PMSI, the C-instances of that MVPN can treat 1084 the MI-PMSI as a LAN interface, and form either full PIM adjacencies 1085 or lightweight PIM adjacencies with each other over that "LAN 1086 interface". 1088 To form a full PIM adjacency, the PEs execute the PIM LAN procedures, 1089 including the generation and processing of PIM Hello, Join/Prune, 1090 Assert, DF election and other PIM control packets. These are 1091 executed independently for each C-instance. PIM "join suppression" 1092 SHOULD be enabled. 1094 If it is known that all C-instances of a particular MVPN can support 1095 lightweight adjacencies, then lightweight adjacencies MUST be used. 1096 If it is not known that all such C-instances support lightweight 1097 instances, then full adjacencies MUST be used. 
Whether all the C- 1098 instances support lightweight adjacencies is known by virtue of the 1099 BGP-based auto-discovery procedures (combined with configuration). 1100 This knowledge might change over time, so the PEs must be able to 1101 switch in real time between the use of full adjacencies and 1102 lightweight adjacencies. 1104 The difference between a lightweight adjacency and a full adjacency 1105 is that no PIM Hellos are sent or received on a lightweight 1106 adjacency. The function which Hellos usually provide in PIM can be 1107 provided in MVPN by the BGP-based auto-discovery procedures, so the 1108 Hellos become superfluous. 1110 Whether or not Hellos are sent, if PIM Refresh Reduction procedures 1111 are available, and all the PEs supporting the MVPN are known to 1112 support these procedures, then the refresh reduction procedures MUST 1113 be used. 1115 5.2.4.1. PIM C-Instance Control Packets 1117 All PIM C-Instance control packets of a particular MVPN are addressed 1118 to the ALL-PIM-ROUTERS (224.0.0.13) IP destination address, and 1119 transmitted over the MI-PMSI of that MVPN. While in transit in the 1120 P-network, the packets are encapsulated as required for the 1121 particular kind of tunnel that is being used to instantiate the MI- 1122 PMSI. Thus the C-instance control packets are not processed by the P 1123 routers, and MVPN-specific PIM routes can be extended from site to 1124 site without appearing in the P routers. 1126 5.2.4.2. PIM C-instance RPF Determination 1128 Although the MI-PMSI is treated by PIM as a LAN interface, unicast 1129 routing is NOT run over it, and there are no unicast routing 1130 adjacencies over it. It is therefore necessary to specify special 1131 procedures for determining when the MI-PMSI is to be regarded as the 1132 "RPF Interface" for a particular C-address. 1134 When a PE needs to determine the RPF interface of a particular C- 1135 address, it looks up the C-address in the VRF. If the route matching 1136 it (call this the "RPF route") is not a VPN-IP route learned from 1137 MP-BGP as described in [RFC4364], or if that route's outgoing 1138 interface is one of the interfaces associated with the VRF, then 1139 ordinary PIM procedures for determining the RPF interface apply. 1141 However, if the RPF route is a VPN-IP route whose outgoing interface 1142 is not one of the interfaces associated with the VRF, then PIM will 1143 consider the outgoing interface to be the MI-PMSI associated with the 1144 VPN-specific PIM instance. 1146 Once PIM has determined that the RPF interface for a particular C- 1147 address is the MI-PMSI, it is necessary for PIM to determine the RPF 1148 neighbor for that C-address. This will be one of the other PEs that 1149 is a PIM adjacency over the MI-PMSI. 1151 When a PE distributes a given VPN-IP route via BGP, the PE must 1152 determine whether that route might possibly be regarded, by another 1153 PE, as an RPF route. (If a given VRF is part of an MVPN, it may be 1154 simplest to regard every route exported from that VRF to be a 1155 potential RPF route.) If the given VPN-IP route is a potential RPF 1156 route, then when the VPN-IP route is distributed by BGP, it SHOULD be 1157 accompanied by "RPF information". 1159 The RPF information contains an IP address which the PE will use as 1160 its Source IP address in any PIM control messages which it transmits 1161 to other PEs in the same MVPN. 
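The RPF determination just described can be summarized by the following sketch (Python-like pseudocode). All type, field, and helper names in it are hypothetical, chosen only for exposition; they do not correspond to any particular implementation or to the encodings defined in [MVPN-BGP].

      from dataclasses import dataclass
      from typing import Optional, Set, Tuple

      @dataclass
      class RpfRoute:                        # hypothetical view of the "RPF route"
          is_vpn_ip: bool                    # learned from MP-BGP as a VPN-IP route?
          outgoing_interface: str
          rpf_source_address: Optional[str]  # the "RPF information", if present
          bgp_next_hop: Optional[str]

      def determine_rpf(route: RpfRoute, vrf_interfaces: Set[str],
                        mi_pmsi: str,
                        pim_adjacencies: Set[str]) -> Optional[Tuple[str, Optional[str]]]:
          # Not a VPN-IP route, or its outgoing interface is one of the VRF's
          # own interfaces: ordinary PIM RPF procedures apply unchanged,
          # which is signalled here by returning None.
          if not route.is_vpn_ip or route.outgoing_interface in vrf_interfaces:
              return None
          # Otherwise the RPF interface is the MI-PMSI of the MVPN.  The RPF
          # neighbor is taken from the RPF information if it was distributed
          # with the VPN-IP route, and from the BGP Next Hop otherwise (the
          # latter works only if that next hop is a PIM adjacency over the
          # MI-PMSI, which may not hold in the multi-AS case).
          if route.rpf_source_address is not None:
              return mi_pmsi, route.rpf_source_address
          if route.bgp_next_hop in pim_adjacencies:
              return mi_pmsi, route.bgp_next_hop
          return mi_pmsi, None               # RPF check cannot succeed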
1163 When a PE has determined that the RPF interface for a particular C- 1164 address is the MI-PMSI, it must look up the RPF information that was 1165 distributed along with the VPN-IP address corresponding to that C- 1166 address. The IP address in this RPF information will be considered 1167 to be the IP address of the RPF adjacency for the C-address. 1169 If the RPF information is not present, but the "BGP Next Hop" for the 1170 C-address is one of the PEs that is a PIM adjacency over the MI-PMSI, 1171 then that PE should be treated as the RPF adjacency for that C- 1172 address. However, if the MVPN spans multiple Autonomous Systems, the 1173 BGP Next Hop might not be a PIM adjacency, and if that is the case 1174 the RPF check will not succeed unless the RPF information is used. 1176 5.3. Use of BGP for Carrying C-Multicast Routing 1178 It is possible to use BGP to carry C-multicast routing information 1179 from PE to PE, dispensing entirely with the transmission of C- 1180 Join/Prune messages from PE to PE. This section describes the 1181 procedures for carrying intra-AS multicast routing information. 1182 Inter-AS procedures are described in section 8. 1184 5.3.1. Sending BGP Updates 1186 The MCAST-VPN address family is used for this purpose. MCAST-VPN 1187 routes used for the purpose of carrying C-multicast routing 1188 information are distinguished from those used for the purpose of 1189 carrying auto-discovery information by means of a "route type" field 1190 which is encoded into the NLRI. The following information is 1191 required in BGP to advertise the MVPN routing information. The NLRI 1192 contains: 1194 - The type of C-multicast route. 1196 There are two types: 1198 * source tree join 1200 * shared tree join 1202 - The RD configured for the MVPN on the PE that is advertising 1203 the information. This is required to uniquely identify the <C-S, C-G>, as the addresses could overlap between different 1205 MVPNs. 1207 - The C-Source address. (Omitted if the route type is "shared tree 1208 join") 1210 - The C-Group address. 1212 - The RD from the VPN-IP route to the C-source. 1214 That is, the route to the C-source is looked up in the local 1215 unicast VRF associated with the CE-PE interface over which the 1216 C-multicast control packet arrived. The corresponding VPN-IP 1217 route is then examined, and the RD from that route is placed into 1218 the C-multicast route. 1220 Note that this RD is NOT necessarily one which is configured on 1221 the local PE. Rather it is one which is configured on the remote 1222 PE that is on the path to the C-source. 1224 The following attribute must also be included: 1226 - The upstream multicast hop. 1228 If a PE receives a C-Join (*, G) from a CE, the C-source is 1229 considered to be the C-RP for the particular C-G. When the C- 1230 multicast route represents a "shared tree join", it is presumed 1231 that the root of the tree (e.g., the RP) is determined by some 1232 means outside the scope of this specification. 1234 When the PE processes a C-PIM Join/Prune message, the route to 1235 the C-source is looked up in the local unicast VRF associated 1236 with the CE-PE interface over which the C-multicast control 1237 packet arrived. The corresponding VPN-IP route is then examined. 1238 If the AS specified therein is the local AS, or if no AS is 1239 specified therein, then the PE specified therein becomes the 1240 upstream multicast hop.
If the AS specified therein is a remote 1241 AS, the BGP next hop on the route to the MVPN Auto-Discovery 1242 route advertised by the remote AS becomes the upstream multicast 1243 hop. 1245 N.B.: It is possible that there is more than one unicast VPN-IP 1246 route to the C-source. In this case, the route that was 1247 installed in the VRF is not necessarily the route that must be 1248 chosen by the PE. In order to choose the proper route, the 1249 procedures of section 9 MUST be followed. 1251 The upstream multicast hop is identified in an Extended Communities 1252 attribute to facilitate the optional use of filters which can prevent 1253 the distribution of the update to BGP speakers other than the 1254 upstream multicast hop. 1256 When a PE distributes this information via BGP, it must include a 1257 Route Import Extended Communities attribute learned from the RPF 1258 information. 1260 Note that for these procedures to work the VPN-IP route MUST contain 1261 the RPF information. 1263 Note that there is no C-multicast route corresponding to the PIM 1264 function of pruning a source off the shared tree when a PE switches 1265 from a <C-*, C-G> tree to a <C-S, C-G> tree. Section 9 of this 1266 document specifies a mandatory procedure that ensures that if any PE 1267 joins a <C-S, C-G> source tree, all other PEs that have joined or 1268 will join the <C-*, C-G> shared tree will also join the <C-S, C-G> 1269 source tree. This eliminates the need for a C-multicast route that 1270 prunes C-S off the shared tree when switching from the <C-*, C-G> tree to the <C-S, C-G> tree. 1273 5.3.2. Explicit Tracking 1275 Note that the upstream multicast hop is NOT part of the NLRI in the 1276 C-multicast BGP routes. This means that if several PEs join the same 1277 C-tree, the BGP routes they distribute to do so are regarded by BGP 1278 as comparable routes, and only one will be installed. If a route 1279 reflector is being used, this further means that the PE which is used 1280 to reach the C-source will know only that one or more of the other 1281 PEs have joined the tree, but it won't know which one. That is, this 1282 BGP update mechanism does not provide "explicit tracking". Explicit 1283 tracking is not provided by default because it increases the amount 1284 of state needed and thus decreases scalability. Also, as 1285 constructing the C-PIM messages to send "upstream" for a given tree 1286 does not depend on knowing all the PEs that are downstream on that 1287 tree, there is no reason for the C-multicast route type updates to 1288 provide explicit tracking. 1290 There are some cases in which explicit tracking is necessary in order 1291 for the PEs to set up certain kinds of P-trees. There are other 1292 cases in which explicit tracking is desirable in order to determine 1293 how to optimally aggregate multicast flows onto a given aggregate 1294 tree. As these functions have to do with the setting up of 1295 infrastructure in the P-network, rather than with the dissemination 1296 of C-multicast routing information, any explicit tracking that is 1297 necessary is handled by sending the "source active" A-D routes that 1298 are described in sections 9 and 10. Detailed procedures for turning 1299 on explicit tracking can be found in [MVPN-BGP]. 1301 5.3.3. Withdrawing BGP Updates 1303 A PE removes itself from a C-multicast tree (shared or source) by 1304 withdrawing the corresponding BGP update.
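The following sketch ties together the route contents listed in section 5.3.1 with the advertise/withdraw semantics described here. Everything in it, including the class, field and helper names and the route-type values, is an illustrative assumption; the actual NLRI encoding and procedures are defined in [MVPN-BGP], not in this sketch.

      from dataclasses import dataclass
      from typing import Optional

      SOURCE_TREE_JOIN = 1      # illustrative values only; the real route-type
      SHARED_TREE_JOIN = 2      # code points are assigned in [MVPN-BGP]

      @dataclass(frozen=True)
      class CMulticastRoute:
          route_type: int             # source tree join or shared tree join
          local_rd: bytes             # RD configured for the MVPN on this PE
          c_source: Optional[str]     # omitted for a shared tree join
          c_group: str
          source_rd: bytes            # RD from the VPN-IP route to the C-source
          upstream_hop: str           # carried in an Extended Communities attribute
          route_import_ec: bytes      # Route Import EC learned from the RPF info

      class CMulticastRouting:
          """Joining a C-tree advertises a route; leaving withdraws it."""

          def __init__(self, bgp, vrf):
              self.bgp, self.vrf = bgp, vrf      # hypothetical handles

          def join(self, c_source_or_rp, c_group, shared_tree=False):
              # The VPN-IP route to the C-source (or to the C-RP, for a shared
              # tree join) supplies the RD, the upstream multicast hop and the
              # Route Import Extended Community.
              r = self.vrf.lookup(c_source_or_rp)
              route = CMulticastRoute(
                  route_type=SHARED_TREE_JOIN if shared_tree else SOURCE_TREE_JOIN,
                  local_rd=self.vrf.rd,
                  c_source=None if shared_tree else c_source_or_rp,
                  c_group=c_group,
                  source_rd=r.rd,
                  upstream_hop=r.upstream_multicast_hop,
                  route_import_ec=r.route_import_ec)
              self.bgp.advertise(route)
              return route

          def leave(self, route):
              # A PE removes itself from the C-tree simply by withdrawing
              # the corresponding BGP route.
              self.bgp.withdraw(route)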
1306 If a PE has pruned a C-source from a shared C-multicast tree, and it 1307 needs to "unprune" that source from that tree, it does so by 1308 withdrawing the route that pruned the source from the tree. 1310 6. I-PMSI Instantiation 1312 This section describes how tunnels in the SP network can be used to 1313 instantiate an I-PMSI for an MVPN on a PE. When C-multicast data is 1314 delivered on an I-PMSI, the data will go to all PEs that are on the 1315 path to receivers for that C-group, but may also go to PEs that are 1316 not on the path to receivers for that C-group. 1318 The tunnels which instantiate I-PMSIs can be either PE-PE unicast 1319 tunnels or P-multicast trees. When PE-PE unicast tunnels are used the 1320 PMSI is said to be instantiated using ingress replication. 1322 [Editor's Note: MD trees described in [ROSEN-8, MVPN-BASE] are an 1323 example of P-multicast trees. Also Aggregate Trees described in 1324 [RAGGARWA-MCAST] are an example of P-multicast trees.] 1326 6.1. MVPN Membership and Egress PE Auto-Discovery 1328 As described in section 4 a PE discovers the MVPN membership 1329 information of other PEs using BGP auto-discovery mechanisms or using 1330 a mechanism that instantiates a MI-PMSI interface. When a PE supports 1331 only a UI-PMSI service for an MVPN, it MUST rely on the BGP auto- 1332 discovery mechanisms for discovering this information. This 1333 information also results in a PE in the sender sites set discovering 1334 the leaves of the P-multicast tree, which are the egress PEs that 1335 have sites in the receiver sites set in one or more MVPNs mapped onto 1336 the tree. 1338 6.1.1. Auto-Discovery for Ingress Replication 1340 In order for a PE to use Unicast Tunnels to send a C-multicast data 1341 packet for a particular MVPN to a set of remote PEs, the remote PEs 1342 must be able to correctly decapsulate such packets and to assign each 1343 one to the proper MVPN. This requires that the encapsulation used for 1344 sending packets through the tunnel have demultiplexing information 1345 which the receiver can associate with a particular MVPN. 1347 If ingress replication is being used for an MVPN, the PEs announce 1348 this as part of the BGP based MVPN membership auto-discovery process, 1349 described in section 4. The PMSI tunnel attribute specifies ingress 1350 replication. The demultiplexor value is a downstream-assigned MPLS 1351 label (i.e., assigned by the PE that originated the A-D route, to be 1352 used by other PEs when they send multicast packets on a unicast 1353 tunnel to that PE). 1355 Other demultiplexing procedures for unicast are under consideration. 1357 6.1.2. Auto-Discovery for P-Multicast Trees 1359 A PE announces the P-multicast technology it supports for a specified 1360 MVPN, as part of the BGP MVPN membership discovery. This allows other 1361 PEs to determine the P-multicast technology they can use for building 1362 P-multicast trees to instantiate an I-PMSI. If a PE has a default 1363 tree instantiation of an I-PMSI, it also announces the tree 1364 identifier as part of the auto-discovery, as well as announcing its 1365 aggregation capability. 1367 The announcement of a tree identifier at discovery time is only 1368 possible if the tree already exists (e.g., a preconfigured "traffic 1369 engineered" tunnel), or if the tree can be constructed dynamically 1370 without any PE having to know in advance all the other PEs on the 1371 tree (e.g., the tree is created by receiver-initiated joins). 1373 6.2. 
C-Multicast Routing Information Exchange 1375 When a PE doesn't support the use of a MI-PMSI for a given MVPN, it 1376 MUST either unicast MVPN routing information using PIM or else use 1377 BGP for exchanging the MVPN routing information. 1379 6.3. Aggregation 1381 A P-multicast tree can be used to instantiate a PMSI service for only 1382 one MVPN or for more than one MVPN. When a P-multicast tree is shared 1383 across multiple MVPNs it is termed an Aggregate Tree [RAGGARWA- 1384 MCAST]. The procedures described in this document allow a single SP 1385 multicast tree to be shared across multiple MVPNs. The procedures 1386 that are specific to aggregation are optional and are explicitly 1387 pointed out. Unless otherwise specified, a P-multicast tree technology 1388 supports aggregation. 1390 Aggregate Trees allow a single P-multicast tree to be used across 1391 multiple MVPNs and hence state in the SP core grows per-set-of-MVPNs 1392 and not per MVPN. Depending on the congruency of the aggregated 1393 MVPNs, this may result in trading off optimality of multicast 1394 routing. 1396 An Aggregate Tree can be used by a PE to provide a UI-PMSI or MI- 1397 PMSI service for more than one MVPN. When this is the case the 1398 Aggregate Tree is said to have an inclusive mapping. 1400 6.3.1. Aggregate Tree Leaf Discovery 1402 BGP MVPN membership discovery allows a PE to determine the different 1403 Aggregate Trees that it should create and the MVPNs that should be 1404 mapped onto each such tree. The leaves of an Aggregate Tree are 1405 determined by the PEs, supporting aggregation, that belong to all the 1406 MVPNs that are mapped onto the tree. 1408 If an Aggregate Tree is used to instantiate one or more S-PMSIs, then 1409 it may be desirable for the PE at the root of the tree to know which 1410 PEs (in its MVPN) are receivers on that tree. This enables the PE to 1411 decide when to aggregate two S-PMSIs, based on congruence (as 1412 discussed in the next section). Thus explicit tracking may be 1413 required. Since the procedures for disseminating C-multicast routes 1414 do not provide explicit tracking, a type of A-D route known as a 1415 "Leaf A-D Route" is used. The PE which wants to assign a particular 1416 C-multicast flow to a particular Aggregate Tree can send an A-D route 1417 which elicits Leaf A-D routes from the PEs that need to receive that 1418 C-multicast flow. This provides the explicit tracking information 1419 needed to support the aggregation methodology discussed in the next 1420 section. 1422 6.3.2. Aggregation Methodology 1424 This document does not mandate the implementation of any 1425 particular set of rules for determining whether or not the PMSIs of 1426 two particular MVPNs are to be instantiated by the same Aggregate 1427 Tree. This determination can be made by implementation-specific 1428 heuristics, by configuration, or even perhaps by the use of offline 1429 tools. 1431 It is the intention of this document that the control procedures will 1432 always result in all the PEs of an MVPN agreeing on the PMSIs which 1433 are to be used and on the tunnels used to instantiate those PMSIs. 1435 This section discusses potential methodologies with respect to 1436 aggregation. 1438 The "congruency" of aggregation is defined by the amount of overlap 1439 in the leaves of the customer trees that are aggregated on an SP tree.
1440 For Aggregate Trees with an inclusive mapping the congruency depends 1441 on the overlap in the membership of the MVPNs that are aggregated on 1442 the tree. If there is complete overlap, i.e., all MVPNs have exactly 1443 the same sites, aggregation is perfectly congruent. As the overlap 1444 between the MVPNs that are aggregated decreases, i.e., as the number of 1445 sites that are common across all the MVPNs decreases, the congruency 1446 decreases. 1448 If aggregation is done such that it is not perfectly congruent, a PE 1449 may receive traffic for MVPNs to which it doesn't belong. As the 1450 amount of multicast traffic in these unwanted MVPNs increases, 1451 aggregation becomes less optimal with respect to delivered traffic. 1452 Hence there is a tradeoff between reducing state and delivering 1453 unwanted traffic. 1455 An implementation should provide knobs to control the congruency of 1456 aggregation. These knobs are implementation dependent. Configuring 1457 the percentage of sites that MVPNs must have in common to be 1458 aggregated is an example of such a knob. This will allow an SP to 1459 deploy aggregation depending on the MVPN membership and traffic 1460 profiles in its network. If different PEs or servers are setting up 1461 Aggregate Trees, this will also allow a service provider to engineer 1462 the maximum number of unwanted MVPNs that a particular PE may receive 1463 traffic for. 1465 6.3.3. Encapsulation of the Aggregate Tree 1467 An Aggregate Tree may use an IP/GRE encapsulation or an MPLS 1468 encapsulation. The protocol type in the IP/GRE header in the former 1469 case and the protocol type in the data link header in the latter need 1470 further explanation. This will be specified in a separate document. 1472 6.3.4. Demultiplexing C-multicast traffic 1474 When multiple MVPNs are aggregated onto one P-Multicast tree, 1475 determining the tree over which the packet is received is not 1476 sufficient to determine the MVPN to which the packet belongs. The 1477 packet must also carry some demultiplexing information to allow the 1478 egress PEs to determine the MVPN to which the packet belongs. Since 1479 the packet has been multicast through the P network, any given 1480 demultiplexing value must have the same meaning to all the egress 1481 PEs. The demultiplexing value is an MPLS label that corresponds to 1482 the multicast VRF to which the packet belongs. This label is placed 1483 by the ingress PE immediately beneath the P-Multicast tree header. 1484 Each of the egress PEs must be able to associate this MPLS label with 1485 the same MVPN. If downstream label assignment were used, this would 1486 require all the egress PEs in the MVPN to agree on a common label for 1487 the MVPN. Instead, the MPLS label is upstream-assigned [MPLS- 1488 UPSTREAM-LABEL]. The label bindings are advertised via BGP updates 1489 originated by the ingress PEs. 1491 This procedure requires each egress PE to support a separate label 1492 space for every other PE. The egress PEs create a forwarding entry 1493 for the upstream assigned MPLS label, allocated by the ingress PE, in 1494 this label space. Hence when the egress PE receives a packet over an 1495 Aggregate Tree, it first determines the tree that the packet was 1496 received over. The tree identifier determines the label space in 1497 which the upstream assigned MPLS label lookup has to be performed.
1498 The same label space may be used for all P-multicast trees rooted at 1499 the same ingress PE, or an implementation may decide to use a 1500 separate label space for every P-multicast tree. 1502 The encapsulation format is either MPLS or MPLS-in-something (e.g., 1503 MPLS-in-GRE [MPLS-IP]). When MPLS is used, this label will appear 1504 immediately below the label that identifies the P-multicast tree. 1505 When MPLS-in-GRE is used, this label will be the top MPLS label that 1506 appears when the GRE header is stripped off. 1508 When IP encapsulation is used for the P-multicast Tree, whatever 1509 information that particular encapsulation format uses for identifying 1510 a particular tunnel is used to determine the label space in which the 1511 MPLS label is looked up. 1513 If the P-multicast tree uses MPLS encapsulation, the P-multicast tree 1514 is itself identified by an MPLS label. The egress PE MUST NOT 1515 advertise IMPLICIT NULL or EXPLICIT NULL for that tree. Once the 1516 label representing the tree is popped off the MPLS label stack, the 1517 next label is the demultiplexing information that allows the proper 1518 MVPN to be determined. 1520 This specification requires that, to support this sort of 1521 aggregation, there be at least one upstream-assigned label per MVPN. 1522 It does not require that there be only one. For example, an ingress 1523 PE could assign a unique label to each C-(S,G). (This could be done 1524 using the same technique that is used to assign a particular C-(S,G) 1525 to an S-PMSI, see section 7.2.) 1527 6.4. Mapping Received Packets to MVPNs 1529 When an egress PE receives a C-multicast data packet over a P- 1530 multicast tree, it needs to forward the packet to the CEs that have 1531 receivers in the packet's C-multicast group. It also needs to 1532 determine the RPF interface for the C-multicast data packet. In order 1533 to do this, the egress PE needs to determine the tunnel that the 1534 packet was received on. The PE can then determine the MVPN that the 1535 packet belongs to and, if needed, do any further lookups that are 1536 required to forward the packet. 1538 6.4.1. Unicast Tunnels 1540 When ingress replication is used, the MVPN to which the received C- 1541 multicast data packet belongs can be determined by the MPLS label 1542 that was allocated by the egress PE. This label is distributed by the 1543 egress PE. This also determines the RPF interface for the C-multicast 1544 data packet. 1546 6.4.2. Non-Aggregated P-Multicast Trees 1548 If a P-multicast tree is associated with only one MVPN, determining 1549 the P-multicast tree on which a packet was received is sufficient to 1550 determine the packet's MVPN. All that the egress PE needs to know is 1551 the MVPN the P-multicast tree is associated with. 1553 There are different ways in which the egress PE can learn this 1554 association: 1556 a) Configuration. The P-multicast tree that a particular MVPN 1557 belongs to is configured on each PE. 1559 [Editor's Note: PIM-SM Default MD trees in [ROSEN-8] and 1560 [MVPN-BASE] are examples of configuring the P-multicast tree 1561 and MVPN association] 1563 b) BGP based advertisement of the P-multicast tree - MVPN mapping 1564 after the root of the tree discovers the leaves of the tree. 1565 The root of the tree sets up the tree after discovering each of 1566 the PEs that belong to the MVPN. It then advertises the P- 1567 multicast tree - MVPN mapping to each of the leaves. This 1568 mechanism can be used with both source initiated trees [e.g.
RSVP-TE P2MP LSPs] and receiver initiated trees [e.g. PIM 1570 trees]. 1572 [Editor's Note: Aggregate tree advertisements in [RAGGARWA- 1573 MCAST] are examples of this.] 1575 c) BGP based advertisement of the P-multicast tree - MVPN mapping 1576 as part of the MVPN membership discovery. The root of the tree 1577 advertises, to each of the other PEs that belong to the MVPN, 1578 the P-multicast tree that the MVPN is associated with. This 1579 implies that the root doesn't need to know the leaves of the 1580 tree beforehand. This is possible only for receiver initiated 1581 trees, e.g., PIM-based trees. 1583 [Editor's Note: PIM-SSM discovery in [ROSEN-8] is an example of 1584 the above] 1586 Both (b) and (c) require the BGP based advertisement to contain the 1587 P-multicast tree identifier. This identifier is encoded as a BGP 1588 attribute and contains the following elements: 1590 - Tunnel Type. 1592 - Tunnel identifier. The semantics of the identifier are determined 1593 by the tunnel type. 1595 6.4.3. Aggregate P-Multicast Trees 1597 Once a PE sets up an Aggregate Tree, it needs to announce the C- 1598 multicast groups being mapped to this tree to other PEs in the 1599 network. This procedure is referred to as Aggregate Tree discovery. 1600 For an Aggregate Tree with an inclusive mapping, this discovery 1601 implies announcing: 1603 - The mapping of all MVPNs mapped to the Tree. 1605 - For each MVPN mapped onto the tree, the inner label allocated for 1606 it by the ingress PE. The use of this label is explained in the 1607 demultiplexing procedures of section 6.3.4. 1609 - The P-multicast tree identifier. 1611 The egress PE creates a logical interface corresponding to the tree 1612 identifier. This interface is the RPF interface for all the <C-S, C-G> entries mapped to that tree. 1615 When PIM is used to set up P-multicast trees, the egress PE also joins 1616 the P-Group Address corresponding to the tree. This results in the setup 1617 of the PIM P-multicast tree. 1619 6.5. I-PMSI Instantiation Using Ingress Replication 1621 As described in section 3, a PMSI can be instantiated using Unicast 1622 Tunnels between the PEs that are participating in the MVPN. In this 1623 mechanism the ingress PE replicates a C-multicast data packet 1624 belonging to a particular MVPN and sends a copy to all or a subset of 1625 the PEs that belong to the MVPN. A copy of the packet is tunnelled to 1626 a remote PE over a Unicast Tunnel to that PE. IP/GRE Tunnels 1627 or MPLS LSPs are examples of unicast tunnels that may be used. Note 1628 that the same Unicast Tunnel can be used to transport packets 1629 belonging to different MVPNs. 1631 Ingress replication can be used to instantiate a UI-PMSI. The PE sets 1632 up unicast tunnels to each of the remote PEs that support ingress 1633 replication. For a given MVPN, all C-multicast data packets are sent 1634 to each of the remote PEs in the MVPN that support ingress 1635 replication. Hence a remote PE may receive C-multicast data packets 1636 for a group even if it doesn't have any receivers in that group. 1638 Ingress replication can also be used to instantiate a MI-PMSI. In 1639 this case each PE has a mesh of unicast tunnels to every other PE in 1640 that MVPN. 1642 However, when ingress replication is used, it is recommended that only 1643 S-PMSIs be used. Instantiation of S-PMSIs with ingress replication is 1644 described in section 7.1.
Note that this requires the use of 1645 explicit tracking, i.e., a PE must know which of the other PEs have 1646 receivers for each C-multicast tree. 1648 6.6. Establishing P-Multicast Trees 1650 It is believed that the architecture outlined in this document places 1651 no limitations on the protocols used to instantiate P-multicast 1652 trees. However, the only protocols being explicitly considered are 1653 PIM-SM, PIM-SSM, PIM-Bidir, RSVP-TE, and mLDP. 1655 A P-multicast tree can be either a source tree or a shared tree. A 1656 source tree is used to carry traffic only for the multicast VRFs that 1657 exist locally on the root of the tree i.e. for which the root has 1658 local CEs. The root is a PE router. Source P-multicast trees can be 1659 instantiated using PIM-SM, PIM-SSM, RSVP-TE P2MP LSPs, and mLDP P2MP 1660 LSPs. 1662 A shared tree on the other hand can be used to carry traffic 1663 belonging to VRFs that exist on other PEs as well. The root of a 1664 shared tree is not necessarily one of the PEs in the MVPN. All PEs 1665 that use the shared tree will send MVPN data packets to the root of 1666 the shared tree; if PIM is being used as the control protocol, PIM 1667 control packets also get sent to the root of the shared tree. This 1668 may require an unicast tunnel between each of these PEs and the root. 1669 The root will then send them on the shared tree and all the PEs that 1670 are leaves of the shared tree will receive the packets. For example a 1671 RP based PIM-SM tree would be a shared tree. Shared trees can be 1672 instantiated using PIM-SM, PIM-SSM, PIM-Bidir, RSVP-TE P2MP LSPs, 1673 mLDP P2MP LSPs, and mLDP MP2MP LSPs.. Aggregation support for 1674 bidirectional P-trees (i.e., PIM-Bidir trees or mLDP MP2MP trees) is 1675 for further study. Shared trees require all the PEs to discover the 1676 root of the shared tree for a MVPN. To achieve this the root of a 1677 shared tree advertises as part of the BGP based MVPN membership 1678 discovery: 1680 - The capability to setup a shared tree for a specified MVPN. 1682 - A downstream assigned label that is to be used by each PE to 1683 encapsulate a MVPN data packet, when they send this packet to the 1684 root of the shared tree. 1686 - A downstream assigned label that is to be used by each PE to 1687 encapsulate a MVPN control packet, when they send this packet to 1688 the root of the shared tree. 1690 Both a source tree and a shared tree can be used to instantiate an 1691 I-PMSI. If a source tree is used to instantiate an UI-PMSI for a 1692 MVPN, all the other PEs that belong to the MVPN, must be leaves of 1693 the source tree. If a shared tree is used to instantiate a UI-PMSI 1694 for a MVPN, all the PEs that are members of the MVPN must be leaves 1695 of the shared tree. 1697 6.7. RSVP-TE P2MP LSPs 1699 This section describes procedures that are specific to the usage of 1700 RSVP-TE P2MP LSPs for instantiating a UI-PMSI. The RSVP-TE P2MP LSP 1701 can be either a source tree or a shared tree. Procedures in [RSVP- 1702 P2MP] are used to signal the LSP. The LSP is signalled after the root 1703 of the LSP discovers the leaves. The egress PEs are discovered using 1704 the MVPN membership procedures described in section 4. RSVP-TE P2MP 1705 LSPs can optionally support aggregation. 1707 6.7.1. P2MP TE LSP Tunnel - MVPN Mapping 1709 P2MP TE LSP Tunnel to MVPN mapping can be learned at the egress PEs 1710 using either option (a) or option (b) described in section 6.4.2. 1711 Option (b) i.e. 
BGP based advertisements of the P2MP TE LSP Tunnel - 1712 MVPN mapping require that the root of the tree include the P2MP TE 1713 LSP Tunnel identifier as the tunnel identifier in the BGP 1714 advertisements. This identifier contains the following information 1715 elements: 1716 - The type of the tunnel is set to RSVP-TE P2MP Tunnel 1718 - RSVP-TE P2MP Tunnel's SESSION Object 1720 - Optionally, the RSVP-TE P2MP LSP's SENDER_TEMPLATE Object. This object 1721 is included when it is desired to identify a particular P2MP TE 1722 LSP. 1724 6.7.2. Demultiplexing C-Multicast Data Packets 1726 Demultiplexing the C-multicast data packets at the egress PE follows 1727 the procedures described in section 6.3.4. The RSVP-TE P2MP LSP Tunnel 1728 must be signaled with penultimate-hop-popping (PHP) off. Signalling 1729 the P2MP TE LSP Tunnel with PHP off requires an extension to RSVP-TE 1730 which will be described later. 1732 7. Optimizing Multicast Distribution via S-PMSIs 1734 Whenever a particular multicast stream is being sent on an I-PMSI, it 1735 is likely that the data of that stream is being sent to PEs that do 1736 not require it. If a particular stream has a significant amount of 1737 traffic, it may be beneficial to move it to an S-PMSI which includes 1738 only those PEs that are transmitters and/or receivers (or at least 1739 includes fewer PEs that are neither). 1741 If explicit tracking is being done, S-PMSI creation can also be 1742 triggered on other criteria. For instance, there could be a "pseudo 1743 wasted bandwidth" criterion: switching to an S-PMSI would be done if 1744 the bandwidth multiplied by the number of uninterested PEs (PEs that 1745 are receiving the stream but have no receivers) is above a specified 1746 threshold. The motivation is that (a) the total bandwidth wasted by 1747 many sparsely subscribed low-bandwidth groups may be large, and (b) 1748 there is no point in moving a high-bandwidth group to an S-PMSI if all 1749 the PEs have receivers for it. 1751 Switching to an S-PMSI requires the root of the S-PMSI to: 1753 a) Discover the egress PEs that will receive traffic using the S- 1754 PMSI, if this is required to set up the S-PMSI. There are two cases in 1755 which the PE needs to know the egress PEs: 1757 - If the tunnel is a source initiated tree, such as an RSVP-TE P2MP 1758 Tunnel, the PE needs to know the leaves of the tree before it can 1759 instantiate the S-PMSI. 1761 - If a PE instantiates multiple S-PMSIs, belonging to different 1762 MVPNs, using one P-multicast tree, such a tree is termed an 1763 Aggregate Tree with a selective mapping. The setting up of such 1764 an Aggregate Tree requires the ingress PE to know all the other 1765 PEs that have receivers for multicast groups that are mapped onto 1766 the tree. 1768 Discovering the egress PEs that will receive traffic using the S-PMSI 1769 requires doing explicit tracking for the particular (C-S, C-G) if the 1770 root hasn't already been doing it. 1772 b) Set up the S-PMSI, and 1774 c) If required, signal the binding of the multicast stream to the S- 1775 PMSI to the leaves of the tunnel used to instantiate the S-PMSI. 1777 Step (c) is required only when the tunnel is a P-multicast tree. We 1778 specify two methods for achieving this. 1780 7.1. S-PMSI Instantiation Using Ingress Replication 1782 As described in section 6.5, ingress replication can be used to 1783 instantiate a UI-PMSI. However, this can result in a PE receiving 1784 packets for a multicast group for which it doesn't have any 1785 receivers.
This can be avoided if the ingress PE tracks the remote 1786 PEs which have receivers in a particular C-multicast group. In order 1787 to do this it needs to receive C-Joins from each of the remote PEs. 1788 It then replicates the C-multicast data packet and sends it to only 1789 those egress PEs which are on the path to a receiver of that C-group. 1790 It is possible that each PE that is using ingress replication 1791 instantiates only S-PMSIs. It is also possible that some PEs 1792 instantiate UI-PMSIs while others instantiate only S-PMSIs. In both 1793 these cases the PE MUST either unicast MVPN routing information using 1794 PIM or use BGP for exchanging the MVPN routing information. This is 1795 because there may be no MI-PMSI available for it to exchange MVPN 1796 routing information. 1798 Note that the use of ingress replication doesn't require any extra 1799 procedures for signaling the binding of the S-PMSI from the ingress 1800 PE to the egress PEs. The procedures described for I-PMSIs are 1801 sufficient. 1803 7.2. Protocol for Switching to S-PMSIs 1805 We describe two protocols for switching to S-PMSIs. These protocols 1806 can be used when the tunnel that instantiates the S-PMSI is a P- 1807 multicast tree. 1809 7.2.1. A UDP-based Protocol for Switching to S-PMSIs 1811 This procedure can be used for any MVPN which has an MI-PMSI. 1812 Traffic from all multicast streams in a given MPVN is sent, by 1813 default, on the MI-PMSI. Consider a single multicast stream within a 1814 given MVPN, and consider a PE which is attached to a source of 1815 multicast traffic for that stream. The PE can be configured to move 1816 the stream from the MI-PMSI to an S-PMSI if certain configurable 1817 conditions are met. To do this, it needs to inform all the PEs which 1818 attach to receivers for stream. These PEs need to start listening 1819 for traffic on the S-PMSI, and the transmitting PE may start sending 1820 traffic on the S-PMSI when it is reasonably certain that all 1821 receiving PEs are listening on the S-PMSI. 1823 7.2.1.1. Binding a Stream to an S-PMSI 1825 When a PE which attaches to a transmitter for a particular multicast 1826 stream notices that the conditions for moving the stream to an S-PMSI 1827 are met, it begins to periodically send an "S-PMSI Join Message" on 1828 the MI-PMSI. The S-PMSI Join is a UDP-encapsulated message whose 1829 destination address is ALL-PIM-ROUTERS (224.0.0.13), and whose 1830 destination port is 3232. 1832 The S-PMSI Join Message contains the following information: 1834 - An identifier for the particular multicast stream which is to be 1835 bound to the S-PMSI. This can be represented as an (S,G) pair. 1837 - An identifier for the particular S-PMSI to which the stream is to 1838 be bound. This identifier is a structured field which includes 1839 the following information: 1841 * The type of tunnel used to instantiate the S-PMSI 1843 * An identifier for the tunnel. The form of the identifier 1844 will depend upon the tunnel type. The combination of tunnel 1845 identifier and tunnel type should contain enough information 1846 to enable all the PEs to "join" the tunnel and receive 1847 messages from it. 1849 * Any demultiplexing information needed by the tunnel 1850 encapsulation protocol to identify the particular S-PMSI. 1851 This allows a single tunnel to aggregate multiple S-PMSIs. 1852 If a particular tunnel is not aggregating multiple S-PMSIs, 1853 then no demultiplexing information is needed. 
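A minimal sketch of how a transmitting PE might emit such an S-PMSI Join is shown below. Since section 7.2.1.2 leaves the packet format and constants to be specified, the payload encoding here is a placeholder assumption; only the destination address (ALL-PIM-ROUTERS, 224.0.0.13) and the UDP destination port 3232 are taken from the text above.

      import json
      import socket

      ALL_PIM_ROUTERS = "224.0.0.13"
      S_PMSI_JOIN_PORT = 3232

      def send_s_pmsi_join(c_source, c_group, tunnel_type, tunnel_id,
                           demux_info=None, ttl=1):
          # Placeholder payload: the real packet format is deferred to
          # section 7.2.1.2, so a JSON encoding is used purely to show the
          # information elements listed above (the stream identifier and
          # the S-PMSI identifier, including optional demultiplexing info).
          payload = json.dumps({
              "stream": {"source": c_source, "group": c_group},
              "s_pmsi": {"tunnel_type": tunnel_type,
                         "tunnel_id": tunnel_id,
                         "demux": demux_info}}).encode()
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          # In an MVPN the message is carried over the MI-PMSI of the MVPN,
          # not over a plain LAN; the multicast TTL set here is only for
          # the purposes of this illustration.
          sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
          sock.sendto(payload, (ALL_PIM_ROUTERS, S_PMSI_JOIN_PORT))
          sock.close()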
1855 A PE router which is not connected to a receiver will still receive 1856 the S-PMSI Joins, and MAY cache the information contained therein. 1857 Then, if the PE later finds that it is attached to a receiver, it can 1858 immediately start listening to the S-PMSI. 1860 Upon receiving the S-PMSI Join, PE routers connected to receivers for 1861 the specified stream will take whatever action is necessary to start 1862 receiving multicast data packets on the S-PMSI. The precise action 1863 taken will depend upon the tunnel type. 1865 After a configurable delay, the PE router which is sending the S-PMSI 1866 Joins will start transmitting the stream's data packets on the S- 1867 PMSI. 1869 When the pre-configured conditions are no longer met for a particular 1870 stream, e.g., the traffic stops, the PE router connected to the source 1871 stops announcing S-PMSI Joins for that stream. Any PE that does not 1872 receive, over a configurable interval, an S-PMSI Join for a 1873 particular stream will stop listening to the S-PMSI. 1875 7.2.1.2. Packet Formats and Constants 1877 To be included. 1879 7.2.2. A BGP-based Protocol for Switching to S-PMSIs 1881 This procedure can be used for a MVPN that is using either a UI-PMSI 1882 or a MI-PMSI. Consider a single multicast stream for a C-(S, G) 1883 within a given MVPN, and consider a PE which is attached to a source 1884 of multicast traffic for that stream. The PE can be configured to 1885 move the stream from the MI-PMSI or UI-PMSI to an S-PMSI if certain 1886 configurable conditions are met. Once a PE decides to move the C-(S, 1887 G) for a given MVPN to an S-PMSI, it needs to instantiate the S-PMSI 1888 using a tunnel and announce to all the egress PEs that are on the 1889 path to receivers of the C-(S, G) the binding of the S-PMSI to 1890 the C-(S, G). The announcement is done using BGP. Depending on the 1891 tunneling technology used, this announcement may be done before or 1892 after setting up the tunnel. The source and egress PEs have to switch 1893 to using the S-PMSI for the C-(S, G). 1895 7.2.2.1. Advertising C-(S, G) Binding to an S-PMSI using BGP 1897 The ingress PE informs all the PEs that are on the path to receivers 1898 of the C-(S, G) of the binding of the S-PMSI to the C-(S, G). The BGP 1899 announcement is done by sending a BGP update for the MCAST-VPN address 1900 family. An A-D route is used, containing the following information: 1902 a) IP address of the originating PE 1904 b) The RD configured locally for the MVPN. This is required to 1905 uniquely identify the <C-S, C-G>, as the addresses 1906 could overlap between different MVPNs. This is the same RD 1907 value used in the auto-discovery process. 1909 c) The C-Source address. This address can be a prefix in order to 1910 allow a range of C-Source addresses to be mapped to an 1911 Aggregate Tree. 1913 d) The C-Group address. This address can be a range in order to 1914 allow a range of C-Group addresses to be mapped to an Aggregate 1915 Tree. 1917 e) If the S-PMSI is instantiated using an Aggregate Tree with a 1918 selective mapping, an inner label allocated by the Aggregate 1919 Tree root for the <C-S, C-G>. This allows a single 1920 tunnel to aggregate multiple S-PMSIs. This is the upstream 1921 label whose usage is described in section 6.3.4. 1923 When a PE distributes this information via BGP, it must include the 1924 following: 1926 1. An identifier for the particular S-PMSI to which the stream is 1927 to be bound.
This identifier is a structured field which 1928 includes the following information: 1930 * The type of tunnel used to instantiate the S-PMSI 1932 * An identifier for the tunnel. The form of the identifier 1933 will depend upon the tunnel type. The combination of 1934 tunnel identifier and tunnel type should contain enough 1935 information to enable all the PEs to "join" the tunnel and 1936 receive messages from it. 1938 2. Route Target Extended Communities attribute. This is used as 1939 described in section 4. 1941 7.2.2.2. Explicit Tracking 1943 If the PE wants to enable explicit tracking for the specified flow, 1944 it also indicates this in the A-D route it uses to bind the flow to a 1945 particular S-PMSI. Then any PE which receives the A-D route will 1946 respond with a "Leaf A-D Route" in which it identifies itself as a 1947 receiver of the specified flow. The Leaf A-D route will be withdrawn 1948 when the PE is no longer a receiver for the flow. 1950 If the PE needs to enable explicit tracking for a flow before binding 1951 the flow to an S-PMSI, it can do so by sending an A-D route 1952 identifying the flow but not specifying an S-PMSI. This will elicit 1953 the Leaf A-D Routes. This is useful when the PE needs to know the 1954 receivers before selecting an S-PMSI. 1956 7.2.2.3. Switching to S-PMSI 1958 After the egress PEs receive the announcement, they set up their 1959 forwarding path to receive traffic on the S-PMSI if they have one or 1960 more receivers interested in the <C-S, C-G> bound to the S-PMSI. This 1961 involves changing the RPF interface for the relevant <C-S, C-G> 1962 entries to the interface that is used to instantiate the S-PMSI. If 1963 an Aggregate Tree is used to instantiate an S-PMSI, this also implies 1964 setting up the demultiplexing forwarding entries based on the inner 1965 label as described in section 6.3.4. The egress PEs may perform the 1966 switch to the S-PMSI once the advertisement from the ingress PE is 1967 received or wait for a preconfigured timer to do so. 1969 A source PE may use one of two approaches to decide when to start 1970 transmitting data on the S-PMSI. In the first approach, once the 1971 source PE instantiates the S-PMSI, it starts sending multicast 1972 packets for <C-S, C-G> entries mapped to the S-PMSI on both the S-PMSI as 1973 well as on the I-PMSI, which is currently used to send traffic for 1974 the <C-S, C-G>. After some preconfigured timer expires, the PE stops sending 1975 multicast packets for the <C-S, C-G> on the I-PMSI. In the second 1976 approach, after a certain pre-configured delay following the advertisement of the 1977 <C-S, C-G> entry bound to an S-PMSI, the source PE begins to send 1978 traffic on the S-PMSI. At this point it stops sending traffic for the 1979 <C-S, C-G> on the I-PMSI. This traffic is instead transmitted on the 1980 S-PMSI. 1982 7.3. Aggregation 1984 S-PMSIs can be aggregated on a P-multicast tree. The S-PMSI to C-(S, 1985 G) binding advertisement supports aggregation. Furthermore, the 1986 aggregation procedures of section 6.3 apply. It is also possible to 1987 aggregate both S-PMSIs and I-PMSIs on the same P-multicast tree. 1989 7.4. Instantiating the S-PMSI with a PIM Tree 1991 The procedures of section 7.2 tell a PE when it must start listening 1992 and stop listening to a particular S-PMSI. Those procedures also 1993 specify the method for instantiating the S-PMSI. In this section, we 1994 provide the procedures to be used when the S-PMSI is instantiated as 1995 a PIM tree. The PIM tree is created by the PIM P-instance.
1997 If a single PIM tree is being used to aggregate multiple S-PMSIs, 1998 then the PIM tree to which a given stream is bound may have already 1999 been joined by a given receiving PE. If the tree does not already 2000 exist, then the appropriate PIM procedures to create it must be 2001 executed in the P-instance. 2003 If the S-PMSI for a particular multicast stream is instantiated as a 2004 PIM-SM or PIM-Bidir tree, the S-PMSI identifier will specify the RP 2005 and the group P-address, and the PE routers which have receivers for 2006 that stream must build a shared tree toward the RP. 2008 If the S-PMSI is instantiated as a PIM-SSM tree, the PE routers build 2009 a source tree toward the PE router that is advertising the S-PMSI 2010 Join. The IP address of the root of the tree is the same as the source IP 2011 address which appears in the S-PMSI Join. In this case, the tunnel 2012 identifier in the S-PMSI Join will only need to specify a group P- 2013 address. 2015 The above procedures assume that each PE router has a set of group 2016 P-addresses that it can use for setting up the PIM trees. Each PE 2017 must be configured with this set of P-addresses. If PIM-SSM is used 2018 to set up the tunnels, then the PEs may be configured with overlapping sets of 2019 group P-addresses. If PIM-SSM is not used, then each PE must be 2020 configured with a unique set of group P-addresses (i.e., having no 2021 overlap with the set configured at any other PE router). The 2022 management of this set of addresses is thus greatly simplified when 2023 PIM-SSM is used, so the use of PIM-SSM is strongly recommended 2024 whenever PIM trees are used to instantiate S-PMSIs. 2026 If it is known that all the PEs which need to receive data traffic on 2027 a given S-PMSI can support aggregation of multiple S-PMSIs on a 2028 single PIM tree, then the transmitting PE may, at its discretion, 2029 decide to bind the S-PMSI to a PIM tree which is already bound to 2030 one or more other S-PMSIs, from the same or from different MVPNs. In 2031 this case, appropriate demultiplexing information must be signaled. 2033 7.5. Instantiating S-PMSIs using RSVP-TE P2MP Tunnels 2035 RSVP-TE P2MP Tunnels can be used for instantiating S-PMSIs. 2036 Procedures described in the context of I-PMSIs in section 6.7 apply. 2038 8. Inter-AS Procedures 2040 If an MVPN has sites in more than one AS, it requires one or more 2041 PMSIs to be instantiated by inter-AS tunnels. This document 2042 describes two different types of inter-AS tunnel: 2044 1. "Segmented Inter-AS tunnels" 2046 A segmented inter-AS tunnel consists of a number of independent 2047 segments which are stitched together at the ASBRs. There are 2048 two types of segment: inter-AS segments and intra-AS segments. 2050 The segmented inter-AS tunnel consists of alternating intra-AS 2051 and inter-AS segments. 2053 Inter-AS segments connect adjacent ASBRs of different ASes; 2054 these "one-hop" segments are instantiated as unicast tunnels. 2056 Intra-AS segments connect ASBRs and PEs which are in the same 2057 AS. An intra-AS segment may be of whatever technology is 2058 desired by the SP that administers that AS. Different 2059 intra-AS segments may be of different technologies. 2061 Note that an intra-AS segment of an inter-AS tunnel is distinct 2062 from any intra-AS tunnel in the AS. 2064 A segmented inter-AS tunnel can be thought of as a tree which 2065 is rooted at a particular AS, and which has as its leaves the 2066 other ASes which need to receive multicast data from the root 2067 AS.
2069 2. "Non-segmented Inter-AS tunnels" 2071 A non-segmented inter-AS tunnel is a single tunnel which spans 2072 AS boundaries. The tunnel technology cannot change from one 2073 point in the tunnel to the next, so all ASes through which the 2074 tunnel passes must support that technology. In essence, AS 2075 boundaries are of no significance to a non-segmented inter-AS 2076 tunnel. 2078 [Editor's Note: This is the model in [ROSEN-8] and [MVPN- 2079 BASE].] 2081 Section 10 of [RFC4364] describes three different options for 2082 supporting unicast Inter-AS BGP/MPLS IP VPNs, known as options A, B, 2083 and C. We describe below how both segmented and non-segmented 2084 inter-AS trees can be supported when option B or option C is used. 2085 (Option A does not pass any routing information through an ASBR at 2086 all, so no special inter-AS procedures are needed.) 2088 8.1. Non-Segmented Inter-AS Tunnels 2090 In this model, the previously described discovery and tunnel setup 2091 mechanisms are used, even though the PEs belonging to a given MVPN 2092 may be in different ASes. The ASBRs play no special role, but 2093 function merely as P routers. 2095 8.1.1. Inter-AS MVPN Auto-Discovery 2097 The previously described BGP-based auto-discovery mechanisms work "as 2098 is" when an MVPN contains PEs that are in different Autonomous 2099 Systems. 2101 8.1.2. Inter-AS MVPN Routing Information Exchange 2103 MVPN routing information exchange can be done by PIM peering (either 2104 lightweight or full) across an MI-PMSI, or by unicasting PIM 2105 messages. The method of using BGP to send MVPN routing information 2106 can also be used. 2108 If any form of PIM peering is used, a PE that sends C-PIM Join/Prune 2109 messages for a particular C-(S,G) must be able to identify the PE 2110 which is its PIM adjacency on the path to S. The identity of the PIM 2111 adjacency is determined from the RPF information associated with the 2112 VPN-IP route to S. 2114 If no RPF information is present, then the identity of the PIM 2115 adjacency is taken from the BGP Next Hop attribute of the VPN-IP 2116 route to S. Note that this will not give the correct result if 2117 option b of section 10 of [RFC4364] is used. To avoid this 2118 possibility of error, the RPF information SHOULD always be present if 2119 MVPN routing information is to be distributed by PIM. 2121 If BGP (rather than PIM) is used to distribute the MVPN routing 2122 information, and if option b of section 10 of [RFC4364] is in use, 2123 then the MVPN routes will be installed in the ASBRs along the path 2124 from each multicast source in the MVPN to each multicast receiver in 2125 the MVPN. If option b is not in use, the MVPN routes are not 2126 installed in the ASBRs. The handling of MVPN routes in either case 2127 is thus exactly analogous to the handling of unicast VPN-IP routes in 2128 the corresponding case. 2130 8.1.3. Inter-AS I-PMSI 2132 The procedures described earlier in this document can be used to 2133 instantiate an I-PMSI with inter-AS tunnels. Specific tunneling 2134 techniques require some explanation: 2136 1. If ingress replication is used, the inter-AS PE-PE tunnels will 2137 use the inter-AS tunneling procedures for the tunneling 2138 technology used. 2140 2. Inter-AS PIM-SM or PIM-SSM based trees rely on a PE joining a 2141 (P-S, P-G) tuple where P-S is the address of a PE in another 2142 AS. This (P-S, P-G) tuple is learned using the MVPN membership 2143 and BGP MVPN-tunnel binding procedures described earlier. 
However, if the source of the tree is in a different AS than a 2145 particular P router, it is possible that the P router will not 2146 have a route to the source. For example, the remote AS may be 2147 using BGP to distribute a route to the source, but a particular 2148 P router may be part of a "BGP-free core", in which the P 2149 routers are not aware of BGP-distributed routes. 2151 In such a case it is necessary for a PE to tell PIM to 2152 construct the tree through a particular BGP speaker, the "BGP 2153 next hop" for the tree source. This can be accomplished with a 2154 PIM extension, in which the P-PIM Join/Prune messages carry a 2155 new "proxy" field which contains the address of that BGP next 2156 hop. As the P-multicast tree is constructed, it is built 2157 towards the proxy (the BGP next hop) rather than towards P-S, 2158 so the P routers will not need to have a route to P-S. 2160 Support for inter-AS trees using PIM-Bidir is for further 2161 study. 2163 When the BGP-based discovery procedures for MVPN are in place, 2164 one can distinguish two different inter-AS routes to a 2165 particular P-S: 2167 - BGP will install a unicast route to P-S along a particular 2168 path, using the IP AFI/SAFI; 2170 - A PE's MVPN auto-discovery information is advertised by 2171 sending a BGP update whose NLRI is in a special address 2172 family (AFI/SAFI) used for this purpose. The NLRI of the 2173 address family contains the IP address of the PE, as well 2174 as an RD. If the NLRI contains the IP address of P-S, this 2175 in effect creates a second route to P-S. This route might 2176 follow a different path from the route in the unicast IP 2177 family. 2179 When building a PIM tree towards P-S, it may be desirable to 2180 build it along the route on which the MVPN auto-discovery 2181 AFI/SAFI is installed, rather than along the route on which the 2182 IP AFI/SAFI is installed. This enables the inter-AS portion of 2183 the tree to follow a path which is specifically chosen for 2184 multicast (i.e., it allows the inter-AS multicast topology to 2185 be "non-congruent" to the inter-AS unicast topology). 2187 In order for P routers to send P-Join/Prune messages along this 2188 path, they need to make use of the "proxy" field extension 2189 discussed above. The PIM message must also contain the full 2190 NLRI in the MVPN auto-discovery family, so that the BGP 2191 speakers can look up that NLRI to find the BGP next hop. 2193 3. Procedures in [RSVP-P2MP] are used for inter-AS RSVP-TE P2MP 2194 Tunnels. 2196 8.1.4. Inter-AS S-PMSI 2198 The leaves of the tunnel are discovered using the MVPN routing 2199 information. Procedures for setting up the tunnel are similar to the 2200 ones described in section 8.1.3 for an inter-AS I-PMSI. 2202 8.2. Segmented Inter-AS Tunnels 2204 8.2.1. Inter-AS MVPN Auto-Discovery Tunnels 2206 The BGP based MVPN membership discovery procedures of section 4 are 2207 used to auto-discover the intra-AS MVPN membership. This section 2208 describes the additional procedures for inter-AS MVPN membership 2209 discovery. It also describes the procedures for constructing segmented 2210 inter-AS tunnels. 2212 In this case, for a given MVPN in an AS, the objective is to form a 2213 spanning tree of MVPN membership, rooted at the AS. The nodes of this 2214 tree are ASes. The leaves of this tree are only those ASes that have 2215 at least one PE with a member in the MVPN. The inter-AS tunnel used 2216 to instantiate an inter-AS PMSI must traverse this spanning tree.
A 2217 given AS needs to announce to another AS only the fact that it has 2218 membership in a given MVPN. It doesn't need to announce the 2219 membership of each PE in the AS to other ASes. 2221 This section defines an inter-AS auto-discovery route as a route that 2222 carries information about an AS that has one or more PEs (directly) 2223 connected to the site(s) of that MVPN. Further, it defines an inter-AS 2224 leaf auto-discovery route (leaf auto-discovery route) as a route used 2225 to inform the root of an intra-AS segment, of an inter-AS tunnel, of 2226 a leaf of that intra-AS segment. 2228 8.2.1.1. Originating Inter-AS MVPN A-D Information 2230 A PE in a given AS advertises its MVPN membership to all its IBGP 2231 peers. Such an IBGP peer may be a route reflector, which in turn 2232 advertises this information only to its own IBGP peers. In this manner 2233 all the PEs and ASBRs in the AS learn this membership information. 2235 An Autonomous System Border Router (ASBR) may be configured to 2236 support a particular MVPN. If an ASBR is configured to support a 2237 particular MVPN, the ASBR MUST participate in the intra-AS MVPN 2238 auto-discovery/binding procedures for that MVPN within the AS that 2239 the ASBR belongs to, as defined in this document. 2241 Each ASBR then advertises the "AS MVPN membership" to its neighbor 2242 ASBRs using EBGP. This inter-AS auto-discovery route must not be 2243 advertised to the PEs/ASBRs in the same AS as this ASBR. The 2244 advertisement carries the following information elements: 2246 a. A Route Distinguisher for the MVPN. For a given MVPN, each ASBR 2247 in the AS must use the same RD when advertising this 2248 information to other ASBRs. To accomplish this, all the ASBRs 2249 within that AS that are configured to support the MVPN MUST 2250 be configured with the same RD for that MVPN. This RD MUST be 2251 of Type 0 and MUST embed the autonomous system number of the AS (see the illustrative sketch below). 2253 b. The announcing ASBR's local address as the next-hop for the 2254 above information elements. 2256 c. By default, the BGP Update message MUST carry the export Route 2257 Targets used by the unicast routing of that VPN. The default 2258 could be modified via configuration by having a set of Route 2259 Targets used for the inter-AS auto-discovery routes being 2260 distinct from the ones used by the unicast routing of that VPN. 2262 8.2.1.2. Propagating Inter-AS MVPN A-D Information 2264 As an inter-AS auto-discovery route originated by an ASBR within a 2265 given AS is propagated via BGP to other ASes, this results in the 2266 creation of a data plane tunnel that spans multiple ASes. This tunnel 2267 is used to carry (multicast) traffic from the MVPN sites connected to 2268 the PEs of the AS to the MVPN sites connected to the PEs that are in 2269 the other ASes. Such a tunnel consists of multiple intra-AS segments 2270 (one per AS) stitched at ASBR boundaries by single-hop 2271 LSP segments. 2273 An ASBR originates creation of an intra-AS segment when the ASBR 2274 receives an inter-AS auto-discovery route from an EBGP neighbor. 2275 Creation of the segment is completed as a result of distributing via 2276 IBGP this route within the ASBR's own AS. 2278 For a given inter-AS tunnel, each of its intra-AS segments could be 2279 constructed by its own independent mechanism. Moreover, by using 2280 upstream labels within a given AS, multiple intra-AS segments of 2281 different inter-AS tunnels of either the same or different MVPNs may 2282 share the same P-Multicast Tree.
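The Type 0 RD required by item (a) of section 8.2.1.1 can be illustrated as follows. The helper below simply packs the two-octet type, the 2-octet AS number, and an operator-chosen assigned number, following the RD encoding of [RFC4364]; it is an illustrative sketch rather than part of this specification, and the example values are arbitrary.

      import struct

      def make_type0_rd(as_number: int, assigned_number: int) -> bytes:
          """Type 0 Route Distinguisher: 2-octet type field (0), 2-octet AS
          number, 4-octet assigned number."""
          if not 0 <= as_number <= 0xFFFF:
              raise ValueError("a Type 0 RD carries a 2-octet AS number")
          return struct.pack("!HHI", 0, as_number, assigned_number)

      # All ASBRs of AS 64500 that support a given MVPN would be configured
      # with the same RD for that MVPN, for example:
      rd = make_type0_rd(64500, 1)     # the assigned number 1 is arbitrary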
   Since (aggregated) inter-AS auto-discovery routes have per-AS
   granularity for a given MVPN, an MVPN that is present in N ASes
   would have a total of N inter-AS tunnels.  Thus for a given MVPN
   the number of inter-AS tunnels is independent of the number of PEs
   that have this MVPN.

   The following sections specify procedures for propagation of
   (aggregated) inter-AS auto-discovery routes across ASes.

8.2.1.2.1. Inter-AS Auto-Discovery Route received via EBGP

   When an ASBR receives from one of its EBGP neighbors a BGP Update
   message that carries an inter-AS auto-discovery route, then if (a)
   at least one of the Route Targets carried in the message matches
   one of the import Route Targets configured on the ASBR, and (b)
   the ASBR determines that the received route is the best route to
   the destination carried in the NLRI of the route, the ASBR:

   a) Re-advertises this inter-AS auto-discovery route within its own
      AS.

      If the ASBR uses ingress replication to instantiate the intra-
      AS segment of the inter-AS tunnel, the re-advertised route
      SHOULD carry a Tunnel attribute with the Tunnel Identifier set
      to Ingress Replication, but no MPLS labels.

      If a P-Multicast Tree is used to instantiate the intra-AS
      segment of the inter-AS tunnel, and the ASBR does not need to
      know the leaves of the tree beforehand in order to advertise
      the P-Multicast tree identifier, then the advertising ASBR
      SHOULD advertise the P-Multicast tree identifier in the Tunnel
      Identifier of the Tunnel attribute.  This, in effect, creates a
      binding between the inter-AS auto-discovery route and the P-
      Multicast Tree.

      If a P-Multicast Tree is used to instantiate the intra-AS
      segment of the inter-AS tunnel, and the advertising ASBR needs
      to know the leaves of the tree beforehand in order to advertise
      the P-Multicast tree identifier, the ASBR first discovers the
      leaves using the auto-discovery procedures, as specified
      further down.  It then advertises the binding of the tree to
      the inter-AS auto-discovery route by re-advertising the
      original auto-discovery route with the addition of a Tunnel
      attribute that contains the type and the identity of the tree
      (encoded in the Tunnel Identifier of the attribute).

   b) Re-advertises the received inter-AS auto-discovery route to its
      EBGP peers, other than the EBGP neighbor from which the best
      inter-AS auto-discovery route was received.

   c) Advertises to its neighbor ASBR, from which it received the
      best inter-AS auto-discovery route to the destination carried
      in the NLRI of the route, a leaf auto-discovery route that
      carries an ASBR-ASBR tunnel binding with the tunnel identifier
      set to ingress replication.  This binding, as described in
      section 6, can be used by the neighbor ASBR to send traffic to
      this ASBR.

8.2.1.2.2. Leaf Auto-Discovery Route received via EBGP

   When an ASBR receives via EBGP a leaf auto-discovery route, the
   ASBR finds an inter-AS auto-discovery route that has the same RD
   as the leaf auto-discovery route.  The MPLS label carried in the
   leaf auto-discovery route is used to stitch a one-hop ASBR-ASBR
   LSP to the tail of the intra-AS tunnel segment associated with the
   inter-AS auto-discovery route.

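   The stitching step of section 8.2.1.2.2 can be sketched as
   follows.  This is illustrative only; the route and table
   representations (dictionaries keyed by RD, a simple list of
   forwarding entries) are hypothetical.

      # Illustrative sketch; objects and field names are hypothetical.
      def stitch_leaf_route(leaf_route, inter_as_ad_routes, lsp_table):
          """On receipt of a leaf A-D route via EBGP, stitch a one-hop
          ASBR-ASBR LSP to the tail of the matching intra-AS segment.

          leaf_route         : dict with keys 'rd' and 'mpls_label'
          inter_as_ad_routes : dict mapping RD -> intra-AS segment tail
          lsp_table          : forwarding entries, appended in place
          """
          segment_tail = inter_as_ad_routes.get(leaf_route["rd"])
          if segment_tail is None:
              return False              # no matching inter-AS A-D route
          lsp_table.append({
              "in_segment": segment_tail,             # segment tail
              "out_label": leaf_route["mpls_label"],  # label from peer ASBR
          })
          return True

      if __name__ == "__main__":
          lsps = []
          ok = stitch_leaf_route({"rd": (0, 64512, 1), "mpls_label": 300},
                                 {(0, 64512, 1): "intra-as-segment-17"},
                                 lsps)
          print(ok, lsps)
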
8.2.1.2.3. Inter-AS Auto-Discovery Route received via IBGP

   If a given inter-AS auto-discovery route is advertised within an
   AS by multiple ASBRs of that AS, the BGP best route selection
   performed by other PE/ASBR routers within the AS does not require
   all these PE/ASBR routers to select the route advertised by the
   same ASBR; on the contrary, different PE/ASBR routers may select
   routes advertised by different ASBRs.

   Further, when a PE/ASBR receives from one of its IBGP neighbors a
   BGP Update message that carries an inter-AS auto-discovery route,
   then if (a) the route was originated outside of the router's own
   AS, (b) at least one of the Route Targets carried in the message
   matches one of the import Route Targets configured on the PE/ASBR,
   and (c) the PE/ASBR determines that the received route is the best
   route to the destination carried in the NLRI of the route, the
   following applies.  If the router is an ASBR, the ASBR propagates
   the route to its EBGP neighbors.  In addition the PE/ASBR performs
   the following procedures.

   If the received inter-AS auto-discovery route carries the Tunnel
   attribute with the Tunnel Identifier set to LDP P2MP LSP, PIM-SSM
   tree, or PIM-SM tree, the PE/ASBR SHOULD join the P-Multicast tree
   whose identity is carried in the Tunnel Identifier.

   If the received inter-AS auto-discovery route carries the Tunnel
   attribute with the Tunnel Identifier set to RSVP-TE P2MP LSP, then
   the ASBR that originated the route MUST signal the local PE/ASBR
   as one of the leaf LSRs of the RSVP-TE P2MP LSP.  This signaling
   MAY have been completed before the local PE/ASBR receives the BGP
   Update message.

   If the NLRI of the route does not carry a label, then this tree is
   an intra-AS LSP segment that is part of the inter-AS tunnel for
   the MVPN advertised by the inter-AS auto-discovery route.  If the
   NLRI carries an (upstream) label, then a combination of this tree
   and the label identifies the intra-AS segment.

   If the receiving router is an ASBR, this intra-AS segment may
   further be stitched to the ASBR-ASBR inter-AS segment of the
   inter-AS tunnel.  If the PE/ASBR has local receivers in the MVPN,
   packets received over the intra-AS segment must be forwarded to
   the local receivers using the local VRF.

   If the received inter-AS auto-discovery route either does not
   carry the Tunnel attribute, or carries the Tunnel attribute with
   the Tunnel Identifier set to ingress replication, then the PE/ASBR
   originates a new leaf auto-discovery route to allow the ASBR from
   which the auto-discovery route was received to learn of this
   PE/ASBR as a leaf of the intra-AS tree.

   Thus the AS MVPN membership information propagates across multiple
   ASes along a spanning tree.  The BGP AS-Path-based loop prevention
   mechanism prevents loops from forming as this information
   propagates.

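   The receive-side processing of section 8.2.1.2.3 can be summarized
   by the following non-normative sketch.  The tunnel-type constants,
   the 'route' dictionary, and the callback names on 'actions' are
   hypothetical and imply no particular encoding.

      # Illustrative decision sketch; names are hypothetical.
      LDP_P2MP, PIM_SSM, PIM_SM, RSVP_TE_P2MP, INGRESS_REPL = range(5)

      def handle_ibgp_inter_as_ad_route(route, is_asbr, actions):
          """Apply the receive-side rules sketched above.

          'route' has optional keys 'tunnel_type', 'tunnel_id' and
          'label'; 'actions' provides the callbacks used below."""
          if is_asbr:
              actions.propagate_to_ebgp(route)

          ttype = route.get("tunnel_type")
          if ttype in (LDP_P2MP, PIM_SSM, PIM_SM):
              # Join the P-multicast tree named in the Tunnel Identifier.
              actions.join_tree(route["tunnel_id"],
                                upstream_label=route.get("label"))
          elif ttype == RSVP_TE_P2MP:
              # The originating ASBR signals this router as a leaf of
              # the LSP; there is nothing to join locally.
              pass
          else:
              # No Tunnel attribute, or ingress replication: tell the
              # advertising ASBR that this router is a leaf.
              actions.originate_leaf_ad_route(route)

      if __name__ == "__main__":
          class Print:
              def __getattr__(self, name):
                  return lambda *a, **kw: print(name, a, kw)
          handle_ibgp_inter_as_ad_route(
              {"tunnel_type": PIM_SSM,
               "tunnel_id": "(203.0.113.1, 232.1.1.1)"},
              is_asbr=False, actions=Print())
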
8.2.2. Inter-AS MVPN Routing Information Exchange

   All of the MVPN routing information exchange methods specified in
   section 5 can be supported across ASes.

   The objective in this case is to propagate the MVPN routing
   information to the remote PE that originates the unicast route to
   C-S/C-RP, in the reverse direction of the AS MVPN membership
   information announced by the remote PE's origin AS.  This
   information is processed by each ASBR along this reverse path.

   To achieve this, the PE that is generating the MVPN routing
   advertisement first determines the source AS of the unicast route
   to C-S/C-RP.  It then determines, from the received AS MVPN
   membership information for the source AS, the ASBR that is the
   next hop for the best path of the source AS MVPN membership.  The
   BGP MVPN routing update is sent to this ASBR, and the ASBR then
   further propagates the BGP advertisement.  BGP filtering
   mechanisms ensure that the BGP MVPN routing information updates
   flow only to the upstream router on the reverse path of the
   inter-AS MVPN membership tree.  Details of this filtering
   mechanism and the relevant encoding will be specified in a
   separate document.

8.2.3. Inter-AS I-PMSI

   All PEs in a given AS use the same inter-AS heterogeneous tunnel,
   rooted at the AS, to instantiate an I-PMSI for an inter-AS MVPN
   service.  As explained earlier, the intra-AS tunnel segments that
   comprise this tunnel can be built using different tunneling
   technologies.  To instantiate an MI-PMSI service for an MVPN there
   must be an inter-AS tunnel rooted at each AS that has at least one
   PE that is a member of the MVPN.

   A C-multicast data packet is sent using an intra-AS tunnel segment
   by the PE that first receives this packet from the MVPN customer
   site.  An ASBR forwards this packet to any locally connected MVPN
   receivers for the multicast stream.  If this ASBR has received a
   tunnel binding for the AS MVPN membership that it advertised to a
   neighboring ASBR, it also forwards this packet to the neighboring
   ASBR.  In this case the packet is encapsulated in the downstream
   MPLS label received from the neighboring ASBR.  The neighboring
   ASBR delivers this packet to any locally connected MVPN receivers
   for that multicast stream.  It also transports this packet on an
   intra-AS tunnel segment, for the inter-AS MVPN tunnel, and the
   other PEs and ASBRs in the AS then receive this packet.  The other
   ASBRs then repeat the procedure followed by the ASBR in the origin
   AS, and the packet traverses the overlay inter-AS tunnel along a
   spanning tree.

8.2.3.1. Support for Unicast VPN Inter-AS Methods

   The above procedures for setting up an inter-AS I-PMSI can be
   supported for each of the unicast VPN inter-AS models described in
   [RFC4364].  These procedures do not depend on the method used to
   exchange unicast VPN routes.  For Option B and Option C they do
   require MPLS encapsulation between the ASBRs.

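   Returning to the data plane behaviour of section 8.2.3, the
   following non-normative sketch shows how an ASBR replicates a
   C-multicast packet received on an inter-AS I-PMSI.  The parameter
   names and the representation of tunnel bindings are hypothetical.

      # Illustrative sketch of the ASBR forwarding behaviour in
      # section 8.2.3; the object model is hypothetical.
      def asbr_forward(packet, local_receivers, neighbor_bindings,
                       intra_as_segment=None):
          """Replicate a C-multicast packet on an inter-AS I-PMSI.

          local_receivers   : locally attached receiver interfaces
          neighbor_bindings : dict mapping neighbor ASBR -> downstream
                              MPLS label received from that neighbor
          intra_as_segment  : optional intra-AS tunnel segment used by
                              the receiving (downstream) ASBR to reach
                              the other PEs/ASBRs in its own AS
          """
          copies = []
          for rcv in local_receivers:
              copies.append(("deliver", rcv, packet))
          for neighbor, label in neighbor_bindings.items():
              copies.append(("send", neighbor, ("mpls", label, packet)))
          if intra_as_segment is not None:
              copies.append(("flood", intra_as_segment, packet))
          return copies

      if __name__ == "__main__":
          print(asbr_forward("c-packet",
                             local_receivers=["vrf-red:ce1"],
                             neighbor_bindings={"asbr-as65001": 17},
                             intra_as_segment="segment-5"))
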
8.2.4. Inter-AS S-PMSI

   An inter-AS tunnel for an S-PMSI is constructed similarly to an
   inter-AS tunnel for an I-PMSI.  Namely, such a tunnel is
   constructed as a concatenation of tunnel segments.  There are two
   types of tunnel segments: an intra-AS tunnel segment (a segment
   that spans ASBRs within the same AS), and an inter-AS tunnel
   segment (a segment that spans adjacent ASBRs in adjacent ASes).
   ASes that are spanned by a tunnel are not required to use the same
   tunneling mechanism to construct the tunnel; each AS may pick a
   tunneling mechanism to construct the intra-AS tunnel segment of
   the tunnel on its own.

   The PE that decides to set up an S-PMSI advertises the S-PMSI
   tunnel binding, using the procedures in section 7.3.2, to the
   routers in its own AS.  The <C-S, C-G> membership for which the
   S-PMSI is instantiated is propagated along an inter-AS spanning
   tree.  This spanning tree traverses the same ASBRs as the AS MVPN
   membership spanning tree.  In addition to the information elements
   described in section 7.3.2 (Origin AS, RD, next hop), the C-S and
   C-G are also advertised.

   An ASBR that receives the AS <C-S, C-G> information from its
   upstream ASBR using EBGP sends back a tunnel binding for that
   information if (a) at least one of the Route Targets carried in
   the message matches one of the import Route Targets configured on
   the ASBR, and (b) the ASBR determines that the received route is
   the best route to the destination carried in the NLRI of the
   route.  If the ASBR instantiates an S-PMSI for the <C-S, C-G>, it
   sends back a downstream label that is used to forward the packet
   along its intra-AS S-PMSI for the <C-S, C-G>.  However the ASBR
   may decide to use an AS MVPN membership I-PMSI instead, in which
   case it sends back the same label that it advertised for the AS
   MVPN membership I-PMSI.  If the downstream ASBR instantiates an
   S-PMSI, it further propagates the <C-S, C-G> membership to its
   downstream ASes; otherwise it does not.

   An AS can instantiate an intra-AS S-PMSI for the inter-AS S-PMSI
   tunnel only if the upstream AS instantiates an S-PMSI.  The
   procedures allow each AS to determine whether it wishes to set up
   an S-PMSI or not, and an AS is not forced to set up an S-PMSI just
   because the upstream AS decides to do so.

   The leaves of an intra-AS S-PMSI tunnel will be the PEs that have
   local receivers that are interested in <C-S, C-G> and the ASBRs
   that have received MVPN routing information for <C-S, C-G>.  Note
   that an AS can determine these ASBRs as the MVPN routing
   information is propagated and processed by each ASBR on the AS
   MVPN membership spanning tree.

   The C-multicast data traffic is sent on the S-PMSI by the
   originating PE.  When it reaches an ASBR that is on the spanning
   tree, it is delivered to local receivers, if any, and is also
   forwarded to the neighbor ASBR after being encapsulated in the
   label advertised by the neighbor.  The neighbor ASBR either
   transports this packet on the S-PMSI for the multicast stream or
   on an I-PMSI, delivering it to the ASBRs in its own AS.  These
   ASBRs in turn repeat the procedures of the origin AS ASBRs, and
   the multicast packet traverses the spanning tree.

9. Duplicate Packet Detection and Single Forwarder PE

   An egress PE may receive duplicate multicast data packets, from
   more than one ingress PE, for an MVPN when a site that contains
   C-S or C-RP is multihomed to more than one PE.  An egress PE may
   also receive duplicate data packets for an MVPN, from two
   different ingress PEs, when the CE-PE routing protocol is PIM-SM
   and a router or a CE in a site switches from the C-RP tree to the
   C-S tree.

   For a given <C-S, C-G>, a PE, say PE1, expects to receive C-data
   packets from the upstream PE, say PE2, which PE1 identified as the
   upstream multicast hop in the C-Multicast Routing Update that PE1
   sent in order to join <C-S, C-G>.  If PE1 can determine that a
   data packet for <C-S, C-G> was received from the expected upstream
   PE, PE2, PE1 will accept the packet.  Otherwise, PE1 will drop the
   packet.  (But see section 10 for an exception case where PE1 will
   accept a packet even if it is from an unexpected upstream PE.)
   This determination can be performed only if the PMSI on which the
   packets are being received, and the tunneling technology used to
   instantiate the PMSI, allow the PE to determine the source PE that
   sent the packet.  However this determination may not always be
   possible.

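   The per-packet acceptance check just described is, at its core, a
   comparison against the expected upstream PE.  The following
   non-normative sketch makes that explicit; whether the sending PE
   can be identified at all depends on the tunnel technology, which
   is abstracted away here, and the function and parameter names are
   hypothetical.

      # Illustrative sketch; names are hypothetical.
      def accept_packet(flow, sending_pe, expected_upstream_pe):
          """Single-forwarder check for a (C-S, C-G) flow.

          'flow' is the (C-S, C-G) pair, 'sending_pe' is the PE from
          which the packet was received (None if the tunnel does not
          reveal it), and 'expected_upstream_pe' is the PE named as
          upstream multicast hop in the C-multicast route this PE
          originated for the flow."""
          if sending_pe is None:
              # The tunnel does not identify the ingress PE; the
              # single forwarder PE selection procedures of section 9
              # must prevent duplicates instead of this check.
              return True
          return sending_pe == expected_upstream_pe

      if __name__ == "__main__":
          flow = ("192.0.2.1", "233.252.0.1")
          print(accept_packet(flow, "PE2", "PE2"))   # True: accept
          print(accept_packet(flow, "PE3", "PE2"))   # False: drop
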
   Therefore, procedures are needed to ensure that packets are
   received at a PE only from a single upstream PE.  This is called
   single forwarder PE selection.

   Single forwarder PE selection is achieved by the following set of
   procedures:

   a. If there is more than one PE within the same AS through which
      C-S or C-RP of a given MVPN could be reached, and in the case
      of C-S not every such PE advertises an S-PMSI for <C-S, C-G>,
      all PEs that have this MVPN MUST send the MVPN routing
      information update for <C-S, C-G> or <C-*, C-G> to the same
      upstream PE.  This is achieved using the following procedure:

      If the next-hop interface of the route to C-S/C-RP that is
      installed in a PE's VRF is a VRF interface, then the PE should
      use that route to reach C-S/C-RP.  Otherwise, from all the
      VPN-IP routes that could be imported into the VRF and that have
      exactly the same IP prefix as the route in the VRF, the PE uses
      the one with the highest next-hop address to determine the
      upstream multicast hop to identify in the C-multicast route.

   b. The above procedure ensures that if C-S or C-RP is multi-homed
      to PEs within a single AS, a PE will not receive duplicate
      traffic as long as all the PEs in that AS are on either the C-S
      or the C-RP tree.

      However the PE may receive duplicate traffic if C-S or C-RP is
      multi-homed to different ASes.  In this case the PE can detect
      duplicate traffic, as such duplicate traffic will arrive on a
      different tunnel: if the PE was expecting the traffic on an
      inter-AS tunnel, duplicate traffic will arrive on an intra-AS
      tunnel (this is not an intra-AS tunnel segment of an inter-AS
      tunnel), and vice versa.

      To achieve the above, the PE has to keep track of which
      (inter-AS) auto-discovery route the PE uses for sending MVPN
      multicast routing information towards C-S/C-RP.  Then the PE
      should receive (multicast) traffic originated by C-S/C-RP only
      from the (inter-AS) tunnel that was carried in the best source
      auto-discovery route for the MVPN and was originated by the AS
      that contains C-S/C-RP (where "the best" is determined by the
      PE).  All other multicast traffic originated by C-S/C-RP, but
      received on any other tunnel, should be discarded as duplicate.

      The PE may also receive duplicate traffic during a <C-*, C-G>
      to <C-S, C-G> switch.  The issue and the solution are described
      next.

   c. If the tunneling technology in use for a particular MVPN does
      not allow the egress PEs to identify the ingress PE, then
      having all the PEs select the same PE to be the upstream
      multicast hop is not sufficient to prevent packet duplication.
      The reason is that a single tunnel may be carrying traffic on
      both the (C-*, C-G) tree and the (C-S, C-G) tree.  If some of
      the egress PEs have joined the source tree, but others expect
      to receive (S,G) packets from the shared tree, then two copies
      of the data packet will travel on the tunnel, and the egress
      PEs will have no way to determine that only one copy should be
      accepted.

      To avoid this, it is necessary to ensure that once any PE joins
      the (C-S, C-G) tree, any other PE that has joined the (C-*,
      C-G) tree also switches to the (C-S, C-G) tree (selecting, of
      course, the same upstream multicast hop, as specified above).

      Whenever a PE creates <C-S, C-G> state as a result of receiving
      a C-multicast route for <C-S, C-G> from some other PE, and the
      C-G group is a Sparse Mode group, the PE that creates the state
      MUST originate an auto-discovery route as specified below.  The
      route is advertised using the same procedures as the MVPN
      auto-discovery/binding (both intra-AS and inter-AS) specified
      in this document, with the following modifications:

      1. The Multicast Source field MUST be set to C-S.  The
         Multicast Source Length field is set appropriately to
         reflect this.

      2. The Multicast Group field MUST be set to C-G.  The Multicast
         Group Length field is set appropriately to reflect this.

      The route goes to all the PEs of the MVPN.  When a PE receives
      this route, it checks whether there are any receivers in the
      MVPN sites attached to the PE for the group carried in the
      route.  If so, it generates a C-multicast route indicating a
      Join for <C-S, C-G>.  This forces all the PEs (in all ASes) to
      switch to the C-S tree for <C-S, C-G> from the C-RP tree.

      This is the same type of A-D route used to report active
      sources in the scenarios described in section 10.

      Note that when a PE thus joins the <C-S, C-G> tree, it may need
      to send a PIM (S,G,RPT-bit) prune to one of its CE PIM
      neighbors, as determined by ordinary PIM procedures.

      Whenever the PE deletes the <C-S, C-G> state that was
      previously created as a result of receiving a C-multicast route
      for <C-S, C-G> from some other PE, the PE that deletes the
      state also withdraws the auto-discovery route that was
      advertised when the state was created.

      N.B.: SINCE ALL PES WITH RECEIVERS FOR GROUP C-G WILL JOIN THE
      C-S SOURCE TREE IF ANY OF THEM DO, IT IS NEVER NECESSARY TO
      DISTRIBUTE A BGP C-MULTICAST ROUTE FOR THE PURPOSE OF PRUNING
      SOURCES FROM THE SHARED TREE.

   In summary, when the CE-PE routing protocol for all PEs that
   belong to an MVPN is not PIM-SM, selection of a consistent
   upstream PE to reach C-S is sufficient to eliminate duplicates
   when C-S is multi-homed to a single AS.  When C-S is multi-homed
   to multiple ASes, duplicate packet detection can be performed, as
   the receiver PE can always determine whether packets arrived on
   the wrong tunnel.  When the CE-PE routing protocol is PIM-SM, the
   additional procedures described above are required to force all
   PEs within all ASes to switch to the C-S tree from the C-RP tree
   when any PE switches to the C-S tree.

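   The upstream PE selection rule of item (a) above can be sketched
   as follows.  The sketch is illustrative only; the route
   representation (dictionaries with 'next_hop' and
   'via_vrf_interface' fields) is hypothetical.

      # Illustrative sketch of the upstream-PE selection rule in
      # item (a) of section 9; route representation is hypothetical.
      import ipaddress

      def select_upstream_multicast_hop(vrf_route, candidate_vpn_routes):
          """Pick the PE to name as upstream multicast hop for
          C-S/C-RP, or None if C-S/C-RP is reached locally.

          vrf_route            : route to C-S/C-RP installed in the VRF
          candidate_vpn_routes : VPN-IP routes importable into the VRF
                                 whose prefix exactly matches that of
                                 vrf_route; each has a PE 'next_hop'
          """
          if vrf_route["via_vrf_interface"]:
              # C-S/C-RP is reached over a local VRF interface; no
              # remote upstream PE is involved.
              return None
          # All PEs pick the candidate with the numerically highest
          # next-hop address, so they converge on the same upstream PE.
          return max((r["next_hop"] for r in candidate_vpn_routes),
                     key=ipaddress.ip_address)

      if __name__ == "__main__":
          routes = [{"next_hop": "192.0.2.11"}, {"next_hop": "192.0.2.7"}]
          print(select_upstream_multicast_hop(
              {"via_vrf_interface": False}, routes))    # 192.0.2.11
          print(select_upstream_multicast_hop(
              {"via_vrf_interface": True}, routes))     # None (local)
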
10. Deployment Models

   This section describes some optional deployment models and
   specific procedures for those deployment models.

10.1. Co-locating C-RPs on a PE

   [MVPN-REQ] describes C-RP engineering as an issue when PIM-SM (or
   bidir-PIM) is used in ASM mode on the VPN customer site.  To quote
   from [MVPN-REQ]:

   "In some cases this engineering problem is not trivial: for
   instance, if sources and receivers are located in VPN sites that
   are different than that of the RP, then traffic may flow twice
   through the SP network and the CE-PE link of the RP (from source
   to RP, and then from RP to receivers) ; this is obviously not
   ideal.  A multicast VPN solution SHOULD propose a way to help on
   solving this RP engineering issue."

   One of the C-RP deployment models is for the customer to outsource
   the RP to the provider.  In this case the provider may co-locate
   the RP on the PE that is connected to the customer site
   [MVPN-REQ].  This model is introduced in [RP-MVPN].  This section
   describes how anycast-RP can be used to achieve this by
   advertising active sources.

10.1.1. Initial Configuration

   For a particular MVPN, one or more PEs that have sites in that
   MVPN act as an RP for the sites of that MVPN connected to these
   PEs.  Within each MVPN all these RPs use the same (anycast)
   address.  All these RPs use the Anycast RP technique.

10.1.2. Anycast RP Based on Propagating Active Sources

   This mechanism is based on propagating active sources between RPs.

   [Editor's Note: This is derived from the model in [RP-MVPN].]

10.1.2.1. Receiver(s) Within a Site

   The PE which receives a C-Join for (*,G) or (S,G) does not send
   the information that it has receiver(s) for G until it receives
   information about active sources for G from an upstream PE.

   On receiving this information (described in the next section), the
   downstream PE will respond with a Join for C-(S,G).  Sending this
   information could be done using any of the procedures described in
   section 5.  If BGP is used, the ingress address is set to the
   address of the upstream PE that triggered the source active
   information.  Only the upstream PE will process this information.
   If unicast PIM is used, then a unicast PIM message will have to be
   sent to the upstream PE that triggered the source active
   information.  If an MI-PMSI is used, then further clarification is
   needed on the upstream neighbor address of the PIM message and
   will be provided in a future revision.

10.1.2.2. Source Within a Site

   When a PE receives a PIM Register from a site that belongs to a
   given VPN, the PE follows the normal PIM anycast RP procedures.
   It then advertises the source and group of the multicast data
   packet carried in the PIM Register message to other PEs in BGP,
   using the following information elements:

   - Active source address

   - Active group address

   - Route Target of the MVPN.

   This advertisement goes to all the PEs that belong to that MVPN.
   When a PE receives this advertisement, it checks whether there are
   any receivers in the sites attached to the PE for the group
   carried in the source active advertisement.  If so, it generates
   an advertisement for C-(S,G) as specified in the previous section.

   Note that the mechanism described in section 7.3.2 can be
   leveraged to advertise an S-PMSI binding along with the source
   active messages.

10.1.2.3. Receiver Switching from Shared to Source Tree

   No additional procedures are required when multicast receivers in
   a customer's site switch from the shared tree to the source tree.

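   The exchange described in sections 10.1.2.1 and 10.1.2.2 can be
   sketched, non-normatively, as follows.  The message layout,
   function names, and callbacks are hypothetical and do not define
   any encoding.

      # Illustrative sketch of the active-source advertisement flow
      # in section 10.1.2; formats and names are hypothetical.
      def on_pim_register(c_source, c_group, mvpn_rt, bgp_advertise):
          """A PE acting as anycast C-RP learns of an active source."""
          # Normal PIM anycast-RP processing is assumed to happen
          # elsewhere; here we only advertise the active (S, G).
          bgp_advertise({"source": c_source, "group": c_group,
                         "rt": mvpn_rt})

      def on_source_active(advert, local_receiver_groups, send_c_join):
          """Another PE receives the source-active advertisement."""
          if advert["group"] in local_receiver_groups:
              # Receivers for the group exist in locally attached
              # sites: join the source tree through the advertising PE.
              send_c_join(advert["source"], advert["group"])

      if __name__ == "__main__":
          adverts = []
          on_pim_register("192.0.2.10", "233.252.0.1", "RT:64512:1",
                          adverts.append)
          on_source_active(adverts[0], {"233.252.0.1"},
                           lambda s, g: print("C-Join", s, g))
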
10.2. Using MSDP between a PE and a Local C-RP

   Section 10.1 describes the case where each PE is a C-RP.  This
   enables the PEs to know the active multicast sources for each
   MVPN, and they can then use BGP to distribute this information to
   each other.  As a result, the PEs do not have to join any shared
   C-trees, and this results in a simplification of the PE operation.

   In another deployment scenario, the PEs are not themselves C-RPs,
   but use MSDP to talk to the C-RPs.  In particular, a PE which
   attaches to a site that contains a C-RP becomes an MSDP peer of
   that C-RP.  That PE then uses BGP to distribute the information
   about the active sources to the other PEs.  When the PE
   determines, via MSDP, that a particular source is no longer
   active, it withdraws the corresponding BGP update.  In this way
   the PEs do not have to join any shared C-trees, but they do not
   have to be C-RPs either.

   MSDP provides the capability for a Source Active message to carry
   an encapsulated data packet.  This capability can be used to allow
   an MSDP speaker to receive the first (or first several) packet(s)
   of an (S,G) flow, even though the MSDP speaker hasn't yet joined
   the (S,G) tree.  (Presumably it will join that tree as a result of
   receiving the SA message which carries the encapsulated data
   packet.)  If this capability is not used, the first several data
   packets of an (S,G) stream may be lost.

   A PE which is talking MSDP to an RP may receive such an
   encapsulated data packet from the RP.  The data packet should be
   decapsulated and transmitted to the other PEs in the MVPN.  If the
   packet belongs to a particular (S,G) flow, and if the PE is a
   transmitter for some S-PMSI to which (S,G) has already been bound,
   the decapsulated data packet should be transmitted on that S-PMSI.
   Otherwise, if an I-PMSI exists for that MVPN, the decapsulated
   data packet should be transmitted on it.  (If a default MI-PMSI
   exists, this would typically be used.)  If neither of these
   conditions holds, the decapsulated data packet is not transmitted
   to the other PEs in the MVPN.  The decision as to whether and how
   to transmit the decapsulated data packet does not affect the
   processing of the SA control message itself.

   Suppose that PE1 transmits a multicast data packet on a PMSI,
   where that data packet is part of an (S,G) flow, and PE2 receives
   that packet from that PMSI.  According to section 9, if PE1 is not
   the PE that PE2 expects to be transmitting (S,G) packets, then PE2
   must discard the packet.  If an MSDP-encapsulated data packet is
   transmitted on a PMSI as specified above, this rule from section 9
   would likely result in the packet's getting discarded.  Therefore,
   if MSDP-encapsulated data packets are being decapsulated and
   transmitted on a PMSI, we need to modify the rules of section 9 as
   follows:

   1. If the receiving PE, PE1, has already joined the (S,G) tree,
      and has chosen PE2 as the upstream PE for the (S,G) tree, but
      this packet does not come from PE2, PE1 must discard the
      packet.

   2. If the receiving PE, PE1, has not already joined the (S,G)
      tree, but has a PIM adjacency to a CE which is downstream on
      the (*,G) tree, the packet should be forwarded to the CE.

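   The forwarding decision for an MSDP-encapsulated data packet
   described above can be summarized by the following non-normative
   sketch.  The parameter names and the representation of S-PMSI
   bindings are hypothetical.

      # Illustrative sketch of the decision in section 10.2 for a
      # data packet carried in an MSDP SA; state names hypothetical.
      def forward_sa_payload(c_source, c_group, spmsi_bindings,
                             i_pmsi, transmit):
          """Decide where to send the data packet carried in an SA.

          spmsi_bindings : dict mapping (C-S, C-G) -> S-PMSI that this
                           PE transmits on
          i_pmsi         : the MVPN's I-PMSI, or None if none exists
          transmit       : callback taking (pmsi, flow)
          """
          flow = (c_source, c_group)
          if flow in spmsi_bindings:
              transmit(spmsi_bindings[flow], flow)   # (S,G) already bound
          elif i_pmsi is not None:
              transmit(i_pmsi, flow)                 # typically the MI-PMSI
          # Otherwise the decapsulated packet is not sent to the
          # other PEs; the SA control message is processed either way.

      if __name__ == "__main__":
          forward_sa_payload("192.0.2.10", "233.252.0.1",
                             {}, "mi-pmsi:mvpn-A",
                             lambda pmsi, flow: print("send on", pmsi, flow))
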
11. Encapsulations

   The BGP-based auto-discovery procedures will ensure that the PEs
   in a single MVPN only use tunnels that they can all support, and,
   for a given kind of tunnel, that they only use encapsulations that
   they can all support.

11.1. Encapsulations for Single PMSI per Tunnel

11.1.1. Encapsulation in GRE

   GRE encapsulation can be used for any PMSI that is instantiated by
   a mesh of unicast tunnels, as well as for any PMSI that is
   instantiated by one or more PIM tunnels of any sort.

    Packets received       Packets in transit     Packets forwarded
    at ingress PE          in the service         by egress PEs
                           provider network

                           +---------------+
                           |  P-IP Header  |
                           +---------------+
                           |      GRE      |
    ++=============++      ++=============++      ++=============++
    || C-IP Header ||      || C-IP Header ||      || C-IP Header ||
    ++=============++ >>>> ++=============++ >>>> ++=============++
    ||  C-Payload  ||      ||  C-Payload  ||      ||  C-Payload  ||
    ++=============++      ++=============++      ++=============++

   The IP Protocol Number field in the P-IP Header must be set to 47.
   The Protocol Type field of the GRE Header must be set to 0x0800.

   When an encapsulated packet is transmitted by a particular PE, the
   source IP address in the P-IP header must be the same address as
   is advertised by that PE in the RPF information.

   If the PMSI is instantiated by a PIM tree, the destination IP
   address in the P-IP header is the group P-address associated with
   that tree.  The GRE Key field is omitted.

   If the PMSI is instantiated by unicast tunnels, the destination IP
   address is the address of the destination PE, and the optional GRE
   Key field is used to identify a particular MVPN.  In this case,
   each PE would have to advertise a Key field value for each MVPN;
   each PE would assign the Key field value that it expects to
   receive.

   [RFC2784] specifies an optional GRE checksum, and [RFC2890]
   specifies an optional GRE sequence number field.

   The GRE sequence number field is not needed because the transport
   layer services for the original application will be provided by
   the C-IP Header.

   The use of the GRE checksum field must follow [RFC2784].

   To facilitate high speed implementation, this document recommends
   that the ingress PE routers encapsulate VPN packets without
   setting the checksum or sequence number fields.

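   As a non-normative illustration, the sketch below builds only the
   GRE header described above and lists the fields of the outer
   (delivery) IP header; it is not a complete packet generator, and
   the function names are hypothetical.

      # Illustrative sketch of the GRE encapsulation of section
      # 11.1.1; not a complete packet generator.
      import struct

      GRE_PROTO_IPV4 = 0x0800   # payload is the customer IP packet
      KEY_PRESENT    = 0x2000   # K bit in the GRE flags field

      def gre_header(key=None):
          """GRE header per RFC 2784/2890: flags+version, protocol
          type, and an optional 4-octet Key used to identify the MVPN
          when the PMSI is instantiated by unicast tunnels.  The
          checksum and sequence number fields are not set, as
          recommended above."""
          flags = KEY_PRESENT if key is not None else 0
          hdr = struct.pack("!HH", flags, GRE_PROTO_IPV4)
          if key is not None:
              hdr += struct.pack("!I", key)
          return hdr

      def outer_ip_fields(source_pe, destination):
          """Fields of the delivery (P-) IP header; protocol 47 is
          GRE; the destination is the P-group address for a PIM tree,
          or the egress PE address for unicast tunnels."""
          return {"protocol": 47,
                  "source": source_pe,        # address advertised as RPF info
                  "destination": destination,
                  "df_bit": 0}                # see section 11.4.1

      if __name__ == "__main__":
          print(gre_header().hex())           # PIM tree: no Key field
          print(gre_header(key=4096).hex())   # unicast tunnels: per-MVPN Key
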
11.1.2. Encapsulation in IP

   IP-in-IP [RFC1853] is also a viable option.  When it is used, the
   IPv4 Protocol Number field is set to 4.  The following diagram
   shows the progression of the packet as it enters and leaves the
   service provider network.

    Packets received       Packets in transit     Packets forwarded
    at ingress PE          in the service         by egress PEs
                           provider network

                           +---------------+
                           |  P-IP Header  |
    ++=============++      ++=============++      ++=============++
    || C-IP Header ||      || C-IP Header ||      || C-IP Header ||
    ++=============++ >>>> ++=============++ >>>> ++=============++
    ||  C-Payload  ||      ||  C-Payload  ||      ||  C-Payload  ||
    ++=============++      ++=============++      ++=============++

11.1.3. Encapsulation in MPLS

   If the PMSI is instantiated as a P2MP MPLS LSP, MPLS encapsulation
   is used.  Penultimate hop popping must be disabled for the P2MP
   MPLS LSP.  If the PMSI is instantiated as an RSVP-TE P2MP LSP,
   additional MPLS encapsulation procedures are used, as specified in
   [RSVP-P2MP].

   If other methods of assigning MPLS labels to multicast
   distribution trees are in use, these multicast distribution trees
   may be used as appropriate to instantiate PMSIs, and any
   additional MPLS encapsulation procedures may be used.

    Packets received       Packets in transit     Packets forwarded
    at ingress PE          in the service         by egress PEs
                           provider network

                           +---------------+
                           | P-MPLS Header |
    ++=============++      ++=============++      ++=============++
    || C-IP Header ||      || C-IP Header ||      || C-IP Header ||
    ++=============++ >>>> ++=============++ >>>> ++=============++
    ||  C-Payload  ||      ||  C-Payload  ||      ||  C-Payload  ||
    ++=============++      ++=============++      ++=============++

11.2. Encapsulations for Multiple PMSIs per Tunnel

   The encapsulations for transmitting multicast data messages when
   there are multiple PMSIs per tunnel are based on the encapsulation
   for a single PMSI per tunnel, but with an MPLS label used for
   demultiplexing.

   The label is upstream-assigned and distributed via BGP as
   specified in section 4.  The label must enable the receiver to
   select the proper VRF, and may enable the receiver to select a
   particular multicast routing entry within that VRF.

11.2.1. Encapsulation in GRE

   Rather than the IP-in-GRE encapsulation discussed in section
   11.1.1, we use the MPLS-in-GRE encapsulation.  This is specified
   in [MPLS-IP].  The GRE protocol type MUST be set to 0x8847.  (The
   reason for using the unicast rather than the multicast value is
   specified in [MPLS-MCAST-ENCAPS].)

11.2.2. Encapsulation in IP

   Rather than the IP-in-IP encapsulation discussed in section
   11.1.2, we use the MPLS-in-IP encapsulation.  This is specified in
   [MPLS-IP].  The IP protocol number MUST be set to the value
   identifying the payload as an MPLS unicast packet.  (There is no
   "MPLS multicast packet" protocol number.)

11.3. Encapsulations for Unicasting PIM Control Messages

   When PIM control messages are unicast, rather than being sent on
   an MI-PMSI, the receiving PE needs to determine the particular
   MVPN whose multicast routing information is being carried in the
   PIM message.  One method is to use a downstream-assigned MPLS
   label which the receiving PE has allocated for this specific
   purpose.  The label would be distributed via BGP.  This can be
   used with an MPLS, MPLS-in-GRE, or MPLS-in-IP encapsulation.

   A possible alternative is to modify the PIM messages themselves so
   that they carry information which can be used to identify a
   particular MVPN, such as an RT.

   This area is still under consideration.

11.4. General Considerations for IP and GRE Encapsulations

   These considerations apply also to the MPLS-in-IP and MPLS-in-GRE
   encapsulations.

11.4.1. MTU

   Path MTU discovery cannot be relied upon to ensure that the
   transmitter sends packets which are small enough to reach all the
   destinations.  This requires that:

   1. The ingress PE router (the one that does the encapsulation)
      must not set the DF bit in the outer header, and

   2. If the "DF" bit is cleared in the IP header of the C-packet,
      the ingress PE should fragment the C-packet before
      encapsulation if appropriate.  This is very important in
      practice, because the performance of the reassembly function is
      significantly lower than that of decapsulating and forwarding
      packets on today's router implementations.

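   The MTU rules above can be summarized by the following
   non-normative sketch.  Real IP fragmentation is more involved and
   is elided; the handling of the "DF set and too large" case is an
   assumption, not something this document prescribes, and the
   function and parameter names are hypothetical.

      # Illustrative sketch of the MTU rules in section 11.4.1.
      def prepare_for_encapsulation(c_packet_len, c_df_bit,
                                    path_mtu, overhead):
          """Return the action an ingress PE takes before
          encapsulating.  The outer header is always sent with DF
          clear; if the customer packet is too large and its own DF
          bit is clear, the C-packet is fragmented before
          encapsulation (cheaper than reassembly at the egress)."""
          if c_packet_len + overhead <= path_mtu:
              return "encapsulate"
          if c_df_bit == 0:
              return "fragment-then-encapsulate"
          # DF set and too large: this document does not prescribe
          # this case; one option is to encapsulate with the outer DF
          # clear so that only the delivery packet gets fragmented.
          return "encapsulate-outer-df-clear"

      if __name__ == "__main__":
          print(prepare_for_encapsulation(1500, 0, 1500, 28))  # fragment
          print(prepare_for_encapsulation(1400, 1, 1500, 28))  # fits as is
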
11.4.2. TTL

   The ingress PE should not copy the TTL field from the payload IP
   header received from a CE router to the delivery IP or MPLS
   header.  The setting of the TTL of the delivery header is
   determined by the local policy of the ingress PE router.

11.4.3. Differentiated Services

   By default, the setting of the DS field in the delivery IP header
   should follow the guidelines outlined in [RFC2983].  Setting the
   EXP field in the delivery MPLS header should follow the guidelines
   in [REF].  An SP may also choose to deploy any of the additional
   mechanisms the PE routers support.

11.4.4. Avoiding Conflict with Internet Multicast

   If the SP is providing Internet multicast, distinct from its VPN
   multicast services, and using PIM-based P-multicast trees, it must
   ensure that the group P-addresses which it uses in support of MVPN
   services are distinct from any of the group addresses of the
   Internet multicasts it supports.  This is best done by using
   administratively scoped addresses [ADMIN-ADDR].

   The group C-addresses need not be distinct from either the group
   P-addresses or the Internet multicast addresses.

12. Security Considerations

   To be supplied.

13. IANA Considerations

   To be supplied.

14. Other Authors

   Sarveshwar Bandi, Yiqun Cai, Thomas Morin, Yakov Rekhter, IJsbrand
   Wijnands, Seisho Yasukawa

15. Other Contributors

   Significant contributions were made by Arjen Boers, Toerless
   Eckert, Adrian Farrel, Luyuan Fang, Dino Farinacci, Lenny
   Guiliano, Shankar Karuna, Anil Lohiya, Tom Pusateri, Ted Qian,
   Robert Raszuk, Tony Speakman, and Dan Tappan.

16. Authors' Addresses

   Rahul Aggarwal (Editor)
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: rahul@juniper.net

   Sarveshwar Bandi
   Motorola
   Vanenburg IT park, Madhapur,
   Hyderabad, India
   Email: sarvesh@motorola.com

   Yiqun Cai
   Cisco Systems, Inc.
   170 Tasman Drive
   San Jose, CA, 95134
   E-mail: ycai@cisco.com

   Thomas Morin
   France Telecom R & D
   2, avenue Pierre-Marzin
   22307 Lannion Cedex
   France
   Email: thomas.morin@francetelecom.com

   Yakov Rekhter
   Juniper Networks
   1194 North Mathilda Ave.
   Sunnyvale, CA 94089
   Email: yakov@juniper.net

   Eric C. Rosen (Editor)
   Cisco Systems, Inc.
   1414 Massachusetts Avenue
   Boxborough, MA, 01719
   E-mail: erosen@cisco.com

   IJsbrand Wijnands
   Cisco Systems, Inc.
   170 Tasman Drive
   San Jose, CA, 95134
   E-mail: ice@cisco.com

   Seisho Yasukawa
   NTT Corporation
   9-11, Midori-Cho 3-Chome
   Musashino-Shi, Tokyo 180-8585,
   Japan
   Phone: +81 422 59 4769
   Email: yasukawa.seisho@lab.ntt.co.jp

17. Normative References

   [MVPN-REQ] T. Morin, Ed., "Requirements for Multicast in L3
   Provider-Provisioned VPNs", draft-ietf-l3vpn-ppvpn-mcast-reqts-
   08.txt, May 2006

   [RFC4364] E. Rosen, Y. Rekhter, "BGP/MPLS IP VPNs", RFC 4364,
   February 2006

   [RFC2119] S. Bradner, "Key words for use in RFCs to Indicate
   Requirement Levels", RFC 2119, March 1997

   [PIM-SM] B. Fenner, M. Handley, H. Holbrook, I. Kouvelas,
   "Protocol Independent Multicast - Sparse Mode (PIM-SM)",
   draft-ietf-pim-sm-v2-new-12.txt, March 2006

   [RSVP-P2MP] R. Aggarwal, et al., "Extensions to RSVP-TE for Point
   to Multipoint TE LSPs", draft-ietf-mpls-rsvp-te-p2mp-05.txt, May
   2006

   [MPLS-IP] T. Worster, Y. Rekhter, E. Rosen, "Encapsulating MPLS in
   IP or Generic Routing Encapsulation (GRE)", RFC 4023, March 2005

   [MPLS-MCAST-ENCAPS] T. Eckert, E. Rosen, R. Aggarwal, Y. Rekhter,
   "MPLS Multicast Encapsulations", draft-ietf-mpls-multicast-
   encaps-00.txt, February 2006

   [MPLS-UPSTREAM-LABEL] R. Aggarwal, Y. Rekhter, E. Rosen, "MPLS
   Upstream Label Assignment and Context Specific Label Space",
   draft-ietf-mpls-upstream-label-01.txt, February 2006

   [MVPN-BGP] R. Aggarwal, E. Rosen, T. Morin, Y. Rekhter, C.
   Kodeboniya, "BGP Encodings for Multicast in MPLS/BGP IP VPNs",
   draft-raggarwa-l3vpn-2547bis-mcast-bgp-02.txt, June 2006

18. Informative References

   [ROSEN-8] E. Rosen, Y. Cai, I. Wijnands, "Multicast in MPLS/BGP IP
   VPNs", draft-rosen-vpn-mcast-08.txt

   [MVPN-BASE] R. Aggarwal, A. Lohiya, T. Pusateri, Y. Rekhter, "Base
   Specification for Multicast in MPLS/BGP VPNs",
   draft-raggarwa-l3vpn-2547-mvpn-00.txt

   [RAGGARWA-MCAST] R. Aggarwal, et al., "Multicast in BGP/MPLS VPNs
   and VPLS", draft-raggarwa-l3vpn-mvpn-vpls-mcast-01.txt

   [RP-MVPN] S. Yasukawa, et al., "BGP/MPLS IP Multicast VPNs",
   draft-yasukawa-l3vpn-p2mp-mcast-00.txt

   [RFC2784] D. Farinacci, et al., "Generic Routing Encapsulation
   (GRE)", RFC 2784, March 2000

   [RFC2890] G. Dommety, "Key and Sequence Number Extensions to GRE",
   RFC 2890, September 2000

   [RFC1853] W. Simpson, "IP in IP Tunneling", RFC 1853, October 1995

   [RFC2983] D. Black, "Differentiated Services and Tunnels",
   RFC 2983, October 2000

19. Full Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

20. Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology
   described in this document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.  Information on the procedures with respect to rights
   in RFC documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention
   any copyrights, patents or patent applications, or other
   proprietary rights that may cover technology that may be required
   to implement this standard.  Please address the information to the
   IETF at ietf-ipr@ietf.org.