INTERNET DRAFT                                               Y. Serbest
Internet Engineering Task Force                                     SBC
Document:                                                 Marc Lasserre
draft-serbest-l2vpn-vpls-mcast-00.txt                          Rob Nath
October 2004                                                 Riverstone
Category: Informational                                   Vach Kompella
Expires: April 2005                                             Ray Qiu
                                                        Sunil Khandekar
                                                                Alcatel

                   Supporting IP Multicast over VPLS

Status of this memo

   By submitting this Internet-Draft, we represent that any applicable
   patent or other IPR claims of which we are aware have been
   disclosed, or will be disclosed, and any of which we become aware
   will be disclosed, in accordance with RFC 3668.

   This document is an Internet-Draft and is in full conformance with
   Sections 5 and 6 of RFC 3667 and Section 5 of RFC 3668.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   In Virtual Private LAN Service (VPLS), the PE devices provide a
   logical interconnect such that CE devices belonging to a specific
   VPLS instance appear to be connected by a single LAN.  A VPLS
   solution performs replication for multicast traffic at the ingress
   PE devices.  When replicated at the ingress PE, multicast traffic
   wastes bandwidth when 1. multicast traffic is sent to sites with no
   members, and 2. pseudo wires to different sites go through a shared
   path.
   This document addresses the former through IGMP and PIM snooping.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119.

Table of Contents

   1     Introduction
   2     Overview of VPLS
   3     Multicast Traffic over VPLS
   4     Constraining of IP Multicast in a VPLS
   4.1   General Rules for IGMP/PIM Snooping in VPLS
   4.2   IGMP Snooping for VPLS
   4.2.1 IGMP Join
   4.2.2 IGMP Leave
   4.2.3 Failure Scenarios
   4.3   PIM Snooping for VPLS
   4.3.1 PIM-DM
   4.3.2 PIM-SM
   4.3.3 PIM-SSM
   4.3.4 Bidirectional-PIM (BIDIR-PIM)
   5     Security Considerations
   6     References
   6.1   Normative References
   6.2   Informative References
   7     Authors' Addresses
   8     Intellectual Property Statement
   9     Full Copyright Statement

1  Introduction

   In Virtual Private LAN Service (VPLS), the Provider Edge (PE)
   devices provide a logical interconnect such that Customer Edge (CE)
   devices belonging to a specific VPLS instance appear to be connected
   by a single LAN.  The forwarding information base for a particular
   VPLS instance is populated dynamically by source MAC address
   learning.  This is a straightforward solution to support unicast
   traffic, with reasonable flooding for unknown unicast traffic.
   Since a VPLS provides LAN emulation for IEEE bridges as well as for
   routers, unicast and multicast traffic need to follow the same path
   for layer-2 protocols to work properly.  As such, multicast traffic
   is treated as broadcast traffic and is flooded to every site in the
   VPLS instance.

   VPLS solutions (i.e., [VPLS-LDP] and [VPLS-BGP]) perform replication
   for multicast traffic at the ingress PE devices.  When replicated at
   the ingress PE, multicast traffic wastes bandwidth when: 1.
   multicast traffic is sent to sites with no members, 2. pseudo wires
   to different sites go through a shared path, and 3. multicast
   traffic is forwarded along a shortest path tree as opposed to the
   minimum cost spanning tree.  This document addresses the first
   problem through IGMP and PIM snooping.  Using VPLS in conjunction
   with IGMP and/or PIM snooping has the following advantages:
   - It improves the efficiency of IP multicast over VPLS (not
     necessarily to the optimum, as there can still be bandwidth
     waste),
   - It prevents sending multicast traffic to sites with no members,
   - It keeps P routers in the core stateless,
   - The Service Provider (SP) does not need to perform the tasks
     required to provide multicast service (e.g., running PIM, managing
     P-group addresses, managing multicast tunnels, etc.),
   - The SP does not need to maintain PIM adjacencies with the
     customers.

   In this document, we describe the procedures for Internet Group
   Management Protocol (IGMP) and Protocol Independent Multicast (PIM)
   snooping over VPLS for efficient distribution of IP multicast
   traffic.
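   The bandwidth problem the snooping approach targets can be sketched
   in a few lines of code.  This is an illustrative sketch only, not
   part of the draft; the function and port names (AC*/PW*) are
   hypothetical and chosen to match the reference diagrams used later.

   ```python
   # Sketch: plain VPLS ingress replication vs. snooping-constrained
   # forwarding. Hypothetical helper names; ports are ACs/PWs.

   def flood_ingress_replication(frame, ingress_port, ports):
       """Plain VPLS: the ingress PE replicates a multicast frame to
       every other AC/PW in the instance, regardless of membership."""
       return [p for p in ports if p != ingress_port]

   def forward_with_snooping(frame, ingress_port, ports, member_ports):
       """With IGMP/PIM snooping: replicate only toward ports where the
       snooped state shows members; fall back to flooding when the PE
       has no state for the group."""
       if not member_ports:
           return flood_ingress_replication(frame, ingress_port, ports)
       return [p for p in member_ports if p != ingress_port]

   ports = ["AC1", "PW1to2", "PW1to3", "PW1to4"]
   print(flood_ingress_replication("mcast", "AC1", ports))  # 3 copies
   print(forward_with_snooping("mcast", "AC1", ports, {"PW1to3"}))
   ```

   The fallback branch reflects the flooding rule made explicit later
   (Rule 3 in Section 4.1): with no snooping state, traffic is flooded.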
2  Overview of VPLS

   In the case of VPLS, the PE devices provide a logical interconnect
   such that CE devices belonging to a specific VPLS appear to be
   connected by a single LAN.  End-to-end VPLS consists of a bridge
   module and a LAN emulation module ([L2VPN-FR]).

   In a VPLS, a customer site receives Layer-2 service from the SP.
   The PE is attached via an access connection to one or more CEs.
   The PE performs forwarding of user data packets based on
   information in the Layer-2 header, that is, the MAC destination
   address.  The CE sees a bridge.

   The details of the VPLS reference model, which we summarize here,
   can be found in [L2VPN-FR].  In VPLS, the PE can be viewed as
   containing a Virtual Switching Instance (VSI) for each L2VPN that
   it serves.  A CE device attaches, possibly through an access
   network, to a bridge module of a PE.  Within the PE, the bridge
   module attaches, through an Emulated LAN Interface, to an Emulated
   LAN.  For each VPLS, there is an Emulated LAN instance.  The
   Emulated LAN consists of VPLS Forwarder modules (one per PE per
   VPLS service instance) connected by pseudo wires (PWs), where the
   PWs may travel through Packet Switched Network (PSN) tunnels over a
   routed backbone.  A VSI is a logical entity that contains a VPLS
   forwarder module and the part of the bridge module relevant to the
   VPLS service instance [L2VPN-FR].  Hence, the VSI terminates PWs
   for interconnection with other VSIs and also terminates attachment
   circuits (ACs) for accommodating CEs.  A VSI includes the
   forwarding information base for an L2VPN [L2VPN-FR], which is the
   set of information regarding how to forward Layer-2 frames received
   over the AC from the CE to VSIs in other PEs supporting the same
   L2VPN service (and/or to other ACs), and contains information
   regarding how to forward Layer-2 frames received from PWs to ACs.
   Forwarding information bases can be populated dynamically (such as
   by source MAC address learning) or statically (e.g., by
   configuration).  Each PE device is responsible for proper forwarding
   of the customer traffic to the appropriate destination(s) based on
   the forwarding information base of the corresponding VSI.

3  Multicast Traffic over VPLS

   In VPLS, if a PE receives a frame from an Attachment Circuit (AC)
   with no matching entry in the forwarding information base for that
   particular VPLS instance, it floods the frame to all other PEs
   (which are part of this VPLS instance) and to directly connected
   ACs (other than the one the frame was received from).  The flooding
   of a frame occurs when:
   - The destination MAC address has not been learned,
   - The destination MAC address is a broadcast address,
   - The destination MAC address is a multicast address.

   Malicious attacks (e.g., receiving unknown frames constantly)
   aside, the first situation is handled by VPLS solutions as long as
   the destination MAC address can be learned.  From that point on,
   the frames will not be flooded.  A PE is REQUIRED to have
   safeguards, such as unknown unicast limiting and MAC table
   limiting, against malicious unknown unicast attacks.

   There is no way around flooding broadcast frames.  To prevent
   runaway broadcast traffic from adversely affecting the VPLS service
   and the SP network, a PE is REQUIRED to have tools to rate limit
   broadcast traffic as well.

   Similar to broadcast frames, multicast frames are flooded as well,
   as a PE cannot know where multicast members reside.  Rate limiting
   multicast traffic, while possible, should be done carefully, since
   several network control protocols rely on multicast.  For one
   thing, layer-2 and layer-3 protocols utilize multicast for their
   operation.  For instance, Bridge Protocol Data Units (BPDUs) use an
   IEEE-assigned All Bridges multicast MAC address, and OSPF messages
   are sent to the All OSPF Routers multicast MAC address.  If the
   rate limiting of multicast traffic is not done properly, the
   customer network will experience instability and poor performance.
   For another, it is not straightforward to determine the right rate
   limiting parameters for multicast.

   A VPLS solution MUST NOT affect the operation of customer layer-2
   protocols (e.g., BPDUs).  Additionally, a VPLS solution MUST NOT
   affect the operation of layer-3 protocols.

   In the following section, we describe procedures to constrain the
   flooding of IP multicast traffic in a VPLS.

4  Constraining of IP Multicast in a VPLS

   The objective of improving the efficiency of VPLS for multicast
   traffic is subject to the following constraints:
   - The service is VPLS, i.e., a layer-2 VPN,
   - In VPLS, ingress replication is required,
   - There is no layer-3 adjacency (e.g., PIM) between a CE and a PE.

   Under these circumstances, the most obvious approach is the
   implementation of IGMP and PIM snooping in VPLS.  Other multicast
   routing protocols, such as DVMRP and MOSPF, are outside the scope
   of this document.

   Another approach to constrain multicast traffic in a VPLS is to
   utilize point-to-multipoint LSPs (e.g., [PMP-RSVP-TE]).  In such a
   case, one has to establish a point-to-multipoint LSP from a source
   PE (i.e., the PE to which the source router is connected) to all
   other PEs participating in the VPLS instance.  In this case, if
   nothing is done, all PEs will receive multicast traffic even if
   they don't have any members hanging off of them.  One can apply
   IGMP/PIM snooping, but then IGMP/PIM snooping would need to be done
   in P routers as well.  One can propose a dynamic way of
   establishing point-to-multipoint LSPs, for instance by mapping
   IGMP/PIM messages to RSVP-TE signaling.  One should consider the
   effect of such an approach on the signaling load and on the delay
   between the time a join request is received and the time the
   traffic is delivered (this is important for IPTV applications, for
   instance).  This approach is outside the scope of this document.

   In some extremely controlled cases, such as a ring topology of PE
   routers with no P routers, or a tree topology, the efficiency of
   the replication of IP multicast can be improved.  For instance,
   spoke PWs of a hierarchical VPLS can be daisy-chained together and
   some replication rules can be devised.  These cases are not
   expected to be common and will not be considered in this document.

   In the following sections, we provide some guidelines for the
   implementation of IGMP and PIM snooping in VPLS.

4.1  General Rules for IGMP/PIM Snooping in VPLS

   The following rules for the correct operation of IGMP/PIM snooping
   MUST be followed.

   Rule 1: IGMP and PIM messages forwarded by PEs MUST follow the
   split-horizon rule for mesh PWs as defined in [VPLS-LDP].

   Rule 2: IGMP/PIM snooping states in a PE MUST be per VPLS instance.

   Rule 3: If a PE does not have any entry in an IGMP/PIM snooping
   state for multicast group (*,G) or (S,G), the multicast traffic to
   that group in the VPLS instance MUST be flooded.

   Rule 4: A PE MUST support PIM mode selection per VPLS instance via
   CLI and/or EMS.  Another option could be to deduce the PIM mode
   from the RP address for a specific multicast group.  For instance,
   an RP address can be learned during the Designated Forwarder (DF)
   election stage for Bidirectional-PIM.

4.2  IGMP Snooping for VPLS

   IGMP is a mechanism to inform the routers on a subnet of a host's
   request to become a member of a particular multicast group.
   IGMP is a stateful protocol.  The router (i.e., the Querier)
   regularly verifies that the hosts want to continue to participate
   in the multicast groups by sending periodic queries, transmitted to
   the All Hosts multicast group (IP: 224.0.0.1, MAC:
   01-00-5E-00-00-01) on the subnet.  If the hosts are still
   interested in that particular multicast group, they respond with a
   membership report message, transmitted to the multicast group of
   which they are members.  In IGMPv1 [RFC1112], hosts simply stop
   responding to IGMP queries with membership reports when they want
   to leave a multicast group.  IGMPv2 [RFC2236] adds a leave message
   that a host uses when it needs to leave a particular multicast
   group.  IGMPv3 [RFC3376] extends the report/leave mechanism beyond
   the multicast group to permit joins and leaves to be issued for
   specific source/group (S,G) pairs.

   In IGMP snooping, a PE snoops on the IGMP protocol exchange between
   hosts and routers and, based on that, restricts the flooding of IP
   multicast traffic.  In the following, we explore the mechanisms
   involved in implementing IGMP snooping for VPLS.  Please refer to
   Figure 1 as an example of VPLS with IGMP snooping.  In the figure,
   Router 1 is the Querier.  If multiple routers exist on a single
   subnet (which is basically what a VPLS instance is), they can
   mutually elect a designated router (DR) that will manage all of the
   IGMP messages for that subnet.
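   The snooping state written in the text as, e.g.,
   [(*,G);Querier:AC4;Hosts:PW2to3] can be sketched as a small table
   per VPLS instance.  This is a minimal sketch under my own data
   layout, not the draft's specification; port names follow Figure 1.

   ```python
   # Sketch of a per-VPLS-instance IGMP snooping table: for each group,
   # remember the port (AC/PW) behind the Querier and the ports with
   # members. Hypothetical class and method names.

   class IgmpSnoopingState:
       def __init__(self):
           # group -> {"querier": port or None, "hosts": set of ports}
           self.groups = {}

       def on_query(self, group, port):
           # Learn where the Querier lives (e.g., AC4 on PE 3).
           entry = self.groups.setdefault(
               group, {"querier": None, "hosts": set()})
           entry["querier"] = port

       def on_report(self, group, port):
           # A membership report adds the arrival AC/PW to the Hosts list.
           entry = self.groups.setdefault(
               group, {"querier": None, "hosts": set()})
           entry["hosts"].add(port)

       def egress_ports(self, group, all_ports):
           # Rule 3: with no snooping state for the group, flood.
           if group not in self.groups:
               return set(all_ports)
           entry = self.groups[group]
           out = set(entry["hosts"])
           if entry["querier"] is not None:
               out.add(entry["querier"])
           return out

   pe3 = IgmpSnoopingState()
   pe3.on_query("(*,G)", "AC4")      # Router 1's query arrives on AC4
   pe3.on_report("(*,G)", "PW2to3")  # Host 2's report arrives over PW2to3
   print(pe3.egress_ports("(*,G)", ["AC4", "PW1to3", "PW2to3", "PW3to4"]))
   ```

   With this state, PE 3 would replicate group traffic only toward AC4
   and PW2to3 instead of flooding it to every port of the instance.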
                              VPLS Instance
   +------+ AC1 +------+                      +------+ AC4 +------+
   | Host |-----|  PE  |----------------------|  PE  |-----|Router|
   |  1   |     |  1   |\       PW1to3       /|  3   |     |  1   |
   +------+     +------+ \                  / +------+     +------+
                    |     \                /      |
                    |      \              /       |
                    |       \            /PW2to3  |
                    |        \          /         |
              PW1to2|         \        /          |PW3to4
                    |         /        \          |
                    |        /          \PW1to4   |
                    |       /            \        |
                    |      /              \       |
   +------+     +------+ /     PW2to4      \ +------+     +------+
   | Host |     |  PE  |/                    \|  PE  |     |Router|
   |  2   |-----|  2   |----------------------|  4   |-----|  2   |
   +------+ AC2 +------+                      +------+ AC5 +------+
                    |
                    |AC3
                 +------+
                 | Host |
                 |  3   |
                 +------+

         Figure 1  Reference Diagram for IGMP Snooping for VPLS

4.2.1  IGMP Join

   The IGMP snooping mechanism for joining a multicast group (for all
   IGMP versions) works as follows:
   - The Querier sends a membership query to all hosts (IP: 224.0.0.1,
     MAC: 01-00-5E-00-00-01).  The membership query can be either a
     general query or a group-specific query.
   - PE 3 replicates the query message and forwards it to all PEs
     participating in the VPLS instance (i.e., PE 1, PE 2, PE 4).
   - PE 3 notes that there is already a directly connected Querier.
     Basically, it keeps a state of [(*,G);Querier:AC4] if it is a
     group-specific query, and a state of [(*,*);Querier:AC4] if it is
     a general query.
   - All PEs then forward the query to the ACs which are part of the
     VPLS instance.
   - At this point, all PEs learn the place of the Querier.  For
     instance, for PE 1 it is behind PW1to3, for PE 2 behind PW2to3,
     for PE 3 behind AC4, and for PE 4 behind PW3to4.
   - Suppose that all hosts (Host 1, Host 2, and Host 3) want to
     participate in the multicast group.
   - Host 2 first (for the sake of the example) sends a membership
     report to the multicast group (e.g., IP: 239.1.1.1, MAC:
     01-00-5E-01-01-01) of which it wants to be a member.
   - PE 2 replicates the membership report message and forwards it to
     all PEs participating in the VPLS instance (i.e., PE 1, PE 3,
     PE 4).
   - PE 2 notes that there is a directly connected host willing to
     participate in the multicast group and updates its state to
     [(*,G);Querier:PW2to3;Hosts:AC2].

   Guideline 1: A PE MUST NOT forward a membership report message to
   ACs participating in the VPLS instance unless it has received a
   query message from that AC.  This is necessary to avoid report
   suppression for other members, so that the PEs construct correct
   states and no receiver hosts are orphaned.

   - PE 2 does not forward the membership report of Host 2 to Host 3.
   - Per the guideline above, PE 1 does not forward the membership
     report of Host 2 to Host 1.
   - Per the guideline above, PE 3 does forward the membership report
     of Host 2 to Router 1 (the Querier).
   - PE 3 notes that there is a host in the VPLS instance willing to
     participate in the multicast group and updates its state to
     [(*,G);Querier:AC4;Hosts:PW2to3], regardless of the type of the
     query.
   - Let's assume that Host 1 subsequently sends a membership report
     to the same multicast group.
   - PE 1 replicates the membership report message and forwards it to
     all PEs participating in the VPLS instance (i.e., PE 2, PE 3,
     PE 4).
   - PE 1 notes that there is a directly connected host willing to
     participate in the multicast group.  Basically, it keeps a state
     of [(*,G);Querier:PW1to3;Hosts:AC1,PW1to2].
   - Per Guideline 1, PE 2 does not forward the membership report of
     Host 1 to Host 2 and Host 3.
   - PE 3 receives the membership report message of Host 1 and checks
     its states.  Per Guideline 1, it sends the report to Router 1.
     It also updates its state to
     [(*,G);Querier:AC4;Hosts:PW2to3,PW1to3].
   - Now, Host 3 sends a membership report to the same multicast
     group.
   - PE 2 updates its state to
     [(*,G);Querier:PW2to3;Hosts:AC2,AC3,PW1to2].  It then floods the
     report message to all PEs participating in the VPLS instance.
     Per Guideline 1, only PE 3 forwards the membership report of
     Host 3 to Router 1.
   - At this point, all PEs have the necessary states to ensure that
     no multicast traffic will be sent to sites with no members.

   The previous steps work the same way for all three versions
   (IGMPv1, IGMPv2, and IGMPv3) when the query is general or source
   specific.  The group-and-source-specific query of IGMPv3 is
   considered separately below.

   The IGMP snooping mechanism for joining a multicast group (for
   IGMPv3) works as follows:
   - The Querier sends a membership query to all hosts
     (IP: 224.0.0.1, MAC: 01-00-5E-00-00-01).  The membership query is
     a group-and-source-specific query with a list of sources (e.g.,
     S1, S2, ..., Sn).
   - PE 3 replicates the query message and forwards it to all PEs
     participating in the VPLS instance (i.e., PE 1, PE 2, PE 4).
   - PE 3 notes that there is already a directly connected Querier.
     Basically, it keeps a state of {[(S1,G);Querier:AC4],
     [(S2,G);Querier:AC4], ..., [(Sn,G);Querier:AC4]}.
   - All PEs then forward the query to the ACs which are part of the
     VPLS instance.
   - At this point, all PEs learn the place of the Querier.  For
     instance, for PE 1 it is behind PW1to3, for PE 2 behind PW2to3,
     for PE 3 behind AC4, and for PE 4 behind PW3to4.
   - Suppose that all hosts (Host 1, Host 2, and Host 3) want to
     participate in the multicast group.  Host 1 and Host 2 want to
     subscribe to (Sn,G), and Host 3 wants to subscribe to (S3,G).
   - Host 2 first (for the sake of the example) sends a membership
     report message for (Sn,G) to the IP address 224.0.0.22 (MAC:
     01-00-5E-00-00-16).
   - PE 2 replicates the membership report message and forwards it to
     all PEs participating in the VPLS instance (i.e., PE 1, PE 3,
     PE 4).
   - PE 2 notes that there is a directly connected host willing to
     participate in the multicast group and updates its state to
     {[(Sn,G);Querier:PW2to3;Hosts:AC2]}.
   - Per Guideline 1, PE 2 does not forward the membership report of
     Host 2 to Host 3.
   - Per Guideline 1, PE 1 does not forward the membership report of
     Host 2 to Host 1.
   - Per Guideline 1, PE 3 does forward the membership report of
     Host 2 to Router 1 (the Querier).
   - PE 3 notes that there is a host in the VPLS instance willing to
     participate in the multicast group.  Basically, it updates its
     state to {[(S1,G);Querier:AC4], [(S2,G);Querier:AC4], ...,
     [(Sn,G);Querier:AC4;Hosts:PW2to3]}.
   - Let's say Host 1 now sends a membership report to the same
     multicast group.
   - PE 1 replicates the membership report message and forwards it to
     all PEs participating in the VPLS instance (i.e., PE 2, PE 3,
     PE 4).
   - PE 1 notes that there is a directly connected host willing to
     participate in the multicast group.  Basically, it keeps a state
     of {[(Sn,G);Querier:PW1to3;Hosts:AC1,PW1to2]}.
   - Per Guideline 1, PE 2 does not forward the membership report of
     Host 1 to Host 2 and Host 3.
   - PE 3 receives the membership report message of Host 1 and checks
     its states.  It updates its state to {[(S1,G);Querier:AC4],
     [(S2,G);Querier:AC4], ..., [(Sn,G);Querier:AC4;Hosts:
     PW2to3,PW1to3]}.  It then forwards the membership report of
     Host 1 to Router 1 per Guideline 1.
   - Finally, Host 3 sends a membership report for (S3,G) to the same
     multicast group.
   - PE 2 replicates the membership report message and forwards it to
     all PEs participating in the VPLS instance (i.e., PE 1, PE 3,
     PE 4).
   - Per Guideline 1, PE 2 does not forward the membership report of
     Host 3 to Host 2.
   - Per Guideline 1, PE 1 does not forward the membership report of
     Host 3 to Host 1.
   - Per Guideline 1, PE 3 does forward the membership report of
     Host 3 to Router 1 (the Querier).
   - PE 2 notes that there is a directly connected host willing to
     participate in the multicast group and updates its state to
     {[(S3,G);Querier:PW2to3;Hosts:AC3],
     [(Sn,G);Querier:PW2to3;Hosts:AC2,PW1to2]}.
   - PE 3 receives the membership report message of Host 3 and checks
     its states.  It updates its state to {[(S1,G);Querier:AC4],
     [(S2,G);Querier:AC4], [(S3,G);Querier:AC4;Hosts:PW2to3], ...,
     [(Sn,G);Querier:AC4;Hosts:PW2to3,PW1to3]}.  It then forwards the
     membership report to the Querier (Router 1).

   At this point, all PEs have the necessary states to avoid sending
   multicast traffic to sites with no members.

   Based on the above description of IGMPv3-based snooping for VPLS,
   one may conclude that the PEs MUST have the capability to store
   (S,G) state and MUST forward/replicate traffic accordingly.  This
   is, however, not mandatory.  A PE MAY keep only (*,G)-based states
   rather than per-(S,G) states, with the understanding that this will
   result in less efficient IP multicast forwarding within each VPLS
   instance.

   Guideline 2: If a PE receives an unsolicited report message and it
   does not possess a state for that particular multicast group, it
   MUST flood that unsolicited membership report message to all PEs
   participating in the VPLS instance, as well as to the Querier if it
   is locally attached.

4.2.2  IGMP Leave

   The IGMP snooping mechanism for leaving a multicast group works as
   follows:
   - In the case of IGMPv2/IGMPv3, when a PE receives a leave
     (*,G)/(S,G) message from a host via its AC, it first removes the
     AC from its state.

   Guideline 3: A PE MUST NOT forward a leave (*,G)/(S,G) message to
   ACs participating in the VPLS instance if the PE still has locally
   connected hosts, or hosts connected over an H-VPLS spoke, in its
   state.

   Guideline 4: A PE MUST forward a leave (*,G)/(S,G) message to all
   PEs participating in the VPLS instance.  A PE MAY forward the leave
   (*,G)/(S,G) message to the Querier ONLY if there are no member
   hosts in its state.

   Guideline 5: If a PE does not receive a membership report from an
   AC for three consecutive queries, the PE MUST remove the AC from
   its state.

4.2.3  Failure Scenarios

   So far we have not considered failures, which are the focus of this
   section.
   - In case the Querier fails (e.g., the AC to the Querier fails),
     another router in the VPLS instance will be selected as the DR.
     The new DR will then send the queries.  In such circumstances,
     the IGMP snooping states in the PEs will be updated/overwritten
     by the same procedure explained above.
   - In case a host fails (e.g., the AC to the host fails), the PE
     removes the host from its IGMP snooping state for that particular
     multicast group.  Guidelines 3, 4, and 5 still apply here.
   - In case a PW (which is in an IGMP snooping state) fails, the PEs
     will remove the PW from their IGMP snooping states.  For
     instance, if PW1to3 fails, then PE 1 will remove PW1to3 from its
     state as the Querier connection, and PE 3 will remove PW1to3 from
     its state as one of the host connections.  Guidelines 3, 4, and 5
     still apply here.  After the PW is restored, the IGMP snooping
     states in the PEs will be updated/overwritten by the same
     procedure explained above.  One can implement a dead timer before
     making any changes to IGMP snooping states upon PW failure.  In
     that case, IGMP snooping states will be altered only if the PW
     cannot be restored before the dead timer expires.
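   The leave and aging behavior of Guidelines 3-5 can be sketched as
   two small decision functions.  The data layout and function names
   are my own assumptions; only the forwarding rules come from the
   guidelines above.

   ```python
   # Sketch of IGMP leave forwarding and membership aging in a VPLS PE,
   # following Guidelines 3-5. Hypothetical helper names.

   def leave_egress(pw_ports, querier_ac, remaining_local_hosts):
       """Ports to which a PE forwards a leave (*,G)/(S,G) message
       received on an AC.

       Guideline 4: forward to all PEs (mesh PWs); include a locally
       attached Querier only when no member hosts remain in the state.
       Guideline 3: ACs that still have local or H-VPLS-spoke members
       behind them never receive the leave.
       """
       out = set(pw_ports)
       if querier_ac is not None and not remaining_local_hosts:
           out.add(querier_ac)
       return out

   def should_age_out(missed_queries):
       """Guideline 5: drop an AC from the state after it fails to
       answer three consecutive queries."""
       return missed_queries >= 3

   # Example on PE 2 of Figure 1: Host 3 (AC3) is still a member, so
   # the leave goes to the PWs only; the Querier is remote here.
   print(leave_egress({"PW1to2", "PW2to3", "PW2to4"}, None, {"AC3"}))
   print(should_age_out(3))
   ```

   Note that a leave received over a PW would additionally be subject
   to the split-horizon rule of Rule 1, which this sketch does not
   model.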
527 4.3 PIM Snooping for VPLS 528 IGMP snooping procedures described above provide efficient delivery 529 of IP multicast traffic in a given VPLS service when end stations 530 are connected to the VPLS. However, when VPLS is offered as a WAN 531 service it is likely that the CE devices are routers and would run 532 PIM between them. To provide efficient IP multicasting in such 533 cases, it is necessary that the PE routers offering the VPLS service 534 do PIM snooping. This section describes the procedures for PIM 535 snooping. 537 PIM is a multicast routing protocol, which runs exclusively between 538 routers. PIM shares many of the common characteristics of a routing 539 protocol, such as discovery messages (e.g., neighbor discovery using 540 Hello messages), topology information (e.g., multicast tree), and 541 error detection and notification (e.g., dead timer and designated 542 router election). On the other hand, PIM does not participate in 543 any kind of exchange of databases, as it uses the unicast routing 544 table to provide reverse path information for building multicast 545 trees. There are a few variants of PIM. In PIM-DM ([PIM-DM]), 546 multicast data is pushed towards the members similar to broadcast 547 mechanism. PIM-DM constructs a separate delivery tree for each 548 multicast group. As opposed to PIM-DM, other PIM versions (PIM-SM 549 [RFC2362], PIM-SSM [PIM-SSM], and BIDIR-PIM [BIDIR-PIM]) invokes a 550 pull methodology instead of push technique. 552 PIM routers periodically exchange Hello messages to discover and 553 maintain stateful sessions with neighbors. After neighbors are 554 discovered, PIM routers can signal their intentions to join/prune 555 specific multicast groups. This is accomplished by having 556 downstream routers send an explicit join message (for the sake of 557 generalization, consider Graft messages for PIM-DM as join messages) 558 to the upstream routers. 
The join/prune message can be group specific (*,G) or group and
   source specific (S,G).

   In PIM snooping, a PE snoops on the PIM message exchange between
   routers and builds its multicast states.  Based on those
   multicast states, it forwards IP multicast traffic accordingly to
   avoid unnecessary flooding.

   In the following, the mechanisms involved in implementing PIMv2
   ([RFC2362]) snooping in VPLS are specified.  PIMv1 is out of the
   scope of this document.  Please refer to Figure 2 as an example
   of VPLS with PIM snooping.

                             VPLS Instance
   +------+ AC1 +------+             +------+ AC4 +------+
   |Router|-----|  PE  |-------------|  PE  |-----|Router|
   |  1   |     |  1   |\   PW1to3  /|  3   |     |  4   |
   +------+     +------+ \         / +------+     +------+
                   |      \       /      |
                   |       \     /       |
                   |        \   /PW2to3  |
                   |         \ /         |
             PW1to2|          \          |PW3to4
                   |         / \         |
                   |        /   \PW1to4  |
                   |       /     \       |
                   |      /       \      |
   +------+     +------+ /         \ +------+     +------+
   |Router|     |  PE  |/   PW2to4  \|  PE  |     |Router|
   |  2   |-----|  2   |-------------|  4   |-----|  5   |
   +------+ AC2 +------+             +------+ AC5 +------+
                   |
                   |AC3
                +------+
                |Router|
                |  3   |
                +------+

         Figure 2  Reference Diagram for PIM Snooping for VPLS

   In the following sub-sections, snooping mechanisms for each
   variant of PIM are specified.

4.3.1 PIM-DM
   The key characteristic of PIM-DM is its flood-and-prune behavior.
   Shortest path trees are built as a multicast source starts
   transmitting.

   In Figure 2, the multicast source is behind Router 4, and all
   routers have at least one receiver except Router 3 and Router 5.

   The PIM-DM snooping mechanism for neighbor discovery works as
   follows:
   - To establish PIM neighbor adjacencies, PIM multicast routers
     (all routers in this example) send PIM Hello messages to the
     ALL PIM Routers group address (IPv4: 224.0.0.13, MAC: 01-00-5E-
     00-00-0D) on every PIM-enabled interface.  The IPv6 ALL PIM
     Routers group is "ff02::d".
In addition, PIM Hello messages are used to elect the Designated
     Router (DR) for a multi-access network.  In PIM-DM, the DR acts
     as the Querier if IGMPv1 is used.  Otherwise, the DR has no
     function in PIM-DM.

   Guideline 6: PIM Hello messages MUST be flooded in the VPLS
   instance.  A PE MUST populate its "PIM Neighbors" list according
   to the snooping results.  This is a general PIM snooping
   guideline and applies to all variants of PIM snooping.

   Guideline 7: For PIM-DM only.  The "Flood to" list is populated
   with the ACs/PWs in the "PIM Neighbors" list.  Changes to the
   "PIM Neighbors" list MUST be replicated to the "Flood to" list.

   - Every router starts sending PIM Hello messages.  Per Guideline
     6, every PE replicates Hello messages and forwards them to all
     PEs participating in the VPLS instance.
   - Based on PIM Hello exchanges, PE routers populate PIM snooping
     states as follows.  PE 1: {[(,); Source:; Flood to: AC1,
     PW1to2, PW1to3, PW1to4], [PIM Neighbors: (Router 1,AC1),
     (Router 2,Router 3,PW1to2), (Router 4,PW1to3), (Router
     5,PW1to4)]}, PE 2: {[(,); Source:; Flood to: AC2, AC3, PW1to2,
     PW2to3, PW2to4], [PIM Neighbors: (Router 1,PW1to2), (Router
     2,AC2), (Router 3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)]},
     PE 3: {[(,); Source:; Flood to: AC4, PW1to3, PW2to3, PW3to4],
     [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(,); Source:;
     Flood to: AC5, PW1to4, PW2to4, PW3to4], [PIM Neighbors: (Router
     1,PW1to4), (Router 2,Router 3,PW2to4), (Router 4,PW3to4),
     (Router 5,AC5)]}.  The initial "Flood to" list is populated
     with the ACs/PWs in the "PIM Neighbors" list per Guideline 7.
   - PIM Hello messages contain a Holdtime value, which tells the
     receiver when to expire the neighbor adjacency (by default, 3.5
     times the Hello period).
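   The per-neighbor Holdtime bookkeeping described above might be
   sketched as follows.  This is a hypothetical Python sketch
   (NeighborTable and its methods are invented names); it assumes
   the standard PIM behavior that a Hello carrying a Holdtime of
   zero withdraws the neighbor immediately.

```python
# Illustrative sketch: track a Hello Holdtime per PIM neighbor,
# expire the neighbor when the Holdtime lapses, and remove it
# immediately on a Hello with Holdtime 0.  Names are hypothetical.

import time

class NeighborTable:
    def __init__(self):
        # router -> (pw_or_ac, expiry_timestamp)
        self.neighbors = {}

    def on_hello(self, router, link, holdtime, now=None):
        now = time.monotonic() if now is None else now
        if holdtime == 0:
            # Holdtime 0 means the neighbor is going away now.
            self.neighbors.pop(router, None)
        else:
            self.neighbors[router] = (link, now + holdtime)

    def expire(self, now=None):
        # Drop any neighbor whose Holdtime has lapsed.
        now = time.monotonic() if now is None else now
        for router, (_, expiry) in list(self.neighbors.items()):
            if now >= expiry:
                del self.neighbors[router]

table = NeighborTable()
table.on_hello("Router 1", "AC1", holdtime=105, now=0)
table.on_hello("Router 4", "PW1to3", holdtime=105, now=0)
table.expire(now=106)                      # both Holdtimes lapsed
table.on_hello("Router 1", "AC1", holdtime=105, now=106)
table.on_hello("Router 1", "AC1", holdtime=0, now=107)  # withdrawal
print(sorted(table.neighbors))             # -> []
```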
   Guideline 8: If a PE does not receive a Hello message from a
   router within its Holdtime, the PE MUST remove that router from
   the PIM snooping state.  If a PE receives a Hello message from a
   router with the Holdtime value set to zero, the PE MUST remove
   that router from the PIM snooping state immediately.  PEs MUST
   track the Hello Holdtime value per PIM neighbor.

   The PIM-DM snooping mechanism for multicast forwarding works as
   follows:
   - When the source starts sending traffic to multicast group
     (S,G), PE 3 updates its state to PE 3: {[(S,G); Source:
     (Router 4,AC4); Flood to: PW1to3, PW2to3, PW3to4], [PIM
     Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)]}.  AC4 is removed from the
     "Flood to" list for (S,G), since it is where the multicast
     traffic comes from.

   Guideline 9: Multicast traffic MUST be replicated on a per-PW and
   per-AC basis, i.e., even if there is more than one PIM neighbor
   behind a PW/AC, only one copy MUST be sent to that PW/AC.

   - PE 3 replicates the multicast traffic and sends it to the other
     PE routers in its "Flood to" list.
   - Consequently, all PEs update their states as follows.  PE 1:
     {[(S,G); Source: (Router 4,PW1to3); Flood to: AC1], [PIM
     Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router
     4,PW1to3), (Router 5,PW1to4)]}, PE 2: {[(S,G); Source: (Router
     4,PW2to3); Flood to: AC2, AC3], [PIM Neighbors: (Router
     1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3),
     (Router 5,PW2to4)]}, PE 4: {[(S,G); Source: (Router 4,PW3to4);
     Flood to: AC5], [PIM Neighbors: (Router 1,PW1to4), (Router
     2,Router 3,PW2to4), (Router 4,PW3to4), (Router 5,AC5)]}.

   At this point, all the routers (Router 1, Router 2, Router 3, and
   Router 5) receive the multicast traffic.
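   The forwarding step above, combined with Guideline 9, can be
   sketched as a simple selection function.  This Python sketch uses
   hypothetical names and is illustrative only: one copy per PW/AC
   in the "Flood to" list, never back onto the PW/AC the traffic
   arrived on, regardless of how many PIM neighbors share a PW/AC.

```python
# Illustrative sketch of Guideline 9: replicate an (S,G) frame at
# most once per PW/AC, excluding the PW/AC it arrived on.

def forward_targets(flood_to, incoming_link):
    """One replication per PW/AC, excluding the incoming PW/AC."""
    seen = set()
    targets = []
    for link in flood_to:
        if link != incoming_link and link not in seen:
            seen.add(link)          # at most one copy per PW/AC
            targets.append(link)
    return targets

# PE 3's initial "Flood to" list from the example; (S,G) traffic
# arrives on AC4 from Router 4, so AC4 is excluded:
print(forward_targets(["AC4", "PW1to3", "PW2to3", "PW3to4"], "AC4"))
# -> ['PW1to3', 'PW2to3', 'PW3to4']
```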
   - However, Router 3 and Router 5 do not have any members for that
     multicast group, so they send prune messages for the multicast
     group to the ALL PIM Routers group address.  PE 2 updates its
     state to PE 2: {[(S,G); Source: (Router 4,PW2to3); Flood to:
     AC2], [PIM Neighbors: (Router 1,PW1to2), (Router 2,AC2),
     (Router 3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)]}.  PE 4
     likewise removes Router 3 and Router 5 from its state.

   Guideline 10: PIM join and prune messages MUST be flooded in the
   VPLS instance.

   - PE 2 and PE 4 then flood the prune message and forward it to
     all PEs participating in the VPLS instance per Guideline 10.
     PE 4 updates its state to PE 4: {[(S,G); Source: (Router
     4,PW3to4); Flood to:], [PIM Neighbors: (Router 1,PW1to4),
     (Router 2,Router 3,PW2to4), (Router 4,PW3to4), (Router
     5,AC5)]}.
   - PIM-DM prune messages contain a Holdtime value, which specifies
     how many seconds the prune state should last.

   Guideline 11: For PIM-DM only.  A PE MUST keep the prune state
   for a PW/AC according to the Holdtime in the prune message,
   unless a corresponding Graft message is received.

   - Upon receiving the prune messages, each PE updates its state
     accordingly.  PE 1: {[(S,G); Source: (Router 4,PW1to3); Flood
     to: AC1], [PIM Neighbors: (Router 1,AC1), (Router 2,Router 3,
     PW1to2), (Router 4,PW1to3), (Router 5,PW1to4)]}, PE 2: {[(S,G);
     Source: (Router 4,PW2to3); Flood to: AC2], [PIM Neighbors:
     (Router 1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router
     4,PW2to3), (Router 5,PW2to4)]}, PE 3: {[(S,G); Source: (Router
     4,AC4); Flood to: PW1to3, PW2to3], [PIM Neighbors: (Router
     1,PW1to3), (Router 2,Router 3,PW2to3), (Router 4,AC4), (Router
     5,PW3to4)]}, PE 4: {[(S,G); Source: (Router 4,PW3to4); Flood
     to:], [PIM Neighbors: (Router 1,PW1to4), (Router 2,Router
     3,PW2to4), (Router 4,PW3to4), (Router 5,AC5)]}.
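   The prune-state handling of Guideline 11 can be sketched as
   follows.  This is a hypothetical Python sketch (DmPruneState and
   its method names are invented): a prune removes the PW/AC for the
   Holdtime carried in the prune message, a corresponding Graft
   cancels the prune, and an expired prune restores flooding.

```python
# Illustrative sketch of Guideline 11 (PIM-DM only): hold a PW/AC
# in prune state for the prune's Holdtime unless a Graft cancels it.

class DmPruneState:
    def __init__(self, flood_to):
        self.flood_to = set(flood_to)
        self.pruned = {}          # link -> prune expiry time

    def on_prune(self, link, holdtime, now):
        # Keep the prune for Holdtime seconds (Guideline 11).
        self.flood_to.discard(link)
        self.pruned[link] = now + holdtime

    def on_graft(self, link):
        # A corresponding Graft cancels the prune and re-adds the link.
        if self.pruned.pop(link, None) is not None:
            self.flood_to.add(link)

    def tick(self, now):
        # Prune state that reaches its Holdtime lapses; flooding resumes.
        for link, expiry in list(self.pruned.items()):
            if now >= expiry:
                del self.pruned[link]
                self.flood_to.add(link)

# PE 3's (S,G) "Flood to" list from the example, before the prunes:
st = DmPruneState(["PW1to3", "PW2to3", "PW3to4"])
st.on_prune("PW3to4", holdtime=210, now=0)   # prune behind PW3to4
print(sorted(st.flood_to))                   # -> ['PW1to3', 'PW2to3']
st.on_graft("PW3to4")                        # graft restores it
print(sorted(st.flood_to))   # -> ['PW1to3', 'PW2to3', 'PW3to4']
```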
   Guideline 12: To avoid overriding joins, a PE SHOULD suppress the
   PIM prune messages to directly connected routers (i.e., ACs), as
   long as there is a PW/AC in its corresponding "Flood to" list.

   - In this case, PE 1, PE 2, and PE 3 do not forward the prune
     messages to their directly connected routers.

   The multicast traffic is now flowing only to points in the
   network where receivers are present.

   Guideline 13: For PIM-DM only.  A PE MUST remove the AC/PW from
   its corresponding prune state when it receives a graft message
   from the AC/PW.  That is, the corresponding AC/PW MUST be added
   to the "Flood to" list.

   Guideline 14: For PIM-DM only.  PIM-DM graft messages MUST be
   forwarded based on the destination MAC address.  If the
   destination MAC address is 01-00-5E-00-00-0D, then the graft
   message MUST be flooded in the VPLS instance.

   - For the sake of example, suppose Router 3 now has a receiver
     for the multicast group (S,G), and sends a graft message in IP
     unicast to Router 4 to restart the flow of multicast traffic.
     PE 2 updates its state to PE 2: {[(S,G); Source: (Router
     4,PW2to3); Flood to: AC2, AC3], [PIM Neighbors: (Router
     1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3),
     (Router 5,PW2to4)]}.  PE 2 then forwards the graft message to
     PE 3 according to Guideline 14.
   - Upon receiving the graft message, PE 3 updates its state
     accordingly to PE 3: {[(S,G); Source: (Router 4,AC4); Flood
     to: PW1to3, PW2to3], [PIM Neighbors: (Router 1,PW1to3),
     (Router 2,Router 3,PW2to3), (Router 4,AC4), (Router
     5,PW3to4)]}.

   Guideline 15: PIM Assert messages MUST be flooded in the VPLS
   instance.

   Guideline 16: If an AC/PW goes down, a PE MUST remove it from its
   PIM snooping state.

   Failures can be easily handled in PIM-DM snooping, as it uses a
   push technique.
If an AC or a PW goes down, PEs in the VPLS
   instance will remove it from their snooping state (if the AC/PW
   is not already pruned).  After the AC/PW comes back up, it will
   automatically be added back to the snooping state by the PE
   routers, as all PWs/ACs MUST be in the snooping state unless they
   are pruned later on.

4.3.2 PIM-SM
   The key characteristic of PIM-SM is its explicit-join behavior.
   In this model, the multicast traffic is only sent to locations
   that specifically request it.  The root node of a tree is the
   Rendezvous Point (RP) in the case of a shared tree, or the first
   hop router that is directly connected to the multicast source in
   the case of a shortest path tree.

   In Figure 2, the RP is behind Router 4, and all routers have at
   least one member except Router 3 and Router 5.

   The PIM-SM snooping mechanism for neighbor discovery works the
   same way as the procedure defined in the PIM-DM section, with the
   exception of the PIM-DM-only guidelines.
   - Based on PIM Hello exchanges, PE routers populate PIM snooping
     states as follows.  PE 1: {[(,); Flood to:], [PIM Neighbors:
     (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router 4,PW1to3),
     (Router 5,PW1to4)]}, PE 2: {[(,); Flood to:], [PIM Neighbors:
     (Router 1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router
     4,PW2to3), (Router 5,PW2to4)]}, PE 3: {[(,); Flood to:], [PIM
     Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(,); Flood to:],
     [PIM Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.

   For PIM-SM to work properly, all routers within the domain must
   use the same mappings of group addresses to RP addresses.
   Currently, there are three methods for RP discovery: 1. static RP
   configuration, 2. Auto-RP, and 3. the PIMv2 Bootstrap Router
   mechanism.
   Guideline 17: Cisco RP-Discovery (IP: 224.0.1.40, MAC: 01-00-5E-
   00-01-28), Cisco RP-Announce (IP: 224.0.1.39, MAC: 01-00-5E-00-
   01-27), and all bootstrap router (BSR) (IP: 224.0.0.13, MAC:
   01-00-5E-00-00-0D) messages MUST be flooded in the VPLS instance.

   The PIM-SM snooping mechanism for joining a multicast group (*,G)
   works as follows:
   - Assume Router 1 wants to join the multicast group (*,G) and
     sends a join message for the multicast group (*,G).  PE 1
     replicates the join message and forwards it to all PE routers
     in the VPLS instance.

   Guideline 18: A PE MUST add a PW/AC to its (*,G) "Flood to" list,
   if it receives a (*,G) join message from the PW/AC.

   - PE 1 updates its state as follows: PE 1: {[(*,G); Flood to:
     AC1], [PIM Neighbors: (Router 1,AC1), (Router 2,Router
     3,PW1to2), (Router 4,PW1to3), (Router 5,PW1to4)]}.

   A periodic refresh mechanism is used in PIM-SM to maintain the
   proper state.  PIM-SM join messages contain a Holdtime value,
   which specifies how many seconds the join state should be kept.

   Guideline 19: If a PE does not receive a refresh join message
   from a PW/AC within its Holdtime, the PE MUST remove the PW/AC
   from its "Flood to" list.

   - PE 1 floods the join message to all PEs in the VPLS instance
     per Guideline 10.
   - All PEs update their states accordingly as follows: PE 1:
     {[(*,G); Flood to: AC1], [PIM Neighbors: (Router 1,AC1),
     (Router 2,Router 3,PW1to2), (Router 4,PW1to3), (Router
     5,PW1to4)]}, PE 2: {[(*,G); Flood to: PW1to2], [PIM Neighbors:
     (Router 1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router
     4,PW2to3), (Router 5,PW2to4)]}, PE 3: {[(*,G); Flood to:
     PW1to3], [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router
     3,PW2to3), (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(*,G);
     Flood to: PW1to4], [PIM Neighbors: (Router 1,PW1to4), (Router
     2,Router 3,PW2to4), (Router 4,PW3to4), (Router 5,AC5)]}.
   - After Router 2 joins the same multicast group, the states
     become as follows: PE 1: {[(*,G); Flood to: AC1, PW1to2], [PIM
     Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router
     4,PW1to3), (Router 5,PW1to4)]}, PE 2: {[(*,G); Flood to: AC2,
     PW1to2], [PIM Neighbors: (Router 1,PW1to2), (Router 2,AC2),
     (Router 3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)]}, PE 3:
     {[(*,G); Flood to: PW1to3, PW2to3], [PIM Neighbors: (Router
     1,PW1to3), (Router 2,Router 3,PW2to3), (Router 4,AC4), (Router
     5,PW3to4)]}, PE 4: {[(*,G); Flood to: PW1to4, PW2to4], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.
   - For the sake of example, Router 3 joins the multicast group.
     PE 2 floods the join message to the VPLS instance (including
     Router 2 via AC2).  In turn, the PE routers forward the join
     message to their directly connected routers.  The states of
     the PEs become as follows: PE 1: {[(*,G); Flood to: AC1,
     PW1to2], [PIM Neighbors: (Router 1,AC1), (Router 2,Router
     3,PW1to2), (Router 4,PW1to3), (Router 5,PW1to4)]}, PE 2:
     {[(*,G); Flood to: AC2, AC3, PW1to2], [PIM Neighbors: (Router
     1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3),
     (Router 5,PW2to4)]}, PE 3: {[(*,G); Flood to: PW1to3, PW2to3],
     [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(*,G); Flood to:
     PW1to4, PW2to4], [PIM Neighbors: (Router 1,PW1to4), (Router
     2,Router 3,PW2to4), (Router 4,PW3to4), (Router 5,AC5)]}.
   - Next, Router 5 joins the group, and the states are updated
     accordingly.

   At this point, all PEs have the necessary state to avoid sending
   multicast traffic to sites with no members.

   The PIM-SM snooping mechanism for leaving a multicast group works
   as follows:
   - Assume Router 5 sends a prune message.
   Guideline 20: A PE MUST remove a PW/AC from its (*,G) "Flood to"
   list if it receives a (*,G) prune message from the PW/AC.  A
   prune-delay timer SHOULD be implemented to support prune
   override.

   - PE 4 floods the (*,G) prune to the VPLS instance.  PE routers
     participating in the VPLS instance also forward the (*,G)
     prune to the ACs that are connected to the VPLS instance.  The
     states are updated as follows: PE 1: {[(*,G); Flood to: AC1,
     PW1to2], [PIM Neighbors: (Router 1,AC1), (Router 2,Router
     3,PW1to2), (Router 4,PW1to3), (Router 5,PW1to4)]}, PE 2:
     {[(*,G); Flood to: AC2, AC3, PW1to2], [PIM Neighbors: (Router
     1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3),
     (Router 5,PW2to4)]}, PE 3: {[(*,G); Flood to: PW1to3, PW2to3],
     [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(*,G); Flood to:
     AC5, PW1to4], [PIM Neighbors: (Router 1,PW1to4), (Router
     2,Router 3,PW2to4), (Router 4,PW3to4), (Router 5,AC5)]}.

   The PIM-SM snooping mechanism for a source and group specific
   join works as follows:

   Guideline 21: A PE MUST add a PW/AC to its (S,G) "Flood to" list
   if it receives an (S,G) join message from the PW/AC.

   Guideline 22: A PE MUST remove a PW/AC from its (S,G) "Flood to"
   list if it receives an (S,G) prune message from the PW/AC.  A
   prune-delay timer SHOULD be implemented to support prune
   override.

   Guideline 23: A PE MUST prefer (S,G) state to (*,G) state, if
   both S and G match.

   Guideline 24: When an (S,G) state is first created, the initial
   "Flood to" list MUST be populated by copying the "Flood to" list
   from its parent (*,G) state.

   Guideline 25: In case the (*,G) state changes in a PE, all
   changes to the (*,G) state (additions and deletions in the "Flood
   to" list) MUST be replicated to the (S,G) state.

   - Now, we assume Router 5 sends a source and group specific join
     (S,G).
PE 4 floods the (S,G) join to the VPLS instance.  PE
     routers participating in the VPLS instance also forward the
     (S,G) join to the ACs that are connected to the VPLS instance.
     The states are updated as follows: PE 1: {[(*,G); Flood to:
     AC1, PW1to2], [(S,G); Flood to: AC1, PW1to2, PW1to4], [PIM
     Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router
     4,PW1to3), (Router 5,PW1to4)]}, PE 2: {[(*,G); Flood to: AC2,
     AC3, PW1to2], [(S,G); Flood to: AC2, AC3, PW1to2, PW2to4],
     [PIM Neighbors: (Router 1,PW1to2), (Router 2,AC2), (Router
     3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)]}, PE 3: {[(*,G);
     Flood to: PW1to3, PW2to3], [(S,G); Flood to: PW1to3, PW2to3,
     PW3to4], [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router
     3,PW2to3), (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(*,G);
     Flood to: AC5, PW1to4], [(S,G); Flood to: AC5, PW1to4], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.
   - Afterwards, we assume Router 5 sends an (S,G) RP-bit prune,
     also known as an (S,G,RPT) prune.  PE routers flood this prune
     message and take no further action.

   As in the case of IGMPv3 snooping, we assume that the PEs have
   the capability to store (S,G) states for PIM-SM snooping and
   forward/replicate traffic accordingly.  This is not mandatory.
   An implementation can fall back to (*,G) states if its hardware
   cannot support (S,G) states; in that case, multicast forwarding
   will be less efficient.

   Failures can be easily handled in PIM-SM snooping, as it employs
   a state-refresh technique.  PEs in the VPLS instance will remove
   any entry for non-refreshing routers from their states.

   There are some special cases to consider for PIM-SM snooping.
   The first one is RP-on-a-stick.
The RP-on-a-stick scenario may occur
   when the Shortest Path Tree and the Shared Tree share a common
   Ethernet segment, as all routers will be connected over a
   multicast access network (i.e., the VPLS).  Such a scenario is
   handled cleanly by the PIM-SM rules (in particular, the incoming
   interface cannot also appear in the outgoing interface list).
   The second scenario is the turnaround router.  The turnaround
   router scenario occurs when the shortest path tree and the shared
   tree share a common path.  The router at which these trees merge
   is the turnaround router.  PIM-SM handles this case via a proxy
   (S,G) join implemented by the turnaround router.

   In PIM-SM snooping, prune messages are flooded by the PE routers.
   In such an implementation, PE routers may receive overriding join
   messages, which have no adverse effect.  A PE can do prune
   suppression to optimize prune messages.  Future versions will
   include such a mechanism.

4.3.3 PIM-SSM
   The key characteristic of PIM-SSM is its explicit-join behavior;
   however, it eliminates the shared tree and the rendezvous point
   of PIM-SM.  In this model, a shortest path tree for each (S,G) is
   built with the first hop router (the router directly connected to
   the multicast source) as the root node.  PIM-SSM is ideal for
   one-to-many multicast services.

   In Figure 2, S1 is behind Router 1, and S4 is behind Router 4.
   Routers 2 and 4 want to join (S1,G), while Router 5 wants to join
   (S4,G).

   The PIM-SSM snooping mechanism for neighbor discovery works the
   same way as the procedure defined in the PIM-DM section, with the
   exception of the PIM-DM-only guidelines.
   - Based on PIM Hello exchanges, PE routers populate PIM snooping
     states as follows.
PE 1: {[(,); Flood to:], [PIM Neighbors:
     (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router 4,PW1to3),
     (Router 5,PW1to4)]}, PE 2: {[(,); Flood to:], [PIM Neighbors:
     (Router 1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router
     4,PW2to3), (Router 5,PW2to4)]}, PE 3: {[(,); Flood to:], [PIM
     Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(,); Flood to:],
     [PIM Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.

   PIM-SSM snooping is actually simpler than PIM-SM snooping, and
   only the following guidelines (some of which are repetitions from
   the PIM-SM section) apply.

   Guideline 26: A PE MUST add a PW/AC to its (S,G) "Flood to" list
   if it receives an (S,G) join message from the PW/AC.

   Guideline 27: PIM join and prune messages MUST be flooded in the
   VPLS instance.

   Guideline 28: If a PE does not receive a refresh join message
   from a PW/AC within its Holdtime, the PE MUST remove the PW/AC
   from its "Flood to" list.

   Guideline 29: A PE MUST remove a PW/AC from its (S,G) "Flood to"
   list if it receives an (S,G) prune message from the PW/AC.  A
   prune-delay timer SHOULD be implemented to support prune
   override.

   The PIM-SSM snooping mechanism for joining a multicast group
   works as follows:
   - Assume Router 2 requests to join the multicast group (S1,G).
   - PE 2 updates its state, and then floods the join message in the
     VPLS instance.
   - All PEs update their states as follows: PE 1: {[(S1,G); Flood
     to: PW1to2], [PIM Neighbors: (Router 1,AC1), (Router 2,Router
     3,PW1to2), (Router 4,PW1to3), (Router 5,PW1to4)]}, PE 2:
     {[(S1,G); Flood to: AC2], [PIM Neighbors: (Router 1,PW1to2),
     (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3), (Router
     5,PW2to4)]}, PE 3: {[(S1,G); Flood to: PW2to3], [PIM Neighbors:
     (Router 1,PW1to3), (Router 2,Router 3,PW2to3), (Router 4,AC4),
     (Router 5,PW3to4)]}, PE 4: {[(S1,G); Flood to: PW2to4], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.
   - Next, assume Router 4 sends a join (S1,G) message.  Following
     the same procedures, all PEs update their states as follows:
     PE 1: {[(S1,G); Flood to: PW1to2, PW1to3], [PIM Neighbors:
     (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router 4,PW1to3),
     (Router 5,PW1to4)]}, PE 2: {[(S1,G); Flood to: AC2, PW2to3],
     [PIM Neighbors: (Router 1,PW1to2), (Router 2,AC2), (Router
     3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)]}, PE 3: {[(S1,G);
     Flood to: PW2to3, AC4], [PIM Neighbors: (Router 1,PW1to3),
     (Router 2,Router 3,PW2to3), (Router 4,AC4), (Router
     5,PW3to4)]}, PE 4: {[(S1,G); Flood to: PW2to4, PW3to4], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.
   - Then, assume Router 5 requests to join the multicast group
     (S4,G).
After the same procedures are applied, all PEs update
     their states as follows: PE 1: {[(S1,G); Flood to: PW1to2,
     PW1to3], [(S4,G); Flood to: PW1to4], [PIM Neighbors: (Router
     1,AC1), (Router 2,Router 3,PW1to2), (Router 4,PW1to3), (Router
     5,PW1to4)]}, PE 2: {[(S1,G); Flood to: AC2, PW2to3], [(S4,G);
     Flood to: PW2to4], [PIM Neighbors: (Router 1,PW1to2), (Router
     2,AC2), (Router 3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)]},
     PE 3: {[(S1,G); Flood to: PW2to3, AC4], [(S4,G); Flood to:
     PW3to4], [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router
     3,PW2to3), (Router 4,AC4), (Router 5,PW3to4)]}, PE 4: {[(S1,G);
     Flood to: PW2to4, PW3to4], [(S4,G); Flood to: AC5], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.

   At this point, all PEs have the necessary state to avoid sending
   multicast traffic to sites with no members.

   The PIM-SSM snooping mechanism for leaving a multicast group
   works as follows:
   - Assume Router 2 sends an (S1,G) prune message to leave the
     multicast group.  The prune message gets flooded in the VPLS
     instance.  All PEs update their states as follows: PE 1:
     {[(S1,G); Flood to: PW1to3], [(S4,G); Flood to: PW1to4], [PIM
     Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router
     4,PW1to3), (Router 5,PW1to4)]}, PE 2: {[(S1,G); Flood to:
     PW2to3], [(S4,G); Flood to: PW2to4], [PIM Neighbors: (Router
     1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3),
     (Router 5,PW2to4)]}, PE 3: {[(S1,G); Flood to: AC4], [(S4,G);
     Flood to: PW3to4], [PIM Neighbors: (Router 1,PW1to3), (Router
     2,Router 3,PW2to3), (Router 4,AC4), (Router 5,PW3to4)]}, PE 4:
     {[(S1,G); Flood to: PW3to4], [(S4,G); Flood to: AC5], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)]}.
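   The (S,G) "Flood to" maintenance of Guidelines 26, 28, and 29 can
   be sketched as follows.  This is a hypothetical Python sketch
   (SsmState and its method names are invented): joins add a PW/AC
   and carry a Holdtime, unrefreshed joins expire, and prunes remove
   the PW/AC.

```python
# Illustrative sketch of Guidelines 26, 28, and 29: maintain an
# (S,G) "Flood to" list from snooped PIM-SSM joins and prunes.

class SsmState:
    def __init__(self):
        # (source, group) -> {pw_or_ac: join expiry time}
        self.flood_to = {}

    def on_join(self, sg, link, holdtime, now):
        # Guideline 26: an (S,G) join adds the PW/AC; a refresh
        # join extends its expiry time.
        self.flood_to.setdefault(sg, {})[link] = now + holdtime

    def on_prune(self, sg, link):
        # Guideline 29: an (S,G) prune removes the PW/AC.
        self.flood_to.get(sg, {}).pop(link, None)

    def expire(self, now):
        # Guideline 28: no refresh within the Holdtime -> remove.
        for links in self.flood_to.values():
            for link, expiry in list(links.items()):
                if now >= expiry:
                    del links[link]

# PE 2's (S1,G) state from the example:
st = SsmState()
st.on_join(("S1", "G"), "AC2", holdtime=210, now=0)     # Router 2
st.on_join(("S1", "G"), "PW2to3", holdtime=210, now=0)  # Router 4
st.on_prune(("S1", "G"), "AC2")                         # Router 2 leaves
print(sorted(st.flood_to[("S1", "G")]))                 # -> ['PW2to3']
```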
   We assume that the PEs have the capability to store (S,G) states
   for PIM-SSM snooping and to constrain the multicast flooding
   scope accordingly.  An implementation can fall back to (*,G)
   states if its hardware cannot support (S,G) states; in that case,
   multicast forwarding will be less efficient.

   As with PIM-SM snooping, failures can be easily handled in
   PIM-SSM snooping, as it employs a state-refresh technique.  PEs
   in the VPLS instance will remove any entry for non-refreshing
   routers from their states.

   In PIM-SSM snooping, prune messages are flooded by the PE
   routers.  However, a PE can do prune suppression to optimize
   prune messages.  Future versions might include such a mechanism.

4.3.4 Bidirectional-PIM (BIDIR-PIM)
   BIDIR-PIM is a variation of PIM-SM.  The main differences between
   PIM-SM and BIDIR-PIM are as follows:

   - There are no source-based trees, and source-specific multicast
     is not supported (i.e., no (S,G) states) in BIDIR-PIM.
   - Multicast traffic can flow up the shared tree in BIDIR-PIM.
   - To avoid forwarding loops, one router on each link is elected
     as the Designated Forwarder (DF) for each RP in BIDIR-PIM.

   The main advantage of BIDIR-PIM is that it scales well for many-
   to-many applications.  However, the lack of source-based trees
   means that multicast traffic is forced to remain on the shared
   tree.

   In Figure 2, the RP for (*,G4) is behind Router 4, and the RP for
   (*,G1) is behind Router 1.  Router 2 and Router 4 want to join
   (*,G1), whereas Router 5 wants to join (*,G4).  On the VPLS
   instance, Router 4 is the DF for the RP of (*,G4), and Router 1
   is the DF for the RP of (*,G1).

   The BIDIR-PIM snooping mechanism for neighbor discovery works the
   same way as the procedure defined in the PIM-DM section, with the
   exception of the PIM-DM-only guidelines.
   - Based on PIM Hello exchanges, PE routers populate PIM snooping
     states as follows.  PE 1: {[(,); Upstream to:; Downstream to:],
     [PIM Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2),
     (Router 4,PW1to3), (Router 5,PW1to4)], [(,), DF:]}, PE 2:
     {[(,); Upstream to:; Downstream to:], [PIM Neighbors: (Router
     1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3),
     (Router 5,PW2to4)], [(,), DF:]}, PE 3: {[(,); Upstream to:;
     Downstream to:], [PIM Neighbors: (Router 1,PW1to3), (Router
     2,Router 3,PW2to3), (Router 4,AC4), (Router 5,PW3to4)], [(,),
     DF:]}, PE 4: {[(,); Upstream to:; Downstream to:], [PIM
     Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)], [(,), DF:]}.

   For BIDIR-PIM to work properly, all routers within the domain
   must know the address of the RP.  There are three methods to do
   that: 1. static RP configuration, 2. Auto-RP, and 3. the PIMv2
   Bootstrap Router mechanism.  Guideline 17 applies here as well.

   During RP discovery, PIM routers elect a DF per subnet for each
   RP.  The algorithm to elect the DF is as follows: all PIM
   neighbors in a subnet advertise their unicast route to the RP,
   and the router with the best route is elected.

   Guideline 30: All PEs MUST snoop the DF election messages and
   determine the DF for each [(*,G), RP] pair.  The "Upstream" list
   MUST be updated with the PW/AC that leads to the DF.  When the DF
   changes, the "Upstream" list MUST be updated accordingly.
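   The DF-election snooping of Guideline 30 can be sketched as
   follows.  This is a hypothetical Python sketch (DfTable and the
   single route-preference number are invented simplifications of
   the real DF election metrics); it only illustrates keeping the
   "Upstream" entry pointed at the current DF winner.

```python
# Illustrative sketch of Guideline 30: snoop DF election messages
# and record, per [(*,G), RP] pair, the PW/AC leading to the DF.

class DfTable:
    def __init__(self):
        # (group, rp) -> (best_metric, pw_or_ac_toward_df)
        self.df = {}
        # (group, rp) -> "Upstream" link derived from the DF
        self.upstream = {}

    def on_df_offer(self, group, rp, link, metric):
        key = (group, rp)
        best = self.df.get(key)
        # Simplification: lower metric = better unicast route to
        # the RP, and the best offer wins the DF election.
        if best is None or metric < best[0]:
            self.df[key] = (metric, link)
            # Guideline 30: keep "Upstream" pointed at the DF.
            self.upstream[key] = link

table = DfTable()
table.on_df_offer("*G1", "RP1", "PW1to2", metric=20)
table.on_df_offer("*G1", "RP1", "AC1", metric=10)   # better route wins
print(table.upstream[("*G1", "RP1")])               # -> AC1
```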
   - Based on DF election messages, PE routers populate PIM snooping
     states as follows: PE 1: {[(*,G1); Upstream to: AC1; Downstream
     to:], [(*,G4); Upstream to: PW1to3; Downstream to:], [PIM
     Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2), (Router
     4,PW1to3), (Router 5,PW1to4)], [(*,G1), DF: AC1], [(*,G4), DF:
     PW1to4]}, PE 2: {[(*,G1); Upstream to: PW1to2; Downstream to:],
     [(*,G4); Upstream to: PW2to3; Downstream to:], [PIM Neighbors:
     (Router 1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router
     4,PW2to3), (Router 5,PW2to4)], [(*,G1), DF: PW1to2], [(*,G4),
     DF: PW2to3]}, PE 3: {[(*,G1); Upstream to: PW1to3; Downstream
     to:], [(*,G4); Upstream to: AC4; Downstream to:], [PIM
     Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
     (Router 4,AC4), (Router 5,PW3to4)], [(*,G1), DF: PW1to3],
     [(*,G4), DF: AC4]}, PE 4: {[(*,G1); Upstream to: PW1to4;
     Downstream to:], [(*,G4); Upstream to: PW3to4; Downstream to:],
     [PIM Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
     (Router 4,PW3to4), (Router 5,AC5)], [(*,G1), DF: PW1to4],
     [(*,G4), DF: PW3to4]}.

   The BIDIR-PIM snooping of Join and Prune messages is similar to
   PIM-SM, and the following guidelines (some of which are
   repetitions from the PIM-SM section) apply.

   Guideline 31: A PE MUST add a PW/AC to its (*,G) "Downstream"
   list if it receives a (*,G) join message from the PW/AC.

   Guideline 32: PIM join and prune messages MUST be flooded in the
   VPLS instance.

   Guideline 33: If a PE does not receive a refresh join message
   from a PW/AC within its Holdtime, the PE MUST remove the PW/AC
   from its "Downstream" list.

   Guideline 34: A PE MUST remove a PW/AC from its (*,G)
   "Downstream" list if it receives a (*,G) prune message from the
   PW/AC.  A prune-delay timer SHOULD be implemented to support
   prune override.
   Guideline 35: A PE MUST replicate multicast traffic for (*,G) to
   the members of its (*,G) "Upstream" and "Downstream" lists.

   The BIDIR-PIM snooping mechanism for joining a multicast group
   works as follows:
   - Assume Router 2 wants to join the multicast group (*,G1).  PE 2
     floods the join message in the VPLS instance.  All PEs update
     their states as follows: PE 1: {[(*,G1); Upstream to: AC1;
     Downstream to: PW1to2], [(*,G4); Upstream to: PW1to3;
     Downstream to:], [PIM Neighbors: (Router 1,AC1), (Router
     2,Router 3,PW1to2), (Router 4,PW1to3), (Router 5,PW1to4)],
     [(*,G1), DF: AC1], [(*,G4), DF: PW1to4]}, PE 2: {[(*,G1);
     Upstream to: PW1to2; Downstream to: AC2], [(*,G4); Upstream
     to: PW2to3; Downstream to:], [PIM Neighbors: (Router 1,PW1to2),
     (Router 2,AC2), (Router 3,AC3), (Router 4,PW2to3), (Router
     5,PW2to4)], [(*,G1), DF: PW1to2], [(*,G4), DF: PW2to3]}, PE 3:
     {[(*,G1); Upstream to: PW1to3; Downstream to: PW2to3], [(*,G4);
     Upstream to: AC4; Downstream to:], [PIM Neighbors: (Router
     1,PW1to3), (Router 2,Router 3,PW2to3), (Router 4,AC4), (Router
     5,PW3to4)], [(*,G1), DF: PW1to3], [(*,G4), DF: AC4]}, PE 4:
     {[(*,G1); Upstream to: PW1to4; Downstream to: PW2to4], [(*,G4);
     Upstream to: PW3to4; Downstream to:], [PIM Neighbors: (Router
     1,PW1to4), (Router 2,Router 3,PW2to4), (Router 4,PW3to4),
     (Router 5,AC5)], [(*,G1), DF: PW1to4], [(*,G4), DF: PW3to4]}.
   - Next, assume Router 4 wants to join the multicast group (*,G1).
1199 All PEs update their states as follows: PE 1: {[(*,G1); 1200 Upstream to: AC1; Downstream to: PW1to2, PW1to3], [(*,G4); 1201 Upstream to: PW1to3; Downstream to:], [PIM Neighbors: (Router 1202 1,AC1), (Router 2,Router 3,PW1to2), (Router 4,PW1to3), (Router 1203 5,PW1to4)], [(*,G1), DF: AC1], [(*,G4), DF: PW1to4]}, PE 2: 1204 {[(*,G1); Upstream to: PW1to2; Downstream to: AC2, PW2to3], 1205 [(*,G4); Upstream to: PW2to3; Downstream to:], [PIM Neighbors: 1206 (Router 1,PW1to2), (Router 2,AC2), (Router 3,AC3), (Router 1207 4,PW2to3), (Router 5,PW2to4)], [(*,G1), DF:PW1to2], [(*,G4), 1208 DF:PW2to3]}, PE 3: {[(*,G1); Upstream to: PW1to3; Downstream 1209 to: PW2to3, AC4], [(*,G4); Upstream to: AC4; Downstream to:], 1210 [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3), 1211 (Router 4,AC4), (Router 5,PW3to4)], [(*,G1), DF: PW1to3], 1212 [(*,G4), DF: AC4]}, PE 4: {[(*,G1); Upstream to: PW1to4; 1213 Downstream to: PW2to4, PW3to4], [(*,G4); Upstream to: PW3to4; 1214 Downstream to:], [PIM Neighbors: (Router 1,PW1to4), (Router 1215 2,Router 3,PW2to4), (Router 4,PW3to4), (Router 5,AC5)], 1216 [(*,G1), DF: PW1to4], [(*,G4), DF: PW3to4]}. 1217 - Then, assume Router 5 wants to join the multicast group (*,G4). 
     Following the same procedures, all PEs update their states as
     follows:

     PE 1: {[(*,G1); Upstream to: AC1; Downstream to: PW1to2, PW1to3],
            [(*,G4); Upstream to: PW1to3; Downstream to: PW1to4],
            [PIM Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2),
             (Router 4,PW1to3), (Router 5,PW1to4)],
            [(*,G1), DF: AC1], [(*,G4), DF: PW1to4]}

     PE 2: {[(*,G1); Upstream to: PW1to2; Downstream to: AC2, PW2to3],
            [(*,G4); Upstream to: PW2to3; Downstream to: PW2to4],
            [PIM Neighbors: (Router 1,PW1to2), (Router 2,AC2),
             (Router 3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)],
            [(*,G1), DF: PW1to2], [(*,G4), DF: PW2to3]}

     PE 3: {[(*,G1); Upstream to: PW1to3; Downstream to: PW2to3, AC4],
            [(*,G4); Upstream to: AC4; Downstream to: PW3to4],
            [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
             (Router 4,AC4), (Router 5,PW3to4)],
            [(*,G1), DF: PW1to3], [(*,G4), DF: AC4]}

     PE 4: {[(*,G1); Upstream to: PW1to4; Downstream to: PW2to4, PW3to4],
            [(*,G4); Upstream to: PW3to4; Downstream to: AC5],
            [PIM Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
             (Router 4,PW3to4), (Router 5,AC5)],
            [(*,G1), DF: PW1to4], [(*,G4), DF: PW3to4]}

   At this point, all PEs have the state necessary to avoid sending
   multicast traffic to sites with no members.

   The following example shows how the BIDIR-PIM snooping mechanism
   handles leaving a multicast group:

   - Assume Router 2 wants to leave the multicast group (*,G1) and
     sends a (*,G1) prune message.  The prune message is flooded in
     the VPLS instance.
     All PEs update their states as follows:

     PE 1: {[(*,G1); Upstream to: AC1; Downstream to: PW1to3],
            [(*,G4); Upstream to: PW1to3; Downstream to: PW1to4],
            [PIM Neighbors: (Router 1,AC1), (Router 2,Router 3,PW1to2),
             (Router 4,PW1to3), (Router 5,PW1to4)],
            [(*,G1), DF: AC1], [(*,G4), DF: PW1to4]}

     PE 2: {[(*,G1); Upstream to: PW1to2; Downstream to: PW2to3],
            [(*,G4); Upstream to: PW2to3; Downstream to: PW2to4],
            [PIM Neighbors: (Router 1,PW1to2), (Router 2,AC2),
             (Router 3,AC3), (Router 4,PW2to3), (Router 5,PW2to4)],
            [(*,G1), DF: PW1to2], [(*,G4), DF: PW2to3]}

     PE 3: {[(*,G1); Upstream to: PW1to3; Downstream to: AC4],
            [(*,G4); Upstream to: AC4; Downstream to: PW3to4],
            [PIM Neighbors: (Router 1,PW1to3), (Router 2,Router 3,PW2to3),
             (Router 4,AC4), (Router 5,PW3to4)],
            [(*,G1), DF: PW1to3], [(*,G4), DF: AC4]}

     PE 4: {[(*,G1); Upstream to: PW1to4; Downstream to: PW3to4],
            [(*,G4); Upstream to: PW3to4; Downstream to: AC5],
            [PIM Neighbors: (Router 1,PW1to4), (Router 2,Router 3,PW2to4),
             (Router 4,PW3to4), (Router 5,AC5)],
            [(*,G1), DF: PW1to4], [(*,G4), DF: PW3to4]}

   Once again, failures can be handled easily in BIDIR-PIM snooping,
   as it employs a state-refresh technique.  PEs in the VPLS instance
   will remove the entries for non-refreshing routers from their
   states.

   Optionally, prune suppression can be performed on a PE to optimize
   prune message handling.  Future versions of this document might
   include such a mechanism.

5 Security Considerations

   The security considerations provided in the VPLS solution documents
   (i.e., [VPLS-LDP] and [VPLS-BGP]) apply to this document as well.

6 References

6.1 Normative References

6.2 Informative References

   [VPLS-LDP]    Lasserre, M., et al., "Virtual Private LAN Services
                 over MPLS", work in progress.

   [VPLS-BGP]    Kompella, K., et al., "Virtual Private LAN Service",
                 work in progress.

   [L2VPN-FR]    Andersson, L., et al., "L2VPN Framework", work in
                 progress.

   [PMP-RSVP-TE] Aggarwal, R., et al., "Extensions to RSVP-TE for
                 Point to Multipoint TE LSPs", work in progress.

   [RFC1112]     Deering, S., "Host Extensions for IP Multicasting",
                 RFC 1112, August 1989.

   [RFC2236]     Fenner, W., "Internet Group Management Protocol,
                 Version 2", RFC 2236, November 1997.

   [RFC3376]     Cain, B., et al., "Internet Group Management
                 Protocol, Version 3", RFC 3376, October 2002.

   [PIM-DM]      Deering, S., et al., "Protocol Independent Multicast
                 Version 2 - Dense Mode Specification",
                 draft-ietf-pim-v2-dm-03.txt, June 1999.

   [RFC2362]     Estrin, D., et al., "Protocol Independent Multicast-
                 Sparse Mode (PIM-SM): Protocol Specification",
                 RFC 2362, June 1998.

   [PIM-SSM]     Holbrook, H., et al., "Source-Specific Multicast for
                 IP", work in progress.

   [BIDIR-PIM]   Handley, M., et al., "Bi-directional Protocol
                 Independent Multicast (BIDIR-PIM)", work in progress.

7 Authors' Addresses

   Yetik Serbest
   SBC Labs
   9505 Arboretum Blvd.
   Austin, TX 78759
   Yetik_serbest@labs.sbc.com

   Marc Lasserre
   Riverstone Networks
   Marc@riverstonenet.com

   Rob Nath
   Riverstone Networks
   5200 Great America Parkway
   Santa Clara, CA 95054
   Rnath@riverstonenet.com

   Vach Kompella
   Alcatel North America
   701 East Middlefield Rd.
   Mountain View, CA 94043

   Ray Qiu
   Alcatel North America
   701 East Middlefield Rd.
   Mountain View, CA 94043
   Ray_Qiu@alcatel.com

   Sunil Khandekar
   Alcatel North America
   701 East Middlefield Rd.
   Mountain View, CA 94043
   Sunil_khandekar@alcatel.com

8 Intellectual Property Statement

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

9 Full Copyright Statement

   Copyright (C) The Internet Society (2004).  This document is
   subject to the rights, licenses and restrictions contained in
   BCP 78 and except as set forth therein, the authors retain all
   their rights.
   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.