PALS Workgroup                                                  O. Dornon
Internet-Draft                                                J. Kotalwar
Intended status: Informational                                      Nokia
Expires: April 17, 2017                                          V. Hemige

                                                                    R. Qiu
                                                                mistnet.io
                                                                  Z. Zhang
                                                     Juniper Networks, Inc.
                                                           October 14, 2016

  Protocol Independent Multicast (PIM) over Virtual Private LAN Service
                                  (VPLS)
                   draft-ietf-pals-vpls-pim-snooping-03

Abstract

   This document describes the procedures and recommendations for Virtual Private LAN Service (VPLS) Provider Edges (PEs) to facilitate replication of multicast traffic to only certain ports (behind which there are interested Protocol Independent Multicast (PIM) routers and/or Internet Group Management Protocol (IGMP) hosts) via PIM snooping and proxying.

   With PIM snooping, PEs passively listen to certain PIM control messages to build control and forwarding states while transparently flooding those messages.  With PIM proxying, PEs do not flood PIM Join/Prune messages but only generate their own and send them out of certain ports, based on the control states built from downstream Join/Prune messages.  PIM proxying is required when PIM Join suppression is enabled on the Customer Edge (CE) devices and is also useful for reducing PIM control traffic in a VPLS domain.

   The document also describes PIM relay, which can be viewed as lightweight proxying, where all downstream Join/Prune messages are simply forwarded out of certain ports but not flooded, in order to avoid triggering PIM Join suppression on CE devices.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).
Note that other groups may also distribute 53 working documents as Internet-Drafts. The list of current Internet- 54 Drafts is at http://datatracker.ietf.org/drafts/current/. 56 Internet-Drafts are draft documents valid for a maximum of six months 57 and may be updated, replaced, or obsoleted by other documents at any 58 time. It is inappropriate to use Internet-Drafts as reference 59 material or to cite them other than as "work in progress." 61 This Internet-Draft will expire on April 17, 2017. 63 Copyright Notice 65 Copyright (c) 2016 IETF Trust and the persons identified as the 66 document authors. All rights reserved. 68 This document is subject to BCP 78 and the IETF Trust's Legal 69 Provisions Relating to IETF Documents 70 (http://trustee.ietf.org/license-info) in effect on the date of 71 publication of this document. Please review these documents 72 carefully, as they describe your rights and restrictions with respect 73 to this document. Code Components extracted from this document must 74 include Simplified BSD License text as described in Section 4.e of 75 the Trust Legal Provisions and are provided without warranty as 76 described in the Simplified BSD License. 78 Table of Contents 80 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 81 1.1. Multicast Snooping in VPLS . . . . . . . . . . . . . . . 4 82 1.2. Assumptions . . . . . . . . . . . . . . . . . . . . . . . 5 83 1.3. Definitions . . . . . . . . . . . . . . . . . . . . . . . 5 84 2. PIM Snooping for VPLS . . . . . . . . . . . . . . . . . . . . 6 85 2.1. PIM protocol background . . . . . . . . . . . . . . . . . 6 86 2.2. General Rules for PIM Snooping in VPLS . . . . . . . . . 7 87 2.2.1. Preserving Assert Trigger . . . . . . . . . . . . . . 7 88 2.3. Some Considerations for PIM Snooping . . . . . . . . . . 8 89 2.3.1. Scaling . . . . . . . . . . . . . . . . . . . . . . . 8 90 2.3.2. IPv6 . . . . . . . . . . . . . . . . . . . . . . . . 9 91 2.3.3. PIM-SM (*,*,RP) . . . . . . . . . . . . . . . . . . . 9 92 2.4. PIM Snooping vs PIM Proxying . . . . . . . . . . . . . . 9 93 2.4.1. Differences between PIM Snooping, Relay and Proxying 9 94 2.4.2. PIM Control Message Latency . . . . . . . . . . . . . 10 95 2.4.3. When to Snoop and When to Proxy . . . . . . . . . . . 11 96 2.5. Discovering PIM Routers . . . . . . . . . . . . . . . . . 12 97 2.6. PIM-SM and PIM-SSM . . . . . . . . . . . . . . . . . . . 13 98 2.6.1. Building PIM-SM States . . . . . . . . . . . . . . . 13 99 2.6.2. Explanation for per (S,G,N) states . . . . . . . . . 16 100 2.6.3. Receiving (*,G) PIM-SM Join/Prune Messages . . . . . 16 101 2.6.4. Receiving (S,G) PIM-SM Join/Prune Messages . . . . . 18 102 2.6.5. Receiving (S,G,rpt) Join/Prune Messages . . . . . . . 20 103 2.6.6. Sending Join/Prune Messages Upstream . . . . . . . . 20 104 2.7. Bidirectional-PIM (BIDIR-PIM) . . . . . . . . . . . . . . 21 105 2.8. Interaction with IGMP Snooping . . . . . . . . . . . . . 22 106 2.9. PIM-DM . . . . . . . . . . . . . . . . . . . . . . . . . 22 107 2.9.1. Building PIM-DM States . . . . . . . . . . . . . . . 22 108 2.9.2. PIM-DM Downstream Per-Port PIM(S,G,N) State Machine . 23 109 2.9.3. Triggering ASSERT election in PIM-DM . . . . . . . . 23 110 2.10. PIM Proxy . . . . . . . . . . . . . . . . . . . . . . . . 23 111 2.10.1. Upstream PIM Proxy behavior . . . . . . . . . . . . 23 112 2.11. Directly Connected Multicast Source . . . . . . . . . . . 24 113 2.12. Data Forwarding Rules . . . . . . . . . . . . . . . . . . 24 114 2.12.1. 
        PIM-SM Data Forwarding Rules . . . . . . . . . . . . . .  25
     2.12.2.  PIM-DM Data Forwarding Rules . . . . . . . . . . . .  26
   3.  IANA Considerations . . . . . . . . . . . . . . . . . . . .  27
   4.  Security Considerations  . . . . . . . . . . . . . . . . . .  27
   5.  Contributors  . . . . . . . . . . . . . . . . . . . . . . . .  27
   6.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  28
   7.  References  . . . . . . . . . . . . . . . . . . . . . . . . .  28
     7.1.  Normative References  . . . . . . . . . . . . . . . . . .  28
     7.2.  Informative References  . . . . . . . . . . . . . . . . .  28
   Appendix A.  BIDIR-PIM Thoughts . . . . . . . . . . . . . . . . .  29
     A.1.  BIDIR-PIM Data Forwarding Rules . . . . . . . . . . . . .  29
   Appendix B.  Example Network Scenario . . . . . . . . . . . . . .  30
     B.1.  PIM Snooping Example  . . . . . . . . . . . . . . . . . .  31
     B.2.  PIM Proxy Example with (S,G) / (*,G) interaction  . . . .  34
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  39

1.  Introduction

   In Virtual Private LAN Service (VPLS), the Provider Edge (PE) devices provide a logical interconnect such that Customer Edge (CE) devices belonging to a specific VPLS instance appear to be connected by a single LAN.  The Forwarding Information Base for a VPLS instance is populated dynamically by MAC address learning.  Once a unicast MAC address is learned and associated with a particular Attachment Circuit (AC) or PseudoWire (PW), a frame destined to that MAC address only needs to be sent on that AC or PW.

   For a frame not addressed to a known unicast MAC address, flooding has to be used.  This happens with the following so-called BUM (Broadcast, Unknown, and Multicast) traffic:

   o  B: The destination MAC address is a broadcast address,

   o  U: The destination MAC address is unknown (has not been learned),

   o  M: The destination MAC address is a multicast address.

   Multicast frames are flooded because a PE cannot know where the corresponding multicast group members reside.  VPLS solutions (i.e., [VPLS-LDP] and [VPLS-BGP]) perform replication for multicast traffic at the ingress PE devices.  As stated in the VPLS Multicast Requirements document [VPLS-MCAST-REQ], there are two issues with VPLS multicast today:

   o  A.  Multicast traffic is replicated to non-member sites.

   o  B.  Traffic is replicated on PWs that share a physical path.

   Issue A can be solved by multicast snooping: PEs learn sites with multicast group members by snooping multicast protocol control messages on ACs and forward IP multicast traffic only to member sites.  This document describes the procedures to achieve this when CE devices form PIM adjacencies with each other.  Issue B is outside the scope of this document and is discussed in [VPLS-MCAST].

   While this document is written in the context of VPLS, the procedures also apply to regular layer-2 switches interconnected by physical connections, although that case is outside the scope of this document; there, only the PW-related concepts and procedures do not apply.

1.1.  Multicast Snooping in VPLS
   IGMP snooping procedures described in [IGMP-SNOOP] make sure that IP multicast traffic is only sent on the following:

   o  Attachment Circuits (ACs) connecting to hosts that report related group membership,

   o  ACs connecting to routers that join related multicast groups,

   o  PseudoWires (PWs) connecting to remote PEs that have the above-described ACs.

   Notice that traffic is always sent on ports that have point-to-point connections to routers or that are attached to a LAN on which there is a router, even those on which there are no snooped group memberships, because IGMP snooping alone cannot determine if there are interested receivers beyond those routers.  To further restrict traffic sent to those routers, PIM snooping can be used.  This document describes the procedures for PIM snooping, including the rules when both IGMP and PIM snooping are enabled in a VPLS instance, which are elaborated in Sections 2.8 and 2.11.

   Note that for both IGMP and PIM, the term "snooping" is used loosely, referring to the fact that a layer-2 device peeks into layer-3 routing protocol messages to build relevant control and forwarding states.  Depending on how the control messages are handled (transparently flooded, selectively forwarded, or aggregated), the procedure may be called snooping or proxying in different contexts.

   Unless explicitly noted, the procedures in this document are used for either PIM snooping or PIM proxying, and we will largely refer to PIM snooping in this document.  The procedures specific to PIM proxying are described in Section 2.6.6.  Differences that need to be observed while implementing one or the other, and recommendations on which method to employ in different scenarios, are noted in Section 2.4.

   This document also describes PIM relay, which can be viewed as lightweight PIM proxying.  Unless explicitly noted, in the rest of the document proxying implicitly includes relay as well.  Please refer to Section 2.4.1 for an overview of the differences between snooping, proxying, and relay.

1.2.  Assumptions

   This document assumes that the reader has a good understanding of the PIM protocols.  This document is written in the same style as the PIM RFCs to help correlate the concepts and to make it easier to follow.  In order to avoid replicating text related to PIM protocol handling from the PIM RFCs, this document cross-references the corresponding definitions and procedures in those RFCs.  Deviations in protocol handling specific to PIM snooping are specified in this document.

1.3.  Definitions

   There are several definitions referenced in this document that are well described in the PIM RFCs [PIM-SM], [BIDIR-PIM], and [PIM-DM].  The following definitions and abbreviations are used throughout this document:

   o  A port is defined as either an attachment circuit (AC) or a pseudowire (PW).

   o  When we say a PIM message is received on a PE port, it means that the PE is processing the message for snooping/proxying or relaying.

   Abbreviations used in the document:

   o  S: IP address of the multicast source.

   o  G: IP address of the multicast group.

   o  N: Upstream neighbor field in a Join/Prune/Graft message.

   o  Port(N): Port on which neighbor N is learnt, i.e., the port on which N's Hellos are received.
   o  RP: Rendezvous Point.

   o  rpt: RP Tree, as used in (S,G,rpt) states.

   o  PIM-DM: Protocol Independent Multicast - Dense Mode.

   o  PIM-SM: Protocol Independent Multicast - Sparse Mode.

   o  PIM-SSM: Protocol Independent Multicast - Source-Specific Mode.

   Other definitions are explained in the sections where they are introduced.

2.  PIM Snooping for VPLS

2.1.  PIM protocol background

   PIM is a multicast routing protocol running between routers, which are CE devices in a VPLS.  It uses the unicast routing table to provide reverse path information for building multicast trees.  There are a few variants of PIM.  In [PIM-DM], multicast datagrams are pushed towards downstream neighbors, similar to a broadcast mechanism, but in areas of the network where there are no group members, routers prune back branches of the multicast tree towards the source.  Unlike PIM-DM, the other PIM flavors (PIM-SM [PIM-SM], PIM-SSM [PIM-SSM], and BIDIR-PIM [BIDIR-PIM]) employ a pull methodology via explicit joins instead of the push-and-prune technique.

   PIM routers periodically exchange Hello messages to discover and maintain stateful sessions with neighbors.  After neighbors are discovered, PIM routers can signal their intentions to join or prune specific multicast groups.  This is accomplished by having downstream routers send an explicit Join/Prune message (for the sake of generalization, consider Graft messages for PIM-DM as Join messages) to their corresponding upstream router.  The Join/Prune message can be group specific (*,G) or group and source specific (S,G).

2.2.  General Rules for PIM Snooping in VPLS

   The following rules for the correct operation of PIM snooping MUST be followed.

   o  PIM snooping MUST NOT affect the operation of customer layer-2 protocols (e.g., BPDUs) or layer-3 protocols.

   o  PIM messages and multicast data traffic forwarded by PEs MUST follow the split-horizon rule for mesh PWs.

   o  PIM states in a PE MUST be per VPLS instance.

   o  PIM assert triggers MUST be preserved to the extent necessary to avoid sending duplicate traffic to the same PE (see Section 2.2.1).

2.2.1.  Preserving Assert Trigger

   In PIM-SM/DM, there are scenarios where multiple routers could be forwarding the same multicast traffic on a LAN.  When this happens, the routers use the PIM Assert election process, by exchanging PIM Assert messages, to ensure that only the Assert winner forwards traffic on the LAN.  The Assert election is a data-driven event and happens only if a router sees traffic on the interface to which it should be forwarding the traffic.  In the case of VPLS with PIM snooping, two routers may forward the same multicast datagrams at the same time, but each copy may reach a different set of PEs, and that is acceptable from the point of view of avoiding duplicate traffic.  If the two copies can reach the same PE, then the sending routers must be able to see each other's traffic in order to trigger the Assert election and stop the duplicate traffic.  To achieve that, PEs enabled with PIM-SSM/SM snooping MUST forward multicast traffic for an (S,G)/(*,G) not only on the ports on which they snooped Joins(S,G)/Joins(*,G) but also towards the upstream neighbor(s).  In other words, the ports on which the upstream neighbors are learnt must be added to the outgoing port list along with the ports on which Joins are snooped.
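   As a rough, non-normative illustration of this rule, the following Python sketch computes the outgoing port list for one (S,G) or (*,G) flow from two sets that a snooping PE is assumed to maintain; the function and variable names are illustrative only and are not taken from this document.

      # Non-normative sketch: outgoing ports for one (S,G) or (*,G) flow.
      # join_ports and upstream_neighbor_ports are assumed to have been
      # built by snooping Join messages and PIM Hellos, respectively.
      def outgoing_ports(join_ports, upstream_neighbor_ports,
                         ingress_port, pw_ports):
          # Ports with snooped Joins, plus the ports on which the
          # upstream neighbors were learnt, so that an upstream router
          # can see another upstream router's traffic and trigger Assert.
          out = set(join_ports) | set(upstream_neighbor_ports)
          # Split-horizon rules (Section 2.12): never send back on the
          # ingress port, and never forward PW-ingress traffic to a PW.
          out.discard(ingress_port)
          if ingress_port in pw_ports:
              out -= set(pw_ports)
          return out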
   Please refer to Section 2.6.1 for the rules that determine the set of upstream neighbors for a particular (x,G).

   Similarly, PIM-DM snooping SHOULD make sure that Asserts can be triggered (Section 2.9.3).

   The above logic needs to be facilitated without breaking VPLS split-horizon forwarding rules.  That is, traffic should not be forwarded on the port on which it was received, and traffic arriving on a PW MUST NOT be forwarded onto other PW(s).

2.3.  Some Considerations for PIM Snooping

   The PIM snooping solution described here requires a PE to examine and operate on only PIM Hello and PIM Join/Prune packets.  The PE does not need to examine any other PIM packets.

   Most of the PIM snooping procedures for handling Hello/Join/Prune messages are very similar to those executed in a PIM router.  However, the PE does not need any routing tables of the kind required by PIM multicast routing.  It determines how to forward Join/Prune messages solely by looking at the Upstream Neighbor field in the Join/Prune packets.

   The PE does not need to know about Rendezvous Points (RPs) and does not have to maintain any RP Set.  All of that is transparent to a PIM snooping PE.

   In the following subsections, we list some considerations and observations for the implementation of PIM snooping in VPLS.

2.3.1.  Scaling

   PIM snooping needs to be employed on ACs at the downstream PEs (PEs receiving multicast traffic across the VPLS core) to prevent traffic from being sent out of ACs unnecessarily.  PIM snooping techniques can also be employed on PWs at the upstream PEs (PEs receiving traffic from local ACs in a hierarchical VPLS) to prevent traffic from being sent to PEs unnecessarily.  This may work well for small- to medium-scale deployments.  However, if there are a large number of VPLS instances with a large number of PEs per instance, then the amount of snooping required at the upstream PEs can overwhelm them.

   There are two methods to reduce the burden on the upstream PEs.  One is to use PIM proxying, as described in Section 2.6.6, to reduce the control messages forwarded by a PE.  The other is not to snoop on the PWs at all but instead to have PEs signal the snooped states to other PEs out of band via BGP, as described in [VPLS-MCAST].  In this document, it is assumed that snooping is performed on PWs.

2.3.2.  IPv6

   In VPLS, PEs forward Ethernet frames received from CEs and as such are agnostic of the layer-3 protocol used by the CEs.  However, a PIM snooping PE has to look deeper into the IP and PIM packets and build snooping state based on them.  The PIM protocol specifications handle both IPv4 and IPv6.  The specification for PIM snooping in this document applies to both IPv4 and IPv6 payloads.

2.3.3.  PIM-SM (*,*,RP)

   This document does not address (*,*,RP) states in the VPLS network.  Although [PIM-SM] specifies that routers must support (*,*,RP) states, there are very few implementations that support it in actual deployments, and it is being removed from the PIM protocol in its ongoing advancement process in the IETF.  Given that, this document omits the specification relating to (*,*,RP) support.

2.4.  PIM Snooping vs PIM Proxying

   This document has previously alluded to PIM snooping/relay/proxying.  Details on the PIM relay/proxying solution are discussed in Section 2.6.6.
In this section, a brief description and comparison 401 are given. 403 2.4.1. Differences between PIM Snooping, Relay and Proxying 405 Differences between PIM snooping and relay/proxying can be summarized 406 as the following: 408 +--------------------+---------------------+-----------------------+ 409 | PIM snooping | PIM relay | PIM proxying | 410 +====================|=====================|=======================+ 411 | Join/Prune messages| Join/Prune messages | Join/Prune messages | 412 | snooped and flooded| snooped; forwarded | consumed. Regenerated | 413 | according to VPLS | as is out of certain| ones sent out of | 414 | flooding procedures| upstream ports | certain upstream ports| 415 +--------------------+---------------------+-----------------------+ 416 | No PIM packets | No PIM packets | New Join/Prune | 417 | generated. | generated | messages generated | 418 +--------------------+---------------------+-----------------------+ 419 | CE Join suppression| CE Join Suppression | CE Join suppression | 420 | not allowed | allowed | allowed | 421 +--------------------+---------------------+-----------------------+ 423 Note that the differences apply only to PIM Join/Prune messages. PIM 424 Hello messages are snooped and flooded in all cases. 426 Other than the above differences, most of the procedures are common 427 to PIM snooping and PIM relay/proxying, unless specifically stated 428 otherwise. 430 Pure PIM snooping PEs simply snoop on PIM packets as they are being 431 forwarded in the VPLS. As such they truly provide transparent LAN 432 services since no customer packets are modified or consumed or new 433 packets introduced in the VPLS. It is also simpler to implement than 434 PIM proxying. However for PIM snooping to work correctly, it is a 435 requirement that CE routers MUST disable Join suppression in the 436 VPLS. Otherwise, most of the CE routers with interest in a given 437 multicast data stream will fail to send J/P messages for that stream, 438 and the PEs will not be able to tell which ACs and/or PWs have 439 listeners for that stream. 441 Given that a large number of existing CE deployments do not support 442 disabling of Join suppression and given the operational complexity 443 for a provider to manage disabling of Join suppression in the VPLS, 444 it becomes a difficult solution to deploy. Another disadvantage of 445 PIM snooping is that it does not scale as well as PIM proxying. If 446 there are a large number of CEs in a VPLS, then every CE will see 447 every other CE's Join/Prune messages. 449 PIM relay/proxying has the advantage that it does not require Join 450 suppression to be disabled in the VPLS. Multicast as a VPLS service 451 can be very easily provided without requiring any changes on the CE 452 routers. PIM relay/proxying helps scale VPLS Multicast since Join/ 453 Prune messages are only sent to certain upstream ports instead of 454 flooded, and in case of full proxying (vs. relay) the PEs 455 intelligently generate only one Join/Prune message for a given 456 multicast stream. 458 PIM proxying however loses the transparency argument since Join/ 459 Prunes could get modified or even consumed at a PE. Also, new 460 packets could get introduced in the VPLS. However, this loss of 461 transparency is limited to PIM Join/Prune packets. It is in the 462 interest of optimizing multicast in the VPLS and helping a VPLS 463 network scale much better. Data traffic will still be completely 464 transparent. 466 2.4.2. 
PIM Control Message Latency 468 A PIM snooping/relay/proxying PE snoops on PIM Hello packets while 469 transparently flooding them in the VPLS. As such there is no latency 470 introduced by the VPLS in the delivery of PIM Hello packets to remote 471 CEs in the VPLS. 473 A PIM snooping PE snoops on PIM Join/Prune packets while 474 transparently flooding them in the VPLS. There is no latency 475 introduced by the VPLS in the delivery of PIM Join/Prune packets when 476 PIM snooping is employed. 478 A PIM relay/proxying PE does not simply flood PIM Join/Prune packets. 479 This can result in additional latency for a downstream CE to receive 480 multicast traffic after it has sent a Join. When a downstream CE 481 prunes a multicast stream, the traffic SHOULD stop flowing to the CE 482 with no additional latency introduced by the VPLS. 484 Performing only proxying of Join/Prune and not Hello messages keeps 485 the PE behavior very similar to that of a PIM router without 486 introducing too much additional complexity. It keeps the PIM 487 proxying solution fairly simple. Since Join/Prunes are forwarded by 488 a PE along the slow-path and all other PIM packet types are forwarded 489 along the fast-path, it is very likely that packets forwarded along 490 the fast-path will arrive "ahead" of Join/Prune packets at a CE 491 router (note the stress on the fact that fast-path messages will 492 never arrive after Join/Prunes). Of particular importance are Hello 493 packets sent along the fast-path. We can construct a variety of 494 scenarios resulting in out of order delivery of Hellos and Join/Prune 495 messages. However, there should be no deviation from normal expected 496 behavior observed at the CE router receiving these messages out of 497 order. 499 2.4.3. When to Snoop and When to Proxy 501 From the above descriptions, factors that affect the choice of 502 snooping/relay/proxying include: 504 o Whether CEs do Join Suppression or not 506 o Whether Join/Prune latency is critical or not 508 o Whether the scale of PIM protocol message/states in a VPLS 509 requires the scaling benefit of proxying 511 Of the above factors, Join Suppression is the hard one - pure 512 snooping can only be used when Join Suppression is disabled on all 513 CEs. The latency associated with relay/proxying is implementation 514 dependent and may not be a concern at all with a particular 515 implementation. The scaling benefit may not be important either, in 516 that on a real LAN with Explicit Tracking (ET) a PIM router will need 517 to receive and process all PIM Join/Prune messages as well. 519 A PIM router indicates that Join Suppression is disabled if the T-bit 520 is set in the LAN Prune Delay option of its Hello message. If all 521 PIM routers on a LAN set the T-bit, Explicit Tracking is possible, 522 allowing an upstream router to track all the downstream neighbors 523 that have Join states for any (S,G) or (*,G). That has two benefits: 525 o No need for PrunePending process - the upstream router may 526 immediately stop forwarding data when it receives a Prune from the 527 last downstream neighbor, and immediately prune to its upstream if 528 that's for the last downstream interface. 530 o For management purpose, the upstream router knows exactly which 531 downstream routers exist for a particular Join State. 
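   The T-bit check described above lends itself to a simple implementation.  The following Python sketch is a non-normative illustration of how a PE might inspect the LAN Prune Delay option (Hello option type 2) in a snooped Hello to learn whether the sender is capable of disabling Join suppression; the function name and the assumption that the input holds the option TLVs following the PIM common header are illustrative only.

      import struct

      LAN_PRUNE_DELAY = 2   # PIM Hello option type for LAN Prune Delay

      def hello_sets_tbit(options: bytes) -> bool:
          # 'options' is assumed to be the sequence of Hello option TLVs
          # that follows the 4-byte PIM common header.
          i = 0
          while i + 4 <= len(options):
              opt_type, opt_len = struct.unpack_from("!HH", options, i)
              if opt_type == LAN_PRUNE_DELAY and opt_len >= 4:
                  delay, _override = struct.unpack_from("!HH", options, i + 4)
                  return bool(delay & 0x8000)   # T-bit is the top bit
              i += 4 + opt_len
          # Option absent: conservatively assume Join suppression is on.
          return False

   A PE that tracks this bit per neighbor can fall back from pure snooping to relay or proxying as soon as any neighbor in the VPLS instance does not set the T-bit.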
   While full proxying can be used with or without Join Suppression on the CEs and does not interfere with an upstream CE's bypass of the PrunePending process, it does present all of the downstream CEs to the upstream as a single one, removing the second benefit mentioned above.

   Therefore, the general rule is as follows: if Join Suppression is enabled on any of the CEs, then proxying or relay MUST be used; if Join Suppression is known to be disabled on all CEs, then snooping, relay, or proxying MAY be used, though snooping or relay SHOULD be used in that case.

   An implementation MAY determine dynamically which mode to use, by tracking the above-mentioned T-bit in all snooped PIM Hello messages, or it MAY simply require static provisioning.

2.5.  Discovering PIM Routers

   A PIM snooping PE MUST snoop on PIM Hellos received on ACs and PWs; that is, the PE transparently floods the PIM Hello while snooping on it.  PIM Hellos are used by the snooping PE to discover PIM routers and their characteristics.

   For each neighbor it discovers, a PE adds an entry to its PIM Neighbor Database with the following fields:

   o  Layer-2 encapsulation for the router sending the PIM Hello.

   o  IP address and address family of the router sending the PIM Hello.

   o  Port (AC or PW) on which the PIM Hello was received.

   o  Hello TLVs.

   The PE should be able to interpret and act on the Hello TLVs currently defined in the PIM RFCs.  The TLVs of particular interest in this document are:

   o  Hello-Hold-Time

   o  Tracking Support

   o  DR Priority

   Please refer to [PIM-SM] for a list of the Hello TLVs.  When a PIM Hello is received, the PE MUST reset the neighbor-expiry-timer to Hello-Hold-Time.  If a PE does not receive a Hello message from a router within Hello-Hold-Time, the PE MUST remove that neighbor from its PIM Neighbor Database.  If a PE receives a Hello message from a router with the Hello-Hold-Time value set to zero, the PE MUST remove that router from the PIM snooping state immediately.

   From the PIM Neighbor Database, a PE MUST be able to use the procedures defined in [PIM-SM] to identify the PIM Designated Router in the VPLS instance.  It should also be able to determine if Tracking Support is active in the VPLS instance.

2.6.  PIM-SM and PIM-SSM

   The key characteristic of PIM-SM and PIM-SSM is explicit join behavior.  In this model, multicast traffic is only forwarded to locations that specifically request it.  All the procedures described in this section apply to both PIM-SM and PIM-SSM, except for the fact that there is no (*,G) state in PIM-SSM.

2.6.1.  Building PIM-SM States

   PIM-SM and PIM-SSM states are built by snooping on the PIM-SM Join/Prune messages received on ACs/PWs.

   The downstream state machine of a PIM-SM snooping PE very closely resembles the downstream state machine of PIM-SM routers; a rough illustration of how an implementation might organize this per-port state is sketched below, followed by the definitions used in the rest of this section.
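   The sketch is purely illustrative (the class and field names are assumptions, not taken from this document); it shows one possible way, in Python, to key the downstream state by (Port, S, G, N) rather than by interface as on a PIM-SM router.

      from dataclasses import dataclass, field
      from typing import Dict, Optional, Tuple

      @dataclass
      class PerNeighborTimers:
          prune_pending_timer: Optional[float] = None   # PPT(N)
          join_expiry_timer:   Optional[float] = None   # ET(N)

      @dataclass
      class DownstreamState:
          jp_state: str = "NI"                          # "NI", "J", or "PP"
          neighbors: Dict[str, PerNeighborTimers] = field(default_factory=dict)

      # Keyed by (port, source, group); the per-neighbor timers one level
      # down give the per (Port, S, G, N) granularity described below.
      downstream: Dict[Tuple[str, str, str], DownstreamState] = {}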
The 604 downstream state consists of: 606 Per downstream (Port, *, G): 608 o DownstreamJPState: One of { "NoInfo" (NI), "Join" (J), "Prune 609 Pending" (PP) } 611 Per downstream (Port, *, G, N): 613 o Prune Pending Timer (PPT(N)) 615 o Join Expiry Timer (ET(N)) 617 Per downstream (Port, S, G): 619 o DownstreamJPState: One of { "NoInfo" (NI), "Join" (J), "Prune 620 Pending" (PP) } 622 Per downstream (Port, S, G, N): 624 o Prune Pending Timer (PPT(N)) 626 o Join Expiry Timer (ET(N)) 628 Per downstream (Port, S, G, rpt): 630 o DownstreamJPRptState: One of { "NoInfo" (NI), "Pruned" (P), "Prune 631 Pending" (PP) } 633 Per downstream (Port, S, G, rpt, N): 635 o Prune Pending Timer (PPT(N)) 637 o Join Expiry Timer (ET(N)) 639 Where S is the address of the multicast source, G is the Group 640 address and N is the upstream neighbor field in the Join/Prune 641 message. Notice that unlike on PIM-SM routers where PPT and ET are 642 per (Interface, S, G), PIM snooping PEs have to maintain PPT and ET 643 per (Port, S, G, N). The reasons for this are explained in 644 Section 2.6.2. 646 Apart from the above states, we define the following state 647 summarization macros. 649 UpstreamNeighbors(*,G): If there is one or more Join(*,G) received on 650 any port with upstream neighbor N and ET(N) is active, then N is 651 added to UpstreamNeighbors(*,G). This set is used to determine if a 652 Join(*,G) or a Prune(*,G) with upstream neighbor N needs to be sent 653 upstream. 655 UpstreamNeighbors(S,G): If there is one or more Join(S,G) received on 656 any port with upstream neighbor N and ET(N) is active, then N is 657 added to UpstreamNeighbors(S,G). This set is used to determine if a 658 Join(S,G) or a Prune(S,G) with upstream neighbor N needs to be sent 659 upstream. 661 UpstreamPorts(*,G): This is the set of all Port(N) ports where N is 662 in the set UpstreamNeighbors(*,G). Multicast Streams forwarded using 663 a (*,G) match MUST be forwarded to these ports. So 664 UpstreamPorts(*,G) MUST be added to OutgoingPortList(*,G). 666 UpstreamPorts(S,G): This is the set of all Port(N) ports where N is 667 in the set UpstreamNeighbors(S,G). UpstreamPorts(S,G) MUST be added 668 to OutgoingPortList(S,G). 670 InheritedUpstreamPorts(S,G): This is the union of UpstreamPorts(S,G) 671 and UpstreamPorts(*,G). 673 UpstreamPorts(S,G,rpt): If PruneDesired(S,G,rpt) becomes true, then 674 this set is set to UpstreamPorts(*,G). Otherwise, this set is empty. 675 UpstreamPorts(*,G) (-) UpstreamPorts(S,G,rpt) MUST be added to 676 OutgoingPortList(S,G). 678 UpstreamPorts(G): This set is the union of all the UpstreamPorts(S,G) 679 and UpstreamPorts(*,G) for a given G. proxy (S,G) Join/Prune and 680 (*,G) Join/Prune messages MUST be sent to a subset of 681 UpstreamPorts(G) as specified in Section 2.6.6.1. 683 PWPorts: This is the set of all PWs. 685 OutgoingPortList(*,G): This is the set of all ports to which traffic 686 needs to be forwarded on a (*,G) match. 688 OutgoingPortList(S,G): This is the set of all ports to which traffic 689 needs to be forwarded on an (S,G) match. 691 See Section 2.12 on Data Forwarding Rules for the specification on 692 how OutgoingPortList is calculated. 694 NumETsActive(Port,*,G): Number of (Port,*,G,N) entries that have 695 Expiry Timer running. This macro keeps track of the number of 696 Join(*,G)s that are received on this Port with different upstream 697 neighbors. 699 NumETsActive(Port,S,G): Number of (Port,S,G,N) entries that have 700 Expiry Timer running. 
   This macro keeps track of the number of Join(S,G)s that are received on this Port with different upstream neighbors.

   RpfVectorTlvs(*,G): RPF Vectors [RPF-VECTOR] are TLVs that may be present in received Join(*,G) messages.  If present, they must be copied to RpfVectorTlvs(*,G).

   RpfVectorTlvs(S,G): RPF Vectors [RPF-VECTOR] are TLVs that may be present in received Join(S,G) messages.  If present, they must be copied to RpfVectorTlvs(S,G).

   Since there are a few differences between the downstream state machines of PIM-SM routers and PIM-SM snooping PEs, we specify the details of the downstream state machine of PIM-SM snooping PEs at the risk of repeating most of the text documented in [PIM-SM].

2.6.2.  Explanation for per (S,G,N) states

   In PIM routing protocols, states are built per (S,G).  On a router, an (S,G) has only one RPF-Neighbor.  However, a PIM snooping PE does not have the layer-3 routing information available to the routers in order to determine the RPF-Neighbor for a multicast flow.  It merely discovers it by snooping the Join/Prune messages.  A PE could have snooped on two or more different Join/Prune messages for the same (S,G) that carried different Upstream-Neighbor fields.  This could happen during transient network conditions or due to dual-homed sources.  A PE cannot make assumptions on which one to pick, but instead must allow the CE routers to decide which Upstream Neighbor gets elected the RPF-Neighbor.  For this purpose, the PE has to track downstream and upstream Join/Prune state per (S,G,N).

2.6.3.  Receiving (*,G) PIM-SM Join/Prune Messages

   A Join(*,G) or Prune(*,G) is considered "received" if the following conditions are met:

   o  The port on which it arrived is not Port(N), where N is the upstream neighbor of the Join/Prune(*,G), or,

   o  if both Port(N) and the arrival port are PWs, then there exists at least one other (*,G,Nx) or (Sx,G,Nx) state with an AC UpstreamPort.

   For simplicity, the case where both Port(N) and the arrival port are PWs is referred to as PW-only Join/Prune in this document.  The PW-only Join/Prune handling is so that the Port(N) PW can be added to the related forwarding entries' OutgoingPortList to trigger Assert, but that is only needed for those states with an AC UpstreamPort.  Note that in the PW-only case, it is acceptable for the arrival port and Port(N) to be the same.  See Appendix B for examples.

   When a router receives a Join(*,G) or a Prune(*,G) with upstream neighbor N, it must process the message as defined in the state machine below.  Note that the computation of the various macros resulting from these state machine transitions is exactly as specified in the PIM-SM RFC [PIM-SM].

   The downstream per-port (*,G) state machine and its associated actions are given below.
761 Figure 1 : Downstream per-port (*,G) state machine in tabular form 763 +---------------++----------------------------------------+ 764 | || Previous State | 765 | ++------------+--------------+------------+ 766 | Event ||NoInfo (NI) | Join (J) | Prune-Pend | 767 +---------------++------------+--------------+------------+ 768 | Receive ||-> J state | -> J state | -> J state | 769 | Join(*,G) || Action | Action | Action | 770 | || RxJoin(N) | RxJoin(N) | RxJoin(N) | 771 +---------------++------------+--------------+------------+ 772 |Receive || - | -> PP state | -> PP state| 773 |Prune(*,G) and || | Start PPT(N) | | 774 |NumETsActive<=1|| | | | 775 +---------------++------------+--------------+------------+ 776 |Receive || - | -> J state | - | 777 |Prune(*,G) and || | Start PPT(N) | | 778 |NumETsActive>1 || | | | 779 +---------------++------------+--------------+------------+ 780 |PPT(N) expires || - | -> J state | -> NI state| 781 | || | Action | Action | 782 | || | PPTExpiry(N) |PPTExpiry(N)| 783 +---------------++------------+--------------+------------+ 784 |ET(N) expires || - | -> NI state | -> NI state| 785 |and || | Action | Action | 786 |NumETsActive<=1|| | ETExpiry(N) | ETExpiry(N)| 787 +---------------++------------+--------------+------------+ 788 |ET(N) expires || - | -> J state | - | 789 |and || | Action | | 790 |NumETsActive>1 || | ETExpiry(N) | | 791 +---------------++------------+--------------+------------+ 793 Action RxJoin(N): 795 If ET(N) is not already running, then start ET(N). Otherwise 796 restart ET(N). If N is not already in UpstreamNeighbors(*,G), 797 then add N to UpstreamNeighbors(*,G) and trigger a Join(*,G) with 798 upstream neighbor N to be forwarded upstream. If there are RPF 799 Vector TLVs in the received (*,G) message and if they are 800 different from the recorded RpfVectorTlvs(*,G), then copy them 801 into RpfVectorTlvs(*,G). 803 Action PPTExpiry(N): 805 Same as Action ETExpiry(N) below, plus Send a Prune-Echo(*,G) with 806 upstream-neighbor N on the downstream port. 808 Action ETExpiry(N): 810 Disable timers ET(N) and PPT(N). Delete neighbor state 811 (Port,*,G,N). If there are no other (Port,*,G) states with 812 NumETsActive(Port,*,G) > 0, transition DownstreamJPState [PIM-SM] 813 to NoInfo. If there are no other (Port,*,G,N) state (different 814 ports but for the same N), remove N from UpstreamPorts(*,G) - this 815 also serves as a trigger for Upstream FSM (JoinDesired(*,G,N) 816 becomes FALSE). 818 2.6.4. Receiving (S,G) PIM-SM Join/Prune Messages 820 A Join(S,G) or Prune(S,G) is considered "received" if the following 821 conditions are met: 823 o The port on which it arrived is not Port(N) where N is the 824 upstream-neighbor N of the Join/Prune(S,G), or, 826 o if both Port(N) and the arrival port are PWs, then there exists at 827 least one other (*,G,Nx) or (S,G,Nx) state with an AC 828 UpstreamPort. 830 For simplicity, the case where both Port(N) and the arrival port are 831 PWs is referred to as PW-only Join/Prune in this document. The PW- 832 only Join/Prune handling is so that the Port(N) PW can be added to 833 the related forwarding entries' OutgoingPortList to trigger Assert, 834 but that is only needed for those states with AC UpstreamPort. See 835 Appendix B for examples. 837 When a router receives a Join(S,G) or a Prune(S,G) with upstream 838 neighbor N, it must process the message as defined in the state 839 machine below. 
   Note that the computation of the various macros resulting from these state machine transitions is exactly as specified in [PIM-SM].

   Figure 2: Downstream per-port (S,G) state machine in tabular form

   +---------------++----------------------------------------+
   |               ||          Previous State                |
   |               ++------------+--------------+------------+
   | Event         ||NoInfo (NI) |   Join (J)   | Prune-Pend |
   +---------------++------------+--------------+------------+
   | Receive       ||-> J state  | -> J state   | -> J state |
   | Join(S,G)     || Action     | Action       | Action     |
   |               || RxJoin(N)  | RxJoin(N)    | RxJoin(N)  |
   +---------------++------------+--------------+------------+
   |Receive        || -          | -> PP state  | -          |
   |Prune (S,G) and||            | Start PPT(N) |            |
   |NumETsActive<=1||            |              |            |
   +---------------++------------+--------------+------------+
   |Receive        || -          | -> J state   | -          |
   |Prune(S,G) and ||            | Start PPT(N) |            |
   |NumETsActive>1 ||            |              |            |
   +---------------++------------+--------------+------------+
   |PPT(N) expires || -          | -> J state   | -> NI state|
   |               ||            | Action       | Action     |
   |               ||            | PPTExpiry(N) |PPTExpiry(N)|
   +---------------++------------+--------------+------------+
   |ET(N) expires  || -          | -> NI state  | -> NI state|
   |and            ||            | Action       | Action     |
   |NumETsActive<=1||            | ETExpiry(N)  | ETExpiry(N)|
   +---------------++------------+--------------+------------+
   |ET(N) expires  || -          | -> J state   | -          |
   |and            ||            | Action       |            |
   |NumETsActive>1 ||            | ETExpiry(N)  |            |
   +---------------++------------+--------------+------------+

   Action RxJoin(N):

      If ET(N) is not already running, then start ET(N).  Otherwise, restart ET(N).

      If N is not already in UpstreamNeighbors(S,G), then add N to UpstreamNeighbors(S,G) and trigger a Join(S,G) with upstream neighbor N to be forwarded upstream.  If there are RPF Vector TLVs in the received (S,G) message and if they are different from the recorded RpfVectorTlvs(S,G), then copy them into RpfVectorTlvs(S,G).

   Action PPTExpiry(N):

      Same as Action ETExpiry(N) below, plus send a Prune-Echo(S,G) with upstream neighbor N on the downstream port.

   Action ETExpiry(N):

      Disable timers ET(N) and PPT(N).  Delete the neighbor state (Port,S,G,N).  If there are no other (Port,S,G) states with NumETsActive(Port,S,G) > 0, transition DownstreamJPState to NoInfo.  If there are no other (Port,S,G,N) states (different ports but the same N), remove N from UpstreamPorts(S,G); this also serves as a trigger for the Upstream FSM (JoinDesired(S,G,N) becomes FALSE).

2.6.5.  Receiving (S,G,rpt) Join/Prune Messages

   A Join(S,G,rpt) or Prune(S,G,rpt) is "received" when the port on which it was received is not also the port on which the upstream neighbor N of the Join/Prune(S,G,rpt) was learnt.

   While it is important to ensure that the (S,G) and (*,G) state machines allow for handling per (S,G,N) states, it is not as important for (S,G,rpt) states.  It suffices to say that the downstream (S,G,rpt) state machine is the same as what is defined in section 4.5.4 of the PIM-SM RFC [PIM-SM].

2.6.6.  Sending Join/Prune Messages Upstream

   This section applies only to a PIM relay/proxying PE and not to a PIM snooping PE.

   A full PIM proxying (not relay) PE MUST implement the Upstream FSM, for which the procedures are similar to those defined in section 4.5.6 of [PIM-SM].
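   To make the preceding paragraph concrete, the following Python sketch is a non-normative illustration of the per-(S,G,N) upstream decision a full proxying PE might implement.  The dictionary et_active (keyed by (port, S, G, N) with boolean values) and the send_join/send_prune callbacks are assumptions standing in for the downstream Join Expiry Timer state and the message generation described in Section 2.6.6.1; they are not defined by this document.

      def join_desired(et_active, s, g, n):
          # JoinDesired(S,G,N) is TRUE while at least one downstream port
          # still has an active Join Expiry Timer for (S,G) with upstream
          # neighbor N.
          return any(active
                     for (_port, s_, g_, n_), active in et_active.items()
                     if (s_, g_, n_) == (s, g, n))

      def on_downstream_change(prev_desired, et_active, s, g, n,
                               send_join, send_prune):
          # Called after the downstream FSM adds or expires (Port,S,G,N)
          # state; emits a triggered Join or Prune towards N when
          # JoinDesired(S,G,N) changes.
          now_desired = join_desired(et_active, s, g, n)
          if now_desired and not prev_desired:
              send_join(s, g, n)
          elif prev_desired and not now_desired:
              send_prune(s, g, n)
          return now_desired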
   For the purposes of the Upstream FSM, a Join or Prune message with upstream neighbor N is "seen" on a PIM relay/proxying PE if the port on which the message was received is also Port(N) and the port is an AC.  The AC requirement is needed because a Join received on the Port(N) PW must not suppress this PE's Join on that PW.

   A PIM relay PE does not implement the Upstream FSM.  It simply forwards received Join/Prune messages out of the same set of upstream ports as in the PIM proxying case.

   In order to correctly facilitate Assert among the CE routers, such Join/Prune messages need to be sent not only towards the upstream neighbor but also on certain PWs, as described below.

   If RpfVectorTlvs(*,G) is not empty, then it must be encoded in a Join(*,G) message sent upstream.

   If RpfVectorTlvs(S,G) is not empty, then it must be encoded in a Join(S,G) message sent upstream.

2.6.6.1.  Where to send Join/Prune messages

   The following rules apply to (S,G)/(*,G) Join/Prune messages, whether they are forwarded (in the case of PIM relay) or refreshed and triggered (in the case of PIM proxying):

   o  The upstream neighbor field in the Join/Prune to be sent is set to the N in the corresponding Upstream FSM.

   o  If Port(N) is an AC, send the message to Port(N).

   o  Additionally, if OutgoingPortList(x,G,N) contains at least one AC, then the message MUST be sent to at least all the PWs in UpstreamPorts(G) (for (*,G)) or InheritedUpstreamPorts(S,G) (for (S,G)).  Alternatively, the message MAY be sent to all PWs.

   Sending to a subset of PWs as described above guarantees that if traffic (of the same flow) from two upstream routers were to reach this PE, then the two routers will receive each other's traffic, triggering Assert.

   Sending to all PWs guarantees that if two upstream routers both send traffic for the same flow (even if it is to different sets of downstream PEs), then they will receive each other's traffic, triggering Assert.

2.7.  Bidirectional-PIM (BIDIR-PIM)

   BIDIR-PIM is a variation of PIM-SM.  The main differences between PIM-SM and BIDIR-PIM are as follows:

   o  There are no source-based trees, and source-specific multicast is not supported (i.e., there are no (S,G) states) in BIDIR-PIM.

   o  Multicast traffic can flow up the shared tree in BIDIR-PIM.

   o  To avoid forwarding loops, one router on each link is elected as the Designated Forwarder (DF) for each RP in BIDIR-PIM.

   The main advantage of BIDIR-PIM is that it scales well for many-to-many applications.  However, the lack of source-based trees means that multicast traffic is forced to remain on the shared tree.

   As described in [BIDIR-PIM], parts of a BIDIR-PIM-enabled network may forward traffic without exchanging Join/Prune messages, for instance between DFs and the Rendezvous Point Link (RPL).

   As the described procedures for PIM snooping rely on the presence of Join/Prune messages, enabling PIM snooping on BIDIR-PIM networks could break the BIDIR-PIM functionality.  Deploying PIM snooping on BIDIR-PIM-enabled networks will require some further study.  Some thoughts are gathered in Appendix A.

2.8.  Interaction with IGMP Snooping
   Whenever IGMP snooping is enabled in conjunction with PIM snooping in the same VPLS instance, the PE SHOULD follow these rules:

   o  To maintain the list of multicast routers and the ports on which they are attached, the PE SHOULD NOT use the rules described in RFC 4541 [IGMP-SNOOP] but SHOULD rely on the neighbors discovered by PIM snooping.  This list SHOULD then be used to apply the forwarding rule described in section 2.1.1 (1) of RFC 4541 [IGMP-SNOOP].

   o  If the PE supports proxy-reporting, an IGMP membership learned only on a port to which a PIM neighbor is attached, but not elsewhere, SHOULD NOT be included in the summarized upstream report sent to that port.

2.9.  PIM-DM

   The key characteristic of PIM-DM is its flood-and-prune behavior.  Shortest-path trees are built as a multicast source starts transmitting.

2.9.1.  Building PIM-DM States

   PIM-DM states are built by snooping on the PIM-DM Join, Prune, Graft, and State Refresh messages received on ACs/PWs and State Refresh messages sent on ACs/PWs.  By snooping on these PIM-DM messages, a PE builds the following states per (S,G,N), where S is the address of the multicast source, G is the group address, and N is the upstream neighbor to which Prunes/Grafts are sent by downstream CEs:

   Per PIM (S,G,N):

      Port PIM (S,G,N) Prune State:

      *  DownstreamPState(S,G,N,Port): One of {"NoInfo" (NI), "Pruned" (P), "PrunePending" (PP)}

      *  Prune Pending Timer (PPT)

      *  Prune Timer (PT)

      *  Upstream Port (valid if the PIM (S,G,N) Prune State is "Pruned").

2.9.2.  PIM-DM Downstream Per-Port PIM(S,G,N) State Machine

   The downstream per-port PIM(S,G,N) state machine is as defined in section 4.4.2 of [PIM-DM], with a few changes relevant to PIM snooping.  When reading section 4.4.2 of [PIM-DM] for the purposes of PIM snooping, please be aware that the downstream states are built per {S, G, N, Downstream-Port} in PIM snooping and not per {Downstream-Interface, S, G} as in a PIM-DM router.  As noted in Section 2.9.1, the states (DownstreamPState) and timers (PPT and PT) are per (S,G,N,Port).

2.9.3.  Triggering ASSERT election in PIM-DM

   Since PIM-DM is a flood-and-prune protocol, traffic is flooded to all routers unless explicitly pruned.  Since PIM-DM routers do not prune on non-RPF interfaces, PEs should typically not receive Prunes on Port(RPF-neighbor).  So the asserting routers should typically be in pim_oiflist(S,G).  In most cases, Assert election should occur naturally without any special handling, since data traffic will be forwarded to the asserting routers.

   However, there are some scenarios where a Prune might be received on a port that is also an upstream port (UP).  If we prune the port from pim_oiflist(S,G), then it would not be possible for the asserting routers to determine if traffic arrived on their downstream port.  This can be fixed by adding pim_iifs(S,G) to pim_oiflist(S,G) so that data traffic flows to the UP ports.

2.10.  PIM Proxy

   As noted earlier, PIM snooping will work correctly only if Join Suppression is disabled in the VPLS.  If Join Suppression is enabled in the VPLS, then PEs MUST do PIM relay/proxying for VPLS multicast to work correctly.  This section applies specifically to the full proxying case and not to relay.

2.10.1.  Upstream PIM Proxy behavior
   A PIM proxying PE consumes Join/Prune messages and regenerates PIM Join/Prune messages to be sent upstream by implementing the Upstream FSM as specified in [PIM-SM].  This is the only difference from PIM relay.

   The source IP address in PIM packets sent upstream SHOULD be the address of a PIM downstream neighbor in the corresponding Join/Prune state.  The address picked MUST NOT be the address encoded in the upstream neighbor field of the packet.  The layer-2 encapsulation for the selected source IP address MUST be the encapsulation recorded in the PIM Neighbor Database for that IP address.

2.11.  Directly Connected Multicast Source

   If there is a source in the CE network that connects directly into the VPLS instance, then multicast traffic from that source MUST be sent to all PIM routers on the VPLS instance in addition to the IGMP receivers in the VPLS.  If (S,G) or (*,G) snooping state has already formed on any PE, this will not happen under the forwarding rules and guidelines of this document.  So, in order to determine if traffic needs to be flooded to all routers, a PE must be able to determine if the traffic came from a host on that LAN.  There are three ways to address this problem:

   o  One option is for the PE to do ARP snooping to determine if a source is directly connected.

   o  Another option is to have configuration on all PEs indicating that there are CE sources directly connected to the VPLS instance and to disallow snooping for the groups to which the source is going to send traffic.  This way, traffic from that source to those groups will always be flooded within the provider network.

   o  A third option is to require that sources of CE multicast traffic be behind a router.

   This document recommends the third option: sources of multicast traffic must be behind a router.

2.12.  Data Forwarding Rules

   First, we define the rules that are common to PIM-SM and PIM-DM PEs.  Forwarding rules specific to each protocol type are specified in the subsections that follow.

   If there is no matching forwarding state, then the PE SHOULD discard the packet, i.e., the UserDefinedPortList below SHOULD be empty.

   The following general rules MUST be followed when forwarding multicast traffic in a VPLS:

   o  Traffic arriving on a port MUST NOT be forwarded back onto the same port.

   o  Due to VPLS split-horizon rules, traffic ingressing on a PW MUST NOT be forwarded to any other PW.

2.12.1.  PIM-SM Data Forwarding Rules

   Per the rules in [PIM-SM] and per the additional rules specified in this document:

      OutgoingPortList(*,G) = immediate_olist(*,G) (+)
                              UpstreamPorts(*,G) (+)
                              Port(PimDR)

      OutgoingPortList(S,G) = inherited_olist(S,G) (+)
                              UpstreamPorts(S,G) (+)
                              (UpstreamPorts(*,G) (-)
                               UpstreamPorts(S,G,rpt)) (+)
                              Port(PimDR)

   [PIM-SM] specifies how immediate_olist(*,G) and inherited_olist(S,G) are built.  PimDR is the IP address of the PIM DR in the VPLS.

   The PIM-SM snooping forwarding rules are defined below in pseudocode:

   BEGIN
     iif is the incoming port of the multicast packet.
     S is the Source IP Address of the multicast packet.
     G is the Destination IP Address of the multicast packet.
     If there is (S,G) state on the PE
     Then
       OutgoingPortList = OutgoingPortList(S,G)
     Else if there is (*,G) state on the PE
     Then
       OutgoingPortList = OutgoingPortList(*,G)
     Else
       OutgoingPortList = UserDefinedPortList
     Endif

     If iif is an AC
     Then
       OutgoingPortList = OutgoingPortList (-) iif
     Else
       ## iif is a PW
       OutgoingPortList = OutgoingPortList (-) PWPorts
     Endif

     Forward the packet to OutgoingPortList.
   END

   First, if there is (S,G) state on the PE, then the set of outgoing ports is OutgoingPortList(S,G).  Otherwise, if there is (*,G) state on the PE, the set of outgoing ports is OutgoingPortList(*,G).

   The packet is forwarded to the selected set of outgoing ports while observing the general rules above in Section 2.12.

2.12.2.  PIM-DM Data Forwarding Rules

   The PIM-DM snooping data forwarding rules are defined below in pseudocode:

   BEGIN
     iif is the incoming port of the multicast packet.
     S is the Source IP Address of the multicast packet.
     G is the Destination IP Address of the multicast packet.

     If there is (S,G) state on the PE
     Then
       OutgoingPortList = olist(S,G)
     Else
       OutgoingPortList = UserDefinedPortList
     Endif

     If iif is an AC
     Then
       OutgoingPortList = OutgoingPortList (-) iif
     Else
       ## iif is a PW
       OutgoingPortList = OutgoingPortList (-) PWPorts
     Endif

     Forward the packet to OutgoingPortList.
   END

   If there is forwarding state for (S,G), then the packet is forwarded to olist(S,G) while observing the general rules above in Section 2.12.

   [PIM-DM] specifies how olist(S,G) is constructed.

3.  IANA Considerations

   This document makes no request of IANA.

   Note to RFC Editor: this section may be removed on publication as an RFC.

4.  Security Considerations

   Security considerations provided in the VPLS solution documents (i.e., [VPLS-LDP] and [VPLS-BGP]) apply to this document as well.

5.  Contributors

   Yetik Serbest and Suresh Boddapati co-authored earlier versions of this document.

   Karl (Xiangrong) Cai and Princy Elizabeth made significant contributions to bring the specification to its current state, especially in the area of Join forwarding rules.

6.  Acknowledgements

   Many members of the former L2VPN and PIM working groups have contributed to and provided valuable comments and feedback on this document, including Vach Kompella, Shane Amante, Sunil Khandekar, Rob Nath, Marc Lasserre, Yuji Kamite, Yiqun Cai, Ali Sajassi, Jozef Raets, Himanshu Shah (Ciena), and Himanshu Shah (Alcatel-Lucent).

7.  References

7.1.  Normative References

   [BIDIR-PIM]   Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, "Bidirectional Protocol Independent Multicast (BIDIR-PIM)", RFC 5015, 2007.

   [PIM-DM]      Adams, A., Nicholas, J., and W. Siadak, "Protocol Independent Multicast - Dense Mode (PIM-DM): Protocol Specification (Revised)", RFC 3973, 2005.

   [PIM-SM]      Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, "Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised)", RFC 4601, 2006.

   [PIM-SSM]     Holbrook, H. and B. Cain, "Source-Specific Multicast for IP", RFC 4607, 2006.

   [RFC2119]     Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, 1997.

   [RPF-VECTOR]  Wijnands, I., Boers, A., and E. Rosen, "The Reverse Path Forwarding (RPF) Vector TLV", RFC 5496, 2009.
Rosen, "The Reverse Path 1277 Forwarding (RPF) Vector TLV", RFC 5496, 2009. 1279 7.2. Informative References 1281 [IGMP-SNOOP] 1282 Christensen, M., Kimball, K., and F. Solensky, 1283 "Considerations for IGMP and MLD snooping PEs", RFC 4541, 1284 2006. 1286 [VPLS-BGP] 1287 Kompella, K. and Y. Rekhter, "Virtual Private LAN Service 1288 using BGP for Auto-Discovery and Signaling", RFC 4761, 1289 2007. 1291 [VPLS-LDP] 1292 Lasserre, M. and V. Kompella, "Virtual Private LAN 1293 Services using LDP Signaling", RFC 4762, 2007. 1295 [VPLS-MCAST] 1296 Aggarwal, R., Kamite, Y., Fang, L., and Y. Rekhter, 1297 "Multicast in Virtual Private LAN Servoce (VPLS)", 1298 RFC 7117, 2014. 1300 [VPLS-MCAST-REQ] 1301 Kamite, Y., Wada, Y., Serbest, Y., Morin, T., and L. Fang, 1302 "Requirements for Multicast Support in Virtual Private LAN 1303 Services", RFC 5501, 2009. 1305 Appendix A. BIDIR-PIM Thoughts 1307 This section describes some guidelines that may be used to preserve 1308 BIDIR-PIM functionality in combination with Pim snooping. 1310 In order to preserve BIDIR-PIM Pim snooping routers need to set up 1311 forwarding states so that : 1313 o on the RPL all traffic is forwarded to all Port(N) 1315 o on any other interface traffic is always forwarded to the DF 1317 The information needed to setup these states may be obtained by : 1319 o determining the mapping between group(range) and RP 1321 o snooping and storing DF election information 1323 o determining where the RPL is, this could be achieved by static 1324 configuration, or by combining the information mentioned in 1325 previous bullets. 1327 A.1. BIDIR-PIM Data Forwarding Rules 1329 The BIDIR-PIM snooping forwarding rules are defined below in 1330 pseudocode: 1332 BEGIN 1333 iif is the incoming port of the multicast packet. 1334 G is the Destination IP Address of the multicast packet. 1336 If there is forwarding state for G 1337 Then 1338 OutgoingPortList = olist(G) 1339 Else 1340 OutgoingPortList = UserDefinedPortList 1341 Endif 1343 If iif is an AC 1344 Then 1345 OutgoingPortList = OutgoingPortList (-) iif 1346 Else 1347 ## iif is a PW 1348 OutgoingPortList = OutgoingPortList (-) PWPorts 1349 Endif 1351 Forward the packet to OutgoingPortList. 1352 END 1354 If there is forwarding state for G, then forward the packet to 1355 olist(G) while observing the general rules above in Section 2.12 1357 [BIDIR-PIM] specifies how olist(G) is constructed. 1359 Appendix B. Example Network Scenario 1361 Let us consider the scenario in Figure 3. 1363 An Example Network for Triggering Assert 1365 +------+ AC3 +------+ 1366 | PE2 |-----| CE3 | 1367 /| | +------+ 1368 / +------+ | 1369 / | | 1370 / | | 1371 /PW12 | | 1372 / | /---\ 1373 / |PW23 | S | 1374 / | \---/ 1375 / | | 1376 / | | 1377 / | | 1378 +------+ / +------+ | 1379 +------+ | PE1 |/ PW13 | PE3 | +------+ 1380 | CE1 |-----| |-------------| |-----| CE4 | 1381 +------+ AC1 +------+ +------+ AC4 +------+ 1382 | 1383 |AC2 1384 +------+ 1385 | CE2 | 1386 +------+ 1388 In the examples below, JT(Port,S,G,N) is the downstream Join Expiry 1389 Timer on the specified Port for the (S,G) with upstream neighbor N. 1391 B.1. Pim Snooping Example 1393 In the network depicted in Figure 3, S is the source of a multicast 1394 stream (S,G). CE1 and CE2 both have two ECMP routes to reach the 1395 source. 1397 1. CE1 Sends a Join(S,G) with Upstream Neighbor(S,G) = CE3. 1398 2. PE1 snoops on the Join(S,G) and builds forwarding states since it 1399 is received on an AC. It also floods the Join(S,G) in the VPLS. 
B.1.  PIM Snooping Example

   In the network depicted in Figure 3, S is the source of a multicast
   stream (S,G).  CE1 and CE2 both have two ECMP routes to reach the
   source.

   1.  CE1 sends a Join(S,G) with Upstream Neighbor(S,G) = CE3.

   2.  PE1 snoops on the Join(S,G) and builds forwarding state since
       the Join(S,G) is received on an AC.  It also floods the
       Join(S,G) in the VPLS.

       PE2 snoops on the Join(S,G) and builds forwarding state since
       the Join(S,G) is targeting a neighbor residing on an AC.  PE3
       does not create forwarding state for (S,G) because this is a
       PW-only join and there is neither existing (*,G) state with an
       AC in UpstreamPorts(*,G) nor an existing (S,G) state with an AC
       in UpstreamPorts(S,G) (a sketch of this check follows the
       example).  Both PE2 and PE3 also flood the Join(S,G) in the
       VPLS.

       The resulting states at the PEs are as follows:

       At PE1:
          JT(AC1,S,G,CE3) = JP_HoldTime
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW12 }
          OutgoingPortList(S,G) = { AC1, PW12 }

       At PE2:
          JT(PW12,S,G,CE3) = JP_HoldTime
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { AC3 }
          OutgoingPortList(S,G) = { PW12, AC3 }

       At PE3:
          No (S,G) state

   3.  The multicast stream (S,G) flows along
       CE3 -> PE2 -> PE1 -> CE1.

   4.  Now CE2 sends a Join(S,G) with Upstream Neighbor(S,G) = CE4.

   5.  All PEs snoop on the Join(S,G), build forwarding state, and
       flood the Join(S,G) in the VPLS.  Note that for PE2, even though
       this is a PW-only join, forwarding state is built on this
       Join(S,G) since PE2 has existing (S,G) state with an AC in
       UpstreamPorts(S,G).

       The resulting states at the PEs:

       At PE1:
          JT(AC1,S,G,CE3) = active
          JT(AC2,S,G,CE4) = JP_HoldTime
          UpstreamNeighbors(S,G) = { CE3, CE4 }
          UpstreamPorts(S,G) = { PW12, PW13 }
          OutgoingPortList(S,G) = { AC1, PW12, AC2, PW13 }

       At PE2:
          JT(PW12,S,G,CE4) = JP_HoldTime
          JT(PW12,S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3, CE4 }
          UpstreamPorts(S,G) = { AC3, PW23 }
          OutgoingPortList(S,G) = { PW12, AC3, PW23 }

       At PE3:
          JT(PW13,S,G,CE4) = JP_HoldTime
          UpstreamNeighbors(S,G) = { CE4 }
          UpstreamPorts(S,G) = { AC4 }
          OutgoingPortList(S,G) = { PW13, AC4 }

   6.  The multicast stream (S,G) flows into the VPLS from the two CEs,
       CE3 and CE4.  PE2 forwards the stream received from CE3 to PW23,
       and PE3 forwards the stream to AC4.  This enables the CE routers
       to trigger assert election.  Let us say CE3 becomes the assert
       winner.

   7.  CE3 sends an Assert message to the VPLS.  The PEs flood the
       Assert message without examining it.

   8.  CE4 stops sending the multicast stream to the VPLS.

   9.  CE2 notices an RPF change due to the Assert and sends a
       Prune(S,G) with Upstream Neighbor = CE4.  CE2 also sends a
       Join(S,G) with Upstream Neighbor = CE3.

   10. All the PEs start a prune-pend timer on the ports on which they
       received the Prune(S,G).  When the prune-pend timer expires, all
       PEs remove the downstream (S,G,CE4) states.

       Resulting states at the PEs:

       At PE1:
          JT(AC1,S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW12 }
          OutgoingPortList(S,G) = { AC1, AC2, PW12 }

       At PE2:
          JT(PW12,S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { AC3 }
          OutgoingPortList(S,G) = { PW12, AC3 }

       At PE3:
          JT(PW13,S,G,CE3) = JP_HoldTime
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW23 }
          OutgoingPortList(S,G) = { PW13, PW23 }

       Note that at this point, since there is no AC in
       OutgoingPortList(S,G) at PE3 and no (*,G) or (S,G) state with an
       AC in UpstreamPorts(*,G) or UpstreamPorts(S,G) respectively, the
       existing (S,G) state at PE3 can also be removed.  So finally:

       At PE3:
          No (S,G) state

   Note that at the end of the assert election, there should be no
   duplicate traffic forwarded downstream, and traffic should flow only
   on the desired path.  Also note that there are no unnecessary (S,G)
   states on PE3 after the assert election.
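   The sketch below (illustrative only; all names are hypothetical and
   not part of this specification) captures the check applied at each
   PE in the example above when a Join is snooped: a join received on
   an AC, or targeting a neighbor on an AC, always creates forwarding
   state, while a PW-only join creates state only if existing (*,G) or
   (S,G) state already has an AC in its UpstreamPorts.

      # Illustrative sketch only: the state-creation check applied at
      # each PE in the example above.  All names are hypothetical and
      # not part of this specification.

      def builds_forwarding_state(arrived_on_ac, targets_neighbor_on_ac,
                                  upstream_ports_star_g,
                                  upstream_ports_s_g, ac_ports):
          # A join received on an AC, or targeting a neighbor residing
          # on an AC, always creates forwarding state on a snooping PE.
          if arrived_on_ac or targets_neighbor_on_ac:
              return True
          # Otherwise this is a PW-only join: state is created only if
          # some existing (*,G) or (S,G) state already has an AC in its
          # UpstreamPorts (as at PE2 in step 5 above).
          acs = set(ac_ports)
          return bool(acs & set(upstream_ports_star_g)) or bool(
              acs & set(upstream_ports_s_g))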
B.2.  PIM Proxy Example with (S,G) / (*,G) Interaction

   In the same network, let us assume CE4 is the Upstream Neighbor
   towards the RP for G.

   JPST(S,G,N) is the JP sending timer for the (S,G) with upstream
   neighbor N.

   1.  CE1 sends a Join(S,G) with Upstream Neighbor(S,G) = CE3.

   2.  PE1 consumes the Join(S,G) and builds forwarding state since the
       Join(S,G) is received on an AC.

       PE2 consumes the Join(S,G) and builds forwarding state since the
       Join(S,G) is targeting a neighbor residing on an AC.

       PE3 consumes the Join(S,G) but does not create forwarding state
       for (S,G), since this is a PW-only join and there is neither
       existing (*,G) state with an AC in UpstreamPorts(*,G) nor an
       existing (S,G) state with an AC in UpstreamPorts(S,G).

       The resulting states at the PEs are as follows:

       PE1 states:
          JT(AC1,S,G,CE3) = JP_HoldTime
          JPST(S,G,CE3) = t_periodic
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW12 }
          OutgoingPortList(S,G) = { AC1, PW12 }

       PE2 states:
          JT(PW12,S,G,CE3) = JP_HoldTime
          JPST(S,G,CE3) = t_periodic
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { AC3 }
          OutgoingPortList(S,G) = { PW12, AC3 }

       PE3 states:
          No (S,G) state

       Joins are triggered as follows:

       PE1 triggers a Join(S,G) targeting CE3.  Since the Join(S,G) was
       received on an AC and is targeting a neighbor that resides
       across a PW, the triggered Join(S,G) is sent on all PWs.

       PE2 triggers a Join(S,G) targeting CE3.  Since the Join(S,G) is
       targeting a neighbor residing on an AC, it only sends the join
       on AC3.

       PE3 ignores the Join(S,G), since this is a PW-only join and
       there is neither existing (*,G) state with an AC in
       UpstreamPorts(*,G) nor an existing (S,G) state with an AC in
       UpstreamPorts(S,G).

   3.  The multicast stream (S,G) flows along CE3 -> PE2 -> PE1 -> CE1.

   4.  Now let us say CE2 sends a Join(*,G) with
       UpstreamNeighbor(*,G) = CE4.

   5.  PE1 consumes the Join(*,G) and builds forwarding state since the
       Join(*,G) is received on an AC.

       PE2 consumes the Join(*,G), and though this is a PW-only join,
       forwarding state is built on this Join(*,G) since PE2 has
       existing (S,G) state with an AC in UpstreamPorts(S,G).  However,
       since this is a PW-only join, PE2 only adds the PW towards PE3
       (PW23) into UpstreamPorts(*,G) and hence into
       OutgoingPortList(*,G).  It does not add the PW towards PE1
       (PW12) into OutgoingPortList(*,G).

       PE3 consumes the Join(*,G) and builds forwarding state since the
       Join(*,G) is targeting a neighbor residing on an AC.
       The resulting states at the PEs are as follows:

       PE1 states:
          JT(AC2,*,G,CE4) = JP_HoldTime
          JPST(*,G,CE4) = t_periodic
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { PW13 }
          OutgoingPortList(*,G) = { AC2, PW13 }

          JT(AC1,S,G,CE3) = active
          JPST(S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW12 }
          OutgoingPortList(S,G) = { AC1, PW12, PW13 }

       PE2 states:
          JT(PW12,*,G,CE4) = JP_HoldTime
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { PW23 }
          OutgoingPortList(*,G) = { PW23 }

          JT(PW12,S,G,CE3) = active
          JPST(S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { AC3 }
          OutgoingPortList(S,G) = { PW12, AC3, PW23 }

       PE3 states:
          JT(PW13,*,G,CE4) = JP_HoldTime
          JPST(*,G,CE4) = t_periodic
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { AC4 }
          OutgoingPortList(*,G) = { PW13, AC4 }

       Joins are triggered as follows:

       PE1 triggers a Join(*,G) targeting CE4.  Since the Join(*,G) was
       received on an AC and is targeting a neighbor that resides
       across a PW, the triggered Join(*,G) is sent on all PWs.

       PE2 does not trigger a Join(*,G) based on this join, since this
       is a PW-only join.

       PE3 triggers a Join(*,G) targeting CE4.  Since the Join(*,G) is
       targeting a neighbor residing on an AC, it only sends the join
       on AC4.

   6.  In case traffic is not flowing yet (i.e., step 3 is delayed
       until after step 6) and in the interim JPST(S,G,CE3) on PE1
       expires, PE1 sends a refresh Join(S,G) targeting CE3.  Since the
       refresh Join(S,G) is targeting a neighbor that resides across a
       PW, the refresh Join(S,G) is sent on all PWs.

   7.  Note that PE1 refreshes its JT timer based on reception of
       refresh joins from CE1 and CE2.

       PE2 consumes the Join(S,G) and refreshes the JT(PW12,S,G,CE3)
       timer.

       PE3 consumes the Join(S,G).  It also builds forwarding state on
       this Join(S,G), even though this is a PW-only join, since PE3
       now has existing (*,G) state with an AC in UpstreamPorts(*,G).
       However, since this is a PW-only join, PE3 only adds the PW
       towards PE2 (PW23) into UpstreamPorts(S,G) and hence into
       OutgoingPortList(S,G).  It does not add the PW towards PE1
       (PW13) into OutgoingPortList(S,G).

       PE3 states:
          JT(PW13,*,G,CE4) = active
          JPST(*,G,CE4) = active
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { AC4 }
          OutgoingPortList(*,G) = { PW13, AC4 }

          JT(PW13,S,G,CE3) = JP_HoldTime
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW23 }
          OutgoingPortList(S,G) = { PW13, AC4, PW23 }

       Joins are triggered as follows:

       PE2 already has (S,G) state, so it does not trigger a Join(S,G)
       based on reception of this refresh join.

       PE3 does not trigger a Join(S,G) based on this join, since this
       is a PW-only join.

   8.  The multicast stream (S,G) flows into the VPLS from the two CEs,
       CE3 and CE4.  PE2 forwards the stream received from CE3 to PW12
       and PW23.  At the same time, PE3 forwards the stream received
       from CE4 to PW13 and PW23.

       The stream received over PW12 and PW13 is forwarded by PE1 to
       AC1 and AC2.

       The stream received by PE3 over PW23 is forwarded to AC4.  The
       stream received by PE2 over PW23 is forwarded to AC3.  Either of
       these enables the CE routers to trigger assert election.
   9.  CE3 and/or CE4 send(s) Assert message(s) to the VPLS.  The PEs
       flood the Assert message(s) without examining them.

   10. CE3 becomes the (S,G) assert winner, and CE4 stops sending the
       multicast stream to the VPLS.

   11. CE2 notices an RPF change due to the Assert and sends a
       Prune(S,G,rpt) with Upstream Neighbor = CE4.

   12. PE1 consumes the Prune(S,G,rpt), and since
       PruneDesired(S,G,rpt,CE4) is TRUE, it triggers a Prune(S,G,rpt)
       to CE4.  Since the prune is targeting a neighbor across a PW, it
       is sent on all PWs.

       PE2 consumes the Prune(S,G,rpt) and does not trigger any prune
       based on this Prune(S,G,rpt), since this was a PW-only prune.

       PE3 consumes the Prune(S,G,rpt), and since
       PruneDesired(S,G,rpt,CE4) is TRUE, it sends the Prune(S,G,rpt)
       on AC4.

       PE1 states:
          JT(AC2,*,G,CE4) = active
          JPST(*,G,CE4) = active
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { PW13 }
          OutgoingPortList(*,G) = { AC2, PW13 }

          JT(AC2,S,G,CE4) = JP_HoldTime with S,G,rpt prune flag
          JPST(S,G,CE4) = none, since this is sent along with the
                          Join(*,G) to CE4 based on JPST(*,G,CE4)
                          expiry
          UpstreamPorts(S,G,rpt) = { PW13 }
          UpstreamNeighbors(S,G,rpt) = { CE4 }

          JT(AC1,S,G,CE3) = active
          JPST(S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW12 }
          OutgoingPortList(S,G) = { AC1, PW12, AC2 }

       PE2 states:
          JT(PW12,*,G,CE4) = active
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { PW23 }
          OutgoingPortList(*,G) = { PW23 }

          JT(PW12,S,G,CE4) = JP_HoldTime with S,G,rpt prune flag
          JPST(S,G,CE4) = none, since this was created off a PW-only
                          prune
          UpstreamPorts(S,G,rpt) = { PW23 }
          UpstreamNeighbors(S,G,rpt) = { CE4 }

          JT(PW12,S,G,CE3) = active
          JPST(S,G,CE3) = active
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { AC3 }
          OutgoingPortList(S,G) = { PW12, AC3 }

       PE3 states:
          JT(PW13,*,G,CE4) = active
          JPST(*,G,CE4) = active
          UpstreamNeighbors(*,G) = { CE4 }
          UpstreamPorts(*,G) = { AC4 }
          OutgoingPortList(*,G) = { PW13, AC4 }

          JT(PW13,S,G,CE4) = JP_HoldTime with S,G,rpt prune flag
          JPST(S,G,CE4) = none, since this is sent along with the
                          Join(*,G) to CE4 based on JPST(*,G,CE4)
                          expiry
          UpstreamNeighbors(S,G,rpt) = { CE4 }
          UpstreamPorts(S,G,rpt) = { AC4 }

          JT(PW13,S,G,CE3) = active
          JPST(S,G,CE3) = none, since this state is created by a
                          PW-only join
          UpstreamNeighbors(S,G) = { CE3 }
          UpstreamPorts(S,G) = { PW23 }
          OutgoingPortList(S,G) = { PW23 }

   Even in this example, at the end of the (S,G) / (*,G) assert
   election, there should be no duplicate traffic forwarded downstream,
   and traffic should flow only to the desired CEs.

   However, the reason there is no duplicate traffic is that one of the
   CEs stops sending traffic due to the assert, not that the PEs lack
   the forwarding state to forward it.

Authors' Addresses

   Olivier Dornon
   Nokia
   50 Copernicuslaan
   Antwerp, B2018

   Email: olivier.dornon@nokia.com


   Jayant Kotalwar
   Nokia
   701 East Middlefield Rd.
   Mountain View, CA 94043

   Email: jayant.kotalwar@nokia.com


   Venu Hemige

   Email: vhemige@gmail.com


   Ray Qiu
   mistnet.io

   Email: ray@mistnet.io


   Jeffrey Zhang
   Juniper Networks, Inc.
   10 Technology Park Drive
   Westford, MA 01886

   Email: zzhang@juniper.net