PALS Workgroup                                                 O. Dornon
Internet-Draft                                               J. Kotalwar
Intended status: Informational                           Alcatel-Lucent
Expires: April 18, 2016                                        V. Hemige

                                                                  R. Qiu
                                                                Z. Zhang
                                                  Juniper Networks, Inc.
                                                        October 16, 2015

  Protocol Independent Multicast (PIM) over Virtual Private LAN Service
                                  (VPLS)
                   draft-ietf-pals-vpls-pim-snooping-01

Abstract

This document describes the procedures and recommendations for
Virtual Private LAN Service (VPLS) Provider Edges (PEs) to facilitate
replication of multicast traffic to only certain ports (behind which
there are interested Protocol Independent Multicast (PIM) routers
and/or Internet Group Management Protocol (IGMP) hosts) via Protocol
Independent Multicast (PIM) snooping and proxying.

With PIM snooping, PEs passively listen to certain PIM control
messages to build control and forwarding states while transparently
flooding those messages.  With PIM proxying, Provider Edges (PEs) do
not flood PIM Join/Prune messages but only generate their own and
send out of certain ports, based on the control states built from
downstream Join/Prune messages.  PIM proxying is required when PIM
Join suppression is enabled on the Customer Equipment (CE) devices
and useful to reduce PIM control traffic in a VPLS domain.

The document also describes PIM relay, which can be viewed as light-
weight proxying, where all downstream Join/Prune messages are simply
forwarded out of certain ports but not flooded to avoid triggering
PIM Join suppression on CE devices.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).
Note that other groups may also distribute 52 working documents as Internet-Drafts. The list of current Internet- 53 Drafts is at http://datatracker.ietf.org/drafts/current/. 55 Internet-Drafts are draft documents valid for a maximum of six months 56 and may be updated, replaced, or obsoleted by other documents at any 57 time. It is inappropriate to use Internet-Drafts as reference 58 material or to cite them other than as "work in progress." 60 This Internet-Draft will expire on April 18, 2016. 62 Copyright Notice 64 Copyright (c) 2015 IETF Trust and the persons identified as the 65 document authors. All rights reserved. 67 This document is subject to BCP 78 and the IETF Trust's Legal 68 Provisions Relating to IETF Documents 69 (http://trustee.ietf.org/license-info) in effect on the date of 70 publication of this document. Please review these documents 71 carefully, as they describe your rights and restrictions with respect 72 to this document. Code Components extracted from this document must 73 include Simplified BSD License text as described in Section 4.e of 74 the Trust Legal Provisions and are provided without warranty as 75 described in the Simplified BSD License. 77 Table of Contents 79 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 80 1.1. Multicast Snooping in VPLS . . . . . . . . . . . . . . . 4 81 1.2. Assumptions . . . . . . . . . . . . . . . . . . . . . . . 5 82 1.3. Definitions . . . . . . . . . . . . . . . . . . . . . . . 5 83 2. PIM Snooping for VPLS . . . . . . . . . . . . . . . . . . . . 6 84 2.1. PIM protocol background . . . . . . . . . . . . . . . . . 6 85 2.2. General Rules for PIM Snooping in VPLS . . . . . . . . . 7 86 2.2.1. Preserving Assert Trigger . . . . . . . . . . . . . . 7 87 2.3. Some Considerations for PIM Snooping . . . . . . . . . . 8 88 2.3.1. Scaling . . . . . . . . . . . . . . . . . . . . . . . 8 89 2.3.2. IPv6 . . . . . . . . . . . . . . . . . . . . . . . . 9 90 2.3.3. PIM-SM (*,*,RP) . . . . . . . . . . . . . . . . . . . 9 91 2.4. PIM Snooping vs PIM Proxying . . . . . . . . . . . . . . 9 92 2.4.1. Differences between PIM Snooping, Relay and Proxying 9 93 2.4.2. PIM Control Message Latency . . . . . . . . . . . . . 10 94 2.4.3. When to Snoop and When to Proxy . . . . . . . . . . . 11 95 2.5. Discovering PIM Routers . . . . . . . . . . . . . . . . . 12 96 2.6. PIM-SM and PIM-SSM . . . . . . . . . . . . . . . . . . . 13 97 2.6.1. Building PIM-SM Snooping States . . . . . . . . . . . 13 98 2.6.2. Explanation for per (S,G,N) states . . . . . . . . . 16 99 2.6.3. Receiving (*,G) PIM-SM Join/Prune Messages . . . . . 16 100 2.6.4. Receiving (S,G) PIM-SM Join/Prune Messages . . . . . 18 101 2.6.5. Receiving (S,G,rpt) Join/Prune Messages . . . . . . . 20 102 2.6.6. Sending Join/Prune Messages Upstream . . . . . . . . 20 103 2.7. Bidirectional-PIM (BIDIR-PIM) . . . . . . . . . . . . . . 21 104 2.8. Interaction with IGMP Snooping . . . . . . . . . . . . . 22 105 2.9. PIM-DM . . . . . . . . . . . . . . . . . . . . . . . . . 22 106 2.9.1. Building PIM-DM Snooping States . . . . . . . . . . . 22 107 2.9.2. PIM-DM Downstream Per-Port PIM(S,G,N) State Machine . 23 108 2.9.3. Triggering ASSERT election in PIM-DM . . . . . . . . 23 109 2.10. PIM Proxy . . . . . . . . . . . . . . . . . . . . . . . . 23 110 2.10.1. Upstream PIM Proxy behavior . . . . . . . . . . . . 23 111 2.11. Directly Connected Multicast Source . . . . . . . . . . . 24 112 2.12. Data Forwarding Rules . . . . . . . . . . . . . . . . . . 24 113 2.12.1. 
PIM-SM Data Forwarding Rules . . . . . . . . . . . . 25
     2.12.2. PIM-DM Data Forwarding Rules . . . . . . . . . . . . 26
   3. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 27
   4. Security Considerations . . . . . . . . . . . . . . . . . . . 27
   5. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 27
   6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 28
   7. References . . . . . . . . . . . . . . . . . . . . . . . . . 28
     7.1. Normative References . . . . . . . . . . . . . . . . . . 28
     7.2. Informative References . . . . . . . . . . . . . . . . . 28
   Appendix A. BIDIR-PIM Thoughts . . . . . . . . . . . . . . . . . 29
     A.1. BIDIR-PIM Data Forwarding Rules . . . . . . . . . . . . . 29
   Appendix B. Example Network Scenario . . . . . . . . . . . . . . 30
     B.1. PIM Snooping Example . . . . . . . . . . . . . . . . . . 31
     B.2. PIM Proxy Example with (S,G) / (*,G) interaction . . . . 34
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 39

1.  Introduction

In Virtual Private LAN Service (VPLS), the Provider Edge (PE) devices
provide a logical interconnect such that Customer Edge (CE) devices
belonging to a specific VPLS instance appear to be connected by a
single LAN.  The Forwarding Information Base for a VPLS instance is
populated dynamically by MAC address learning.  Once a unicast MAC
address is learned and associated with a particular Attachment
Circuit (AC) or PseudoWire (PW), a frame destined to that MAC address
only needs to be sent on that AC or PW.

For a frame not addressed to a known unicast MAC address, flooding
has to be used.  This happens with the following so-called BUM
(Broadcast, Unknown unicast, Multicast) traffic:

o  B: The destination MAC address is a broadcast address,

o  U: The destination MAC address is unknown (has not been learned),

o  M: The destination MAC address is a multicast address.

Multicast frames are flooded because a PE cannot know where
corresponding multicast group members reside.  VPLS solutions (i.e.,
[VPLS-LDP] and [VPLS-BGP]) perform replication for multicast traffic
at the ingress PE devices.  As stated in the VPLS Multicast
Requirements document [VPLS-MCAST-REQ], there are two issues with
VPLS multicast today:

o  A. Multicast traffic is replicated to non-member sites.

o  B. Replication over PWs that share a physical path.

Issue A can be solved by multicast snooping - PEs learn sites with
multicast group members by snooping multicast protocol control
messages on ACs and forward IP multicast traffic only to member
sites.  This document describes the procedures to achieve this when
the CE devices form PIM adjacencies with each other.  Issue B is
outside the scope of this document and is discussed in [VPLS-MCAST].

While this document is written in the context of VPLS, the procedures
also apply to regular layer-2 switches interconnected by physical
connections, albeit that is outside the scope of this document.  In
that case, the PW-related concepts and procedures simply do not
apply.
1.1.  Multicast Snooping in VPLS

IGMP snooping procedures described in [IGMP-SNOOP] make sure that IP
multicast traffic is only sent on the following:

o  Attachment Circuits (ACs) connecting to hosts that report related
   group membership

o  ACs connecting to routers that join related multicast groups

o  PseudoWires (PWs) connecting to remote PEs that have the above-
   described ACs

Notice that traffic is always sent on ports that have point-to-point
connections to routers or that are attached to a LAN on which there
is a router, even those on which there are no snooped group
memberships, because IGMP snooping alone cannot determine if there
are interested receivers beyond those routers.  To further restrict
traffic sent to those routers, PIM snooping can be used.  This
document describes the procedures for PIM snooping, including the
rules when both IGMP and PIM snooping are enabled in a VPLS instance,
which are elaborated in Sections 2.8 and 2.11.

Note that for both IGMP and PIM, the term snooping is used loosely,
referring to the fact that a layer-2 device peeks into layer-3
routing protocol messages to build relevant control and forwarding
states.  Depending on how the control messages are handled
(transparently flooded, selectively forwarded, aggregated), the
procedure/process may be called snooping or proxying in different
contexts.

Unless explicitly noted, the procedures in this document are used for
either PIM snooping or PIM proxying, and we will largely refer to PIM
snooping in this document.  The PIM proxying specific procedures are
described in Section 2.6.6.  Differences that need to be observed
while implementing one or the other and recommendations on which
method to employ in different scenarios are noted in Section 2.4.

This document also describes PIM relay, which can be viewed as light-
weight PIM proxying.  Unless explicitly noted, in the rest of the
document proxying implicitly includes relay as well.  Please refer to
Section 2.4.1 for an overview of the differences between snooping,
proxying and relay.

1.2.  Assumptions

This document assumes that the reader has a good understanding of the
PIM protocols.  This document is written in the same style as the PIM
RFCs to help correlate the concepts and to make it easier to follow.
In order to avoid replicating text related to PIM protocol handling
from the PIM RFCs, this document cross-references corresponding
definitions and procedures in these RFCs.  Deviations in protocol
handling specific to PIM snooping are specified in this document.

1.3.  Definitions

There are several definitions referenced in this document that are
well described in the PIM RFCs [PIM-SM], [BIDIR-PIM], [PIM-DM].  The
following definitions and abbreviations are used throughout this
document:

o  A port is defined as either an attachment circuit (AC) or a
   pseudowire (PW).

o  When we say a PIM message is received on a PE port, it means that
   the PE is processing the message for snooping/proxying or
   relaying.

Abbreviations used in the document:

o  S: IP address of the multicast source.

o  G: IP address of the multicast group.

o  N: Upstream neighbor field in a Join/Prune/Graft message.

o  Port(N): Port on which neighbor N is learnt, i.e., the port on
   which N's Hellos are received.
o  rpt: Rendezvous Point Tree (RP Tree).

o  PIM-DM: Protocol Independent Multicast - Dense Mode.

o  PIM-SM: Protocol Independent Multicast - Sparse Mode.

o  PIM-SSM: Protocol Independent Multicast - Source Specific Mode.

Other definitions are explained in the sections where they are
introduced.

2.  PIM Snooping for VPLS

2.1.  PIM protocol background

PIM is a multicast routing protocol running between routers, which
are CE devices in a VPLS.  It uses the unicast routing table to
provide reverse path information for building multicast trees.  There
are a few variants of PIM.  In [PIM-DM], multicast datagrams are
pushed towards downstream neighbors, similar to a broadcast
mechanism, but in areas of the network where there are no group
members, routers prune back branches of the multicast tree towards
the source.  Unlike PIM-DM, other PIM flavors (PIM-SM [PIM-SM], PIM-
SSM [PIM-SSM], and BIDIR-PIM [BIDIR-PIM]) employ a pull methodology
via explicit joins instead of the push and prune technique.

PIM routers periodically exchange Hello messages to discover and
maintain stateful sessions with neighbors.  After neighbors are
discovered, PIM routers can signal their intentions to join or prune
specific multicast groups.  This is accomplished by having downstream
routers send an explicit Join/Prune message (for the sake of
generalization, consider Graft messages for PIM-DM as Join messages)
to their corresponding upstream router.  The Join/Prune message can
be group specific (*,G) or group and source specific (S,G).

2.2.  General Rules for PIM Snooping in VPLS

The following rules MUST be followed for the correct operation of PIM
snooping:

o  PIM snooping MUST NOT affect the operation of customer layer-2
   protocols (e.g., BPDUs) or layer-3 protocols.

o  PIM messages and multicast data traffic forwarded by PEs MUST
   follow the split-horizon rule for mesh PWs.

o  PIM snooping states in a PE MUST be per VPLS instance.

o  PIM assert triggers MUST be preserved to the extent necessary to
   avoid sending duplicate traffic to the same PE (see
   Section 2.2.1).

2.2.1.  Preserving Assert Trigger

In PIM-SM/DM, there are scenarios where multiple routers could be
forwarding the same multicast traffic on a LAN.  When this happens,
routers use the PIM Assert election process, by exchanging PIM Assert
messages, to ensure that only the Assert winner forwards traffic on
the LAN.  The Assert election is a data-driven event and happens only
if a router sees traffic on the interface to which it should be
forwarding the traffic.  In the case of VPLS with PIM snooping, two
routers may forward the same multicast datagrams at the same time,
but each copy may reach a different set of PEs, and that is
acceptable from the point of view of avoiding duplicate traffic.  If
the two copies may reach the same PE, then the sending routers must
be able to see each other's traffic, in order to trigger Assert
election and stop duplicate traffic.  To achieve that, PEs enabled
with PIM-SSM/SM snooping MUST forward multicast traffic for an
(S,G)/(*,G) not only on the ports on which they snooped
Joins(S,G)/Joins(*,G), but also towards the upstream neighbor(s).  In
other words, the ports on which the upstream neighbors are learnt
must be added to the outgoing port list along with the ports on which
Joins are snooped.
Please refer to Section 2.6.1 for the rules that determine the set of
upstream neighbors for a particular (x,G).

Similarly, PIM-DM snooping SHOULD make sure that asserts can be
triggered (Section 2.9.3).

The above logic needs to be facilitated without breaking VPLS split-
horizon forwarding rules.  That is, traffic should not be forwarded
on the port on which it was received, and traffic arriving on a PW
MUST NOT be forwarded onto other PW(s).

2.3.  Some Considerations for PIM Snooping

The PIM snooping solution described here requires a PE to examine and
operate on only PIM Hello and PIM Join/Prune packets.  The PE does
not need to examine any other PIM packets.

Most of the PIM snooping procedures for handling Hello/Join/Prune
messages are very similar to those executed in a PIM router.
However, the PE does not need to have any routing tables, as is
required in PIM multicast routing.  It knows how to forward Join/
Prunes only by looking at the Upstream Neighbor field in the Join/
Prune packets.

The PE does not need to know about Rendezvous Points (RP) and does
not have to maintain any RP Set.  All that is transparent to a PIM
snooping PE.

In the following sub-sections, we list some considerations and
observations for the implementation of PIM snooping in VPLS.

2.3.1.  Scaling

PIM snooping needs to be employed on ACs at the downstream PEs (PEs
receiving multicast traffic across the VPLS core) to prevent traffic
from being sent out of ACs unnecessarily.  PIM snooping techniques
can also be employed on PWs at the upstream PEs (PEs receiving
traffic from local ACs in a hierarchical VPLS) to prevent traffic
from being sent to PEs unnecessarily.  This may work well for small
to medium scale deployments.  However, if there are a large number of
VPLS instances with a large number of PEs per instance, then the
amount of snooping required at the upstream PEs can overwhelm them.

There are two methods to reduce the burden on the upstream PEs.  One
is to use PIM proxying as described in Section 2.6.6, to reduce the
control messages forwarded by a PE.  The other is not to snoop on the
PWs at all, but instead to have PEs signal the snooped states to
other PEs out of band via BGP, as described in [VPLS-MCAST].  In this
document, it is assumed that snooping is performed on PWs.

2.3.2.  IPv6

In VPLS, PEs forward Ethernet frames received from CEs and as such
are agnostic of the layer-3 protocol used by the CEs.  However, a PIM
snooping PE has to look deeper into the IP and PIM packets and build
snooping state based on that.  The PIM protocol specifications handle
both IPv4 and IPv6.  The specification for PIM snooping in this
document can be applied to both IPv4 and IPv6 payloads.

2.3.3.  PIM-SM (*,*,RP)

This document does not address (*,*,RP) states in the VPLS network.
Although [PIM-SM] specifies that routers must support (*,*,RP)
states, very few implementations support it in actual deployments,
and it is being removed from the PIM protocol in its ongoing
advancement process in the IETF.  Given that, this document omits the
specification relating to (*,*,RP) support.

2.4.  PIM Snooping vs PIM Proxying

This document has previously alluded to PIM snooping/relay/proxying.
Details on the PIM relay/proxying solution are discussed in
Section 2.6.6.
In this section, a brief description and comparison 400 are given. 402 2.4.1. Differences between PIM Snooping, Relay and Proxying 404 Differences between PIM snooping and relay/proxying can be summarized 405 as the following: 407 +--------------------+---------------------+-----------------------+ 408 | PIM snooping | PIM relay | PIM proxying | 409 +====================|=====================|=======================+ 410 | Join/Prune messages| Join/Prune messages | Join/Prune messages | 411 | snooped and flooded| snooped; forwarded | consumed. Regenerated | 412 | according to VPLS | as is out of certain| ones sent out of | 413 | flooding procedures| upstream ports | certain upstream ports| 414 +--------------------+---------------------+-----------------------+ 415 | No PIM packets | No PIM packets | New Join/Prune | 416 | generated. | generated | messages generated | 417 +--------------------+---------------------+-----------------------+ 418 | CE Join suppression| CE Join Suppression | CE Join suppression | 419 | not allowed | allowed | allowed | 420 +--------------------+---------------------+-----------------------+ 422 Note that the differences apply only to PIM Join/Prune messages. PIM 423 Hello messages are snooped and flooded in all cases. 425 Other than the above differences, most of the procedures are common 426 to PIM snooping and PIM relay/proxying, unless specifically stated 427 otherwise. 429 Pure PIM snooping PEs simply snoop on PIM packets as they are being 430 forwarded in the VPLS. As such they truly provide transparent LAN 431 services since no customer packets are modified or consumed or new 432 packets introduced in the VPLS. It is also simpler to implement than 433 PIM proxying. However for PIM snooping to work correctly, it is a 434 requirement that CE routers MUST disable Join suppression in the 435 VPLS. Otherwise, most of the CE routers with interest in a given 436 multicast data stream will fail to send J/P messages for that stream, 437 and the PEs will not be able to tell which ACs and/or PWs have 438 listeners for that stream. 440 Given that a large number of existing CE deployments do not support 441 disabling of Join suppression and given the operational complexity 442 for a provider to manage disabling of Join suppression in the VPLS, 443 it becomes a difficult solution to deploy. Another disadvantage of 444 PIM snooping is that it does not scale as well as PIM proxying. If 445 there are a large number of CEs in a VPLS, then every CE will see 446 every other CE's Join/Prune messages. 448 PIM relay/proxying has the advantage that it does not require Join 449 suppression to be disabled in the VPLS. Multicast as a VPLS service 450 can be very easily provided without requiring any changes on the CE 451 routers. PIM relay/proxying helps scale VPLS Multicast since Join/ 452 Prune messages are only sent to certain upstream ports instead of 453 flooded, and in case of full proxying (vs. relay) the PEs 454 intelligently generate only one Join/Prune message for a given 455 multicast stream. 457 PIM proxying however loses the transparency argument since Join/ 458 Prunes could get modified or even consumed at a PE. Also, new 459 packets could get introduced in the VPLS. However, this loss of 460 transparency is limited to PIM Join/Prune packets. It is in the 461 interest of optimizing multicast in the VPLS and helping a VPLS 462 network scale much better. Data traffic will still be completely 463 transparent. 465 2.4.2. 
PIM Control Message Latency 467 A PIM snooping/relay/proxying PE snoops on PIM Hello packets while 468 transparently flooding them in the VPLS. As such there is no latency 469 introduced by the VPLS in the delivery of PIM Hello packets to remote 470 CEs in the VPLS. 472 A PIM snooping PE snoops on PIM Join/Prune packets while 473 transparently flooding them in the VPLS. There is no latency 474 introduced by the VPLS in the delivery of PIM Join/Prune packets when 475 PIM snooping is employed. 477 A PIM relay/proxying PE does not simply flood PIM Join/Prune packets. 478 This can result in additional latency for a downstream CE to receive 479 multicast traffic after it has sent a Join. When a downstream CE 480 prunes a multicast stream, the traffic SHOULD stop flowing to the CE 481 with no additional latency introduced by the VPLS. 483 Performing only proxying of Join/Prune and not Hello messages keeps 484 the PE behavior very similar to that of a PIM router without 485 introducing too much additional complexity. It keeps the PIM 486 proxying solution fairly simple. Since Join/Prunes are forwarded by 487 a PE along the slow-path and all other PIM packet types are forwarded 488 along the fast-path, it is very likely that packets forwarded along 489 the fast-path will arrive "ahead" of Join/Prune packets at a CE 490 router (note the stress on the fact that fast-path messages will 491 never arrive after Join/Prunes). Of particular importance are Hello 492 packets sent along the fast-path. We can construct a variety of 493 scenarios resulting in out of order delivery of Hellos and Join/Prune 494 messages. However, there should be no deviation from normal expected 495 behavior observed at the CE router receiving these messages out of 496 order. 498 2.4.3. When to Snoop and When to Proxy 500 From the above descriptions, factors that affect the choice of 501 snooping/relay/proxying include: 503 o Whether CEs do Join Suppression or not 505 o Whether Join/Prune latency is critical or not 507 o Whether the scale of PIM protocol message/states in a VPLS 508 requires the scaling benefit of proxying 510 Of the above factors, Join Suppression is the hard one - pure 511 snooping can only be used when Join Suppression is disabled on all 512 CEs. The latency associated with relay/proxying is implementation 513 dependent and may not be a concern at all with a particular 514 implementation. The scaling benefit may not be important either, in 515 that on a real LAN with Explicit Tracking (ET) a PIM router will need 516 to receive and process all PIM Join/Prune messages as well. 518 A PIM router indicates that Join Suppression is disabled if the T-bit 519 is set in the LAN Prune Delay option of its Hello message. If all 520 PIM routers on a LAN set the T-bit, Explicit Tracking is possible, 521 allowing an upstream router to track all the downstream neighbors 522 that have Join states for any (S,G) or (*,G). That has two benefits: 524 o No need for PrunePending process - the upstream router may 525 immediately stop forwarding data when it receives a Prune from the 526 last downstream neighbor, and immediately prune to its upstream if 527 that's for the last downstream interface. 529 o For management purpose, the upstream router knows exactly which 530 downstream routers exist for a particular Join State. 
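As a non-normative illustration of how a PE might use the snooped
T-bit, the following Python sketch checks whether Join Suppression is
disabled on all CEs in a VPLS instance and picks an operating mode
accordingly.  The names (neighbor_db, lan_prune_delay_t_bit,
select_mode) are hypothetical and not part of this specification:

   # Minimal sketch, assuming each snooped Hello is recorded in a
   # hypothetical per-VPLS neighbor database together with the T-bit
   # from its LAN Prune Delay option.

   def join_suppression_disabled(neighbor_db):
       # True only if every PIM neighbor sets the T-bit, i.e. Join
       # Suppression is disabled on all CEs.
       return all(nbr["lan_prune_delay_t_bit"] for nbr in neighbor_db.values())

   def select_mode(neighbor_db):
       if join_suppression_disabled(neighbor_db):
           return "snooping"   # relay would also be acceptable here
       return "proxying"       # proxying or relay; pure snooping would break

This simply mirrors the guidance of this section: pure snooping is
only safe when Join Suppression is known to be disabled on all CEs.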
532 While full proxying can be used with or without Join Suppression on 533 CEs and does not interfere with an upstream CE's bypass of 534 PrunePending process, it does proxy all its downstream CEs as a 535 single one to the upstream, removing the second benefit mentioned 536 above. 538 Therefore, the general rule is that if Join Suppression is enabled on 539 CEs then proxying or relay MUST be used and if Suppression is known 540 to be disabled on all CEs then either snooping, relay, or proxying 541 MAY be used while snooping or relay SHOULD be used. 543 An implementation MAY choose dynamic determination of which mode to 544 use, through the tracking of the above mentioned T-bit in all snooped 545 PIM Hello messages, or MAY simply require static provisioning. 547 2.5. Discovering PIM Routers 549 A PIM snooping PE MUST snoop on PIM Hellos received on ACs and PWs. 550 i.e., the PE transparently floods the PIM Hello while snooping on it. 551 PIM Hellos are used by the snooping PE to discover PIM routers and 552 their characteristics. 554 For each neighbor discovered by a PE, it includes an entry in the PIM 555 Neighbor Database with the following fields: 557 o Layer 2 encapsulation for the Router sending the PIM Hello. 559 o IP Address and address family of the Router sending the PIM Hello. 561 o Port (AC / PW) on which the PIM Hello was received. 563 o Hello TLVs 565 The PE should be able to interpret and act on Hello TLVs currently 566 defined in the PIM RFCs. The TLVs of particular interest in this 567 document are: 569 o Hello-Hold-Time 571 o Tracking Support 573 o DR Priority 575 Please refer to [PIM-SM] for a list of the Hello TLVs. When a PIM 576 Hello is received, the PE MUST reset the neighbor-expiry-timer to 577 Hello-Hold-Time. If a PE does not receive a Hello message from a 578 router within Hello-Hold-Time, the PE MUST remove that neighbor from 579 its PIM Neighbor Database. If a PE receives a Hello message from a 580 router with Hello-Hold-Time value set to zero, the PE MUST remove 581 that router from the PIM snooping state immediately. 583 From the PIM Neighbor Database, a PE MUST be able to use the 584 procedures defined in [PIM-SM] to identify the PIM Designated Router 585 in the VPLS instance. It should also be able to determine if 586 Tracking Support is active in the VPLS instance. 588 2.6. PIM-SM and PIM-SSM 590 The key characteristic of PIM-SM and PIM-SSM is explicit join 591 behavior. In this model, multicast traffic is only forwarded to 592 locations that specifically request it. All the procedures described 593 in this section apply to both PIM-SM and PIM-SSM, except for the fact 594 that there is no (*,G) state in PIM-SSM. 596 2.6.1. Building PIM-SM Snooping States 598 PIM-SM and PIM-SSM snooping states are built by snooping on the PIM- 599 SM Join/Prune messages received on AC/PWs. 601 The downstream state machine of a PIM-SM snooping PE very closely 602 resembles the downstream state machine of PIM-SM routers. 
The 603 downstream state consists of: 605 Per downstream (Port, *, G): 607 o DownstreamJPState: One of { "NoInfo" (NI), "Join" (J), "Prune 608 Pending" (PP) } 610 Per downstream (Port, *, G, N): 612 o Prune Pending Timer (PPT(N)) 614 o Join Expiry Timer (ET(N)) 616 Per downstream (Port, S, G): 618 o DownstreamJPState: One of { "NoInfo" (NI), "Join" (J), "Prune 619 Pending" (PP) } 621 Per downstream (Port, S, G, N): 623 o Prune Pending Timer (PPT(N)) 625 o Join Expiry Timer (ET(N)) 627 Per downstream (Port, S, G, rpt): 629 o DownstreamJPRptState: One of { "NoInfo" (NI), "Pruned" (P), "Prune 630 Pending" (PP) } 632 Per downstream (Port, S, G, rpt, N): 634 o Prune Pending Timer (PPT(N)) 636 o Join Expiry Timer (ET(N)) 638 Where S is the address of the multicast source, G is the Group 639 address and N is the upstream neighbor field in the Join/Prune 640 message. Notice that unlike on PIM-SM routers where PPT and ET are 641 per (Interface, S, G), PIM snooping PEs have to maintain PPT and ET 642 per (Port, S, G, N). The reasons for this are explained in 643 Section 2.6.2. 645 Apart from the above states, we define the following state 646 summarization macros. 648 UpstreamNeighbors(*,G): If there is one or more Join(*,G) received on 649 any port with upstream neighbor N and ET(N) is active, then N is 650 added to UpstreamNeighbors(*,G). This set is used to determine if a 651 Join(*,G) or a Prune(*,G) with upstream neighbor N needs to be sent 652 upstream. 654 UpstreamNeighbors(S,G): If there is one or more Join(S,G) received on 655 any port with upstream neighbor N and ET(N) is active, then N is 656 added to UpstreamNeighbors(S,G). This set is used to determine if a 657 Join(S,G) or a Prune(S,G) with upstream neighbor N needs to be sent 658 upstream. 660 UpstreamPorts(*,G): This is the set of all Port(N) ports where N is 661 in the set UpstreamNeighbors(*,G). Multicast Streams forwarded using 662 a (*,G) match MUST be forwarded to these ports. So 663 UpstreamPorts(*,G) MUST be added to OutgoingPortList(*,G). 665 UpstreamPorts(S,G): This is the set of all Port(N) ports where N is 666 in the set UpstreamNeighbors(S,G). UpstreamPorts(S,G) MUST be added 667 to OutgoingPortList(S,G). 669 InheritedUpstreamPorts(S,G): This is the union of UpstreamPorts(S,G) 670 and UpstreamPorts(*,G). 672 UpstreamPorts(S,G,rpt): If PruneDesired(S,G,rpt) becomes true, then 673 this set is set to UpstreamPorts(*,G). Otherwise, this set is empty. 674 UpstreamPorts(*,G) (-) UpstreamPorts(S,G,rpt) MUST be added to 675 OutgoingPortList(S,G). 677 UpstreamPorts(G): This set is the union of all the UpstreamPorts(S,G) 678 and UpstreamPorts(*,G) for a given G. proxy (S,G) Join/Prune and 679 (*,G) Join/Prune messages MUST be sent to a subset of 680 UpstreamPorts(G) as specified in Section 2.6.6.1. 682 PWPorts: This is the set of all PWs. 684 OutgoingPortList(*,G): This is the set of all ports to which traffic 685 needs to be forwarded on a (*,G) match. 687 OutgoingPortList(S,G): This is the set of all ports to which traffic 688 needs to be forwarded on an (S,G) match. 690 See Section 2.12 on Data Forwarding Rules for the specification on 691 how OutgoingPortList is calculated. 693 NumETsActive(Port,*,G): Number of (Port,*,G,N) entries that have 694 Expiry Timer running. This macro keeps track of the number of 695 Join(*,G)s that are received on this Port with different upstream 696 neighbors. 698 NumETsActive(Port,S,G): Number of (Port,S,G,N) entries that have 699 Expiry Timer running. 
This macro keeps track of the number of 700 Join(S,G)s that are received on this Port with different upstream 701 neighbors. 703 RpfVectorTlvs(*,G): RPF Vectors [RPF-VECTOR] are TLVs that may be 704 present in received Join(*,G) messages. If present, they must be 705 copied to RpfVectorTlvs(*,G). 707 RpfVectorTlvs(S,G): RPF Vectors [RPF-VECTOR] are TLVs that may be 708 present in received Join(S,G) messages. If present, they must be 709 copied to RpfVectorTlvs(S,G). 711 Since there are a few differences between the downstream state 712 machines of PIM-SM Routers and PIM-SM snooping PEs, we specify the 713 details of the downstream state machine of PIM-SM snooping PEs at the 714 risk of repeating most of the text documented in [PIM-SM]. 716 2.6.2. Explanation for per (S,G,N) states 718 In PIM Routing protocols, states are built per (S,G). On a router, 719 an (S,G) has only one RPF-Neighbor. However, a PIM snooping PE does 720 not have the Layer 3 routing information available to the routers in 721 order to determine the RPF-Neighbor for a multicast flow. It merely 722 discovers it by snooping the Join/Prune message. A PE could have 723 snooped on two or more different Join/Prune messages for the same 724 (S,G) that could have carried different Upstream-Neighbor fields. 725 This could happen during transient network conditions or due to dual- 726 homed sources. A PE cannot make assumptions on which one to pick, 727 but instead must allow the CE routers to decide which Upstream 728 Neighbor gets elected the RPF-Neighbor. And for this purpose, the PE 729 will have to track downstream and upstream Join/Prune per (S,G,N). 731 2.6.3. Receiving (*,G) PIM-SM Join/Prune Messages 733 A Join(*,G) or Prune(*,G) is considered "received" if the following 734 conditions are met: 736 o The port on which it arrived is not Port(N) where N is the 737 upstream-neighbor N of the Join/Prune(*,G), or, 739 o if both Port(N) and the arrival port are PWs, then there exists at 740 least one other (*,G,Nx) or (Sx,G,Nx) state with an AC 741 UpstreamPort. 743 For simplicity, the case where both Port(N) and the arrival port are 744 PWs is referred to as PW-only Join/Prune in this document. The PW- 745 only Join/Prune handling is so that the Port(N) PW can be added to 746 the related forwarding entries' OutgoingPortList to trigger Assert, 747 but that is only needed for those states with AC UpstreamPort. Note 748 that in PW-only case, it is OK for the arrival port and Port(N) to be 749 the same. See Appendix B for examples. 751 When a router receives a Join(*,G) or a Prune(*,G) with upstream 752 neighbor N, it must process the message as defined in the state 753 machine below. Note that the macro computations of the various 754 macros resulting from this state machine transition is exactly as 755 specified in the PIM-SM RFC [PIM-SM]. 757 We define the following per-port (*,G,N) macro to help with the state 758 machine below. 
760 Figure 1 : Downstream per-port (*,G) state machine in tabular form 762 +---------------++----------------------------------------+ 763 | || Previous State | 764 | ++------------+--------------+------------+ 765 | Event ||NoInfo (NI) | Join (J) | Prune-Pend | 766 +---------------++------------+--------------+------------+ 767 | Receive ||-> J state | -> J state | -> J state | 768 | Join(*,G) || Action | Action | Action | 769 | || RxJoin(N) | RxJoin(N) | RxJoin(N) | 770 +---------------++------------+--------------+------------+ 771 |Receive || - | -> PP state | -> PP state| 772 |Prune(*,G) and || | Start PPT(N) | | 773 |NumETsActive<=1|| | | | 774 +---------------++------------+--------------+------------+ 775 |Receive || - | -> J state | - | 776 |Prune(*,G) and || | Start PPT(N) | | 777 |NumETsActive>1 || | | | 778 +---------------++------------+--------------+------------+ 779 |PPT(N) expires || - | -> J state | -> NI state| 780 | || | Action | Action | 781 | || | PPTExpiry(N) |PPTExpiry(N)| 782 +---------------++------------+--------------+------------+ 783 |ET(N) expires || - | -> NI state | -> NI state| 784 |and || | Action | Action | 785 |NumETsActive<=1|| | ETExpiry(N) | ETExpiry(N)| 786 +---------------++------------+--------------+------------+ 787 |ET(N) expires || - | -> J state | - | 788 |and || | Action | | 789 |NumETsActive>1 || | ETExpiry(N) | | 790 +---------------++------------+--------------+------------+ 792 Action RxJoin(N): 794 If ET(N) is not already running, then start ET(N). Otherwise 795 restart ET(N). If N is not already in UpstreamNeighbors(*,G), 796 then add N to UpstreamNeighbors(*,G) and trigger a Join(*,G) with 797 upstream neighbor N to be forwarded upstream. If there are RPF 798 Vector TLVs in the received (*,G) message and if they are 799 different from the recorded RpfVectorTlvs(*,G), then copy them 800 into RpfVectorTlvs(*,G). 802 Action PPTExpiry(N): 804 Same as Action ETExpiry(N) below, plus Send a Prune-Echo(*,G) with 805 upstream-neighbor N on the downstream port. 807 Action ETExpiry(N): 809 Disable timers ET(N) and PPT(N). Delete neighbor state 810 (Port,*,G,N). If there are no other (Port,*,G) states with 811 NumETsActive(Port,*,G) > 0, transition DownstreamJPState [PIM-SM] 812 to NoInfo. If there are no other (Port,*,G,N) state (different 813 ports but for the same N), remove N from UpstreamPorts(*,G) - this 814 also serves as a trigger for Upstream FSM (JoinDesired(*,G,N) 815 becomes FALSE). 817 2.6.4. Receiving (S,G) PIM-SM Join/Prune Messages 819 A Join(S,G) or Prune(S,G) is considered "received" if the following 820 conditions are met: 822 o The port on which it arrived is not Port(N) where N is the 823 upstream-neighbor N of the Join/Prune(S,G), or, 825 o if both Port(N) and the arrival port are PWs, then there exists at 826 least one other (*,G,Nx) or (S,G,Nx) state with an AC 827 UpstreamPort. 829 For simplicity, the case where both Port(N) and the arrival port are 830 PWs is referred to as PW-only Join/Prune in this document. The PW- 831 only Join/Prune handling is so that the Port(N) PW can be added to 832 the related forwarding entries' OutgoingPortList to trigger Assert, 833 but that is only needed for those states with AC UpstreamPort. See 834 Appendix B for examples. 836 When a router receives a Join(S,G) or a Prune(S,G) with upstream 837 neighbor N, it must process the message as defined in the state 838 machine below. 
Note that the macro computations of the various 839 macros resulting from this state machine transition is exactly as 840 specified in [PIM-SM][PIM-SM]. 842 Figure 2: Downstream per-port (S,G) state machine in tabular form 844 +---------------++----------------------------------------+ 845 | || Previous State | 846 | ++------------+--------------+------------+ 847 | Event ||NoInfo (NI) | Join (J) | Prune-Pend | 848 +---------------++------------+--------------+------------+ 849 | Receive ||-> J state | -> J state | -> J state | 850 | Join(S,G) || Action | Action | Action | 851 | || RxJoin(N) | RxJoin(N) | RxJoin(N) | 852 +---------------++------------+--------------+------------+ 853 |Receive || - | -> PP state | - | 854 |Prune (S,G) and|| | Start PPT(N) | | 855 |NumETsActive<=1|| | | | 856 +---------------++------------+--------------+------------+ 857 |Receive || - | -> J state | - | 858 |Prune(S,G) and || | Start PPT(N) | | 859 NumETsActive>1 || | | | 860 +---------------++------------+--------------+------------+ 861 |PPT(N) expires || - | -> J state | -> NI state| 862 | || | Action | Action | 863 | || | PPTExpiry(N) |PPTExpiry(N)| 864 +---------------++------------+--------------+------------+ 865 |ET(N) expires || - | -> NI state | -> NI state| 866 |and || | Action | Action | 867 |NumETsActive<=1|| | ETExpiry(N) | ETExpiry(N)| 868 +---------------++------------+--------------+------------+ 869 |ET(N) expires || - | -> J state | - | 870 |and || | Action | | 871 |NumETsActive>1 || | ETExpiry(N) | | 872 +---------------++------------+--------------+------------+ 874 Action RxJoin(N): 876 If ET(N) is not already running, then start ET(N). Otherwise, 877 restart ET(N). 879 If N is not already in UpstreamNeighbors(S,G), then add N to 880 UpstreamNeighbors(S,G) and trigger a Join(S,G) with upstream 881 neighbor N to be forwarded upstream. If there are RPF Vector TLVs 882 in the received (S,G) message and if they are different from the 883 recorded RpfVectorTlvs(S,G), then copy them into 884 RpfVectorTlvs(S,G). 886 Action PPTExpiry(N): 888 Same as Action ETExpiry(N) below, plus Send a Prune-Echo(S,G) with 889 upstream-neighbor N on the downstream port. 891 Action ETExpiry(N): 893 Disable timers ET(N) and PPT(N). Delete neighbor state 894 (Port,S,G,N). If there are no other (Port,S,G) states with 895 NumETsActive(Port,S,G) > 0, transition DownstreamJPState to 896 NoInfo. If there are no other (Port,S,G,N) state (different ports 897 but for the same N), remove N from UpstreamPorts(S,G) - this also 898 serves as a trigger for Upstream FSM (JoinDesired(S,G,N) becomes 899 FALSE). 901 2.6.5. Receiving (S,G,rpt) Join/Prune Messages 903 A Join(S,G,rpt) or Prune(S,G,rpt) is "received" when the port on 904 which it was received is not also the port on which the upstream- 905 neighbor N of the Join/Prune(S,G,rpt) was learnt. 907 While it is important to ensure that the (S,G) and (*,G) state 908 machines allow for handling per (S,G,N) states, it is not as 909 important for (S,G,rpt) states. It suffices to say that the 910 downstream (S,G,rpt) state machine is the same as what is defined in 911 section 4.5.4 of the PIM-SM RFC [PIM-SM]. 913 2.6.6. Sending Join/Prune Messages Upstream 915 This section applies only to a PIM relay/proxying PE and not to a PIM 916 snooping PE. 918 A full PIM proxying (not relay) PE MUST implement the Upstream FSM 919 for which the procedures are similar to what is defined in section 920 4.5.6 of [PIM-SM]. 
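The Upstream FSM itself is specified in [PIM-SM]; the following
Python sketch only illustrates, with hypothetical class and callback
names, how the JoinDesired(S,G,N) trigger derived from the downstream
state of Section 2.6.1 could drive the generation of proxy Join/Prune
messages on a full proxying PE.  Periodic refreshes and timer details
are omitted:

   # Minimal, non-normative sketch.  downstream maps (S,G,N) to the
   # set of downstream ports that currently have a running Join
   # Expiry Timer ET(N) for that state.

   class UpstreamProxy:
       def __init__(self, send_join, send_prune):
           self.downstream = {}          # (S, G, N) -> set of ports
           self.send_join = send_join    # callback: emit proxy Join towards N
           self.send_prune = send_prune  # callback: emit proxy Prune towards N

       def join_desired(self, sgn):
           return bool(self.downstream.get(sgn))

       def rx_join(self, sgn, port):        # Action RxJoin(N) on a port
           newly_desired = not self.join_desired(sgn)
           self.downstream.setdefault(sgn, set()).add(port)
           if newly_desired:
               self.send_join(*sgn)         # JoinDesired(S,G,N) became TRUE

       def et_expiry(self, sgn, port):      # Action ETExpiry(N) on a port
           ports = self.downstream.get(sgn)
           if ports:
               ports.discard(port)
               if not ports:
                   del self.downstream[sgn]
                   self.send_prune(*sgn)    # JoinDesired(S,G,N) became FALSE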
For the purposes of the Upstream FSM, a Join or Prune message with
upstream neighbor N is "seen" on a PIM relay/proxying PE if the port
on which the message was received is also Port(N), and the port is an
AC.  The AC requirement is needed because a Join received on the
Port(N) PW must not suppress this PE's Join on that PW.

A PIM relay PE does not implement the Upstream FSM.  It simply
forwards received Join/Prune messages out of the same set of upstream
ports as in the PIM proxying case.

In order to correctly facilitate assert among the CE routers, such
Join/Prunes need to be sent not only towards the upstream neighbor,
but also on certain PWs as described below.

If RpfVectorTlvs(*,G) is not empty, then it must be encoded in a
Join(*,G) message sent upstream.

If RpfVectorTlvs(S,G) is not empty, then it must be encoded in a
Join(S,G) message sent upstream.

2.6.6.1.  Where to send Join/Prune messages

The following rules apply to forwarded (in the case of PIM relay) as
well as refresh and triggered (in the case of PIM proxying)
(S,G)/(*,G) Join/Prune messages:

o  The upstream neighbor field in the Join/Prune to be sent is set to
   the N in the corresponding Upstream FSM.

o  If Port(N) is an AC, send the message to Port(N).

o  Additionally, if OutgoingPortList(x,G,N) contains at least one AC,
   then the message MUST be sent to at least all the PWs in
   UpstreamPorts(G) (for (*,G)) or InheritedUpstreamPorts(S,G) (for
   (S,G)).  Alternatively, the message MAY be sent to all PWs.

Sending to a subset of PWs as described above guarantees that if
traffic (of the same flow) from two upstream routers were to reach
this PE, then the two routers will receive each other's traffic,
triggering assert.

Sending to all PWs guarantees that if two upstream routers both send
traffic for the same flow (even if it is to different sets of
downstream PEs), then they will receive each other's traffic,
triggering assert.

2.7.  Bidirectional-PIM (BIDIR-PIM)

BIDIR-PIM is a variation of PIM-SM.  The main differences between
PIM-SM and Bidirectional-PIM are as follows:

o  There are no source-based trees, and source-specific multicast is
   not supported (i.e., there are no (S,G) states) in BIDIR-PIM.

o  Multicast traffic can flow up the shared tree in BIDIR-PIM.

o  To avoid forwarding loops, one router on each link is elected as
   the Designated Forwarder (DF) for each RP in BIDIR-PIM.

The main advantage of BIDIR-PIM is that it scales well for many-to-
many applications.  However, the lack of source-based trees means
that multicast traffic is forced to remain on the shared tree.

As described in [BIDIR-PIM], parts of a BIDIR-PIM enabled network may
forward traffic without exchanging Join/Prune messages, for instance
between DFs and the Rendezvous Point Link (RPL).

As the described procedures for PIM snooping rely on the presence of
Join/Prune messages, enabling PIM snooping on BIDIR-PIM networks
could break the BIDIR-PIM functionality.  Deploying PIM snooping on
BIDIR-PIM enabled networks will require some further study.  Some
thoughts are gathered in Appendix A.
2.8.  Interaction with IGMP Snooping

Whenever IGMP snooping is enabled in conjunction with PIM snooping in
the same VPLS instance, the PE SHOULD follow these rules:

o  To maintain the list of multicast routers and the ports on which
   they are attached, the PE SHOULD NOT use the rules described in
   RFC4541 [IGMP-SNOOP] but SHOULD rely on the neighbors discovered
   by PIM snooping.  This list SHOULD then be used to apply the
   forwarding rule described in 2.1.1.(1) of RFC4541 [IGMP-SNOOP].

o  If the PE supports proxy-reporting, an IGMP membership learned
   only on a port to which a PIM neighbor is attached, but not
   elsewhere, SHOULD NOT be included in the summarized upstream
   report sent to that port.

2.9.  PIM-DM

The key characteristic of PIM-DM is its flood-and-prune behavior.
Shortest path trees are built as a multicast source starts
transmitting.

2.9.1.  Building PIM-DM Snooping States

PIM-DM snooping states are built by snooping on the PIM-DM Join,
Prune, Graft and State Refresh messages received on AC/PWs and State-
Refresh messages sent on AC/PWs.  By snooping on these PIM-DM
messages, a PE builds the following states per (S,G,N), where S is
the address of the multicast source, G is the Group address and N is
the upstream neighbor to which Prunes/Grafts are sent by downstream
CEs:

Per PIM (S,G,N):

   Port PIM (S,G,N) Prune State:

   *  DownstreamPState(S,G,N,Port): One of {"NoInfo" (NI), "Pruned"
      (P), "PrunePending" (PP)}

   *  Prune Pending Timer (PPT)

   *  Prune Timer (PT)

   *  Upstream Port (valid if the PIM (S,G,N) Prune State is
      "Pruned").

2.9.2.  PIM-DM Downstream Per-Port PIM(S,G,N) State Machine

The downstream per-port PIM(S,G,N) state machine is as defined in
section 4.4.2 of [PIM-DM], with a few changes relevant to PIM
snooping.  When reading section 4.4.2 of [PIM-DM] for the purposes of
PIM snooping, please be aware that the downstream states are built
per {S, G, N, Downstream-Port} in PIM snooping and not per
{Downstream-Interface, S, G} as in a PIM-DM router.  As noted in the
previous Section 2.9.1, the states (DownstreamPState) and timers (PPT
and PT) are per (S,G,N,Port).

2.9.3.  Triggering ASSERT election in PIM-DM

Since PIM-DM is a flood-and-prune protocol, traffic is flooded to all
routers unless explicitly pruned.  Since PIM-DM routers do not prune
on non-RPF interfaces, PEs should typically not receive Prunes on
Port(RPF-neighbor).  So the asserting routers should typically be in
pim_oiflist(S,G).  In most cases, assert election should occur
naturally without any special handling, since data traffic will be
forwarded to the asserting routers.

However, there are some scenarios where a prune might be received on
a port which is also an upstream port (UP).  If we prune the port
from pim_oiflist(S,G), then it would not be possible for the
asserting routers to determine if traffic arrived on their downstream
port.  This can be fixed by adding pim_iifs(S,G) to pim_oiflist(S,G)
so that data traffic flows to the UP ports.

2.10.  PIM Proxy

As noted earlier, PIM snooping will work correctly only if Join
Suppression is disabled in the VPLS.  If Join Suppression is enabled
in the VPLS, then PEs MUST do PIM relay/proxying for VPLS multicast
to work correctly.  This section applies specifically to the full
proxying case and not to relay.
2.10.1.  Upstream PIM Proxy behavior

A PIM proxying PE consumes Join/Prune messages and regenerates PIM
Join/Prune messages to be sent upstream by implementing the Upstream
FSM as specified in [PIM-SM].  This is the only difference from PIM
relay.

The source IP address in PIM packets sent upstream SHOULD be the
address of a PIM downstream neighbor in the corresponding join/prune
state.  The address picked MUST NOT be the upstream neighbor field to
be encoded in the packet.  The layer 2 encapsulation for the selected
source IP address MUST be the encapsulation recorded in the PIM
Neighbor Database for that IP address.

2.11.  Directly Connected Multicast Source

If there is a source in the CE network that connects directly into
the VPLS instance, then multicast traffic from that source MUST be
sent to all PIM routers on the VPLS instance in addition to the IGMP
receivers in the VPLS.  If (S,G) or (*,G) snooping state has already
been formed on any PE, this will not happen under the forwarding
rules and guidelines of this document.  So, in order to determine if
traffic needs to be flooded to all routers, a PE must be able to
determine if the traffic came from a host on that LAN.  There are
three ways to address this problem:

o  One option is for the PE to do ARP snooping to determine if a
   source is directly connected.

o  Another option is to have configuration on all PEs indicating that
   there are CE sources directly connected to the VPLS instance, and
   to disallow snooping for the groups to which the source is going
   to send traffic.  This way, traffic from that source to those
   groups will always be flooded within the provider network.

o  A third option is to require that sources of CE multicast traffic
   must be behind a router.

This document recommends the third option: sources of CE multicast
traffic must be behind a router.

2.12.  Data Forwarding Rules

First, we define the rules that are common to PIM-SM and PIM-DM PEs.
The forwarding rules for each protocol type are specified in the
following sub-sections.

If there is no matching forwarding state, then the PE SHOULD discard
the packet, i.e., the UserDefinedPortList below SHOULD be empty.

The following general rules MUST be followed when forwarding
multicast traffic in a VPLS:

o  Traffic arriving on a port MUST NOT be forwarded back onto the
   same port.

o  Due to VPLS Split-Horizon rules, traffic ingressing on a PW MUST
   NOT be forwarded to any other PW.

2.12.1.  PIM-SM Data Forwarding Rules

Per the rules in [PIM-SM] and per the additional rules specified in
this document,

   OutgoingPortList(*,G) = immediate_olist(*,G) (+)
                           UpstreamPorts(*,G) (+)
                           Port(PimDR)

   OutgoingPortList(S,G) = inherited_olist(S,G) (+)
                           UpstreamPorts(S,G) (+)
                           (UpstreamPorts(*,G) (-)
                            UpstreamPorts(S,G,rpt)) (+)
                           Port(PimDR)

[PIM-SM] specifies how immediate_olist(*,G) and inherited_olist(S,G)
are built.  PimDR is the IP address of the PIM DR in the VPLS.

The PIM-SM snooping forwarding rules are defined below in pseudocode:

   BEGIN
     iif is the incoming port of the multicast packet.
     S is the Source IP Address of the multicast packet.
     G is the Destination IP Address of the multicast packet.
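     ## Select the matching forwarding state: (S,G) state is preferred
     ## over (*,G) state.  If neither exists, UserDefinedPortList is
     ## used (empty by default, i.e. the packet is discarded), per the
     ## general rules in Section 2.12.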
1159 If there is (S,G) state on the PE 1160 Then 1161 OutgoingPortList = OutgoingPortList(S,G) 1162 Else if there is (*,G) state on the PE 1163 Then 1164 OutgoingPortList = OutgoingPortList(*,G) 1165 Else 1166 OutgoingPortList = UserDefinedPortList 1167 Endif 1169 If iif is an AC 1170 Then 1171 OutgoingPortList = OutgoingPortList (-) iif 1172 Else 1173 ## iif is a PW 1174 OutgoingPortList = OutgoingPortList (-) PWPorts 1175 Endif 1177 Forward the packet to OutgoingPortList. 1178 END 1180 First if there is (S,G) state on the PE, then the set of outgoing 1181 ports is OutgoingPortList(S,G). 1183 Otherwise if there is (*,G) state on the PE, the set of outgoing 1184 ports is OutgoingPortList(*,G). 1186 The packet is forwarded to the selected set of outgoing ports while 1187 observing the general rules above in Section 2.12 1189 2.12.2. PIM-DM Data Forwarding Rules 1191 The PIM-DM snooping data forwarding rules are defined below in 1192 pseudocode: 1194 BEGIN 1195 iif is the incoming port of the multicast packet. 1196 S is the Source IP Address of the multicast packet. 1197 G is the Destination IP Address of the multicast packet. 1199 If there is (S,G) state on the PE 1200 Then 1201 OutgoingPortList = olist(S,G) 1202 Else 1203 OutgoingPortList = UserDefinedPortList 1204 Endif 1206 If iif is an AC 1207 Then 1208 OutgoingPortList = OutgoingPortList (-) iif 1209 Else 1210 ## iif is a PW 1211 OutgoingPortList = OutgoingPortList (-) PWPorts 1212 Endif 1214 Forward the packet to OutgoingPortList. 1215 END 1217 If there is forwarding state for (S,G), then forward the packet to 1218 olist(S,G) while observing the general rules above in section 1219 Section 2.12 1221 [PIM-DM] specifies how olist(S,G) is constructed. 1223 3. IANA Considerations 1225 This document makes no request of IANA. 1227 Note to RFC Editor: this section may be removed on publication as an 1228 RFC. 1230 4. Security Considerations 1232 Security considerations provided in VPLS solution documents (i.e., 1233 [VPLS-LDP] and [VPLS-BGP]) apply to this document as well. 1235 5. Contributors 1237 Yetik Serbest, Suresh Boddapati co-authored earlier versions. 1239 Karl (Xiangrong) Cai and Princy Elizabeth made significant 1240 contributions to bring the specification to its current state, 1241 especially in the area of Join forwarding rules. 1243 6. Acknowledgements 1245 Many members of the former L2VPN and PIM working groups have 1246 contributed to and provided valuable comments and feedback to this 1247 document, including Vach Kompella, Shane Amante, Sunil Khandekar, Rob 1248 Nath, Marc Lassere, Yuji Kamite, Yiqun Cai, Ali Sajassi, Jozef Raets, 1249 Himanshu Shah (Ciena), Himanshu Shah (Alcatel-Lucent). 1251 7. References 1253 7.1. Normative References 1255 [BIDIR-PIM] 1256 Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, 1257 "Bidirectional Protocol Independent Multicast (BIDIR- 1258 PIM)", RFC 5015, 2007. 1260 [PIM-DM] Adams, A., Nicholas, J., and W. Siadak, "Protocol 1261 Independent Multicast Version 2 - Dense Mode 1262 Specification", RFC 3973, 2005. 1264 [PIM-SM] Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, 1265 "Protocol Independent Multicast- Sparse Mode (PIM-SM): 1266 Protocol Specification (Revised)", RFC 4601, 2006. 1268 [PIM-SSM] Holbrook, H. and B. Cain, "Source-Specific Multicast for 1269 IP", RFC 4607, 2006. 1271 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1272 Requirement Levels", BCP 14, RFC 2119, 1997. 1274 [RPF-VECTOR] 1275 Wijnands, I., Boers, A., and E. 
Rosen, "The Reverse Path 1276 Forwarding (RPF) Vector TLV", RFC 5496, 2009. 1278 7.2. Informative References 1280 [IGMP-SNOOP] 1281 Christensen, M., Kimball, K., and F. Solensky, 1282 "Considerations for IGMP and MLD snooping PEs", RFC 4541, 1283 2006. 1285 [VPLS-BGP] 1286 Kompella, K. and Y. Rekhter, "Virtual Private LAN Service 1287 using BGP for Auto-Discovery and Signaling", RFC 4761, 1288 2007. 1290 [VPLS-LDP] 1291 Lasserre, M. and V. Kompella, "Virtual Private LAN 1292 Services using LDP Signaling", RFC 4762, 2007. 1294 [VPLS-MCAST] 1295 Aggarwal, R., Kamite, Y., Fang, L., and Y. Rekhter, 1296 "Multicast in Virtual Private LAN Servoce (VPLS)", RFC 1297 7117, 2014. 1299 [VPLS-MCAST-REQ] 1300 Kamite, Y., Wada, Y., Serbest, Y., Morin, T., and L. Fang, 1301 "Requirements for Multicast Support in Virtual Private LAN 1302 Services", RFC 5501, 2009. 1304 Appendix A. BIDIR-PIM Thoughts 1306 This section describes some guidelines that may be used to preserve 1307 BIDIR-PIM functionality in combination with Pim snooping. 1309 In order to preserve BIDIR-PIM Pim snooping routers need to set up 1310 forwarding states so that : 1312 o on the RPL all traffic is forwarded to all Port(N) 1314 o on any other interface traffic is always forwarded to the DF 1316 The information needed to setup these states may be obtained by : 1318 o determining the mapping between group(range) and RP 1320 o snooping and storing DF election information 1322 o determining where the RPL is, this could be achieved by static 1323 configuration, or by combining the information mentioned in 1324 previous bullets. 1326 A.1. BIDIR-PIM Data Forwarding Rules 1328 The BIDIR-PIM snooping forwarding rules are defined below in 1329 pseudocode: 1331 BEGIN 1332 iif is the incoming port of the multicast packet. 1333 G is the Destination IP Address of the multicast packet. 1335 If there is forwarding state for G 1336 Then 1337 OutgoingPortList = olist(G) 1338 Else 1339 OutgoingPortList = UserDefinedPortList 1340 Endif 1342 If iif is an AC 1343 Then 1344 OutgoingPortList = OutgoingPortList (-) iif 1345 Else 1346 ## iif is a PW 1347 OutgoingPortList = OutgoingPortList (-) PWPorts 1348 Endif 1350 Forward the packet to OutgoingPortList. 1351 END 1353 If there is forwarding state for G, then forward the packet to 1354 olist(G) while observing the general rules above in Section 2.12 1356 [BIDIR-PIM] specifies how olist(G) is constructed. 1358 Appendix B. Example Network Scenario 1360 Let us consider the scenario in Figure 3. 1362 An Example Network for Triggering Assert 1364 +------+ AC3 +------+ 1365 | PE2 |-----| CE3 | 1366 /| | +------+ 1367 / +------+ | 1368 / | | 1369 / | | 1370 /PW12 | | 1371 / | /---\ 1372 / |PW23 | S | 1373 / | \---/ 1374 / | | 1375 / | | 1376 / | | 1377 +------+ / +------+ | 1378 +------+ | PE1 |/ PW13 | PE3 | +------+ 1379 | CE1 |-----| |-------------| |-----| CE4 | 1380 +------+ AC1 +------+ +------+ AC4 +------+ 1381 | 1382 |AC2 1383 +------+ 1384 | CE2 | 1385 +------+ 1387 In the examples below, JT(Port,S,G,N) is the downstream Join Expiry 1388 Timer on the specified Port for the (S,G) with upstream neighbor N. 1390 B.1. Pim Snooping Example 1392 In the network depicted in Figure 3, S is the source of a multicast 1393 stream (S,G). CE1 and CE2 both have two ECMP routes to reach the 1394 source. 1396 1. CE1 Sends a Join(S,G) with Upstream Neighbor(S,G) = CE3. 1397 2. PE1 snoops on the Join(S,G) and builds forwarding states since it 1398 is received on an AC. It also floods the Join(S,G) in the VPLS. 
1390 B.1.  PIM Snooping Example

1392    In the network depicted in Figure 3, S is the source of a multicast
1393    stream (S,G).  CE1 and CE2 both have two ECMP routes to reach the
1394    source.

1396    1.  CE1 sends a Join(S,G) with Upstream Neighbor(S,G) = CE3.

1397    2.  PE1 snoops on the Join(S,G) and builds forwarding state since it
1398        is received on an AC.  It also floods the Join(S,G) in the VPLS.

1399        PE2 snoops on the Join(S,G) and builds forwarding state since
1400        the Join(S,G) is targeting a neighbor residing on an AC.  PE3
1401        does not create forwarding state for (S,G) because this is a
1402        PW-only join and there is neither existing (*,G) state with an
1403        AC in UpstreamPorts(*,G) nor existing (S,G) state with an AC in
1404        UpstreamPorts(S,G).  Both PE2 and PE3 also flood the Join(S,G)
1405        in the VPLS.

1407    The resulting states at the PEs are as follows:

1409    At PE1:
1411        JT(AC1,S,G,CE3)        = JP_HoldTime
1412        UpstreamNeighbors(S,G) = { CE3 }
1413        UpstreamPorts(S,G)     = { PW12 }
1414        OutgoingPortList(S,G)  = { AC1, PW12 }

1416    At PE2:
1417        JT(PW12,S,G,CE3)       = JP_HoldTime
1418        UpstreamNeighbors(S,G) = { CE3 }
1419        UpstreamPorts(S,G)     = { AC3 }
1420        OutgoingPortList(S,G)  = { PW12, AC3 }

1422    At PE3:
1423        No (S,G) state

1425    3.  The multicast stream (S,G) flows along
1426        CE3 -> PE2 -> PE1 -> CE1.

1427    4.  Now CE2 sends a Join(S,G) with Upstream Neighbor(S,G) = CE4.

1428    5.  All PEs snoop on the Join(S,G), build forwarding state, and
1429        flood the Join(S,G) in the VPLS.  Note that for PE2, even though
1430        this is a PW-only join, forwarding state is built on this
1431        Join(S,G) since PE2 has existing (S,G) state with an AC in
1432        UpstreamPorts(S,G).

1434    The resulting states at the PEs are as follows:

1436    At PE1:
1437        JT(AC1,S,G,CE3)        = active
1438        JT(AC2,S,G,CE4)        = JP_HoldTime
1439        UpstreamNeighbors(S,G) = { CE3, CE4 }
1440        UpstreamPorts(S,G)     = { PW12, PW13 }
1441        OutgoingPortList(S,G)  = { AC1, PW12, AC2, PW13 }

1443    At PE2:
1444        JT(PW12,S,G,CE4)       = JP_HoldTime
1445        JT(PW12,S,G,CE3)       = active
1446        UpstreamNeighbors(S,G) = { CE3, CE4 }
1447        UpstreamPorts(S,G)     = { AC3, PW23 }
1448        OutgoingPortList(S,G)  = { PW12, AC3, PW23 }

1450    At PE3:
1451        JT(PW13,S,G,CE4)       = JP_HoldTime
1452        UpstreamNeighbors(S,G) = { CE4 }
1453        UpstreamPorts(S,G)     = { AC4 }
1454        OutgoingPortList(S,G)  = { PW13, AC4 }

1456    6.  The multicast stream (S,G) flows into the VPLS from the two CEs,
1457        CE3 and CE4.  PE2 forwards the stream received from CE3 to PW23,
1458        and PE3 forwards the stream to AC4.  This enables the CE routers
1459        to trigger the assert election.  Let us say CE3 becomes the
1460        assert winner.

1461    7.  CE3 sends an Assert message to the VPLS.  The PEs flood the
1462        Assert message without examining it.

1463    8.  CE4 stops sending the multicast stream to the VPLS.

1464    9.  CE2 notices an RPF change due to the Assert and sends a
1465        Prune(S,G) with Upstream Neighbor = CE4.  CE2 also sends a
1466        Join(S,G) with Upstream Neighbor = CE3.

1467    10. All the PEs start a prune-pend timer on the ports on which they
1468        received the Prune(S,G).  When the prune-pend timer expires, all
1469        PEs remove the downstream (S,G,CE4) states.

1471    Resulting states at the PEs:

1473    At PE1:
1474        JT(AC1,S,G,CE3)        = active
1475        UpstreamNeighbors(S,G) = { CE3 }
1476        UpstreamPorts(S,G)     = { PW12 }
1477        OutgoingPortList(S,G)  = { AC1, AC2, PW12 }

1479    At PE2:
1480        JT(PW12,S,G,CE3)       = active
1481        UpstreamNeighbors(S,G) = { CE3 }
1482        UpstreamPorts(S,G)     = { AC3 }
1483        OutgoingPortList(S,G)  = { PW12, AC3 }

1485    At PE3:
1486        JT(PW13,S,G,CE3)       = JP_HoldTime
1487        UpstreamNeighbors(S,G) = { CE3 }
1488        UpstreamPorts(S,G)     = { PW23 }
1489        OutgoingPortList(S,G)  = { PW13, PW23 }

1491    Note that at this point, since there is no AC in
1492    OutgoingPortList(S,G) at PE3 and no (*,G) or (S,G) state with an AC
1493    in UpstreamPorts(*,G) or UpstreamPorts(S,G), respectively, the
1494    existing (S,G) state at PE3 can also be removed (an illustrative
        sketch of this check appears at the end of this example).

1495    So finally:

1496    At PE3:
1497        No (S,G) state

1499    Note that at the end of the assert election, there should be no
1500    duplicate traffic forwarded downstream, and traffic should flow only
1501    on the desired path.  Also note that there are no unnecessary (S,G)
1502    states on PE3 after the assert election.
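   The check that allows PE3 to remove its (S,G) state above - state
   supported only by PW-only joins, with no AC in OutgoingPortList(S,G),
   UpstreamPorts(*,G), or UpstreamPorts(S,G) - can be illustrated with
   the following non-normative Python sketch.  The function names and
   the "AC"/"PW" port-naming convention are assumptions made only for
   this example.

      # Illustrative only.  A port name starting with "AC" is treated
      # as an attachment circuit for this sketch; a real implementation
      # would know each port's type directly.
      def is_ac(port):
          return port.startswith("AC")

      def keep_sg_state(outgoing_port_list_sg, upstream_ports_star_g,
                        upstream_ports_sg):
          """Return True if (S,G) state that is supported only by
          PW-only joins should be retained (compare the note about PE3
          above)."""
          return (any(is_ac(p) for p in outgoing_port_list_sg)
                  or any(is_ac(p) for p in upstream_ports_star_g)
                  or any(is_ac(p) for p in upstream_ports_sg))

      # PE3 at the end of the example: OutgoingPortList(S,G) =
      # {PW13, PW23}, no (*,G) state, and UpstreamPorts(S,G) = {PW23},
      # so the check returns False and the (S,G) state can be removed.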
1505 B.2.  PIM Proxy Example with (S,G) / (*,G) Interaction

1506    In the same network, let us assume CE4 is the upstream neighbor
1507    towards the RP for G.

1509    JPST(S,G,N) is the Join/Prune sending timer for the (S,G) with
1510    upstream neighbor N.

1512    1.  CE1 sends a Join(S,G) with Upstream Neighbor(S,G) = CE3.

1514    2.  PE1 consumes the Join(S,G) and builds forwarding state since the
1515        Join(S,G) is received on an AC.

1517        PE2 consumes the Join(S,G) and builds forwarding state since the
1518        Join(S,G) is targeting a neighbor residing on an AC.

1520        PE3 consumes the Join(S,G) but does not create forwarding state
1521        for (S,G), since this is a PW-only join and there is neither
1522        existing (*,G) state with an AC in UpstreamPorts(*,G) nor
1523        existing (S,G) state with an AC in UpstreamPorts(S,G).

1525    The resulting states at the PEs are as follows:

1527    PE1 states:
1528        JT(AC1,S,G,CE3)        = JP_HoldTime
1529        JPST(S,G,CE3)          = t_periodic
1530        UpstreamNeighbors(S,G) = { CE3 }
1531        UpstreamPorts(S,G)     = { PW12 }
1532        OutgoingPortList(S,G)  = { AC1, PW12 }

1534    PE2 states:
1535        JT(PW12,S,G,CE3)       = JP_HoldTime
1536        JPST(S,G,CE3)          = t_periodic
1537        UpstreamNeighbors(S,G) = { CE3 }
1538        UpstreamPorts(S,G)     = { AC3 }
1539        OutgoingPortList(S,G)  = { PW12, AC3 }

1541    PE3 states:
1542        No (S,G) state

1544    Joins are triggered as follows:

1545        PE1 triggers a Join(S,G) targeting CE3.  Since the Join(S,G) was
1546        received on an AC and is targeting a neighbor that is residing
1547        across a PW, the triggered Join(S,G) is sent on all PWs.

1549        PE2 triggers a Join(S,G) targeting CE3.  Since the Join(S,G) is
1550        targeting a neighbor residing on an AC, it only sends the join
1551        on AC3.

1553        PE3 ignores the Join(S,G), since this is a PW-only join and
1554        there is neither existing (*,G) state with an AC in
1555        UpstreamPorts(*,G) nor existing (S,G) state with an AC in
1556        UpstreamPorts(S,G).

1557    3.  The multicast stream (S,G) flows along CE3 -> PE2 -> PE1 -> CE1.

1559    4.  Now let us say CE2 sends a Join(*,G) with
1560        UpstreamNeighbor(*,G) = CE4.

1562    5.  PE1 consumes the Join(*,G) and builds forwarding state since the
1563        Join(*,G) is received on an AC.

1565        PE2 consumes the Join(*,G) and, even though this is a PW-only
1566        join, forwarding state is built on this Join(*,G) since PE2 has
1567        existing (S,G) state with an AC in UpstreamPorts(S,G).
1568        However, since this is a PW-only join, PE2 only adds the PW
1569        towards PE3 (PW23) into UpstreamPorts(*,G) and hence into
1570        OutgoingPortList(*,G).  It does not add the PW towards
1571        PE1 (PW12) into OutgoingPortList(*,G).

1573        PE3 consumes the Join(*,G) and builds forwarding state since
1574        the Join(*,G) is targeting a neighbor residing on an AC.
1576    The resulting states at the PEs are as follows:

1578    PE1 states:
1579        JT(AC2,*,G,CE4)        = JP_HoldTime
1580        JPST(*,G,CE4)          = t_periodic
1581        UpstreamNeighbors(*,G) = { CE4 }
1582        UpstreamPorts(*,G)     = { PW13 }
1583        OutgoingPortList(*,G)  = { AC2, PW13 }

1585        JT(AC1,S,G,CE3)        = active
1586        JPST(S,G,CE3)          = active
1587        UpstreamNeighbors(S,G) = { CE3 }
1588        UpstreamPorts(S,G)     = { PW12 }
1589        OutgoingPortList(S,G)  = { AC1, AC2, PW12, PW13 }

1591    PE2 states:
1592        JT(PW12,*,G,CE4)       = JP_HoldTime
1593        UpstreamNeighbors(*,G) = { CE4 }
1594        UpstreamPorts(*,G)     = { PW23 }
1595        OutgoingPortList(*,G)  = { PW23 }

1597        JT(PW12,S,G,CE3)       = active
1598        JPST(S,G,CE3)          = active
1599        UpstreamNeighbors(S,G) = { CE3 }
1600        UpstreamPorts(S,G)     = { AC3 }
1601        OutgoingPortList(S,G)  = { PW12, AC3, PW23 }

1603    PE3 states:
1604        JT(PW13,*,G,CE4)       = JP_HoldTime
1605        JPST(*,G,CE4)          = t_periodic
1606        UpstreamNeighbors(*,G) = { CE4 }
1607        UpstreamPorts(*,G)     = { AC4 }
1608        OutgoingPortList(*,G)  = { PW13, AC4 }

1610    Joins are triggered as follows:

1611        PE1 triggers a Join(*,G) targeting CE4.  Since the Join(*,G) was
1612        received on an AC and is targeting a neighbor that is residing
1613        across a PW, the triggered Join(*,G) is sent on all PWs.

1615        PE2 does not trigger a Join(*,G) based on this join, since this
1616        is a PW-only join.

1618        PE3 triggers a Join(*,G) targeting CE4.  Since the Join(*,G) is
1619        targeting a neighbor residing on an AC, it only sends the join
1620        on AC4.

1622    6.  In case traffic is not yet flowing (i.e., step 3 is delayed
1623        until after step 6), JPST(S,G,CE3) on PE1 may expire in the
1624        interim, causing PE1 to send a refresh Join(S,G) targeting CE3.
1625        Since the refresh Join(S,G) is targeting a neighbor that is
1626        residing across a PW, the refresh Join(S,G) is sent on all PWs.

1628    7.  Note that PE1 refreshes its JT timers based on the reception of
1629        refresh joins from CE1 and CE2.

1631        PE2 consumes the Join(S,G) and refreshes the JT(PW12,S,G,CE3)
1632        timer.

1634        PE3 consumes the Join(S,G).  It also builds forwarding state on
1635        this Join(S,G), even though this is a PW-only join, since now
1636        PE3 has existing (*,G) state with an AC in UpstreamPorts(*,G).
1637        However, since this is a PW-only join, PE3 only adds the PW
1638        towards PE2 (PW23) into UpstreamPorts(S,G) and hence into
1639        OutgoingPortList(S,G).  It does not add the PW towards
1640        PE1 (PW13) into OutgoingPortList(S,G).

1642    PE3 states:
1643        JT(PW13,*,G,CE4)       = active
1644        JPST(*,G,CE4)          = active
1645        UpstreamNeighbors(*,G) = { CE4 }
1646        UpstreamPorts(*,G)     = { AC4 }
1647        OutgoingPortList(*,G)  = { PW13, AC4 }

1648        JT(PW13,S,G,CE3)       = JP_HoldTime
1649        UpstreamNeighbors(S,G) = { CE3 }
1650        UpstreamPorts(S,G)     = { PW23 }
1651        OutgoingPortList(S,G)  = { PW13, AC4, PW23 }

1653    Joins are triggered as follows:

1654        PE2 already has (S,G) state, so it does not trigger a Join(S,G)
1655        based on the reception of this refresh join.

1657        PE3 does not trigger a Join(S,G) based on this join, since this
1658        is a PW-only join.

1660    8.  The multicast stream (S,G) flows into the VPLS from the two
1661        CEs, CE3 and CE4.  PE2 forwards the stream received from CE3 to
1662        PW12 and PW23.  At the same time, PE3 forwards the stream
1663        received from CE4 to PW13 and PW23.

1665        The stream received over PW12 and PW13 is forwarded by PE1 to
1666        AC1 and AC2.

1668        The stream received by PE3 over PW23 is forwarded to AC4.  The
1669        stream received by PE2 over PW23 is forwarded to AC3.  Either of
1670        these enables the CE routers to trigger the assert election.
1672    9.  CE3 and/or CE4 send(s) Assert message(s) to the VPLS.  The PEs
1673        flood the Assert message(s) without examining them.

1675    10. CE3 becomes the (S,G) assert winner, and CE4 stops sending the
1676        multicast stream to the VPLS.

1678    11. CE2 notices an RPF change due to the Assert and sends a
1679        Prune(S,G,rpt) with Upstream Neighbor = CE4.

1681    12. PE1 consumes the Prune(S,G,rpt) and, since
1682        PruneDesired(S,G,rpt,CE4) is TRUE, it triggers a Prune(S,G,rpt)
1683        to CE4.  Since the prune is targeting a neighbor across a PW, it
1684        is sent on all PWs.

1686        PE2 consumes the Prune(S,G,rpt) and does not trigger any prune
1687        based on this Prune(S,G,rpt), since this was a PW-only prune.

1689        PE3 consumes the Prune(S,G,rpt) and, since
1690        PruneDesired(S,G,rpt,CE4) is TRUE, it sends the Prune(S,G,rpt)
1691        on AC4.

1693    PE1 states:
1694        JT(AC2,*,G,CE4)        = active
1695        JPST(*,G,CE4)          = active
1696        UpstreamNeighbors(*,G) = { CE4 }
1697        UpstreamPorts(*,G)     = { PW13 }
1698        OutgoingPortList(*,G)  = { AC2, PW13 }

1700        JT(AC2,S,G,CE4)        = JP_HoldTime with (S,G,rpt) prune flag
1701        JPST(S,G,CE4)          = none, since the Prune(S,G,rpt) is sent
1702                                 along with the Join(*,G) to CE4 based
1703                                 on JPST(*,G,CE4) expiry
1704        UpstreamPorts(S,G,rpt) = { PW13 }
1705        UpstreamNeighbors(S,G,rpt) = { CE4 }

1707        JT(AC1,S,G,CE3)        = active
1708        JPST(S,G,CE3)          = active
1709        UpstreamNeighbors(S,G) = { CE3 }
1710        UpstreamPorts(S,G)     = { PW12 }
1711        OutgoingPortList(S,G)  = { AC1, PW12, AC2 }

1713    PE2 states:
1714        JT(PW12,*,G,CE4)       = active
1715        UpstreamNeighbors(*,G) = { CE4 }
1716        UpstreamPorts(*,G)     = { PW23 }
1717        OutgoingPortList(*,G)  = { PW23 }

1719        JT(PW12,S,G,CE4)       = JP_HoldTime with (S,G,rpt) prune flag
1720        JPST(S,G,CE4)          = none, since this state was created
1721                                 off a PW-only prune
1722        UpstreamPorts(S,G,rpt) = { PW23 }
1723        UpstreamNeighbors(S,G,rpt) = { CE4 }

1725        JT(PW12,S,G,CE3)       = active
1726        JPST(S,G,CE3)          = active
1727        UpstreamNeighbors(S,G) = { CE3 }
1728        UpstreamPorts(S,G)     = { AC3 }
1729        OutgoingPortList(S,G)  = { PW12, AC3 }

1731    PE3 states:
1732        JT(PW13,*,G,CE4)       = active
1733        JPST(*,G,CE4)          = active
1734        UpstreamNeighbors(*,G) = { CE4 }
1735        UpstreamPorts(*,G)     = { AC4 }
1736        OutgoingPortList(*,G)  = { PW13, AC4 }

1738        JT(PW13,S,G,CE4)       = JP_HoldTime with (S,G,rpt) prune flag
1740        JPST(S,G,CE4)          = none, since the Prune(S,G,rpt) is sent
1741                                 along with the Join(*,G) to CE4 based
1742                                 on JPST(*,G,CE4) expiry
1743        UpstreamNeighbors(S,G,rpt) = { CE4 }
1744        UpstreamPorts(S,G,rpt) = { AC4 }

1746        JT(PW13,S,G,CE3)       = active
1747        JPST(S,G,CE3)          = none, since this state was created by
1748                                 a PW-only join
1749        UpstreamNeighbors(S,G) = { CE3 }
1750        UpstreamPorts(S,G)     = { PW23 }
1751        OutgoingPortList(S,G)  = { PW23 }

1753    Even in this example, at the end of the (S,G) / (*,G) assert
1754    election, there should be no duplicate traffic forwarded downstream,
1755    and traffic should flow only to the desired CEs.

1757    Note, however, that the reason there is no duplicate traffic is that
1758    one of the CEs stops sending traffic as a result of the assert
1759    election, not that the PEs lack forwarding state that could cause
        such duplication.

1761 Authors' Addresses

1763    Olivier Dornon
1764    Alcatel-Lucent
1765    50 Copernicuslaan
1766    Antwerp, B2018

1768    Email: olivier.dornon@alcatel-lucent.com

1770    Jayant Kotalwar
1771    Alcatel-Lucent
1772    701 East Middlefield Rd.
1773    Mountain View, CA 94043

1775    Email: jayant.kotalwar@alcatel-lucent.com

1777    Venu Hemige

1779    Email: vhemige@gmail.com

1781    Ray Qiu
1782    Juniper Networks, Inc.
1783    1194 North Mathilda Avenue
1784    Sunnyvale, CA 94089

1786    Email: rqiu@juniper.net

1787    Jeffrey Zhang
1788    Juniper Networks, Inc.
1789    10 Technology Park Drive
1790    Westford, MA 01886

1792    Email: zzhang@juniper.net