BESS                                                          Z. Zhang
Internet-Draft                                         Juniper Networks
Intended status: Standards Track                             R. Raszuk
Expires: January 10, 2022                      NTT Network Innovations
                                                             D. Pacella
                                                                Verizon
                                                               A. Gulko
                                        Edward Jones Wealth Management
                                                           July 9, 2021

               Controller Based BGP Multicast Signaling
             draft-ietf-bess-bgp-multicast-controller-07

Abstract

This document specifies a way that one or more centralized
controllers can use BGP to set up multicast distribution trees
(identified by either IP source/destination address pair, mLDP FEC,
or SR-P2MP Tree-ID) in a network.  Since the controllers calculate
the trees, they can use sophisticated algorithms and constraints to
achieve traffic engineering.
The controllers directly signal dynamic replication state to tree
nodes, leading to a very simple multicast control plane on the tree
nodes, as if they were using static routes.  This can be used for
both underlay and overlay multicast trees, including replacing
BGP-MVPN signaling.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF).  Note that other groups may also distribute
working documents as Internet-Drafts.  The list of current Internet-
Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 10, 2022.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.

Table of Contents

1. Overview
   1.1. Introduction
   1.2. Resilience
   1.3. Signaling
   1.4. Label Allocation
        1.4.1. Using a Common per-tree Label for All Routers
        1.4.2. Upstream-assignment from Controller's Local Label Space
   1.5. Determining Root/Leaves
        1.5.1. PIM-SSM/Bidir or mLDP
        1.5.2. PIM ASM
   1.6. Multiple Domains
   1.7. SR-P2MP
2. Alternative to BGP-MVPN
3. Specification
   3.1. Enhancements to TEA
        3.1.1. Any-Encapsulation Tunnel
        3.1.2. Load-balancing Tunnel
        3.1.3. Receiving MPLS Label Stack
        3.1.4. RPF Sub-TLV
        3.1.5. Tree Label Stack sub-TLV
        3.1.6. Backup Tunnel sub-TLV
   3.2. Context Label TLV in BGP-LS Node Attribute
   3.3. Replication State Route Type
   3.4. SR P2MP Signaling
        3.4.1. Replication State Route for SR P2MP
        3.4.2. BGP Community Container for SR P2MP Policy
        3.4.3. Tunnel Encapsulation Attribute for SR-P2MP
               3.4.3.1. TEA with Tunnel TLVs Being Replication Branches
               3.4.3.2. TEA with a Single SR-P2MP Policy Tunnel
   3.5. Replication State Route with Label Stack for Tree
        Identification
4. Procedures
5. Security Considerations
6. IANA Considerations
7. Acknowledgements
8. References
   8.1. Normative References
   8.2. Informative References
Authors' Addresses

1. Overview

1.1. Introduction

[I-D.ietf-bess-bgp-multicast] describes a way to use BGP as a
replacement signaling for PIM [RFC7761] or mLDP [RFC6388].  The BGP-
based multicast signaling described there provides a mechanism for
setting up both (s,g)/(*,g) multicast trees (as PIM does, but
optionally with labels) and labeled (MPLS) multicast tunnels (as mLDP
does).  Each router on a tree performs essentially the same
procedures as it would perform if using PIM or mLDP, but all the
inter-router signaling is done using BGP.

These procedures allow the routers to set up a separate tree for each
individual multicast (x,g) flow, where the 'x' could be either 's' or
'*', but they also allow the routers to set up trees that are used
for more than one flow.  In the latter case, the trees are often
referred to as "multicast tunnels" or "multipoint tunnels", and
specifically in this document they are mLDP tunnels (except that they
are set up with BGP signaling).  While they do not actually have to
be restricted to mLDP tunnels, the mLDP FEC is conveniently borrowed
to identify the tunnels.  In the rest of the document, the terms
"tree" and "tunnel" are used interchangeably.

The trees/tunnels are set up using the "receiver-initiated join"
technique of PIM/mLDP, hop by hop from downstream routers towards the
root.  The BGP messages are either sent hop by hop between downstream
routers and their upstream neighbors, or can be reflected by Route
Reflectors (RRs).

As an alternative to each hop independently determining its upstream
router and signaling upstream towards the root (following the
PIM/mLDP model), the entire tree can be calculated by a centralized
controller, and the signaling can be entirely done from the
controller.  For that, some additional procedures and optimizations
are specified in this document.

[I-D.ietf-bess-bgp-multicast] uses S-PMSI, Leaf, and Source Active
Auto-Discovery (A-D) routes because the main procedures and concepts
are borrowed from BGP-MVPN [RFC6514].
While the same Leaf A-D routes could be used to signal replication
state to tree nodes from controllers, this document introduces a new
route type, "Replication State", for the same functionality, so that
familiarity with the BGP-MVPN concepts is not required.

While it is outside the scope of this document, signaling from the
controllers could be done via other means as well, like Netconf or
any other SDN methods.

1.2. Resilience

Each router could establish direct BGP sessions with one or more
controllers, or it could establish BGP sessions with RRs that in turn
peer with controllers.  For the same tree/tunnel, each controller may
independently calculate the tree/tunnel and signal the routers on the
tree/tunnel using MCAST-TREE Replication State routes.  How the
calculation is done is outside the scope of this document.

On each router, BGP route selection rules will lead to one
controller's route for the tree/tunnel being selected as the active
route and used for setting up forwarding state.  As long as all the
routers on a tree/tunnel consistently pick the same controller's
routes for the tree/tunnel, the setup should be consistent.  If the
tree/tunnel is labeled, different labels will be used by different
controllers, so there is no traffic loop issue even if the routers do
not consistently select the same controller's routes.  In the
unlabeled case, to ensure consistency, the selection SHOULD be based
solely on the identifier of the controller.

Another consistency issue arises when a bidirectional tree/tunnel
needs to be re-routed.  Because the re-routing is no longer triggered
hop by hop from downstream to upstream, it is possible that the
upstream change happens before the downstream change, causing a
traffic loop.  In the unlabeled case, there is no good solution
(other than having the controller issue the upstream change only
after it gets an acknowledgement from the downstream).  In the
labeled case, as long as a new label is used, there should be no
problem.

Besides the traffic loop issue, there could be transient traffic loss
before both the upstream's and the downstream's forwarding state is
updated.  This could be mitigated if the upstream keeps sending
traffic on the old path (in addition to the new path) and the
downstream keeps accepting traffic on the old path (but not on the
new path) for some time.  It is a local matter when the downstream
switches to the new path - it could be data driven (e.g., after
traffic arrives on the new path) or timer driven.

For each tree, multiple disjoint instances could be calculated and
signaled for live-live protection.  Different labels are used for
different instances, so that the leaves can differentiate incoming
traffic on different instances.  As far as transit routers are
concerned, the instances are just independent.  Note that two
instances are not expected to share common transit routers (the
sharing case is otherwise outside the scope of this
document/revision).

1.3. Signaling

When a router receives a Replication State route, the
re-advertisement is blocked if a configured import RT matches the RT
of the route, which indicates that this router is the target and
consumer of the route, and hence the route should not be
re-advertised further.  The route includes the forwarding information
in the form of a Tunnel Encapsulation Attribute (TEA) [RFC9012], with
enhancements specified in this document.

Suppose that for a particular tree, there are two downstream routers
D1 and D2 for a particular upstream router U.  A controller C sends
one Replication State route to U, with the Tree Node's IP Address
field (see Section 3.3) set to U's IP address and the TEA specifying
both downstream routers and U's upstream (see Section 3.1.4).  In
this case, the Originating Router's Address field of the Replication
State route is set to the controller's address.  Note that for a TEA
attached to a unicast NLRI, only one of the tunnels in the TEA is
used for forwarding a particular packet, while all the tunnels in the
TEA are used to reach multiple endpoints when it is attached to a
multicast NLRI.
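
As an illustration of the example above, the following Python sketch
shows an abstract view of the Replication State route that controller
C sends to U.  This is not a wire-format encoding (Section 3 defines
the actual NLRI and TEA encodings), and all class, field, and value
names are purely illustrative:

   # Illustrative sketch only: an abstract view of the Replication
   # State route that controller C sends to upstream router U for one
   # tree, with downstream routers D1 and D2.  Names and notation are
   # hypothetical; Section 3 defines the actual encodings.
   from dataclasses import dataclass, field
   from typing import List, Optional

   @dataclass
   class Tunnel:                  # one tunnel TLV in the TEA (a branch)
       endpoint: str              # Tunnel Endpoint sub-TLV
       rpf: bool = False          # True if the RPF sub-TLV is present
       label_stack: Optional[List[int]] = None  # MPLS Label Stack sub-TLV

   @dataclass
   class ReplicationStateRoute:
       tree_id: str               # (x,g), mLDP FEC, or SR-P2MP Tree-ID
       tree_node: str             # Tree Node's IP Address
       originator: str            # Originating Router's IP Address
       route_target: str          # RT that makes the tree node import it
       tea: List[Tunnel] = field(default_factory=list)

   route_to_u = ReplicationStateRoute(
       tree_id="(198.51.100.1, 232.1.1.1)",   # example (s,g)
       tree_node="U",             # U's IP address in a real route
       originator="C",            # the controller's address
       route_target="RT:U",       # illustrative notation only
       tea=[
           Tunnel(endpoint="D1"),            # replicate towards D1
           Tunnel(endpoint="D2"),            # replicate towards D2
           Tunnel(endpoint="UP", rpf=True),  # identifies U's upstream
       ],
   )

The acknowledgement route from U back towards the controller
(described below) carries the same information, with the Originating
Router's IP Address set to U and the Route Target matching the
controller.
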
Notice that, in the case of labeled trees, the (x,g), mLDP FEC, or
SR-P2MP tree identification (Section 1.7) signaling is actually not
needed by transit routers but only by the tunnel root/leaves.
However, for consistency among the root/leaf/transit nodes, and for
consistency with the hop-by-hop signaling, the same signaling (with
tree identification encoded in the NLRI) is used for all routers.

Nonetheless, a new NLRI route type is defined to encode a label/SID
instead of tree identification in the NLRI, for scenarios where there
is really no need to signal tree identification, e.g., as described
in Section 2.  On a tunnel root, the tree's binding SID can be
encoded in the NLRI.

For a tree node to acknowledge to the controller that it has received
the signaling and installed the corresponding forwarding state, it
advertises a corresponding Replication State route, with the
Originating Router's IP Address set to itself and with a Route Target
matching the controller.  For comparison, the tree signaling
Replication State route from the controller has the Originating
Router's IP Address set to the controller and the Route Target
matching the tree node.  The two Replication State routes (for the
controller to signal to a tree node and for a tree node to
acknowledge back) differ only in those two aspects.

With the acknowledgement Replication State routes, the controller
knows if tree setup is complete.  The information can be used for
many purposes, e.g., the controller may instruct the ingress to start
forwarding traffic onto a tree only after it knows that the tree
setup has completed.

1.4. Label Allocation

In the case of labeled multicast signaled hop by hop towards the
root, whether it is an (x,g) multicast tree or an "mLDP" tunnel,
labels are assigned by a downstream router and advertised to its
upstream router (from the traffic direction point of view).  In the
case of controller-based signaling, routers do not originate tree
join routes anymore, so the controllers have to assign labels on
behalf of routers, and there are three options for label assignment:

o  From each router's SRLB that the controller learns

o  From the common SRGB that the controller learns

o  From the controller's local label space

Assignment from each router's SRLB is no different from each router
assigning labels from its own local label space in the hop-by-hop
signaling case.  The assignments for one router are independent of
the assignments for another router, even for the same tree.

Assignment from the controller's local label space is
upstream-assigned [RFC5331].  It is used if the controller does not
learn the common SRGB or each router's SRLB.  Assignment from the
SRGB [RFC8402] is only meaningful if all SRGBs are the same and a
single common label is used for all the routers on a tree, in the
case of a unidirectional tree/tunnel (Section 1.4.1).  Otherwise,
assignment from the SRLB is preferred.

The choice of which of the options to use depends on many factors.
An operator may want to use a single common label per tree for ease
of monitoring and debugging, but that requires explicit RPF checking
and either a common SRGB or upstream-assigned labels, which may not
be supported due to software or hardware limitations (e.g., label
imposition/disposition limits).  In an SR network, assignment from
the common SRGB is needed if a single common label per unidirectional
tree is required; otherwise, assignment from the SRLB is a good
choice because it does not require support for context label spaces.

1.4.1. Using a Common per-tree Label for All Routers

MPLS labels only have local significance.  For an LSP that goes
through a series of routers, each router allocates a label
independently and swaps the incoming label (that it advertised to its
upstream) for the outgoing label (that it received from its
downstream) when it forwards a labeled packet.  Even if the incoming
and outgoing labels happen to be the same on a particular router,
that is just incidental.

With Segment Routing, it is becoming a common practice that all
routers use the same SRGB so that a SID maps to the same label on all
routers.  This makes it easier for operators to monitor and debug
their network.  The same concept applies to multicast trees as well -
a common per-tree label can be used for a router to receive traffic
from its upstream neighbor and replicate traffic to all its
downstream neighbors.

However, a common per-tree label can only be used for unidirectional
trees.  Additionally, unless the entire tree is updated for every
tree node to use a new common per-tree label with any change in the
tree (no matter how small and local the change is), it requires each
router to do an explicit RPF check, so that only packets from its
expected upstream neighbor are accepted.  Otherwise, traffic loops
may form during topology changes, because the forwarding state
updates are no longer ordered.

Traditionally, P2MP MPLS forwarding does not require an explicit RPF
check, as a downstream router advertises a label only to its upstream
router, and all traffic with that incoming label is presumed to be
from the upstream router and accepted.  When a downstream router
switches to a different upstream router, a different label will be
advertised, so it can determine whether traffic is from its expected
upstream neighbor purely based on the label.  Now, with a single
common label used by all routers on a tree to send and receive
traffic, a router can no longer determine whether the traffic is from
its expected neighbor just based on that common tree label.
Therefore, an explicit RPF check is needed.
Instead of interface-based RPF checking as in the PIM case,
neighbor-based RPF checking is used: a label identifying the upstream
neighbor precedes the common tree label, and the receiving router
checks whether that preceding neighbor label matches its expected
upstream neighbor.  Notice that this is similar to what is described
in Section 9.1.1 ("Discarding Packets from Wrong PE") of [RFC6513]
(an egress PE discards traffic sent from a wrong ingress PE).  The
only difference is that one is used for label-based forwarding and
the other for (s,g)-based forwarding.  [Note: for bidirectional
trees, we may be able to use two labels per tree - one for upstream
traffic and one for downstream traffic.  This needs further
verification.]

Both the common per-tree label and the neighbor label are allocated
either from the common SRGB or from the controller's local label
space.  In the latter case, an additional label identifying the
controller's label space is needed, as described in the following
section.

1.4.2. Upstream-assignment from Controller's Local Label Space

In this case, in the multicast packet's label stack, the tree label
and the upstream neighbor label (if used, in the case of a single
common label per tree) are preceded by a downstream-assigned "context
label".  The context label identifies a context-specific label space
(the controller's local label space), and the upstream-assigned label
that follows it is looked up in that space.

This specification requires that, in the case of upstream assignment
from a controller's local label space, each router D assign, for each
controller C, a context label that identifies the upstream-assigned
label space used by that controller.  This label, call it Lc-D, is
communicated by D to C via BGP-LS [RFC7752].

Suppose a controller C is setting up unidirectional tree T.  It
assigns that tree the label Lt, and assigns label Lu to identify
router U, which is the upstream of router D on tree T.  C needs to
tell U: "to send a packet on the given tree/tunnel, one of the things
you have to do is push Lt onto the packet's label stack, then push
Lu, then push Lc-D onto the packet's label stack, then unicast the
packet to D".  Controller C also needs to inform router D of the
correspondence between Lt and tree T.

To achieve that, when C sends a Replication State route, for each
tunnel in the TEA, it may include a Label Stack sub-TLV [RFC9012],
with the outermost label being the context label Lc-D (learned by the
controller from the corresponding downstream), the next label being
the upstream neighbor label Lu, and the innermost label being the
label Lt assigned by the controller for the tree.  The router
receiving the route will use the label stacks to send traffic to its
downstreams.

For C to signal the expected label stack for D to receive traffic
with, we overload a tunnel TLV in the TEA of the Replication State
route sent to D: if the tunnel TLV has an RPF sub-TLV
(Section 3.1.4), it indicates that this "tunnel" is actually for
receiving traffic from the upstream.
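
The following Python fragment sketches the label stacks involved in
this example.  It is a minimal sketch, assuming upstream assignment
from the controller's label space and a single common per-tree label;
the function names and example label values are hypothetical:

   # Illustrative sketch of Section 1.4.2 (not a normative algorithm).
   # C has learned Lc_D, the context label assigned by D for C's label
   # space (via BGP-LS), and has assigned Lu (identifying upstream
   # neighbor U) and Lt (identifying tree T) from its own label space.
   from typing import List

   def sending_stack(Lc_D: int, Lu: int, Lt: int) -> List[int]:
       """Stack that C signals to U for the branch towards D, outermost
       label first: push Lt, then Lu, then Lc_D, then unicast to D."""
       return [Lc_D, Lu, Lt]

   def receiving_stack(Lu: int, Lt: int) -> List[int]:
       """Receiving MPLS Label Stack that C signals to D
       (Section 3.1.3): the innermost label Lt identifies the tree and
       the preceding label Lu identifies the expected upstream
       neighbor, for the explicit RPF check."""
       return [Lu, Lt]

   # Hypothetical values: Lc_D = 100, Lu = 201, Lt = 300.
   # U pushes [100, 201, 300] and unicasts the packet to D; D uses 100
   # to select C's label space, checks 201 against its expected
   # upstream neighbor, and uses 300 to identify tree T.
   assert sending_stack(100, 201, 300) == [100, 201, 300]
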
1.5. Determining Root/Leaves

For the controller to calculate a tree, it needs to determine the
root and leaves of the tree.  This may be based on provisioning
(static or dynamically programmed), or based on BGP signaling as
described in the following two sections.

In both of the following cases, the BGP updates are targeted at the
controller, via an address-specific Route Target with the Global
Administrator field set to the controller's address and the Local
Administrator field set to 0.

1.5.1. PIM-SSM/Bidir or mLDP

In this case, the PIM Last Hop Routers (LHRs) with interested
receivers or the mLDP tunnel leaves encode a Leaf A-D route
([I-D.ietf-bess-bgp-multicast]) with the Upstream Router's IP Address
field set to the controller's address and the Originating Router's IP
Address set to the address of the LHR or the P2MP tunnel leaf.  The
encoded PIM SSM source or mLDP FEC provides root information, and the
Originating Router's IP Address provides leaf information.

1.5.2. PIM ASM

In this case, the First Hop Routers (FHRs) originate Source Active
routes, which provide root information, and the LHRs originate Leaf
A-D routes, encoded as in the PIM-SSM case except that (*,G) is used
instead of (S,G).  The Leaf A-D routes provide leaf information.

1.6. Multiple Domains

An end-to-end multicast tree may span multiple routing domains, and
the setup of the tree in each domain may be done differently, as
specified in [I-D.ietf-bess-bgp-multicast].  This section discusses a
few aspects specific to controller signaling.

Consider two adjacent domains, each with its own controller, in the
following configuration, where router B is an upstream node of C for
a multicast tree:

                    |
          domain 1  |  domain 2
                    |
           ctrlr1   |   ctrlr2
             /\     |     /\
            /  \    |    /  \
           /    \   |   /    \
          A--...-B--|--C--...-D
                    |

In the case of native (unlabeled) IP multicast, nothing special is
needed.  Controller 1 signals B to send traffic out of the B-C link,
while Controller 2 signals C to accept traffic on the B-C link.

In the case of labeled IP multicast or an mLDP tunnel, the
controllers may be able to coordinate their actions such that
Controller 1 signals B to send traffic out of the B-C link with label
X while Controller 2 signals C to accept traffic with the same label
X on the B-C link.  If the coordination is not possible, then C needs
to use hop-by-hop BGP signaling to signal towards B, as specified in
[I-D.ietf-bess-bgp-multicast].

The configuration could also be as follows, where router B borders
both domain 1 and domain 2 and is controlled by both controllers:

                    |
          domain 1  |  domain 2
                    |
           ctrlr1   |   ctrlr2
             /\     |     /\
            /  \    |    /  \
           /    \   |   /    \
          /      \  |  /      \
         A--...----B----...---C
                    |

As discussed in Section 1.2, when B receives signaling from both
Controller 1 and Controller 2, only one of the routes would be
selected as the best route and used for programming the forwarding
state of the corresponding segment.  For B to stitch the two segments
together, it is expected that B knows, by provisioning, that it is a
border router, so that B will look for the other segment (represented
by the signaling from the other controller) and stitch the two
together.

1.7. SR-P2MP

[I-D.ietf-pim-sr-p2mp-policy] describes an architecture to construct
a Point-to-Multipoint (P2MP) tree to deliver Multi-point services in
a Segment Routing domain.  An SR P2MP tree is constructed by
stitching together a set of Replication Segments that are specified
in [I-D.ietf-spring-sr-replication-segment].
An SR Point-to-Multipoint (SR P2MP) Policy is used to define and
instantiate a P2MP tree, which is computed by a controller.

An SR P2MP tree is no different from an mLDP tunnel in the MPLS
forwarding plane.  The difference is in the control plane: instead of
hop-by-hop mLDP signaling from the leaves towards the root,
controllers set up SR P2MP trees by programming forwarding state
(referred to as Replication Segments) on the root, leaves, and
intermediate replication points using Netconf, PCEP, BGP, or any
other reasonable signaling/programming methods.

Procedures in this document can be used for controllers to set up SR
P2MP trees with just an additional SR P2MP tree type and
corresponding tree identification in the Replication State route.

If/once the SR Replication Segment is extended to be bidirectional
and SR MP2MP is introduced, the same procedures in this document
would apply to SR MP2MP as well.

2. Alternative to BGP-MVPN

Multicast with BGP signaling from controllers can be an alternative
to BGP-MVPN [RFC6514].  It is an attractive option especially when
the controller can easily determine the source and leaf information.

With BGP-MVPN, distributed signaling is used for the following:

o  Egress PEs advertise C-multicast (Type-6/7) Auto-Discovery (A-D)
   routes to join C-multicast trees at the overlay (PE-PE).

o  In the case of ASM, ingress PEs advertise Source Active (Type-5)
   A-D routes to signal sources so that egress PEs can establish
   Shortest Path Trees (SPTs).

o  PEs advertise I/S-PMSI (Type-1/2/3) A-D routes to signal the
   binding of overlay/customer traffic to underlay/provider tunnels.
   For some types of tunnels, Leaf (Type-4) A-D routes are advertised
   by egress PEs in response to I/S-PMSI A-D routes to join the
   tunnels.

Based on the above signaled information, an ingress PE builds
forwarding state to forward traffic arriving on the PE-CE interface
to the provider tunnel (and local interfaces if there are local
downstream receivers), and an egress PE builds forwarding state to
forward traffic arriving on a provider tunnel to local interfaces
with downstream receivers.

Notice that multicast with BGP signaling from controllers essentially
programs "static" forwarding state onto multicast tree nodes.  As
long as a controller can determine how a C-multicast flow should be
forwarded on ingress/egress PEs, it can signal to the ingress/egress
PEs using the procedures in this document to set up forwarding state,
removing the need for the above-mentioned distributed signaling and
processing.

For the controller to learn the egress PEs for a C-multicast tree (so
that it can set up or find a corresponding provider tunnel), the
egress PEs advertise MCAST-TREE Leaf A-D routes (Section 1.5.1)
towards the controller to signal their desire to join C-multicast
trees, each with an appropriate RD and an extended community derived
from the Route Target for the VPN
([I-D.zzhang-idr-rt-derived-community]) so that the controller knows
which VPN the route is for.  The controller then advertises
corresponding MCAST-TREE Replication State routes to set up
C-multicast forwarding state on ingress and egress PEs.
To encode the provider tunnel information in the MCAST-TREE
Replication State route for an ingress PE, the TEA can explicitly
list all replication branches of the tunnel, or just the binding SID
for the provider tunnel (in the form of a Segment List tunnel type),
if the tunnel has a binding SID.

The Replication State route may also have a PMSI Tunnel Attribute
(PTA) attached to specify the provider tunnel, while the TEA
specifies the local PE-CE interfaces where traffic needs to be sent
out.  This not only allows a provider tunnel without a binding SID
(e.g., in a non-SR network) to be specified without explicitly
listing its replication branches, but also allows the service
controller for MVPN overlay state to be independent of the provider
tunnel setup (which could be done by a different transport controller
or even without a controller).

However, notice that if the service controller and transport
controller are different, then the service controller needs to signal
the transport controller the tree information: identification, set of
leaves, and applicable constraints.  While this can be achieved (see
Section 1.5.1), it is easier for the service and transport
controllers to be the same.

Depending on local policy, a PE may add PE-CE interfaces to its
replication state based on local signaling (e.g., IGMP/PIM) instead
of completely relying on signaling from controllers.

If dynamic switching between inclusive and selective tunnels based on
data rate is needed, the ingress PE can advertise/withdraw S-PMSI
routes targeted only at the controllers, without a PMSI Tunnel
Attribute attached.  The controller then updates the relevant
MCAST-TREE Replication State routes to update C-multicast forwarding
state on PEs to switch to a new tunnel.

3. Specification

3.1. Enhancements to TEA

This document specifies two new Tunnel Types and four new sub-TLVs.
The type codes will be assigned by IANA from the "BGP Tunnel
Encapsulation Attribute Tunnel Types" registry (see Section 6).

3.1.1. Any-Encapsulation Tunnel

When a multicast packet needs to be sent from an upstream node to a
downstream node, it may not matter how it is sent - natively when the
two nodes are directly connected, or tunneled otherwise.  In the case
of tunneling, it may not matter what kind of tunnel is used - MPLS,
GRE, IP-in-IP, or any other.

To support this, an "Any-Encapsulation" tunnel type of value 20 is
defined.  This tunnel MAY have a Tunnel Endpoint and other Sub-TLVs.
The Tunnel Endpoint Sub-TLV specifies an IP address, which could be
any of the following:

o  An interface's local address - when a packet needs to be sent out
   of the corresponding interface natively.  On a LAN, a multicast
   MAC address MUST be used.

o  A directly connected neighbor's interface address - when a packet
   needs to be unicast to the address natively.

o  An address that is not directly connected - when a packet needs to
   be tunneled to the address (any tunnel type/instance can be used).

3.1.2. Load-balancing Tunnel

Consider that a multicast packet needs to be sent to a downstream
node, which could be reached via four paths P1~P4.  If it does not
matter which path is taken, an "Any-Encapsulation" tunnel with the
Tunnel Endpoint Sub-TLV specifying the downstream node's loopback
address works well.  If the controller wants to specify that only P1
and P2 should be used, then a "Load-balancing" tunnel needs to be
used, listing P1 and P2 as member tunnels of the "Load-balancing"
tunnel.

A Load-balancing tunnel has one "Member Tunnels" Sub-TLV, defined in
this document.  The Sub-TLV is a list of tunnels, each specifying a
way to reach the downstream node.  A packet will be sent out of one
of the tunnels listed in the Member Tunnels Sub-TLV of the
Load-balancing tunnel.
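
As a rough illustration of the two tunnel types just described (an
abstract view only, not the TLV encodings of [RFC9012] and this
document; the class names and addresses are hypothetical):

   # Illustrative sketch: a Load-balancing tunnel whose Member Tunnels
   # sub-TLV lists two Any-Encapsulation member tunnels, corresponding
   # to paths P1 and P2 towards the same downstream node.
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class AnyEncapTunnel:
       endpoint: str                       # Tunnel Endpoint sub-TLV

   @dataclass
   class LoadBalancingTunnel:
       members: List[AnyEncapTunnel] = field(default_factory=list)

   # Only P1 and P2 (out of P1~P4) may be used to reach the downstream:
   branch = LoadBalancingTunnel(members=[
       AnyEncapTunnel(endpoint="192.0.2.1"),   # next hop on path P1
       AnyEncapTunnel(endpoint="192.0.2.2"),   # next hop on path P2
   ])
   # A given packet is sent out of exactly one of the member tunnels.
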
3.1.3. Receiving MPLS Label Stack

While [I-D.ietf-bess-bgp-multicast] uses S-PMSI A-D routes to signal
forwarding information for MP2MP upstream traffic, when controller
signaling is used, a single Replication State route is used for both
upstream and downstream traffic.  Since different upstream and
downstream labels need to be used, a new "Receiving MPLS Label Stack"
sub-TLV of type TBD is added as a tunnel sub-TLV, in addition to the
existing MPLS Label Stack sub-TLV.  Other than the type difference,
the two are encoded the same way.

The Receiving MPLS Label Stack sub-TLV is added to each downstream
tunnel in the TEA of the Replication State route for an MP2MP tunnel
to specify the forwarding information for upstream traffic from the
corresponding downstream node.  A label stack instead of a single
label is used because of the need for a neighbor-based RPF check, as
further explained in the following section.

The Receiving MPLS Label Stack sub-TLV is also used for downstream
traffic from the upstream, for both P2MP and MP2MP, as specified
below.

3.1.4. RPF Sub-TLV

The RPF sub-TLV is of type 124, allocated by IANA, and has a
one-octet length field.  The length is currently 0, but if necessary
in the future, sub-sub-TLVs could be placed in its value part.  If
the RPF sub-TLV appears in a tunnel, it indicates that the "tunnel"
is for the upstream node instead of a downstream node.

In the case of MPLS, the tunnel contains a Receiving MPLS Label Stack
sub-TLV for downstream traffic from the upstream node, and in the
case of MP2MP it also contains a regular MPLS Label Stack sub-TLV for
upstream traffic to the upstream node.

The innermost label in the Receiving MPLS Label Stack is the incoming
label identifying the tree (for comparison, the innermost label in a
regular MPLS Label Stack is the outgoing label).  If the Receiving
MPLS Label Stack sub-TLV has more than one label, the second
innermost label in the stack identifies the expected upstream
neighbor, and explicit RPF checking needs to be set up for the tree
label accordingly.
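
A minimal sketch of how a receiving router might derive its state
from a tunnel TLV carrying the RPF sub-TLV follows.  It assumes that
the label stacks are ordered outermost label first; the function name
and dictionary keys are illustrative, not an implementation:

   # Illustrative sketch of Section 3.1.4: derive receiving/RPF state
   # from a tunnel TLV that carries the RPF sub-TLV.
   from typing import List, Optional

   def program_upstream_state(receiving_stack: List[int],
                              sending_stack: Optional[List[int]] = None):
       """receiving_stack: value of the Receiving MPLS Label Stack
       sub-TLV; sending_stack (MP2MP only): the regular MPLS Label
       Stack sub-TLV for upstream traffic to the upstream node."""
       tree_label = receiving_stack[-1]           # innermost: tree label
       neighbor_label = (receiving_stack[-2]      # second innermost,
                         if len(receiving_stack) > 1 else None)
       state = {
           "incoming_tree_label": tree_label,
           # Explicit RPF check: accept only when the preceding label
           # identifies the expected upstream neighbor (None: no check).
           "expected_upstream_neighbor_label": neighbor_label,
       }
       if sending_stack is not None:              # MP2MP upstream traffic
           state["upstream_sending_stack"] = sending_stack
       return state

   # Example: tree label 300 accepted only when preceded by neighbor
   # label 201.
   print(program_upstream_state([201, 300]))
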
3.1.5. Tree Label Stack sub-TLV

The MPLS Label Stack sub-TLV can be used to specify the complete
label stack used to send traffic, with the stack including both a
transport label (stack) and label(s) that identify the (tree,
neighbor) to the downstream node.  There are cases where the
controller only wants to specify the tree-identifying labels but
leave the transport details to the router itself.  For example, the
router could locally determine a transport label (stack) and combine
it with the tree-identifying labels signaled from the controller to
get the complete outgoing label stack.

For that purpose, a new Tree Label Stack sub-TLV of type 125 is
defined, with a one-octet length field.  The value field contains a
label stack with the same encoding as the value part of the MPLS
Label Stack sub-TLV, but with a different type.  A stack is specified
because it may take up to three labels (see Section 1.4):

o  If different nodes use different labels (allocated from the common
   SRGB or the node's SRLB) for a (tree, neighbor) tuple, only a
   single label is in the stack.  This is similar to the current mLDP
   hop-by-hop signaling case.

o  If different nodes use the same tree label, then an additional
   neighbor-identifying label is needed in front of the tree label.

o  For the previous bullet, if the neighbor-identifying label is
   allocated from the controller's local label space, then an
   additional context label is needed in front of the neighbor label.

3.1.6. Backup Tunnel sub-TLV

The Backup Tunnel sub-TLV is used to specify the backup paths for the
tunnel.  Its length field is two octets.  The value part encodes a
one-octet flags field and a variable-length Tunnel Encapsulation
Attribute.  If the tunnel goes down, traffic that is normally sent
out of the tunnel is fast-rerouted to the tunnels listed in the
encoded TEA.

   +--------------------------------+
   |  Sub-TLV Type (1 Octet, TBD)   |
   +--------------------------------+
   |   Sub-TLV Length (2 Octets)    |
   +--------------------------------+
   | P |   rest of 1 Octet Flags    |
   +--------------------------------+
   |  Backup TEA (variable length)  |
   +--------------------------------+

The backup tunnels may lead to the same node that the original tunnel
reaches or to different nodes.

If the tunnel carries an RPF sub-TLV and a Backup Tunnel sub-TLV,
then traffic arriving both on the original tunnel and on the tunnels
encoded in the Backup Tunnel sub-TLV's TEA can be accepted, if the
Parallel (P) bit in the flags field is set.  If the P bit is not set,
then traffic arriving on the backup tunnels is accepted only if the
router has switched to receiving on the backup tunnels (this is the
equivalent of PIM/mLDP MoFRR).

3.2. Context Label TLV in BGP-LS Node Attribute

For a router to signal the context label that it assigns for a
controller (or any label allocator that assigns labels - from its
local label space - that will be received by this router), a new
BGP-LS Node Attribute TLV is defined:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |              Type             |             Length            |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                         Context Label                         |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |             IPv4/v6 Address of Label Space Owner              |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The Length field implies the type of the address.  Multiple Context
Label TLVs may be included in a Node Attribute, one for each label
space owner.

As an example, a controller with address 192.0.2.11 allocates label
200 from its own label space, and router A assigns label 100 to
identify this controller's label space.  Router A includes the
Context Label TLV (100, 192.0.2.11) in its BGP-LS Node Attribute, the
controller instructs router B to send traffic to router A with a
label stack (100, 200), and router A uses label 100 to determine the
label FIB in which to look up label 200.
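
A small encoding sketch for this TLV is shown below.  It assumes the
usual BGP-LS layout of a 2-octet Type and a 2-octet Length, a 4-octet
Context Label field with the 20-bit label value in the low-order
bits, and then the owner address; the Type value is a placeholder for
the code point to be assigned by IANA:

   # Illustrative encoder for the Context Label TLV.  The placement of
   # the 20-bit label within the 4-octet field and the TLV type value
   # are assumptions, not defined by this document.
   import ipaddress
   import struct

   CONTEXT_LABEL_TLV_TYPE = 0xFFFF        # placeholder, TBD by IANA

   def encode_context_label_tlv(label: int, owner: str) -> bytes:
       addr = ipaddress.ip_address(owner).packed   # 4 or 16 octets
       value = struct.pack("!I", label & 0xFFFFF) + addr
       return struct.pack("!HH", CONTEXT_LABEL_TLV_TYPE, len(value)) + value

   # Router A advertises context label 100 for the controller at
   # 192.0.2.11; the Length value (8 here) implies an IPv4 label space
   # owner address.
   tlv = encode_context_label_tlv(100, "192.0.2.11")
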
3.3. Replication State Route Type

The NLRI route type for signaling from controllers to tree nodes is
"Replication State".  The NLRI has the following format:

   +-----------------------------------+
   |  Route Type - Replication State   |
   +-----------------------------------+
   |          Length (1 octet)         |
   +-----------------------------------+
   |        Tree Type (1 octet)        |
   +-----------------------------------+
   |Tree Type Specific Length (1 octet)|
   +-----------------------------------+
   ~  Tree Identification (variable)   ~
   +-----------------------------------+
   |      Tree Node's IP Address       |
   +-----------------------------------+
   |     Originator's IP Address       |
   +-----------------------------------+

            Replication State NLRI

Notice that Replication State is just a new route type with the same
format as the Leaf A-D route, except that some fields are renamed:

o  Tree Type in the Replication State route matches the PMSI route
   type in the Leaf A-D route.

o  Tree Node's IP Address matches the Upstream Router's IP Address of
   the PMSI route key in the Leaf A-D route.

With this arrangement, IP multicast trees and mLDP tunnels can be
signaled via Replication State routes from controllers, or via Leaf
A-D routes either hop by hop or from controllers, with maximum code
reuse, while newer types of trees like SR-P2MP can be signaled via
Replication State routes with maximum code reuse as well.

3.4. SR P2MP Signaling

An SR P2MP policy for an SR P2MP tree is identified by a (Root,
Tree-id) tuple.  It has a set of leaves and a set of Candidate Paths
(CPs).  The policy is instantiated on the root of the tree, with
corresponding Replication Segments - identified by (Root, Tree-id,
Tree-Node-id) - instantiated on the tree nodes (root, leaves, and
intermediate replication points).

3.4.1. Replication State Route for SR P2MP

For SR P2MP, forwarding state on tree nodes is represented as
Replication Segments and is signaled from controllers to tree nodes
via Replication State routes.  A Replication State route for SR P2MP
has a Tree Type of 1, and the Tree Identification includes (Route
Distinguisher, Root ID, Tree ID), where the RD implicitly identifies
the Candidate Path.

   +-----------------------------------+
   |  Route Type - Replication State   |
   +-----------------------------------+
   |          Length (1 octet)         |
   +-----------------------------------+
   |      Tree Type (1 - SR P2MP)      |
   +-----------------------------------+
   |Tree Type Specific Length (1 octet)|
   +-----------------------------------+
   |           RD (8 octets)           |
   +-----------------------------------+
   |     Root ID (4 or 16 octets)      |
   +-----------------------------------+
   |        Tree ID (4 octets)         |
   +-----------------------------------+
   |      Tree Node's IP Address       |
   +-----------------------------------+
   |  Originating Router's IP Address  |
   +-----------------------------------+

   Replication State route for SR Replication Segment
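
The following sketch assembles the body of such an NLRI for an IPv4
root, following the field widths in the figure above.  It is
illustrative only: the route type value, the MP_REACH_NLRI framing,
and the exact coverage of the Length octet are assumptions rather
than normative definitions:

   # Illustrative sketch: Replication State NLRI body for SR P2MP
   # (Tree Type 1).  RD is 8 octets, Root ID is an IPv4 or IPv6
   # address, Tree ID is 4 octets; the Length octet is assumed here to
   # cover the remainder of the NLRI.
   import ipaddress
   import struct

   def sr_p2mp_tree_id(rd: bytes, root: str, tree_id: int) -> bytes:
       assert len(rd) == 8
       root_bytes = ipaddress.ip_address(root).packed  # 4 or 16 octets
       return rd + root_bytes + struct.pack("!I", tree_id)

   def replication_state_nlri(rd: bytes, root: str, tree_id: int,
                              tree_node: str, originator: str) -> bytes:
       tree = sr_p2mp_tree_id(rd, root, tree_id)
       body = (bytes([1]) +                 # Tree Type 1: SR P2MP
               bytes([len(tree)]) +         # Tree Type Specific Length
               tree +
               ipaddress.ip_address(tree_node).packed +
               ipaddress.ip_address(originator).packed)
       return bytes([len(body)]) + body     # Length octet, then the rest

   # Example: RD 0, root 192.0.2.1, Tree ID 10, signaled to tree node
   # 192.0.2.2 by a controller at 192.0.2.11.
   nlri = replication_state_nlri(bytes(8), "192.0.2.1", 10,
                                 "192.0.2.2", "192.0.2.11")
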
3.4.2. BGP Community Container for SR P2MP Policy

The Replication State route for the Replication Segment signaled to
the root is also used to signal (parts of) the SR P2MP Policy - the
policy name, the set of leaves (optional, for informational
purposes), the preference of the CP, and other information are all
encoded in a newly defined BGP Community Container (BCC)
[I-D.ietf-idr-wide-bgp-communities] called the SR P2MP Policy BCC.

The SR P2MP Policy BCC has a BGP Community Container type to be
assigned by IANA.  It is composed of a fixed 4-octet Candidate Path
Preference value, optionally followed by TLVs.

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                   Candidate Path Preference                   |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                                                               |
   |                        TLVs (optional)                        |
   |                                                               |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

          BGP Community Container for SR P2MP Policy

One optional TLV encloses the following optional Atoms TLVs that are
already defined in [I-D.ietf-idr-wide-bgp-communities]:

o  An IPv4 or IPv6 Prefix list - for the set of leaves

o  A UTF-8 string - for the policy name

If more information for the policy is needed, more Atoms TLVs or SR
P2MP Policy BCC-specific TLVs can be defined.

The root receives one Replication State route for each Candidate Path
of the policy.  Only one of the routes needs to include the optional
Atoms TLVs listed above in the SR P2MP Policy BCC, though more than
one MAY include them.

Alternatively, an additional route type can be used to carry the
policy information instead.  Details/decision to be specified in a
future revision.

3.4.3. Tunnel Encapsulation Attribute for SR-P2MP

For SR-P2MP, there are two methods of encoding forwarding information
in the TEA, as described below.

3.4.3.1. TEA with Tunnel TLVs Being Replication Branches

In this method, a TEA whose tunnels are replication branches, as
specified in earlier sections, can be used just as in non-SR-P2MP
cases.

Additionally, a replication branch can also be encoded as a segment
list, with a "Segment List" tunnel type.  The tunnel has a Segment
List sub-TLV as specified in Section 2.4.4 of
[I-D.ietf-idr-segment-routing-te-policy].

For a "Segment List" tunnel, the last segment in the segment list
represents the SID of the tree.  When the tunnel does not carry the
RPF sub-TLV, the preceding segments in the list steer traffic to the
downstream node, and the segment before the last one MAY also be a
binding SID for another P2MP tunnel, meaning that the replication
branch represented by this "Segment List" is actually a P2MP tunnel
to a set of downstream nodes.

3.4.3.2. TEA with a Single SR-P2MP Policy Tunnel

Alternatively, a TEA with a single SR-P2MP Policy tunnel type,
similar to the SR Policy tunnel type, can be used.  The details are
specified in [I-D.hb-idr-sr-p2mp-policy] but may be moved here
depending on WG consensus.

3.5. Replication State Route with Label Stack for Tree Identification

As described in Section 1.3, a tree label instead of a tree
identification could be encoded in the NLRI to identify the tree in
the control plane as well as in the forwarding plane.
For that, a new Tree Type of 2 is used, and the Replication State
route has the following format:

   +-------------------------------------+
   |    Route Type - Replication State   |
   +-------------------------------------+
   |           Length (1 octet)          |
   +-------------------------------------+
   |    Tree Type 2 (Label as Tree ID)   |
   +-------------------------------------+
   | Tree Type Specific Length (1 octet) |
   +-------------------------------------+
   |            RD (8 octets)            |
   +-------------------------------------+
   ~        Label Stack (variable)       ~
   +-------------------------------------+
   |        Tree Node's IP Address       |
   +-------------------------------------+
   |   Originating Router's IP Address   |
   +-------------------------------------+

   Replication State route for tree identification by label stack

As discussed in Section 1.4.2, a label stack may have to be used to
identify a tree in the data plane, so a label stack is encoded here.
The number of labels is derived from the Tree Type Specific Length
field.  Each label stack entry is encoded as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                 Label                 |0 0 0 0 0 0 0 0 0 0 0 0|
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

4. Procedures

Details to be added.  The general idea is described in the
introduction section.

5. Security Considerations

This document does not introduce new security risks.

6. IANA Considerations

IANA has assigned the following code points:

o  "Any-Encapsulation" tunnel type 78 from the "BGP Tunnel
   Encapsulation Attribute Tunnel Types" registry

o  "RPF" sub-TLV type 124 and "Tree Label Stack" sub-TLV type 125
   from the "BGP Tunnel Encapsulation Attribute Sub-TLVs" registry

This document makes the following additional IANA requests:

o  Assign "Segment List" and "Load-balancing" tunnel types from the
   "BGP Tunnel Encapsulation Attribute Tunnel Types" registry.

o  Assign "Member Tunnels" and "Receiving MPLS Label Stack" sub-TLV
   types from the "BGP Tunnel Encapsulation Attribute Sub-TLVs"
   registry.  The "Member Tunnels" sub-TLV has a two-octet value
   length (so the type should be in the 128-255 range), while the
   "Receiving MPLS Label Stack" sub-TLV has a one-octet value length.

o  Assign a "Context Label TLV" type from the "BGP-LS Node
   Descriptor, Link Descriptor, Prefix Descriptor, and Attribute
   TLVs" registry.

o  Assign a "Replication State" route type from the "BGP MCAST-TREE
   Route Types" registry.

o  Create a "Tree Type Registry for Replication State Route", with
   the following initial assignments:

   *  1: SR-P2MP

   *  2: P2MP Tree with Label as Identification

   *  3: IP Multicast

   *  0x43: mLDP

o  Assign a new BGP Community Container type "SR P2MP Policy", and
   create an "SR P2MP Policy Community Container TLV Registry", with
   an initial entry for "TLV for Atoms".

7. Acknowledgements

The authors thank Eric Rosen for his questions, suggestions, and help
finding solutions to some issues, like the neighbor-based explicit
RPF checking.  The authors also thank Lenny Giuliano, Sanoj
Vivekanandan, and IJsbrand Wijnands for their review and comments.

8. References

8.1. Normative References

[I-D.ietf-bess-bgp-multicast]
   Zhang, Z., Giuliano, L., Patel, K., Wijnands, I., Mishra, M., and
   A. Gulko, "BGP Based Multicast", draft-ietf-bess-bgp-multicast-03
   (work in progress), January 2021.

[I-D.ietf-idr-segment-routing-te-policy]
   Previdi, S., Filsfils, C., Talaulikar, K., Mattes, P., Rosen, E.,
   Jain, D., and S. Lin, "Advertising Segment Routing Policies in
   BGP", draft-ietf-idr-segment-routing-te-policy-11 (work in
   progress), November 2020.

[I-D.ietf-idr-wide-bgp-communities]
   Raszuk, R., Haas, J., Lange, A., Decraene, B., Amante, S., and P.
   Jakma, "BGP Community Container Attribute",
   draft-ietf-idr-wide-bgp-communities-05 (work in progress), July
   2018.

[I-D.ietf-pim-sr-p2mp-policy]
   Voyer, D., Filsfils, C., Parekh, R., Bidgoli, H., and Z. Zhang,
   "Segment Routing Point-to-Multipoint Policy",
   draft-ietf-pim-sr-p2mp-policy-02 (work in progress), February
   2021.

[I-D.ietf-spring-sr-replication-segment]
   Voyer, D., Filsfils, C., Parekh, R., Bidgoli, H., and Z. Zhang,
   "SR Replication Segment for Multi-point Service Delivery",
   draft-ietf-spring-sr-replication-segment-04 (work in progress),
   February 2021.

[I-D.zzhang-idr-rt-derived-community]
   Zhang, Z., "Extended Communities Derived from Route Targets",
   draft-zzhang-idr-rt-derived-community-01 (work in progress), March
   2021.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
   Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March
   1997, <https://www.rfc-editor.org/info/rfc2119>.

[RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and
   S. Ray, "North-Bound Distribution of Link-State and Traffic
   Engineering (TE) Information Using BGP", RFC 7752,
   DOI 10.17487/RFC7752, March 2016,
   <https://www.rfc-editor.org/info/rfc7752>.

[RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119
   Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, May 2017,
   <https://www.rfc-editor.org/info/rfc8174>.

[RFC9012] Patel, K., Van de Velde, G., Sangli, S., and J. Scudder,
   "The BGP Tunnel Encapsulation Attribute", RFC 9012,
   DOI 10.17487/RFC9012, April 2021,
   <https://www.rfc-editor.org/info/rfc9012>.

8.2. Informative References

[I-D.hb-idr-sr-p2mp-policy]
   Bidgoli, H., Voyer, D., Stone, A., Parekh, R., Krier, S., and A.
   Venkateswaran, "Advertising p2mp policies in BGP",
   draft-hb-idr-sr-p2mp-policy-01 (work in progress), October 2020.

[RFC5331] Aggarwal, R., Rekhter, Y., and E. Rosen, "MPLS Upstream
   Label Assignment and Context-Specific Label Space", RFC 5331,
   DOI 10.17487/RFC5331, August 2008,
   <https://www.rfc-editor.org/info/rfc5331>.

[RFC6388] Wijnands, IJ., Ed., Minei, I., Ed., Kompella, K., and B.
   Thomas, "Label Distribution Protocol Extensions for Point-to-
   Multipoint and Multipoint-to-Multipoint Label Switched Paths",
   RFC 6388, DOI 10.17487/RFC6388, November 2011,
   <https://www.rfc-editor.org/info/rfc6388>.

[RFC6513] Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in MPLS/BGP
   IP VPNs", RFC 6513, DOI 10.17487/RFC6513, February 2012,
   <https://www.rfc-editor.org/info/rfc6513>.

[RFC6514] Aggarwal, R., Rosen, E., Morin, T., and Y. Rekhter, "BGP
   Encodings and Procedures for Multicast in MPLS/BGP IP VPNs",
   RFC 6514, DOI 10.17487/RFC6514, February 2012,
   <https://www.rfc-editor.org/info/rfc6514>.

[RFC7060] Napierala, M., Rosen, E., and IJ. Wijnands, "Using LDP
   Multipoint Extensions on Targeted LDP Sessions", RFC 7060,
   DOI 10.17487/RFC7060, November 2013,
   <https://www.rfc-editor.org/info/rfc7060>.

[RFC7761] Fenner, B., Handley, M., Holbrook, H., Kouvelas, I.,
   Parekh, R., Zhang, Z., and L. Zheng, "Protocol Independent
   Multicast - Sparse Mode (PIM-SM): Protocol Specification
   (Revised)", STD 83, RFC 7761, DOI 10.17487/RFC7761, March 2016,
   <https://www.rfc-editor.org/info/rfc7761>.

[RFC8402] Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
   Decraene, B., Litkowski, S., and R. Shakir, "Segment Routing
   Architecture", RFC 8402, DOI 10.17487/RFC8402, July 2018,
   <https://www.rfc-editor.org/info/rfc8402>.

Authors' Addresses

Zhaohui Zhang
Juniper Networks

EMail: zzhang@juniper.net

Robert Raszuk
NTT Network Innovations

EMail: robert@raszuk.net

Dante Pacella
Verizon

EMail: dante.j.pacella@verizon.com

Arkadiy Gulko
Edward Jones Wealth Management

EMail: arkadiy.gulko@edwardjones.com