Network Working Group                                          N. Kumar
Internet-Draft                                                 R. Asati
Intended status: Informational                                    Cisco
Expires: July 29, 2016                                          M. Chen
                                                                  X. Xu
                                                                 Huawei
                                                            A. Dolganow
                                                         Alcatel-Lucent
                                                          T. Przygienda
                                                               Ericsson
                                                               A. Gulko
                                                        Thomson Reuters
                                                            D. Robinson
                                                      id3as-company Ltd
                                                                V. Arya
                                                            DirecTV Inc
                                                             C. Bestler
                                                                Nexenta
                                                       January 26, 2016

                             BIER Use Cases
                     draft-ietf-bier-use-cases-02.txt

Abstract

   Bit Index Explicit Replication (BIER) is an architecture that
   provides optimal multicast forwarding through a "BIER domain"
   without requiring intermediate routers to maintain any per-flow
   multicast state.  BIER also does not require any explicit
   tree-building protocol for its operation.  A multicast data packet
   enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR)
   and leaves the BIER domain at one or more "Bit-Forwarding Egress
   Routers" (BFERs).  The BFIR adds a BIER header to the packet.  The
   BIER header contains a bit-string in which each bit represents
   exactly one BFER to which the packet should be forwarded.  The set
   of BFERs to which the multicast packet needs to be forwarded is
   expressed by setting the bits that correspond to those routers in
   the BIER header.

   This document describes some of the use cases for BIER.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on July 29, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Specification of Requirements
   3.  BIER Use Cases
     3.1.  Multicast in L3VPN Networks
     3.2.  BUM in EVPN
     3.3.  IPTV and OTT Services
     3.4.  Multi-service, Converged L3VPN Network
     3.5.  Control-plane Simplification and SDN-controlled Networks
     3.6.  Data Center Virtualization/Overlay
     3.7.  Financial Services
     3.8.  4k Broadcast Video Services
     3.9.  Distributed Storage Cluster
   4.  Security Considerations
   5.  IANA Considerations
   6.  Acknowledgments
   7.  References
     7.1.  Normative References
     7.2.  Informative References
   Authors' Addresses

1.  Introduction

   Bit Index Explicit Replication (BIER) [I-D.ietf-bier-architecture]
   is an architecture that provides optimal multicast forwarding
   through a "BIER domain" without requiring intermediate routers to
   maintain any per-flow multicast state.  BIER also does not require
   any explicit tree-building protocol for its operation.  A multicast
   data packet enters a BIER domain at a "Bit-Forwarding Ingress
   Router" (BFIR) and leaves the BIER domain at one or more
   "Bit-Forwarding Egress Routers" (BFERs).  The BFIR adds a BIER
   header to the packet.  The BIER header contains a bit-string in
   which each bit represents exactly one BFER to which the packet
   should be forwarded.  The set of BFERs to which the multicast
   packet needs to be forwarded is expressed by setting the bits that
   correspond to those routers in the BIER header.

   The obvious advantage of BIER is that there is no per-flow
   multicast state in the core of the network and no tree-building
   protocol that sets up trees on demand as users join a multicast
   flow.  In that sense, BIER is potentially applicable to many
   services in which multicast is used, and is not limited to the
   examples described here.  This document describes a few use cases
   where BIER could provide benefits over existing mechanisms.

2.  Specification of Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [RFC2119].

3.  BIER Use Cases

3.1.  Multicast in L3VPN Networks

   The Multicast L3VPN architecture [RFC6513] describes many different
   profiles for transporting L3 multicast across a provider's network.
   Each profile has its own trade-offs (see Section 2.1 of [RFC6513]).
   When a "Multidirectional Inclusive Provider Multicast Service
   Interface" (MI-PMSI) is used, a single efficient tree is built per
   VPN, but traffic is flooded to egress PEs that are part of the VPN
   yet have not joined a particular C-multicast flow.  This problem
   can be addressed with a "Selective" PMSI (S-PMSI), which builds a
   dedicated tree for only those PEs that have joined the C-multicast
   flow of that specific VPN.  The more S-PMSIs that are built, the
   less bandwidth is wasted on flooding, but the more state is created
   in the provider's network.  This is a typical problem network
   operators face: finding the right balance between the amount of
   state carried in the network and how much flooding (wasted
   bandwidth) is acceptable.  Much of the complexity of L3VPN
   multicast comes from providing different profiles to accommodate
   these trade-offs.

   With BIER there is no trade-off between state and flooding.  Since
   the receiver information is carried explicitly within the packet,
   there is no need to build S-PMSIs to deliver multicast to a subset
   of the VPN egress PEs.

   MI-PMSIs and S-PMSIs are also used to provide the VPN context to
   the egress PE router that receives the multicast packet.  In some
   MVPN profiles it is also necessary to know which ingress PE
   forwarded the packet.  The target VPN is determined based on the
   PMSI on which the packet is received.  This means at least one PMSI
   is required per VPN, or per VPN and ingress PE, so the amount of
   state created in the network is proportional to the number of VPNs
   and ingress PEs.  Creating PMSI state per VPN can be avoided by
   applying the procedures documented in [RFC5331].
   However, this approach has not been widely adopted or implemented,
   because of the excessive flooding it would cause: *all* VPN
   multicast packets would be forwarded to *all* PEs that have one or
   more VPNs attached.

   With BIER, the destination PEs are identified in the multicast
   packet itself, so there is no flooding concern when implementing
   [RFC5331].  For that reason there is no need to create a separate
   BIER domain per VPN; the VPN context can be carried in the
   multicast packet using the procedures defined in [RFC5331].  See
   also [I-D.ietf-bier-mvpn] for more information.

   With BIER, only a few MVPN profiles remain relevant, reducing
   operational cost and making interoperability among different
   vendors easier.

3.2.  BUM in EVPN

   The widespread adoption of L2VPN services [RFC4664], and especially
   the emerging EVPN solution [I-D.ietf-l2vpn-evpn], which overcomes
   many limitations of VPLS, introduces the need for an efficient
   mechanism to replicate broadcast, unknown unicast, and multicast
   (BUM) traffic towards the PEs that participate in the same EVPN
   instances (EVIs).  Ingress replication is used as the simplest
   deployable mechanism, but it places a high burden on the ingress
   node and saturates the underlying links with many copies of the
   same frame headed to different PEs.  Fortunately, EVPN signals the
   P-Multicast Service Interface (PMSI) [RFC6513] attribute to
   establish transport for BUM frames, and with that allows deployment
   of the full range of multicast replication services that the
   underlying network layer can provide.  It is therefore relatively
   simple to deploy BIER P-tunnels for EVPN and thereby distribute BUM
   traffic without building the P-router state in the core that PIM,
   mLDP, or comparable solutions require.
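   The per-hop replication that makes this stateless distribution
   possible can be sketched as follows.  This is an illustrative
   simplification, not the normative procedure of
   [I-D.ietf-bier-architecture]: the neighbor names and Bit Index
   Forwarding Table (BIFT) contents below are hypothetical, and each
   neighbor's Forwarding Bit Mask (F-BM) covers the BFERs reachable
   through it.

```python
# Sketch of BIER replication at a single BFR.  The BIFT below is
# hypothetical; real forwarding follows [I-D.ietf-bier-architecture].

def bier_forward(bitstring, bift, send):
    remaining = bitstring
    for neighbor, fbm in bift.items():
        if remaining & fbm:
            # One copy per neighbor, carrying only the bits it covers.
            send(neighbor, remaining & fbm)
            # Clear those bits so no BFER receives a duplicate.
            remaining &= ~fbm

copies = []
bift = {"nbr-A": 0b0011, "nbr-B": 0b1100}      # hypothetical F-BMs
bier_forward(0b1011, bift, lambda n, bs: copies.append((n, bs)))
# Two copies leave this BFR: bits 1-2 towards nbr-A, bit 4 towards nbr-B.
```

   Note that no per-flow lookup occurs: the copies sent depend only on
   the bitstring in the packet and on the BIFT, which is derived from
   unicast routing.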
   Specifically, the same I-PMSI attribute suggested for MVPN can
   easily be used in EVPN.  Since EVPN can multiplex and demultiplex
   BUM frames on P2MP and MP2MP trees using upstream-assigned labels,
   a BIER P-tunnel can support BUM flooding for any number of EVIs
   over a single sub-domain for maximum scalability, while at the
   other extreme of the spectrum a single BIER sub-domain per EVI can
   be used if such a deployment is necessary.

   Multiplexing EVIs onto the same PMSI normally forces the PMSI to
   span more than the necessary number of PEs, i.e. the union of all
   PEs participating in the EVIs multiplexed on that PMSI.  Given the
   properties of BIER, it is however possible to encode in the
   receiver bitstring only the PEs that participate in the EVI the BUM
   frame targets.  In a sense, BIER acts as both an inclusive and a
   selective tree, and can deliver a frame to only the set of
   interested receivers even though many others participate in the
   same PMSI.

   As another significant advantage, the same BIER tunnel needed for
   BUM frames could also optimize the delivery of multicast frames,
   although the signaling of group memberships for the PEs involved
   has not been specified to date.

3.3.  IPTV and OTT Services

   IPTV is a service well known for allowing both live and on-demand
   delivery of media traffic over an end-to-end managed IP network.

   Over The Top (OTT) is a similar service, well known for allowing
   live and on-demand delivery of media traffic between IP domains,
   where the source is often on an external network relative to the
   receivers.

   Content Delivery Network (CDN) operators provide layer 4
   applications, and often some degree of managed layer 3 IP network,
   that enable media to be securely and reliably delivered to many
   receivers.
   In some models they may place applications within third-party
   networks, or they may place those applications at the edges of
   their own managed network peerings and similar inter-domain
   connections.  CDNs provide capabilities that help publishers scale
   to meet large audience demand.  Their applications are not limited
   to audio and video delivery, but may include static and dynamic web
   content, or optimized delivery for massively multiplayer gaming and
   the like.  Most publishers will use a CDN for public Internet
   delivery, and some publishers will use a CDN internally within
   their IPTV networks to resolve layer 4 complexity.

   In a typical IPTV environment, the egress routers connecting to the
   receivers build the tree towards the ingress router connecting to
   the IPTV servers.  The egress routers rely on IGMP/MLD (static or
   dynamic) to learn about the receivers' interest in one or more
   multicast groups/channels.  Interestingly, BIER allows provisioning
   of any new multicast group/channel by modifying the channel mapping
   on the ingress routers only.  This is particularly beneficial for
   linear IPTV video broadcasting, in which every receiver behind
   every egress PE router receives the IPTV video traffic.

   With BIER in an IPTV environment, there is no need for tree
   building from egress to ingress.  Further, any addition of a new
   channel or a new egress router can be controlled directly from the
   ingress router.  When a new channel is added, its multicast group
   is mapped to a bit-string that includes all egress routers, and the
   ingress router starts sending the new channel to all of them.  As
   can be observed, there is no need for static IGMP provisioning on
   each egress router whenever a new channel/stream is added; instead,
   this is controlled from the ingress router itself by configuring
   the new group-to-bitstring mapping on the ingress router.
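   The ingress-only provisioning described above can be sketched as
   follows.  This is an illustrative sketch, not a vendor
   configuration model; the group address and BFR-id assignments are
   hypothetical.

```python
# Sketch of ingress-only IPTV channel provisioning.  Group address
# and BFR-ids are hypothetical.

def bitstring_for(bfr_ids):
    bs = 0
    for bfr_id in bfr_ids:
        bs |= 1 << (bfr_id - 1)          # BFR-ids are 1-based
    return bs

channel_map = {}

def provision_channel(group, egress_bfr_ids):
    # Adding a channel touches only this ingress-side mapping; no
    # per-egress IGMP configuration and no tree signaling is needed.
    channel_map[group] = bitstring_for(egress_bfr_ids)

provision_channel("232.1.1.1", [1, 2, 3, 4])  # linear channel: all BFERs
assert channel_map["232.1.1.1"] == 0b1111
```

   Adding a new egress router later means only setting one more bit in
   the affected entries of this map.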
   With BIER in an OTT environment, the edge routers in the CDN domain
   that terminate OTT user sessions connect to the ingress BIER
   routers attached to content provider domains or to a local cache
   server, and leverage the scalability benefits that BIER provides.
   This may rely on MBGP interoperation (or similar) between the
   egress of one domain and the ingress of the next, or some other SDN
   control plane may prove a more effective and simpler way to deploy
   BIER.  For a single CDN operator this could be managed within the
   layer 4 applications that they provide; the initial receiver in a
   remote domain may actually be an application operated by the CDN,
   which in turn acts as a source for the ingress BIER router in that
   remote domain, thereby keeping BIER more discrete on a
   domain-by-domain basis.

3.4.  Multi-service, Converged L3VPN Network

   Operators increasingly deploy single networks for multiple
   services.  For example, a single metro core network could be
   deployed to provide a residential IPTV retail service, a
   residential IPTV wholesale service, and a business L3VPN service
   with multicast.  An operator may often desire a single architecture
   to deliver multicast for all of those services.  In some cases,
   governing regulations may additionally require the same service
   capabilities for both wholesale and retail multicast services.  To
   meet those requirements, some operators use the multicast
   architecture defined in [RFC5331].  However, the need to support
   many L3VPNs, with some of those L3VPNs scaling to hundreds of
   egress PEs and thousands of C-multicast flows, makes the
   scaling/efficiency issues described in earlier sections of this
   document even more prevalent.  Additionally, support for tens of
   millions of BGP multicast A-D and join routes alone could be
   required in such networks, with all the consequences such a scale
   brings.
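   The [RFC5331] approach referenced above carries the service context
   in the packet rather than in per-PMSI state: the ingress PE
   prepends an upstream-assigned VPN label, and each egress PE
   interprets that label in the context of the ingress PE that
   assigned it.  The following sketch illustrates the idea; the label
   values, PE names, and BFR-id assignments are hypothetical, and the
   encapsulation is deliberately schematic.

```python
# Schematic sketch of [RFC5331]-style context labels combined with a
# BIER bitstring.  Labels, PE names, and BFR-ids are hypothetical.

def bfir_encapsulate(vpn_label, egress_bfr_ids, payload):
    bitstring = 0
    for bfr_id in egress_bfr_ids:
        bitstring |= 1 << (bfr_id - 1)
    return {"bier_bitstring": bitstring,
            "upstream_label": vpn_label,
            "payload": payload}

def bfer_demultiplex(vpn_table, ingress_bfir, packet):
    # Upstream-assigned labels are resolved in the context of the
    # ingress BFIR that assigned them, so one BIER sub-domain can
    # serve every VPN.
    return vpn_table[(ingress_bfir, packet["upstream_label"])]

vpn_table = {("PE1", 3001): "VPN-A", ("PE1", 3002): "VPN-B"}
pkt = bfir_encapsulate(3001, [2, 5], b"c-multicast data")
assert bfer_demultiplex(vpn_table, "PE1", pkt) == "VPN-A"
```

   The per-VPN state reduces to one table entry per (ingress PE,
   label) pair at the egress, with no per-VPN trees in the core.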
   With BIER, again there is no need for tree building from egress to
   ingress for each L3VPN or for individual or groups of C-multicast
   flows.  As described earlier, any addition of a new IPTV channel or
   new egress router can be controlled directly from the ingress
   router, and there is no flooding concern when implementing
   [RFC5331].

3.5.  Control-plane Simplification and SDN-controlled Networks

   With the advent of Software Defined Networking, some operators are
   looking at various ways to reduce the overall cost of providing
   networking services, including multicast delivery.  Some of the
   alternatives being considered include minimizing capex by deploying
   network elements with simplified control-plane functions, and
   minimizing operational cost by reducing the number of control
   protocols required to deliver a particular service.  Segment
   Routing, as described in [I-D.ietf-spring-segment-routing],
   provides a solution that could be used to simplify the
   control-plane architecture for unicast traffic.  With Segment
   Routing deployed for unicast, a solution that simplifies the
   control plane for multicast is also required; otherwise,
   operational and capex cost reductions will not be achieved to their
   full potential.

   With BIER, there is no longer a need to run the control protocols
   required to build a distribution tree.  If L3VPN with multicast,
   for example, is deployed using [RFC5331] with MPLS in the
   P-instance, the MPLS control plane would no longer be required.
   BIER also allows migration of C-multicast flows from a non-BIER to
   a BIER-based architecture, which makes the transition to a
   control-plane-simplified network easier to operationalize.
   Finally, for operators who desire a centralized, offloaded control
   plane, both the multicast overlay and BIER forwarding could migrate
   to controller-based programming.

3.6.  Data Center Virtualization/Overlay

   Virtual eXtensible Local Area Network (VXLAN) [RFC7348] is a
   network virtualization overlay technology intended for multi-tenant
   data center networks.  To emulate a layer 2 flooding domain across
   the layer 3 underlay, it requires a mapping between the VXLAN
   Virtual Network Instance (VNI) and an IP multicast group, in a
   ratio of 1:1 or n:1.  In other words, it requires multicast
   capability in the underlay, for instance by enabling the PIM-SM
   [RFC4601] or BIDIR-PIM [RFC5015] multicast routing protocol there.
   VXLAN is designed to support a maximum of 16M VNIs.  With a 1:1
   mapping, this would require 16M multicast groups in the underlay,
   which would be a significant challenge for both the control plane
   and the data plane of the data center switches.  With an n:1
   mapping, the result is inefficient bandwidth utilization, which is
   not optimal in data center networks.  More importantly, many data
   center operators regard running multicast in data center networks
   as an unaffordable burden from a network operation and maintenance
   perspective.  As a result, many VXLAN implementations claim to
   support ingress replication, since ingress replication eliminates
   the burden of running multicast in the underlay.  Ingress
   replication is an acceptable choice in small networks where the
   average number of receivers per multicast flow is not too large.
   However, in multi-tenant data center networks, especially those in
   which the NVE functionality is enabled on a large number of
   physical servers, the average number of NVEs per VN instance can be
   very large.  As a result, the ingress replication scheme would
   result in serious bandwidth waste in the underlay and a significant
   replication burden on ingress NVEs.
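   The contrast between ingress replication and BIER for VXLAN BUM
   traffic can be sketched as follows.  The VNI numbers and BFR-id
   assignments are hypothetical; the only state assumed is the
   overlay's own VNI membership, which the ingress NVE encodes as a
   bitstring, while the underlay keeps no per-group multicast state.

```python
# Sketch: flooding a VXLAN BUM frame via ingress replication vs. a
# single BIER packet.  VNIs and BFR-ids below are hypothetical.

vni_members = {
    5001: [1, 4, 9],     # BFR-ids of NVEs participating in VNI 5001
    5002: [2, 4],
}

def bum_bitstring(vni):
    bs = 0
    for bfr_id in vni_members[vni]:
        bs |= 1 << (bfr_id - 1)
    return bs

def ingress_replication_copies(vni):
    # One unicast copy per remote NVE: the replication burden grows
    # linearly with VNI membership.
    return [("unicast", bfr_id) for bfr_id in vni_members[vni]]

def bier_copy(vni):
    # A single copy whose header bitstring lists every remote NVE.
    return ("bier", bum_bitstring(vni))

assert len(ingress_replication_copies(5001)) == 3
assert bier_copy(5001) == ("bier", 0b100001001)
```

   One BIER packet thus replaces either a per-VNI underlay (*,G) tree
   or a fan-out of unicast copies at the ingress NVE.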
   With BIER, there is no longer a need to maintain that huge amount
   of multicast state in the underlay, while the delivery efficiency
   of overlay BUM traffic is the same as if a stateful multicast
   protocol such as PIM-SM or BIDIR-PIM were enabled in the underlay.

3.7.  Financial Services

   Financial services rely extensively on IP multicast to deliver
   stock market data and its derivatives, and critically require
   optimal-latency paths (from publisher to subscribers),
   deterministic convergence (so as to deliver market data and its
   derivatives fairly to each client), and secure delivery.

   Current multicast solutions, e.g. PIM, mLDP, etc., do not
   sufficiently address these requirements.  The reason is that
   current solutions are primarily subscriber-driven, i.e. the
   multicast tree is set up using reverse path forwarding techniques;
   as a result, the chosen path for market data may not be
   latency-optimal from the publisher to the (market data)
   subscribers.

   As the number of multicast flows grows, convergence time may
   increase and become somewhat nondeterministic from the first to the
   last flow, depending on platforms and implementations.  Also, with
   more protocols in the network, the variability involved in ensuring
   secure delivery of multicast data increases, undermining the
   overall security posture.

   BIER enables setting up the most optimal path from publisher to
   subscribers by leveraging the unicast routing relevant to the
   subscribers.  With BIER, multicast convergence is as fast as
   unicast convergence, and is uniform and deterministic regardless of
   the number of multicast flows.  This makes BIER a very suitable
   multicast technology for achieving fairness for market data
   derivatives for each subscriber.

3.8.  4k Broadcast Video Services

   In a broadcast network environment, media content is sourced from
   various content providers across different locations.  4k broadcast
   video is an evolving service with enormous demands on the network
   infrastructure in terms of low latency, fast convergence, high
   throughput, and high bandwidth.

   In a typical broadcast satellite network environment, the receivers
   are satellite terminal nodes that receive content from various
   sources and feed the data to the satellite.  Typically, a multicast
   group address is assigned to each source.  Currently, the receivers
   can join the sources using either PIM-SM [RFC4601] or PIM-SSM
   [RFC4607].

   In such network scenarios, PIM is normally the multicast routing
   protocol used to establish trees between the ingress routers
   connecting the content media sources and the egress routers
   connecting the receivers.  In PIM-SM mode, the receivers rely on
   the shared tree to learn the source address and build the source
   tree, while in PIM-SSM mode, IGMPv3 is used by the receiver to
   signal the source address to the egress router.  In either case, as
   the number of sources increases, the number of multicast trees in
   the core also increases, resulting in more multicast state entries
   in the core and longer convergence times.

   With BIER in a 4k broadcast satellite network environment, there is
   no need to run PIM in the core and no need to maintain any
   multicast state there.  The obvious advantages of BIER are the
   small amount of multicast state maintained in the core and the
   faster convergence (typically on par with unicast convergence).
   The edge router at the content source facility can act as the BFIR,
   and the edge routers at the receiver facilities can act as BFERs.
   New content sources or new satellite terminal nodes can be added
   seamlessly to the BIER domain.  The group membership from the
   receivers to the sources can be provisioned either by BGP or by an
   SDN controller.

3.9.  Distributed Storage Cluster

   Distributed storage clusters can benefit from dynamically targeted
   multicast messaging, both for dynamic load-balancing negotiations
   and for efficient concurrent replication of content to multiple
   targets.

   For example, in the NexentaEdge storage cluster (by Nexenta
   Systems), a Chunk Put transaction is accomplished with the
   following steps:

   o  The client multicasts a Chunk Put Request to a multicast group
      known as a Negotiating Group.  This group holds a small number
      of storage targets that are collectively responsible for
      providing storage for a stable subset of the chunks to be
      stored.  In NexentaEdge, the group is selected based upon a
      cryptographic hash of the Object Name or the Chunk payload.

   o  Each recipient of the Chunk Put Request unicasts a Chunk Put
      Response to the client indicating when it could accept a
      transfer of the chunk.

   o  The client selects a different multicast group (a Rendezvous
      Group) which targets the set of storage targets selected to hold
      the chunk.  This is a subset of the Negotiating Group,
      presumably selected so as to complete the transfer as early as
      possible.

   o  The client multicasts a Chunk Put Accept message to inform the
      Negotiating Group of which storage targets have been selected,
      when the transfer will occur, and over which multicast group.

   o  The client performs the multicast transfer over the Rendezvous
      Group at the agreed-upon time.

   o  Each recipient sends a Chunk Put Ack to positively or negatively
      acknowledge the chunk transfer.

   o  The client retries the entire transaction as needed if there are
      not yet sufficient replicas of the chunk.
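   The Negotiating Group selection in the first step above can be
   sketched as follows.  The cluster size, group size, and choice of
   SHA-256 are assumptions for illustration, not NexentaEdge
   specifics; the point is only that the group is a deterministic
   function of the name, so every node computes the same subset
   without any directory lookup.

```python
# Sketch of deterministic Negotiating Group selection by
# cryptographic hash.  Cluster size, group size, and SHA-256 are
# illustrative assumptions.
import hashlib

def negotiating_group(object_name, cluster_size, group_size):
    digest = hashlib.sha256(object_name.encode()).digest()
    start = int.from_bytes(digest, "big") % cluster_size
    # A stable subset of storage targets for this name.
    return [(start + i) % cluster_size for i in range(group_size)]

group = negotiating_group("tenant/bucket/object-42",
                          cluster_size=1024, group_size=8)
assert group == negotiating_group("tenant/bucket/object-42", 1024, 8)
```

   A Rendezvous Group is then simply a subset of this list, chosen
   from the Chunk Put Responses.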
   Chunks are retrieved by multicasting a Chunk Get Request to the
   same Negotiating Group, collecting Chunk Get Responses, picking one
   source from those responses, sending a Chunk Get Accept message to
   identify the selected source, and having the selected storage
   server unicast the chunk to the client.

   Chunks are found by the Object Name, or by having the cryptographic
   hash of the payload chunks recorded in a "chunk reference" in a
   metadata chunk.  The metadata chunks are found using the Object
   Name.

   The general pattern in use here, which should apply to other
   cluster applications, is that multicast messages are sent amongst a
   dynamically selected subset of the entire cluster, which may result
   in exchanging further messages over a smaller, even more
   dynamically selected subset.

   Currently, the distributed storage application discussed uses
   MLD-managed IPv6 multicast groups.  This in turn requires either a
   push-based mechanism for dynamically configuring Rendezvous Groups,
   or pre-provisioning a very large number of potential Rendezvous
   Groups and dynamically selecting the multicast group that will
   deliver to the selected set of storage targets.

   BIER would eliminate the need for a vast number of multicast
   groups.  The entire cluster can be represented as a single BIER
   domain using only the default sub-domain.  Each Negotiating Group
   is simply a subset of the whole cluster, deterministically selected
   by the cryptographic hash of the Object Name or chunk payload.
   Each Rendezvous Group is a further subset of the Negotiating Group.

   In a simple mapping of the MLD-managed multicast groups, each
   Negotiating Group could be represented by a short bitstring
   selected by a Set Identifier.  The Set Identifier effectively
   becomes the Negotiating Group.  To address the entire Negotiating
   Group, the bitstring is set to all ones.
   To later address a subset of the group, a subset bitstring is used.

   This allows a short, fixed-size BIER header to multicast to a very
   large storage cluster.

4.  Security Considerations

   There are no security issues introduced by this draft.

5.  IANA Considerations

   There are no IANA considerations introduced by this draft.

6.  Acknowledgments

   The authors would like to thank IJsbrand Wijnands, Greg Shepherd,
   and Christian Martin for their contributions.

7.  References

7.1.  Normative References

   [I-D.ietf-bier-architecture]
              Wijnands, I., Rosen, E., Dolganow, A., Przygienda, T.,
              and S. Aldrin, "Multicast using Bit Index Explicit
              Replication", draft-ietf-bier-architecture-02 (work in
              progress), July 2015.

   [I-D.ietf-bier-mvpn]
              Rosen, E., Sivakumar, M., Wijnands, I., Aldrin, S.,
              Dolganow, A., and T. Przygienda, "Multicast VPN Using
              BIER", draft-ietf-bier-mvpn-01 (work in progress), July
              2015.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

7.2.  Informative References

   [I-D.ietf-l2vpn-evpn]
              Sajassi, A., Aggarwal, R., Bitar, N., Isaac, A., and J.
              Uttaro, "BGP MPLS Based Ethernet VPN",
              draft-ietf-l2vpn-evpn-11 (work in progress), October
              2014.

   [I-D.ietf-spring-segment-routing]
              Filsfils, C., Previdi, S., Decraene, B., Litkowski, S.,
              and R. Shakir, "Segment Routing Architecture",
              draft-ietf-spring-segment-routing-04 (work in progress),
              July 2015.

   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
              "Protocol Independent Multicast - Sparse Mode (PIM-SM):
              Protocol Specification (Revised)", RFC 4601,
              DOI 10.17487/RFC4601, August 2006.

   [RFC4607]  Holbrook, H. and B. Cain, "Source-Specific Multicast for
              IP", RFC 4607, DOI 10.17487/RFC4607, August 2006.

   [RFC4664]  Andersson, L., Ed. and E. Rosen, Ed., "Framework for
              Layer 2 Virtual Private Networks (L2VPNs)", RFC 4664,
              DOI 10.17487/RFC4664, September 2006.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L.
              Vicisano, "Bidirectional Protocol Independent Multicast
              (BIDIR-PIM)", RFC 5015, DOI 10.17487/RFC5015, October
              2007.

   [RFC5331]  Aggarwal, R., Rekhter, Y., and E. Rosen, "MPLS Upstream
              Label Assignment and Context-Specific Label Space",
              RFC 5331, DOI 10.17487/RFC5331, August 2008.

   [RFC6513]  Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in
              MPLS/BGP IP VPNs", RFC 6513, DOI 10.17487/RFC6513,
              February 2012.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P.,
              Kreeger, L., Sridhar, T., Bursell, M., and C. Wright,
              "Virtual eXtensible Local Area Network (VXLAN): A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348,
              August 2014.

Authors' Addresses

   Nagendra Kumar
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC  27709
   US

   Email: naikumar@cisco.com

   Rajiv Asati
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC  27709
   US

   Email: rajiva@cisco.com

   Mach(Guoyi) Chen
   Huawei

   Email: mach.chen@huawei.com

   Xiaohu Xu
   Huawei

   Email: xuxiaohu@huawei.com

   Andrew Dolganow
   Alcatel-Lucent
   600 March Road
   Ottawa, ON  K2K 2E6
   Canada

   Email: andrew.dolganow@alcatel-lucent.com

   Tony Przygienda
   Ericsson
   300 Holger Way
   San Jose, CA  95134
   USA

   Email: antoni.przygienda@ericsson.com

   Arkadiy Gulko
   Thomson Reuters
   195 Broadway
   New York, NY  10007
   USA

   Email: arkadiy.gulko@thomsonreuters.com

   Dom Robinson
   id3as-company Ltd
   UK

   Email: Dom@id3as.co.uk

   Vishal Arya
   DirecTV Inc
   2230 E Imperial Hwy
   CA  90245
   USA

   Email: varya@directv.com

   Caitlin Bestler
   Nexenta Systems
   451 El Camino Real
   Santa Clara, CA
   US

   Email: caitlin.bestler@nexenta.com