Network Working Group                                           N. Kumar
Internet-Draft                                                  R. Asati
Intended status: Informational                                     Cisco
Expires: July 20, 2018                                           M. Chen
                                                                   X. Xu
                                                                  Huawei
                                                             A. Dolganow
                                                                   Nokia
                                                           T. Przygienda
                                                                Ericsson
                                                                A. Gulko
                                                         Thomson Reuters
                                                             D. Robinson
                                                       id3as-company Ltd
                                                                 V. Arya
                                                             DirecTV Inc
                                                              C. Bestler
                                                                 Nexenta
                                                        January 16, 2018

                             BIER Use Cases
                    draft-ietf-bier-use-cases-06.txt

Abstract

Bit Index Explicit Replication (BIER) is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast-related per-flow state.  BIER also does not require any explicit tree-building protocol for its operation.  A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR) and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs).  The BFIR adds a BIER header to the packet.  The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to.  The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

This document describes some of the use cases for BIER.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.  The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on July 20, 2018.

Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors.  All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Specification of Requirements
3. BIER Use Cases
   3.1. Multicast in L3VPN Networks
   3.2. BUM in EVPN
   3.3. IPTV and OTT Services
   3.4. Multi-service, converged L3VPN network
   3.5. Control-plane simplification and SDN-controlled networks
   3.6. Data center Virtualization/Overlay
   3.7. Financial Services
   3.8. 4k broadcast video services
   3.9. Distributed Storage Cluster
   3.10. HTTP-Level Multicast
4. Security Considerations
5. IANA Considerations
6. Acknowledgments
7. Contributing Authors
8. References
   8.1. Normative References
   8.2. Informative References
Authors' Addresses

1. Introduction

Bit Index Explicit Replication (BIER) [I-D.ietf-bier-architecture] is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast-related per-flow state.  BIER also does not require any explicit tree-building protocol for its operation.  A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR) and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs).  The BFIR adds a BIER header to the packet.  The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to.  The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network and no tree-building protocol that sets up trees on demand as users join a multicast flow.  In that sense, BIER is potentially applicable to many services where multicast is used and is not limited to the examples described in this draft.  This document describes a few use cases where BIER provides benefits over existing mechanisms.

2. Specification of Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

3. BIER Use Cases

3.1. Multicast in L3VPN Networks

The Multicast L3VPN architecture [RFC6513] describes several profiles for transporting L3 multicast across a provider's network, each with its own trade-offs (see Section 2.1 of [RFC6513]).
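The bit-string encoding from the Introduction underlies this and all the following use cases; it can be illustrated with a minimal Python sketch (an illustrative model only, not taken from [I-D.ietf-bier-architecture]; the 1:1 mapping of BFER ids to bit positions and the 256-bit length are assumptions):

```python
# Illustrative model of the BIER bit-string: bit i represents the BFER
# with id i+1.  The BFIR sets one bit per target egress router.

def egress_set_to_bitstring(bfer_ids, bitstring_len=256):
    """Encode a set of BFER ids as an integer bit-string."""
    bs = 0
    for bfer in bfer_ids:
        assert 1 <= bfer <= bitstring_len, "id outside the bit-string"
        bs |= 1 << (bfer - 1)
    return bs

def bitstring_to_egress_set(bs):
    """Recover the BFER ids whose bits are set."""
    return {i + 1 for i in range(bs.bit_length()) if (bs >> i) & 1}

# A packet for BFERs 1, 3 and 7 carries bits 0, 2 and 6:
bs = egress_set_to_bitstring([1, 3, 7])
assert bitstring_to_egress_set(bs) == {1, 3, 7}
```

Each BFR along the path uses only its unicast-derived forwarding table to decide which neighbors cover which set bits; no per-flow state is involved.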
When using a "Multidirectional Inclusive Provider Multicast Service Interface" (MI-PMSI), an efficient tree is built per VPN, but traffic is flooded to egress PEs that are part of the VPN yet have not joined a particular C-multicast flow.  This problem can be solved with a "Selective" PMSI (S-PMSI), which builds a tree for only those PEs that have joined the C-multicast flow for that specific VPN.  The more S-PMSIs, the less bandwidth is wasted due to flooding, but the more state is created in the provider's network.  This is a typical problem network operators face: finding the right balance between the amount of state carried in the network and how much flooding (wasted bandwidth) is acceptable.  Some of the complexity of L3VPNs comes from providing different profiles to accommodate these trade-offs.

With BIER there is no trade-off between state and flooding.  Since the receiver information is explicitly carried within the packet, there is no need to build S-PMSIs to deliver multicast to a subset of the VPN egress PEs.

MI-PMSIs and S-PMSIs are also used to provide the VPN context to the egress PE router that receives the multicast packet.  In some MVPN profiles it is also required to know which ingress PE forwarded the packet.  The target VPN is determined based on the PMSI the packet is received from.  This means at least one PMSI is required per VPN, or per (VPN, ingress PE) pair, so the amount of state created in the network is proportional to the number of VPNs and ingress PEs.  Creating PMSI state per VPN can be avoided by applying the procedures as documented in [RFC5331].
This has, however, seen little adoption or implementation because of the excessive flooding it would cause: *all* VPN multicast packets would be forwarded to *all* PEs that have one or more VPNs attached.

With BIER, the destination PEs are identified in the multicast packet, so there is no flooding concern when implementing [RFC5331].  For that reason there is no need to create multiple BIER domains per VPN; the VPN context can be carried in the multicast packet using the procedures defined in [RFC5331].  See also [I-D.ietf-bier-mvpn] for more information.

With BIER, only a few MVPN profiles will remain relevant, reducing operational cost and making interoperability among different vendors easier.

3.2. BUM in EVPN

The current widespread adoption of L2VPN services [RFC4664], and especially the upcoming EVPN solution [RFC7432], which overcomes many limitations of VPLS, introduces the need for an efficient mechanism to replicate broadcast, unknown unicast and multicast (BUM) traffic towards the PEs that participate in the same EVPN instances (EVIs).  Ingress replication is the simplest deployable mechanism, but it places a correspondingly high burden on the ingress node and saturates the underlying links with many copies of the same frame headed to different PEs.  Fortunately, EVPN signals the P-Multicast Service Interface (PMSI) [RFC6513] attribute internally to establish transport for BUM frames, and with that allows deployment of the full range of multicast replication services that the underlying network layer can provide.  It is therefore relatively simple to deploy BIER P-tunnels for EVPN and thereby distribute BUM traffic without building the per-flow P-router state required by PIM, mLDP or comparable solutions.
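As a sketch of the per-EVI delivery this enables, a BFIR could derive the BUM bit-string directly from its EVI membership table (the table contents and PE-to-bit mapping below are invented for illustration):

```python
# Hypothetical sketch: the ingress PE keeps per-EVI membership and builds
# a BUM bit-string so only the PEs of that EVI receive the frame, while
# the core holds no per-EVI state.  Bit i-1 corresponds to PE i.

evi_members = {
    "EVI-10": {1, 2, 5},   # PEs participating in EVI-10 (assumed)
    "EVI-20": {2, 3},      # PEs participating in EVI-20 (assumed)
}

def bum_bitstring(evi):
    """Bit-string addressing exactly the PEs of one EVI."""
    bs = 0
    for pe in evi_members[evi]:
        bs |= 1 << (pe - 1)
    return bs

assert bum_bitstring("EVI-20") == 0b00110   # only PE 2 and PE 3 set
```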
Specifically, the same I-PMSI attribute suggested for MVPN can easily be used in EVPN.  Given that EVPN can multiplex and demultiplex BUM frames on P2MP and MP2MP trees using upstream-assigned labels, a BIER P-tunnel can support BUM flooding for any number of EVIs over a single sub-domain for maximum scalability, while allowing, at the other extreme of the spectrum, a single BIER sub-domain per EVI if such a deployment is necessary.

Multiplexing EVIs onto the same PMSI normally forces the PMSI to span more than the necessary number of PEs, i.e. the union of all PEs participating in the EVIs multiplexed on the PMSI.  Given the properties of BIER, it is however possible to encode in the receiver bitmask only the PEs that participate in the EVI the BUM frame targets.  In a sense, BIER acts as both an inclusive and a selective tree, and can deliver a frame to only the set of interested receivers even though many others participate in the same PMSI.

As another significant advantage, the same BIER tunnel needed for BUM frames could also optimize the delivery of multicast frames, though the signaling of group memberships for the PEs involved has not been specified to date.

3.3. IPTV and OTT Services

IPTV is a service well known for allowing both live and on-demand delivery of media traffic over an end-to-end managed IP network.

Over The Top (OTT) is a similar service, allowing live and on-demand delivery of media traffic between IP domains, where the source is often on an external network relative to the receivers.

Content Delivery Network (CDN) operators provide layer 4 applications, and often some degree of managed layer 3 IP network, that enable media to be securely and reliably delivered to many receivers.
In some models they may place applications within third-party networks, or they may place those applications at the edges of their own managed network peerings and similar inter-domain connections.  CDNs provide capabilities that help publishers scale to meet large audience demand.  Their applications are not limited to audio and video delivery; they may include static and dynamic web content, or optimized delivery for massively multiplayer gaming and the like.  Most publishers will use a CDN for public Internet delivery, and some publishers will use a CDN internally within their IPTV networks to resolve layer 4 complexity.

In a typical IPTV environment, the egress routers connecting to the receivers build the tree towards the ingress router connecting to the IPTV servers.  The egress routers rely on IGMP/MLD (static or dynamic) to learn about the receivers' interest in one or more multicast groups/channels.  Interestingly, BIER allows provisioning of any new multicast group/channel by modifying only the channel mapping on the ingress routers.  This is particularly beneficial for linear IPTV video broadcasting, in which every receiver behind every egress PE router receives the IPTV video traffic.

With BIER in an IPTV environment, there is no need for tree building from egress to ingress.  Further, any addition of a new channel or new egress router can be controlled directly from the ingress router.  When a new channel is added, its multicast group is mapped to a bit-string that includes all egress routers; the ingress router then starts sending the new channel and delivers it to all egress routers.  There is thus no need for static IGMP provisioning on each egress router whenever a new channel/stream is added; instead, it is controlled from the ingress router itself by configuring the new group-to-bit-string mapping there.
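The ingress-only provisioning described above can be sketched as a simple mapping table (the group addresses and the four-BFER bit-strings are assumptions for illustration, not part of any specified API):

```python
# Hypothetical ingress-side provisioning table: adding a channel is a
# single group -> bit-string entry on the ingress router; no IGMP
# changes are needed on egress routers.

ALL_EGRESS = 0b1111          # all four egress BFERs (linear IPTV)

channel_map = {}             # multicast group -> BIER bit-string

def add_channel(group, bitstring=ALL_EGRESS):
    """Provision a new channel entirely at the ingress router."""
    channel_map[group] = bitstring

add_channel("232.1.1.1")            # linear channel: delivered to all BFERs
add_channel("232.1.1.2", 0b0101)    # targeted channel: BFERs 1 and 3 only
assert channel_map["232.1.1.1"] == ALL_EGRESS
```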
With BIER in an OTT environment, the edge routers in the CDN domain terminating the OTT user sessions connect to the ingress BIER routers connecting the content provider domains or a local cache server, and leverage the scalability benefit that BIER provides.  This may rely on MBGP interoperation (or similar) between the egress of one domain and the ingress of the next, or some other SDN control plane may prove a more effective and simpler way to deploy BIER.  For a single CDN operator this could be managed in the layer 4 applications that they provide; the initial receiver in a remote domain may actually be an application operated by the CDN, which in turn acts as a source for the ingress BIER router in that remote domain, thereby keeping BIER more discrete on a domain-by-domain basis.

3.4. Multi-service, converged L3VPN network

Increasingly, operators deploy single networks for multiple services.  For example, a single metro core network could be deployed to provide a residential IPTV retail service, a residential IPTV wholesale service, and a business L3VPN service with multicast.  An operator may often desire a single architecture to deliver multicast for all of those services.  In some cases, governing regulations may additionally require the same service capabilities for both wholesale and retail multicast services.  To meet those requirements, some operators use the multicast architecture defined in [RFC5331].  However, the need to support many L3VPNs, with some of those L3VPNs scaling to hundreds of egress PEs and thousands of C-multicast flows, makes the scaling/efficiency issues described in earlier sections of this document even more prevalent.  Additionally, support for tens of millions of BGP multicast A-D and join routes alone could be required in such networks, with all the consequences such a scale brings.
With BIER, again there is no need for tree building from egress to ingress for each L3VPN or for individual or groups of C-multicast flows.  As described earlier, any addition of a new IPTV channel or new egress router can be controlled directly from the ingress router, and there is no flooding concern when implementing [RFC5331].

3.5. Control-plane simplification and SDN-controlled networks

With the advent of Software Defined Networking, some operators are looking at various ways to reduce the overall cost of providing networking services, including multicast delivery.  The alternatives being considered include minimizing capex through deployment of network elements with simplified control-plane function, minimizing operational cost by reducing the number of control protocols required to deliver a particular service, etc.  Segment Routing, as described in [I-D.ietf-spring-segment-routing], provides a solution that could be used to provide a simplified control-plane architecture for unicast traffic.  With Segment Routing deployed for unicast, a solution that simplifies the control plane for multicast is thus also required, or the operational and capex cost reductions will not be achieved to their full potential.

With BIER, there is no longer a need to run control protocols to build a distribution tree.  If L3VPN with multicast, for example, is deployed using [RFC5331] with MPLS in the P-instance, the MPLS control plane would no longer be required.  BIER also allows migration of C-multicast flows from a non-BIER to a BIER-based architecture, which makes the transition to a control-plane-simplified network easier to operationalize.  Finally, for operators who desire a centralized, offloaded control plane, the multicast overlay as well as BIER forwarding could migrate to controller-based programming.

3.6.
Data center Virtualization/Overlay

Virtual eXtensible Local Area Network (VXLAN) [RFC7348] is a network virtualization overlay technology intended for multi-tenant data center networks.  To emulate a layer 2 flooding domain across the layer 3 underlay, it requires a mapping between the VXLAN Virtual Network Instance (VNI) and an IP multicast group in a ratio of 1:1 or n:1.  In other words, it requires multicast capability in the underlay, for instance by enabling the PIM-SM [RFC4601] or BIDIR-PIM [RFC5015] multicast routing protocol there.  VXLAN is designed to support a maximum of 16M VNIs.  With a 1:1 mapping, this would require 16M multicast groups in the underlay, which would be a significant challenge for both the control plane and the data plane of the data center switches.  With an n:1 mapping, the result is inefficient bandwidth utilization, which is not optimal in data center networks.  More importantly, many data center operators regard running multicast in data center networks as an unaffordable burden from the network operation and maintenance perspectives.  As a result, many VXLAN implementations claim to support ingress replication, since ingress replication eliminates the burden of running multicast in the underlay.  Ingress replication is an acceptable choice in small networks where the average number of receivers per multicast flow is not too large.  However, in multi-tenant data center networks, especially those in which the NVE functionality is enabled on a large number of physical servers, the average number of NVEs per VN instance can be very large.  As a result, the ingress replication scheme results in serious bandwidth waste in the underlay and a significant replication burden on ingress NVEs.
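The bandwidth cost of the n:1 mapping can be made concrete with a small sketch (the NVE numbering and membership sets are invented for illustration): every NVE joined to the shared group receives the frame, including those hosting no endpoints in the source VNI.

```python
# Hypothetical n:1 mapping: several VNIs share one underlay multicast
# group, so NVEs outside the source VNI still receive (and discard)
# the frame, wasting underlay bandwidth.

group_members = {1, 2, 3, 4, 5}       # NVEs joined to the shared group
vni_members = {"VNI-100": {1, 2}}     # NVEs actually in the source VNI

def wasted_deliveries(vni):
    """NVEs that receive a frame for this VNI only to drop it."""
    return group_members - vni_members[vni]

assert wasted_deliveries("VNI-100") == {3, 4, 5}
```

With BIER, the ingress NVE would instead set only the bits of the NVEs in the source VNI, so this waste disappears without any underlay multicast state.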
With BIER, there is no need to maintain that huge amount of multicast state in the underlay, while the delivery efficiency for overlay BUM traffic is the same as if a stateful multicast protocol such as PIM-SM or BIDIR-PIM were enabled in the underlay.

3.7. Financial Services

Financial services rely extensively on IP multicast to deliver stock market data and its derivatives, and critically require an optimal-latency path (from publisher to subscribers), deterministic convergence (so as to deliver market data derivatives fairly to each client) and secured delivery.

Current multicast solutions, e.g. PIM, mLDP, etc., do not sufficiently address these requirements.  The reason is that the current solutions are primarily subscriber-driven, i.e. the multicast tree is set up using reverse path forwarding techniques; as a result, the chosen path for market data may not be latency-optimal from the publisher to the (market data) subscribers.

As the number of multicast flows grows, the convergence time may increase and become somewhat nondeterministic from the first to the last flow, depending on platforms/implementations.  Also, with more protocols in the network, the variability to be managed to ensure secured delivery of multicast data increases, undermining overall security.

BIER enables setting up the most optimal path from publisher to subscribers by leveraging the unicast routing relevant to the subscribers.  With BIER, multicast convergence is as fast as unicast convergence, uniform and deterministic regardless of the number of multicast flows.  This makes BIER well suited to achieving fairness for market derivatives for each subscriber.

3.8. 4k broadcast video services

In a broadcast network environment, the media content is sourced from various content providers across different locations.
The 4k broadcast video is an evolving service that puts enormous demands on the network infrastructure in terms of low latency, fast convergence, high throughput, and high bandwidth.

In a typical broadcast satellite network environment, the receivers are the satellite terminal nodes, which receive the content from various sources and feed the data to the satellite.  Typically, a multicast group address is assigned to each source.  Currently, the receivers can join the sources using either PIM-SM [RFC4601] or PIM-SSM [RFC4607].

In such network scenarios, PIM is normally the multicast routing protocol used to establish the tree between the ingress routers connecting the content media sources and the egress routers connecting the receivers.  In PIM-SM mode, the receivers rely on the shared tree to learn the source address and build the source tree, while in PIM-SSM mode, IGMPv3 is used by the receiver to signal the source address to the egress router.  In either case, as the number of sources increases, the number of multicast trees in the core also increases, resulting in more multicast state entries in the core and increased convergence time.

With BIER in a 4k broadcast satellite network environment, there is no need to run PIM in the core and no need to maintain any multicast state there.  The obvious advantages of BIER are the small amount of multicast state maintained in the core and the faster convergence (typically on par with unicast convergence).  The edge router at the content source facility can act as the BFIR, and the edge routers at the receiver facilities can act as BFERs.  New content sources or new satellite terminal nodes can be added seamlessly to the BIER domain.  The group membership from the receivers to the sources can be provisioned either by BGP or by an SDN controller.

3.9.
Distributed Storage Cluster

Distributed storage clusters can benefit from dynamically targeted multicast messaging, both for dynamic load-balancing negotiations and for efficient concurrent replication of content to multiple targets.

For example, in the NexentaEdge storage cluster (by Nexenta Systems), a Chunk Put transaction is accomplished with the following steps:

o  The client multicasts a Chunk Put Request to a multicast group known as a Negotiating Group.  This group holds a small number of storage targets that are collectively responsible for providing storage for a stable subset of the chunks to be stored.  In NexentaEdge this is based upon a cryptographic hash of the Object Name or the Chunk payload.

o  Each recipient of the Chunk Put Request unicasts a Chunk Put Response to the client indicating when it could accept a transfer of the chunk.

o  The client selects a different multicast group (a Rendezvous Group), which targets the set of storage targets selected to hold the chunk.  This is a subset of the Negotiating Group, presumably selected so as to complete the transfer as early as possible.

o  The client multicasts a Chunk Put Accept message to inform the Negotiating Group of which storage targets have been selected, when the transfer will occur, and over which multicast group.

o  The client performs the multicast transfer over the Rendezvous Group at the agreed-upon time.

o  Each recipient sends a Chunk Put Ack to positively or negatively acknowledge the chunk transfer.

o  The client retries the entire transaction as needed if there are not yet sufficient replicas of the chunk.
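The selection steps above can be sketched as follows (the group count, digest slice, and target-to-bit mapping are assumptions for illustration, not NexentaEdge's actual parameters):

```python
# Sketch: a Negotiating Group is picked deterministically from a hash of
# the Object Name, and the Rendezvous Group is encoded as a bit-string
# over the chosen subset of its storage targets.

import hashlib

NUM_GROUPS = 64     # assumed number of Negotiating Groups in the cluster

def negotiating_group(object_name):
    """Deterministically map an Object Name to a Negotiating Group index."""
    digest = hashlib.sha256(object_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_GROUPS

def rendezvous_bitstring(selected_targets):
    """Bit-string addressing only the storage targets chosen for a chunk."""
    bs = 0
    for t in selected_targets:
        bs |= 1 << t
    return bs

gi = negotiating_group("photos/cat.jpg")
assert 0 <= gi < NUM_GROUPS
assert rendezvous_bitstring([0, 2]) == 0b101
```

Because the mapping is a pure function of the name, every node computes the same Negotiating Group without any group-membership signaling.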
Chunks are retrieved by multicasting a Chunk Get Request to the same Negotiating Group, collecting Chunk Get Responses, picking one source from those responses, sending a Chunk Get Accept message to identify the selected source, and having the selected storage server unicast the chunk to the client.

Chunks are found by the Object Name or by having the cryptographic hash of payload chunks recorded in a "chunk reference" in a metadata chunk.  The metadata chunks are found using the Object Name.

The general pattern in use here, which should apply to other cluster applications, is that multicast messages are sent amongst a dynamically selected subset of the entire cluster, which may result in exchanging further messages over an even more dynamically selected smaller subset.

Currently, the distributed storage application discussed uses MLD-managed IPv6 multicast groups.  This in turn requires either a push-based mechanism for dynamically configuring Rendezvous Groups, or pre-provisioning a very large number of potential Rendezvous Groups and dynamically selecting the multicast group that will deliver to the selected set of storage targets.

BIER would eliminate the need for a vast number of multicast groups.  The entire cluster can be represented as a single BIER domain using only the default sub-domain.  Each Negotiating Group is simply a subset of the whole, deterministically selected by the cryptographic hash of the Object Name or chunk payload.  Each Rendezvous Group is a further subset of the Negotiating Group.

In a simple mapping of the MLD-managed multicast groups, each Negotiating Group could be represented by a short bit-string selected by a Set Identifier (SI).  The Set Identifier effectively becomes the Negotiating Group.  To address the entire Negotiating Group, the bit-string is set to all ones.
To later address a subset of the group, a subset bit-string is used.

This allows a short, fixed-size BIER header to multicast to a very large storage cluster.

3.10. HTTP-Level Multicast

Scenarios where a number of HTTP-level clients quasi-synchronously access the same HTTP-level resource can benefit from the dynamic multicast group formation enabled by BIER.

For example, in the FLIPS (Flexible IP Services) solution by InterDigital, network attachment points (NAPs) provide a protocol mapping from HTTP to an efficient BIER-compliant transfer along a bit-indexed path between an ingress (here the NAP to which the clients connect) and an egress (here the NAP to which the HTTP-level server connects).  This is accomplished with the following steps:

o  At the client NAP, the HTTP request is terminated at the HTTP level at a local HTTP proxy.

o  The HTTP request is published by the client NAP towards the FQDN of the server given in the HTTP request.

   *  If no local BIER forwarding information exists towards the server NAP, a path computation entity (PCE) is consulted, which calculates a unicast path to the egress NAP (here the server NAP).  The PCE provides the forwarding information to the client NAP, which in turn caches the result.

   *  If local BIER forwarding information exists in the NAP-local cache, it is used instead.

o  Upon arrival of a client NAP request at the server NAP, the server NAP proxy forwards the HTTP request as a well-formed HTTP request locally to the server.

   *  If no client NAP forwarding information exists for the reverse direction, this information is requested from the PCE.  Upon arrival of such reverse-direction forwarding information, it is stored in a local table for future use.
o  Upon arrival of any further client NAP request at the server NAP for an HTTP request whose response is still outstanding, the client NAP is added to an internal request table and the request is suppressed from being sent to the server.

   *  If no client NAP forwarding information exists for the reverse direction, this information is requested from the PCE.  Upon arrival of such reverse-direction forwarding information, it is stored in a local table for future use.

o  Upon arrival of an HTTP response at the server NAP, the server NAP consults its internal request table for any outstanding HTTP requests for the same resource.  The server NAP retrieves the stored BIER forwarding information for the reverse direction for all outstanding HTTP requests found, and determines the path information to all client NAPs through a binary OR over all BIER forwarding identifiers with the same SI field.  This newly formed joint BIER multicast response identifier is used to send the HTTP response across the network; the procedure is repeated until all requests have been served.

o  Upon arrival of the HTTP response at a client NAP, it is sent by the client NAP proxy to the locally connected client.

A number of solutions exist to manage the necessary updates to locally stored BIER forwarding information for cases of client/server mobility as well as for resilience purposes.

Applications for HTTP-level multicast are manifold.  Examples are HTTP-level streaming (HLS) services, provided as an OTT offering, either at the level of end-user clients (connected to BIER-enabled NAPs) or site-level clients.  Others are corporate intranet storage cluster solutions that utilize HTTP-level synchronization.
   In multi-tenant data centre scenarios such as those outlined in
   Section 3.6, the aforementioned solution can satisfy HTTP-level
   requests for popular services and content in a multicast delivery
   manner.

   BIER enables such a solution through its bitfield representation of
   forwarding information, which is in turn used for ad-hoc multicast
   group formation at the HTTP request level.  While such a solution
   works well in SDN-enabled intra-domain scenarios, BIER would also
   enable the realization of such scenarios in multi-domain settings
   over legacy transport networks without relying on SDN-controlled
   infrastructure.

4.  Security Considerations

   There are no security issues introduced by this document.

5.  IANA Considerations

   There are no IANA considerations introduced by this document.

6.  Acknowledgments

   The authors would like to thank IJsbrand Wijnands, Greg Shepherd,
   and Christian Martin for their contributions.

7.  Contributing Authors

   Dirk Trossen
   InterDigital Inc
   Email: dirk.trossen@interdigital.com

8.  References

8.1.  Normative References

   [I-D.ietf-bier-architecture]
              Wijnands, I., Rosen, E., Dolganow, A., Przygienda, T.,
              and S. Aldrin, "Multicast Using Bit Index Explicit
              Replication", draft-ietf-bier-architecture-02 (work in
              progress), July 2015.

   [I-D.ietf-bier-mvpn]
              Rosen, E., Sivakumar, M., Wijnands, I., Aldrin, S.,
              Dolganow, A., and T. Przygienda, "Multicast VPN Using
              BIER", draft-ietf-bier-mvpn-01 (work in progress), July
              2015.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

8.2.  Informative References

   [I-D.ietf-spring-segment-routing]
              Filsfils, C., Previdi, S., Decraene, B., Litkowski, S.,
              and R. Shakir, "Segment Routing Architecture",
              draft-ietf-spring-segment-routing-04 (work in progress),
              July 2015.
   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
              "Protocol Independent Multicast - Sparse Mode (PIM-SM):
              Protocol Specification (Revised)", RFC 4601,
              DOI 10.17487/RFC4601, August 2006,
              <https://www.rfc-editor.org/info/rfc4601>.

   [RFC4607]  Holbrook, H. and B. Cain, "Source-Specific Multicast for
              IP", RFC 4607, DOI 10.17487/RFC4607, August 2006,
              <https://www.rfc-editor.org/info/rfc4607>.

   [RFC4664]  Andersson, L., Ed. and E. Rosen, Ed., "Framework for
              Layer 2 Virtual Private Networks (L2VPNs)", RFC 4664,
              DOI 10.17487/RFC4664, September 2006,
              <https://www.rfc-editor.org/info/rfc4664>.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L.
              Vicisano, "Bidirectional Protocol Independent Multicast
              (BIDIR-PIM)", RFC 5015, DOI 10.17487/RFC5015, October
              2007, <https://www.rfc-editor.org/info/rfc5015>.

   [RFC5331]  Aggarwal, R., Rekhter, Y., and E. Rosen, "MPLS Upstream
              Label Assignment and Context-Specific Label Space",
              RFC 5331, DOI 10.17487/RFC5331, August 2008,
              <https://www.rfc-editor.org/info/rfc5331>.

   [RFC6513]  Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in
              MPLS/BGP IP VPNs", RFC 6513, DOI 10.17487/RFC6513,
              February 2012, <https://www.rfc-editor.org/info/rfc6513>.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P.,
              Kreeger, L., Sridhar, T., Bursell, M., and C. Wright,
              "Virtual eXtensible Local Area Network (VXLAN): A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks", RFC 7348, DOI 10.17487/RFC7348,
              August 2014, <https://www.rfc-editor.org/info/rfc7348>.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              Uttaro, J., Drake, J., and W. Henderickx, "BGP MPLS-
              Based Ethernet VPN", RFC 7432, DOI 10.17487/RFC7432,
              February 2015, <https://www.rfc-editor.org/info/rfc7432>.
Authors' Addresses

   Nagendra Kumar
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC 27709
   US

   Email: naikumar@cisco.com

   Rajiv Asati
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC 27709
   US

   Email: rajiva@cisco.com

   Mach(Guoyi) Chen
   Huawei

   Email: mach.chen@huawei.com

   Xiaohu Xu
   Huawei

   Email: xuxiaohu@huawei.com

   Andrew Dolganow
   Nokia
   750D Chai Chee Rd
   06-06 Viva Business Park 469004
   Singapore

   Email: andrew.dolganow@nokia.com

   Tony Przygienda
   Ericsson
   300 Holger Way
   San Jose, CA 95134
   USA

   Email: antoni.przygienda@ericsson.com

   Arkadiy Gulko
   Thomson Reuters
   195 Broadway
   New York, NY 10007
   USA

   Email: arkadiy.gulko@thomsonreuters.com

   Dom Robinson
   id3as-company Ltd
   UK

   Email: Dom@id3as.co.uk

   Vishal Arya
   DirecTV Inc
   2230 E Imperial Hwy
   CA 90245
   USA

   Email: varya@directv.com

   Caitlin Bestler
   Nexenta Systems
   451 El Camino Real
   Santa Clara, CA
   US

   Email: caitlin.bestler@nexenta.com