Network Working Group                                           N. Kumar
Internet-Draft                                                  R. Asati
Intended status: Informational                                     Cisco
Expires: September 5, 2020                                       M. Chen
                                                                   X. Xu
                                                                  Huawei
                                                             A. Dolganow
                                                                   Nokia
                                                           T. Przygienda
                                                        Juniper Networks
                                                                A. Gulko
                                                         Thomson Reuters
                                                             D. Robinson
                                                       id3as-company Ltd
                                                                 V. Arya
                                                             DirecTV Inc
                                                                      C.
Bestler
                                                                 Nexenta
                                                           March 4, 2020

                             BIER Use Cases
                    draft-ietf-bier-use-cases-11.txt

Abstract

   Bit Index Explicit Replication (BIER) is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast related per-flow state. BIER also does not require any explicit tree-building protocol for its operation. A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR), and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR router adds a BIER header to the packet. The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to. The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

   This document describes some of the use cases for BIER.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 5, 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Specification of Requirements
   3. BIER Use Cases
      3.1. Multicast in L3VPN Networks
      3.2. Broadcast, Unknown unicast and Multicast (BUM) in EVPN
      3.3. IPTV and OTT Services
      3.4. Multi-Service, Converged L3VPN Network
      3.5. Control-Plane Simplification and SDN-Controlled Networks
      3.6. Data Center Virtualization/Overlay
      3.7. Financial Services
      3.8. 4K Broadcast Video Services
      3.9. Distributed Storage Cluster
      3.10. Hyper Text Transfer Protocol (HTTP) Level Multicast
   4. Security Considerations
   5. IANA Considerations
   6. Acknowledgments
   7. Contributing Authors
   8. References
      8.1. Normative References
      8.2. Informative References
   Authors' Addresses
1. Introduction

Bit Index Explicit Replication (BIER) [RFC8279] is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast related per-flow state. BIER also does not require any explicit tree-building protocol for its operation. A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR), and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR router adds a BIER header to the packet. The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to. The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network and no tree-building protocol that sets up trees on demand as users join a multicast flow. In that sense, BIER is potentially applicable to many services where multicast is used and is not limited to the examples described in this document. This document describes a few use cases where BIER could provide a benefit over existing mechanisms.

2. Specification of Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119] and RFC 8174 [RFC8174] when, and only when, they appear in all capitals, as shown here.

3. BIER Use Cases

3.1. Multicast in L3VPN Networks

The Multicast L3VPN architecture [RFC6513] describes many different profiles for transporting L3 multicast across a provider's network.
Each profile has its own tradeoffs (see Section 2.1 of [RFC6513]). When a "Multidirectional Inclusive" "Provider Multicast Service Interface" (MI-PMSI) is used, a single efficient tree is built per VPN, but traffic floods egress PEs that are part of the VPN yet have not joined a particular C-multicast flow. This problem can be solved with the "Selective" PMSI (S-PMSI) by building a special tree for only those PEs that have joined the C-multicast flow for that specific VPN. The more S-PMSIs there are, the less bandwidth is wasted through flooding, but the more state is created in the provider's network. This is a typical problem network operators face: finding the right balance between the amount of state carried in the network and how much flooding (wasted bandwidth) is acceptable. Some of the complexity of L3VPNs comes from providing different profiles to accommodate these trade-offs.

With BIER there is no trade-off between state and flooding. Since the receiver information is explicitly carried within the packet, there is no need to build S-PMSIs to deliver multicast to a subset of the VPN egress PEs.

MI-PMSIs and S-PMSIs are also used to provide the VPN context to the egress PE router that receives the multicast packet. In some MVPN profiles it is also required to know which ingress PE forwarded the packet. The target VPN is determined from the PMSI on which the packet is received. This means at least one PMSI is required per VPN, or per VPN and ingress PE, so the amount of state created in the network is proportional to the number of VPNs and ingress PEs. Creating PMSI state per VPN can be avoided by applying the procedures documented in [RFC5331].
This has, however, seen little adoption or implementation due to the excessive flooding it would cause: *all* VPN multicast packets are forwarded to *all* PEs that have one or more of the VPNs attached.

With BIER, the destination PEs are identified in the multicast packet, so there is no flooding concern when implementing [RFC5331]. For that reason there is no need to create a separate BIER domain per VPN; the VPN context can be carried in the multicast packet using the procedures defined in [RFC5331]. Also see [RFC8556] for more information.

With BIER only a few MVPN profiles remain relevant, reducing operational cost and making it easier to interoperate among different vendors.

3.2. Broadcast, Unknown unicast and Multicast (BUM) in EVPN

The current widespread adoption of L2VPN services [RFC4664], especially the upcoming EVPN solution [RFC7432], which overcomes many limitations of the Virtual Private LAN Service (VPLS), introduces the need for an efficient mechanism to replicate broadcast, unknown unicast and multicast (BUM) traffic towards the PEs that participate in the same EVPN instances (EVIs). As the simplest deployable mechanism, ingress replication is used, but it places a high burden on the ingress node and saturates the underlying links with many copies of the same frame headed to different PEs. Fortunately, EVPN internally signals the PMSI attribute [RFC6513] to establish transport for BUM frames, which allows deployment of whatever multicast replication services the underlying network layer can provide. It is therefore relatively simple to deploy BIER P-tunnels for EVPN and thereby distribute BUM traffic without creating the P-router state in the core that is required by Protocol Independent Multicast (PIM), Multipoint LDP (mLDP) or comparable solutions.
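The per-EVI delivery enabled by BIER can be illustrated with a short sketch. The Python below (all names and BFR-id assignments are hypothetical, not taken from any specification) builds a BIER bit-string containing only the PEs that participate in a given EVI, so a BUM frame over a shared sub-domain reaches only interested PEs:

```python
# Sketch of per-EVI BIER bit-string construction (hypothetical example):
# each PE is a BFER with a BFR-id, and a BUM frame for an EVI is sent
# with only the bits of that EVI's member PEs set.

# BFR-ids assigned to egress PEs in one BIER sub-domain (1-indexed).
bfr_id = {"PE1": 1, "PE2": 2, "PE3": 3, "PE4": 4}

# EVI membership: which PEs participate in each EVPN instance.
evi_members = {
    100: ["PE1", "PE2"],          # EVI 100 spans PE1 and PE2
    200: ["PE2", "PE3", "PE4"],   # EVI 200 spans PE2, PE3 and PE4
}

def bitstring_for(pes):
    """Set bit (bfr_id - 1) for every listed PE."""
    bits = 0
    for pe in pes:
        bits |= 1 << (bfr_id[pe] - 1)
    return bits

# Selective delivery: a BUM frame for EVI 100 reaches only PE1 and PE2,
# even though all four PEs share the same sub-domain.
assert bitstring_for(evi_members[100]) == 0b0011
assert bitstring_for(evi_members[200]) == 0b1110
```

Both EVIs share one sub-domain, yet each frame carries exactly the receiver set of its own EVI, which is the property discussed in the following paragraphs.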
Specifically, the same I-PMSI attribute suggested for MVPN can easily be used in EVPN, and given that EVPN can multiplex and demultiplex BUM frames on p2mp and mp2mp trees using upstream-assigned labels, a BIER P-tunnel can support BUM flooding for any number of EVIs over a single sub-domain for maximum scalability, while at the other extreme of the spectrum a single BIER sub-domain per EVI can be used if such a deployment is necessary.

Multiplexing EVIs onto the same PMSI normally forces the PMSI to span more than the necessary number of PEs, i.e. the union of all PEs participating in the EVIs multiplexed on the PMSI. Given the properties of BIER, it is however possible to encode in the receiver bitmask only the PEs that participate in the EVI that the BUM frame targets. In a sense, BIER acts as both an inclusive and a selective tree and can deliver a frame to only the set of interested receivers even though many others participate in the same PMSI.

As another significant advantage, it is conceivable that the same BIER tunnel needed for BUM frames could also optimize the delivery of multicast frames through the signaling of group memberships for the PEs involved, though this has not been specified to date.

3.3. IPTV and OTT Services

IPTV is a service well known for allowing both live and on-demand delivery of media traffic over an end-to-end managed IP network.

Over The Top (OTT) is a similar service, well known for allowing live and on-demand delivery of media traffic between IP domains, where the source is often on an external network relative to the receivers.

Content Delivery Network (CDN) operators provide layer 4 applications, and often some degree of managed layer 3 IP networks, that enable media to be securely and reliably delivered to many receivers.
In some models they may place applications within third-party networks, or they may place those applications at the edges of their own managed network peerings and similar inter-domain connections. CDNs provide capabilities to help publishers scale to meet large audience demand. Their applications are not limited to audio and video delivery, but may include static and dynamic web content, or optimized delivery for massively multiplayer gaming and similar workloads. Most publishers will use a CDN for public Internet delivery, and some publishers will use a CDN internally within their IPTV networks to resolve layer 4 complexity.

In a typical IPTV environment the egress routers connecting to the receivers build the tree towards the ingress router connecting to the IPTV servers. The egress routers rely on IGMP/MLD (static or dynamic) to learn about the receivers' interest in one or more multicast groups/channels. Interestingly, BIER could allow provisioning any new multicast group/channel by modifying the channel mapping on the ingress routers only. This is beneficial for linear IPTV video broadcasting, in which all receivers behind all egress PE routers receive the IPTV video traffic.

With BIER in an IPTV environment, there is no need for tree building from egress to ingress. Further, any addition of new channels or new egress routers can be controlled directly from the ingress router. When a new channel is added, the multicast group is mapped to a bit-string that includes all egress routers; the ingress router then starts sending the new channel and delivers it to all egress routers. As can be observed, there is no need for static IGMP provisioning on each egress router whenever a new group/channel is added. Instead, it can be controlled from the ingress router itself by configuring the new group-to-bitmask mapping there.
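The ingress-only provisioning described above can be sketched in a few lines. In this hypothetical example (group addresses, BFR-ids and function names are illustrative only), adding a linear channel amounts to installing one group-to-bit-string mapping at the BFIR:

```python
# Sketch (hypothetical names): for linear IPTV, the ingress BFIR maps each
# channel's multicast group to a bit-string covering all egress routers,
# so adding a channel touches only the ingress configuration.

egress_bfr_ids = [1, 2, 3, 4, 5]   # BFR-ids of all egress PE routers

channel_map = {}                    # group address -> BIER bit-string

def add_channel(group):
    """Provision a new channel at the ingress: set one bit per egress BFER."""
    bits = 0
    for i in egress_bfr_ids:
        bits |= 1 << (i - 1)
    channel_map[group] = bits

add_channel("232.1.1.1")
# All five egress routers receive the channel; no per-egress IGMP change.
assert channel_map["232.1.1.1"] == 0b11111
```

No egress router is touched: the only state that changes is the mapping table at the ingress.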
With BIER in an OTT environment, the edge routers in the CDN domain that terminate the OTT user sessions connect to the ingress BIER routers connecting the content-provider domains or a local cache server, and leverage the scalability benefit that BIER provides. This may rely on Multi-Protocol BGP (MP-BGP) interoperation (or similar) between the egress of one domain and the ingress of the next domain, or some other SDN control plane may prove a more effective and simpler way to deploy BIER. For a single CDN operator this could be managed within the layer 4 applications that they provide; the initial receiver in a remote domain may actually be an application operated by the CDN, which in turn acts as a source for the ingress BIER router in that remote domain, thereby keeping the BIER domains discrete.

3.4. Multi-Service, Converged L3VPN Network

Increasingly, operators deploy single networks for multiple services. For example, a single metro core network could be deployed to provide a residential IPTV retail service, a residential IPTV wholesale service, and a business L3VPN service with multicast. An operator may often wish to use a single architecture to deliver multicast for all of those services. In some cases, governing regulations may additionally require the same service capabilities for both wholesale and retail multicast services. To meet those requirements, some operators use the multicast architecture defined in [RFC5331]. However, the need to support many L3VPNs, with some of those L3VPNs scaling to hundreds of egress PEs and thousands of C-multicast flows, makes the scaling/efficiency issues described in earlier sections of this document even more prevalent. Additionally, support for tens of millions of BGP multicast A-D and join routes alone could be required in such networks, with all of the consequences that such a scale brings.
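The multiplicative growth of tree state described above can be made concrete with rough arithmetic. The figures below are purely hypothetical, chosen only to show how per-VPN and per-flow trees multiply while BIER keeps no per-flow state in the core:

```python
# Rough illustration (hypothetical figures) of the state comparison:
# with per-VPN/per-ingress-PE PMSIs plus selective trees, core state grows
# multiplicatively, while BIER keeps no per-flow state in the core.

vpns = 500
ingress_pes_per_vpn = 20
s_pmsis_per_vpn = 100          # selective trees for busy C-multicast flows

pmsi_core_state = vpns * ingress_pes_per_vpn + vpns * s_pmsis_per_vpn
bier_core_state = 0            # core keeps only unicast-derived BIER state

assert pmsi_core_state == 60000
assert bier_core_state == 0
```

Even with these modest assumptions the tree-based approach requires tens of thousands of core state entries, which grow further with every added VPN or flow.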
With BIER, again there is no need for tree building from egress to ingress for each L3VPN or for individual or grouped C-multicast flows. As described earlier, any addition of a new IPTV channel or new egress router can be controlled directly from the ingress router, and there is no flooding concern when implementing [RFC5331].

3.5. Control-Plane Simplification and SDN-Controlled Networks

With the advent of Software Defined Networking, some operators are looking at various ways to reduce the overall cost of providing networking services, including multicast delivery. Some of the alternatives being considered include minimizing capex by deploying network elements with a simplified control-plane function, minimizing operational cost by reducing the number of control protocols required to achieve a particular service, and so on. Segment Routing, as described in [RFC8402], provides a solution that could be used to provide a simplified control-plane architecture for unicast traffic. With Segment Routing deployed for unicast, a solution that simplifies the control plane for multicast is thus also required, or the operational and capex cost reductions will not be achieved to their full potential.

With BIER, there is no longer a need to run the control protocols required to build a distribution tree. If L3VPN with multicast, for example, is deployed using [RFC5331] with MPLS in the P-instance, the MPLS control plane would no longer be required. BIER also allows migration of C-multicast flows from a non-BIER to a BIER-based architecture, which simplifies the operation of transitioning the control plane. Finally, for operators who desire a centralized, offloaded control plane, the multicast overlay as well as BIER forwarding could be programmed by a controller.

3.6.
Data Center Virtualization/Overlay

Virtual eXtensible Local Area Network (VXLAN) [RFC7348] is a network virtualization overlay technology intended for multi-tenant data center networks. To emulate a layer 2 flooding domain across the layer 3 underlay, it requires a 1:1 or n:1 mapping between the VXLAN Virtual Network Instance (VNI) and a corresponding IP multicast group. In other words, it requires enabling multicast in the underlay, for instance by running the PIM-SM [RFC7761] or BIDIR-PIM [RFC5015] multicast routing protocol there. VXLAN is designed to support a maximum of 16M VNIs. With a 1:1 mapping ratio, 16M multicast groups would be required in the underlay, which would be a significant challenge to both the control plane and the data plane of the data center switches. With an n:1 mapping ratio, bandwidth utilization is inefficient, which is not optimal in data center networks. More importantly, many data center operators regard running multicast in data center networks as an undesirable burden from the perspective of network operation and maintenance. As a result, many VXLAN implementations claim to support ingress replication, since ingress replication eliminates the burden of running multicast in the underlay. Ingress replication is an acceptable choice in small networks where the average number of receivers per multicast flow is not too large. However, in multi-tenant data center networks, especially those in which the Network Virtualization Edge (NVE) functionality is enabled on a large number of physical servers, the average number of NVEs per VN instance can be very large. As a result, the ingress replication scheme results in serious bandwidth waste in the underlay and a significant replication burden on ingress NVEs.
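The replication burden can be quantified with a back-of-the-envelope sketch. The numbers below are hypothetical, chosen only to contrast the per-packet copy count of ingress replication with a multicast-capable (e.g. BIER) underlay:

```python
# Back-of-the-envelope sketch (hypothetical numbers): with ingress
# replication an NVE sends one unicast copy of each BUM packet per remote
# NVE, while a multicast-capable underlay carries a single copy from the
# ingress and replicates only where paths diverge.

nves_in_vn = 200               # NVEs participating in one VN instance
bum_pps = 1000                 # BUM packets per second from one ingress NVE

ir_copies_per_sec = bum_pps * (nves_in_vn - 1)   # ingress replication
bier_copies_per_sec = bum_pps                    # one packet out of ingress

assert ir_copies_per_sec == 199000
assert bier_copies_per_sec == 1000
```

At this scale the ingress NVE transmits roughly 200 times the original traffic volume under ingress replication, which is the bandwidth waste the paragraph above refers to.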
With BIER, there is no need to maintain that huge amount of multicast state in the underlay, while the delivery efficiency of overlay BUM traffic is the same as if a stateful multicast protocol such as PIM-SM or BIDIR-PIM were enabled in the underlay.

3.7. Financial Services

Financial services rely extensively on IP multicast to deliver stock market data and its derivatives, and critically require an optimal-latency path (from publisher to subscribers), deterministic convergence (so as to deliver market data derivatives fairly to each client) and secured delivery.

Current multicast solutions, e.g. PIM, mLDP, etc., do not sufficiently address these requirements. The reason is that the current solutions are primarily subscriber-driven, i.e. the multicast tree is set up using reverse path forwarding techniques; as a result, the chosen path for market data may not be latency-optimal from the publisher to the (market data) subscribers.

As the number of multicast flows grows, the convergence time may increase and become somewhat nondeterministic from the first to the last flow, depending on platforms and implementations. Also, running more protocols in the network introduces more components whose secure operation must be ensured, undermining the overall security posture.

BIER enables setting up the most optimal path from publisher to subscribers by leveraging the unicast routing relevant to the subscribers. With BIER, multicast convergence is as fast as unicast convergence, and it is uniform and deterministic regardless of the number of multicast flows. This makes BIER well suited to achieving fairness for market derivatives for each subscriber.

3.8. 4K Broadcast Video Services

In a broadcast network environment, the media content is sourced from various content providers across different locations.
4K broadcast video is an evolving service that places enormous demands on the network infrastructure in terms of low latency, fast convergence, high throughput, and high bandwidth.

In a typical broadcast satellite network environment, the receivers are the satellite terminal nodes, which receive the content from various sources and feed the data to the satellite. Typically a multicast group address is assigned to each source. Currently the receivers can join the sources using either PIM-SM [RFC7761] or PIM-SSM [RFC4607].

In such network scenarios, PIM is normally the multicast routing protocol used to establish the tree between the ingress routers connecting the content media sources and the egress routers connecting the receivers. In PIM-SM mode, the receivers rely on the shared tree to learn the source address and build the source tree, while in PIM-SSM mode, IGMPv3 is used by the receiver to signal the source address to the egress router. In either case, as the number of sources increases, the number of multicast trees in the core also increases, resulting in more multicast state entries in the core and increased convergence time.

With BIER in a 4K broadcast satellite network environment, there is no need to run PIM in the core and no need to maintain any multicast state there. The obvious advantages of BIER are the small amount of multicast state maintained in the core and the faster convergence (typically on par with unicast convergence). The edge router at the content-source facility can act as the BFIR and the edge routers at the receiver facilities can act as BFERs. Any new content source or satellite terminal node can be added seamlessly to the BIER domain. The group membership from the receivers to the sources can be provisioned either by the Border Gateway Protocol (BGP) or by an SDN controller.

3.9.
Distributed Storage Cluster

Distributed storage clusters can benefit from dynamically targeted multicast messaging, both for dynamic load-balancing negotiations and for efficient concurrent replication of content to multiple targets.

For example, in the NexentaEdge storage cluster (by Nexenta Systems) a Chunk Put transaction is accomplished with the following steps:

o  The Client multicasts a 'Chunk Put Request' to a multicast group known as a Negotiating Group. This group holds a small number of storage targets that are collectively responsible for providing storage for a stable subset of the chunks to be stored. In NexentaEdge this is based upon a cryptographic hash of the Object Name or the Chunk payload.

o  Each recipient of the 'Chunk Put Request' unicasts a 'Chunk Put Response' to the Client indicating when it could accept a transfer of the Chunk.

o  The Client selects a different multicast group (a Rendezvous Group) which will target the set of storage targets selected to hold the Chunk. This is a subset of the Negotiating Group, presumably selected so as to complete the transfer as early as possible.

o  The Client multicasts a 'Chunk Put Accept' message to inform the Negotiating Group of which storage targets have been selected, when the transfer will occur and over what multicast group.

o  The Client performs the multicast transfer over the Rendezvous Group at the agreed-upon time.

o  Each recipient sends a 'Chunk Put Ack' to positively or negatively acknowledge the chunk transfer.

o  The Client will retry the entire transaction as needed if there are not yet sufficient replicas of the Chunk.
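The group-selection part of the transaction above can be sketched as follows. This is a minimal illustration under assumed parameters (cluster size, group size, and the hash-to-group mapping are all hypothetical, not NexentaEdge's actual scheme): the Negotiating Group is derived deterministically from the Object Name, and the Rendezvous Group is a further subset, so both can be expressed as bit-strings over a single BIER sub-domain:

```python
# Sketch (hypothetical parameters) of mapping the negotiation to BIER:
# the Negotiating Group is a deterministic, hash-selected subset of the
# cluster's storage targets, and the Rendezvous Group is a further subset.

import hashlib

cluster_size = 64          # storage targets, BFR-ids 1..64
group_size = 8             # targets per Negotiating Group

def negotiating_group(object_name):
    """Deterministically pick a Negotiating Group from the Object Name."""
    digest = hashlib.sha256(object_name.encode()).digest()
    start = (digest[0] % (cluster_size // group_size)) * group_size
    return set(range(start + 1, start + group_size + 1))   # BFR-ids

def bitstring(bfr_ids):
    bits = 0
    for i in bfr_ids:
        bits |= 1 << (i - 1)
    return bits

ng = negotiating_group("movies/demo.mp4")
rendezvous = set(sorted(ng)[:3])       # e.g. the three earliest responders

assert len(ng) == group_size
assert rendezvous <= ng                # Rendezvous Group is a subset
# The rendezvous bit-string sets only bits already set for the group.
assert bitstring(rendezvous) & bitstring(ng) == bitstring(rendezvous)
```

Because both groups are just bit subsets of one domain, no multicast group needs to be provisioned for either step.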
Chunks are retrieved by multicasting a 'Chunk Get Request' to the same Negotiating Group, collecting 'Chunk Get Responses', picking one source from those responses, sending a 'Chunk Get Accept' message to identify the selected source, and having the selected storage server unicast the chunk to the client.

Chunks are found by the Object Name or by having the cryptographic hash of the payload chunks recorded in a "chunk reference" in a metadata chunk. The metadata chunks are found using the Object Name.

The general pattern in use here, which should apply to other cluster applications, is that multicast messages are sent amongst a dynamically selected subset of the entire cluster, which may result in exchanging further messages over an even more dynamically selected smaller subset.

Currently the distributed storage application discussed uses Multicast Listener Discovery (MLD) [RFC3810] managed IPv6 multicast groups. This in turn requires either a push-based mechanism for dynamically configuring Rendezvous Groups or pre-provisioning a very large number of potential Rendezvous Groups and dynamically selecting the multicast group that will deliver to the selected set of storage targets.

BIER would eliminate the need for a vast number of multicast groups. The entire cluster can be represented as a single BIER domain using only the default sub-domain. Each Negotiating Group is simply a subset of the whole that is deterministically selected by the cryptographic hash of the Object Name or Chunk payload. Each Rendezvous Group is a further subset of the Negotiating Group.

In a simple mapping of the MLD-managed multicast groups, each Negotiating Group could be represented by a short bit-string selected by a Set Identifier. The Set Identifier effectively becomes the Negotiating Group.
To address the entire Negotiating Group, the bit-string is set to all ones. To later address a subset of the group, a subset bit-string is used.

This allows a short, fixed-size BIER header to multicast to a very large storage cluster.

3.10. Hyper Text Transfer Protocol (HTTP) Level Multicast

Scenarios where a number of HTTP [RFC7231] clients are quasi-synchronously accessing the same HTTP-level resource can benefit from the dynamic multicast group formation enabled by BIER.

For example, in the FLIPS (Flexible IP Services) solution by InterDigital, network attachment points (NAPs) provide a protocol mapping from HTTP to an efficient BIER-compliant transfer along a bit-indexed path between an ingress (here the NAP to which the clients connect) and an egress (here the NAP to which the HTTP-level server connects). This is accomplished with the following steps:

o  At the client NAP, the HTTP request is terminated at the HTTP level at a local HTTP proxy.

o  The HTTP request is published by the client NAP towards the Fully Qualified Domain Name (FQDN) of the server given in the HTTP request.

   *  If no local BIER forwarding information exists for the server (NAP), a path computation entity (PCE) is consulted, which calculates a unicast path to the egress NAP (here the server NAP). The PCE provides the forwarding information to the client NAP, which in turn caches the result.

      +  If the local BIER forwarding information exists in the NAP-local cache, it is used instead.

o  Upon arrival of a client NAP request at the server NAP, the server NAP proxy forwards the HTTP request as a well-formed HTTP request locally to the server.

   *  If no client NAP forwarding information exists for the reverse direction, this information is requested from the PCE. Upon arrival of such reverse-direction forwarding information, it is stored in a local table for future use.
o  Upon arrival at the server NAP of any further client NAP request for an HTTP request whose response is still outstanding, the client NAP is added to an internal request table and the request is suppressed from being sent to the server.

   *  If no client NAP forwarding information exists for the reverse direction, this information is requested from the PCE. Upon arrival of such reverse-direction forwarding information, it is stored in a local table for future use.

o  Upon arrival of an HTTP response at the server NAP, the server NAP consults its internal request table for any outstanding HTTP requests for the same request.

   The server NAP retrieves the stored BIER forwarding information for the reverse direction for all outstanding HTTP requests found above and determines the path information to all client NAPs through a binary OR over all BIER forwarding identifiers with the same SI field. This newly formed joint BIER multicast response identifier is used to send the HTTP response across the network; the procedure is executed until all requests have been served.

o  Upon arrival of the HTTP response at a client NAP, it is sent by the client NAP proxy to the locally connected client.

A number of solutions exist to manage the necessary updates to locally stored BIER forwarding information for cases of client/server mobility as well as for resilience purposes.

Applications for HTTP-level multicast are manifold. Examples are HTTP-level streaming (HLS) services, provided as an OTT offering, either at the level of end-user clients (connected to BIER-enabled NAPs) or site-level clients. Others are corporate intranet storage cluster solutions that utilize HTTP-level synchronization.
   In multi-tenant data centre scenarios such as those outlined in
   Section 3.6, the aforementioned solution can satisfy HTTP-level
   requests for popular services and content in a multicast delivery
   manner.

   BIER enables such a solution through its bitfield representation of
   forwarding information, which is in turn used for ad-hoc multicast
   group formation at the HTTP request level.  While such a solution
   works well in SDN-enabled intra-domain scenarios, BIER would also
   enable its realization in multi-domain scenarios over legacy
   transport networks, without relying on SDN-controlled
   infrastructure.  Also see [I-D.ietf-bier-multicast-http-response]
   for more information.

4.  Security Considerations

   There are no security issues introduced by this document.

5.  IANA Considerations

   There are no IANA considerations introduced by this document.

6.  Acknowledgments

   The authors would like to thank IJsbrand Wijnands, Greg Shepherd,
   and Christian Martin for their contributions.

   The authors would also like to thank Anoop Ghanwani for his thorough
   review and comments.

7.  Contributing Authors

   Dirk Trossen
   InterDigital Inc

   Email: dirk.trossen@interdigital.com

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017.

   [RFC8279]  Wijnands, IJ., Ed., Rosen, E., Ed., Dolganow, A.,
              Przygienda, T., and S. Aldrin, "Multicast Using Bit Index
              Explicit Replication (BIER)", RFC 8279,
              DOI 10.17487/RFC8279, November 2017.

   [RFC8556]  Rosen, E., Ed., Sivakumar, M., Przygienda, T., Aldrin,
              S., and A.
              Dolganow, "Multicast VPN Using Bit Index
              Explicit Replication (BIER)", RFC 8556,
              DOI 10.17487/RFC8556, April 2019.

8.2.  Informative References

   [I-D.ietf-bier-multicast-http-response]
              Trossen, D., Rahman, A., Wang, C., and T. Eckert,
              "Applicability of BIER Multicast Overlay for Adaptive
              Streaming Services", draft-ietf-bier-multicast-http-
              response-03 (work in progress), February 2020.

   [RFC3810]  Vida, R., Ed. and L. Costa, Ed., "Multicast Listener
              Discovery Version 2 (MLDv2) for IPv6", RFC 3810,
              DOI 10.17487/RFC3810, June 2004.

   [RFC4607]  Holbrook, H. and B. Cain, "Source-Specific Multicast for
              IP", RFC 4607, DOI 10.17487/RFC4607, August 2006.

   [RFC4664]  Andersson, L., Ed. and E. Rosen, Ed., "Framework for
              Layer 2 Virtual Private Networks (L2VPNs)", RFC 4664,
              DOI 10.17487/RFC4664, September 2006.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano,
              "Bidirectional Protocol Independent Multicast (BIDIR-
              PIM)", RFC 5015, DOI 10.17487/RFC5015, October 2007.

   [RFC5331]  Aggarwal, R., Rekhter, Y., and E. Rosen, "MPLS Upstream
              Label Assignment and Context-Specific Label Space",
              RFC 5331, DOI 10.17487/RFC5331, August 2008.

   [RFC6513]  Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in MPLS/
              BGP IP VPNs", RFC 6513, DOI 10.17487/RFC6513, February
              2012.

   [RFC7231]  Fielding, R., Ed. and J. Reschke, Ed., "Hypertext
              Transfer Protocol (HTTP/1.1): Semantics and Content",
              RFC 7231, DOI 10.17487/RFC7231, June 2014.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
              L., Sridhar, T., Bursell, M., and C. Wright, "Virtual
              eXtensible Local Area Network (VXLAN): A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              Uttaro, J., Drake, J., and W.
              Henderickx, "BGP MPLS-Based
              Ethernet VPN", RFC 7432, DOI 10.17487/RFC7432, February
              2015.

   [RFC7761]  Fenner, B., Handley, M., Holbrook, H., Kouvelas, I.,
              Parekh, R., Zhang, Z., and L. Zheng, "Protocol
              Independent Multicast - Sparse Mode (PIM-SM): Protocol
              Specification (Revised)", STD 83, RFC 7761,
              DOI 10.17487/RFC7761, March 2016.

   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
              Decraene, B., Litkowski, S., and R. Shakir, "Segment
              Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
              July 2018.

Authors' Addresses

   Nagendra Kumar
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC 27709
   US

   Email: naikumar@cisco.com

   Rajiv Asati
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC 27709
   US

   Email: rajiva@cisco.com

   Mach(Guoyi) Chen
   Huawei

   Email: mach.chen@huawei.com

   Xiaohu Xu
   Huawei

   Email: xuxiaohu@huawei.com

   Andrew Dolganow
   Nokia
   750D Chai Chee Rd
   06-06 Viva Business Park 469004
   Singapore

   Email: andrew.dolganow@nokia.com

   Tony Przygienda
   Juniper Networks
   1194 N. Mathilda Ave
   Sunnyvale, CA 95089
   USA

   Email: prz@juniper.net

   Arkadiy Gulko
   Thomson Reuters
   195 Broadway
   New York NY 10007
   USA

   Email: arkadiy.gulko@thomsonreuters.com

   Dom Robinson
   id3as-company Ltd
   UK

   Email: Dom@id3as.co.uk

   Vishal Arya
   DirecTV Inc
   2230 E Imperial Hwy
   CA 90245
   USA

   Email: varya@directv.com

   Caitlin Bestler
   Nexenta Systems
   451 El Camino Real
   Santa Clara, CA
   US

   Email: caitlin.bestler@nexenta.com