Network Working Group                                           N. Kumar
Internet-Draft                                                  R. Asati
Intended status: Informational                                     Cisco
Expires: August 4, 2019                                          M. Chen
                                                                   X. Xu
                                                                  Huawei
                                                             A. Dolganow
                                                                   Nokia
                                                           T. Przygienda
                                                        Juniper Networks
                                                                A. Gulko
                                                         Thomson Reuters
                                                             D. Robinson
                                                       id3as-company Ltd
                                                                 V. Arya
                                                             DirecTV Inc
                                                              C. Bestler
                                                                 Nexenta
                                                        January 31, 2019

                             BIER Use Cases
                    draft-ietf-bier-use-cases-09.txt

Abstract

Bit Index Explicit Replication (BIER) is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast-related per-flow state. BIER also does not require any explicit tree-building protocol for its operation. A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR), and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR router adds a BIER header to the packet. The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to. The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

This document describes some of the use cases for BIER.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on August 4, 2019.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction
2. Specification of Requirements
3. BIER Use Cases
   3.1. Multicast in L3VPN Networks
   3.2. BUM in EVPN
   3.3. IPTV and OTT Services
   3.4. Multi-Service, Converged L3VPN Network
   3.5. Control-Plane Simplification and SDN-Controlled Networks
   3.6. Data Center Virtualization/Overlay
   3.7. Financial Services
   3.8. 4K Broadcast Video Services
   3.9. Distributed Storage Cluster
   3.10. HTTP-Level Multicast
4. Security Considerations
5. IANA Considerations
6. Acknowledgments
7. Contributing Authors
8. References
   8.1. Normative References
   8.2. Informative References
Authors' Addresses

1. Introduction

Bit Index Explicit Replication (BIER) [RFC8279] is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast-related per-flow state. BIER also does not require any explicit tree-building protocol for its operation. A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR), and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR router adds a BIER header to the packet. The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to. The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.

The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network, and no tree-building protocol that sets up trees on demand as users join a multicast flow. In that sense, BIER is potentially applicable to many services where multicast is used, and is not limited to the examples described in this document. This document describes a few use cases where BIER provides benefits over existing mechanisms.

2. Specification of Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

3. BIER Use Cases

3.1. Multicast in L3VPN Networks

The Multicast L3VPN architecture [RFC6513] describes many different profiles to transport L3 multicast across a provider's network. Each profile has its own trade-offs (see Section 2.1 of [RFC6513]).
When using the "Multidirectional Inclusive Provider Multicast Service Interface" (MI-PMSI), an efficient tree is built per VPN, but it causes flooding of egress PEs that are part of the VPN but have not joined a particular C-multicast flow. This problem can be solved with the "Selective" PMSI (S-PMSI) by building a special tree for only those PEs that have joined the C-multicast flow for that specific VPN. The more S-PMSIs, the less bandwidth is wasted due to flooding, but the more state is created in the provider's network. This is a typical problem network operators face: finding the right balance between the amount of state carried in the network and how much flooding (waste of bandwidth) is acceptable.

Some of the complexity of multicast L3VPNs comes from providing different profiles to accommodate these trade-offs.

With BIER there is no trade-off between state and flooding. Since the receiver information is explicitly carried within the packet, there is no need to build S-PMSIs to deliver multicast to a subset of the VPN egress PEs.

MI-PMSIs and S-PMSIs are also used to provide the VPN context to the egress PE router that receives the multicast packet. In some MVPN profiles it is also required to know which ingress PE forwarded the packet. Based on the PMSI the packet is received from, the target VPN is determined. This means there is a requirement to have at least one PMSI per VPN, or per VPN and ingress PE, so the amount of state created in the network is proportional to the number of VPNs and ingress PEs. Creating PMSI state per VPN can be prevented by applying the procedures documented in [RFC5331].
However, these procedures have seen little adoption or implementation, due to the excessive flooding they would cause: *all* VPN multicast packets are forwarded to *all* PEs that have one or more of the VPNs attached.

With BIER, the destination PEs are identified in the multicast packet, so there is no flooding concern when implementing [RFC5331]. For that reason there is no need to create multiple BIER domains per VPN; the VPN context can be carried in the multicast packet using the procedures defined in [RFC5331]. See also [I-D.ietf-bier-mvpn] for more information.

With BIER, only a few MVPN profiles remain relevant, which reduces operational cost and makes it easier to interoperate among different vendors.

3.2. BUM in EVPN

The current widespread adoption of L2VPN services [RFC4664], especially the upcoming EVPN solution [RFC7432], which overcomes many limitations of VPLS, introduces the need for an efficient mechanism to replicate broadcast, unknown unicast and multicast (BUM) traffic towards the PEs that participate in the same EVPN instances (EVIs). Ingress replication is the simplest deployable mechanism, but it places a high burden on the ingress node and saturates the underlying links with many copies of the same frame headed to different PEs. Fortunately, EVPN internally signals the PMSI attribute [RFC6513] to establish transport for BUM frames, and thereby allows deployment of the plethora of multicast replication services that the underlying network layer can provide. It is therefore relatively simple to deploy BIER P-tunnels for EVPN and thereby distribute BUM traffic without creating the P-router state in the core that is required by PIM, mLDP or comparable solutions.
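The per-EVI bit-indexed encoding described above can be sketched in a few lines. The following illustration (all BFR-ids and EVI memberships are made up; this is not taken from any implementation) shows how the set of egress PEs participating in each EVI maps to a BIER bitstring, so that a BUM frame reaches only its own EVI's PEs even when all EVIs share one sub-domain:

```python
# Hypothetical sketch: encoding the egress PEs of an EVPN instance (EVI)
# as a BIER bitstring, so a BUM frame is replicated only to the PEs that
# participate in that EVI.  BFR-ids and memberships are illustrative.

def bitstring_for(bfr_ids, bsl=256):
    """Build a BIER bitstring (as an int) with one bit set per BFR-id.

    As in RFC 8279, BFR-id 1 corresponds to the least significant bit."""
    bs = 0
    for bfr_id in bfr_ids:
        assert 1 <= bfr_id <= bsl, "BFR-id must fit the bitstring length"
        bs |= 1 << (bfr_id - 1)
    return bs

# EVI -> participating egress PEs (by BFR-id).  A single BIER sub-domain
# can carry BUM traffic for any number of EVIs at once.
evi_members = {
    100: {1, 3, 7},   # EVI 100 spans PEs 1, 3 and 7
    200: {3, 4},      # EVI 200 spans PEs 3 and 4
}

# A BUM frame for EVI 100 carries only the bits of its own PEs, even
# though both EVIs share the same sub-domain.
print(bin(bitstring_for(evi_members[100])))   # 0b1000101
```

In this model, BIER behaves as both an inclusive and a selective tree: narrowing delivery to one EVI is simply a matter of setting fewer bits.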
Specifically, the same I-PMSI attribute suggested for MVPN can easily be used in EVPN. Given that EVPN can multiplex and demultiplex BUM frames on P2MP and MP2MP trees using upstream-assigned labels, a BIER P-tunnel can support BUM flooding for any number of EVIs over a single sub-domain for maximum scalability, while at the other extreme of the spectrum it allows a single BIER sub-domain per EVI if such a deployment is necessary.

Multiplexing EVIs onto the same PMSI normally forces the PMSI to span more than the necessary number of PEs, i.e. the union of all PEs participating in the EVIs multiplexed on the PMSI. Given the properties of BIER, it is however possible to encode in the receiver bitmask only the PEs that participate in the EVI that the BUM frame targets. In a sense, BIER acts as both an inclusive and a selective tree, and can deliver a frame to only the set of interested receivers even though many others participate in the same PMSI.

As another significant advantage, the same BIER tunnel needed for BUM frames could conceivably also optimize the delivery of multicast frames through the signaling of group memberships for the PEs involved, although this has not been specified to date.

3.3. IPTV and OTT Services

IPTV is a service well known for allowing both live and on-demand delivery of media traffic over an end-to-end managed IP network.

Over The Top (OTT) is a similar service, well known for allowing live and on-demand delivery of media traffic between IP domains, where the source is often on an external network relative to the receivers.

Content Delivery Network (CDN) operators provide layer 4 applications, and often some degree of managed layer 3 IP networks, that enable media to be securely and reliably delivered to many receivers.
In some models they may place applications within third-party networks, or they may place those applications at the edges of their own managed network peerings and similar inter-domain connections. CDNs provide capabilities that help publishers scale to meet large audience demand. Their applications are not limited to audio and video delivery, but may include static and dynamic web content, or optimized delivery for massively multiplayer gaming and similar services. Most publishers will use a CDN for public Internet delivery, and some publishers will use a CDN internally within their IPTV networks to resolve layer 4 complexity.

In a typical IPTV environment, the egress routers connecting to the receivers build the tree towards the ingress router connecting to the IPTV servers. The egress routers rely on IGMP/MLD (static or dynamic) to learn about the receivers' interest in one or more multicast groups/channels. Interestingly, BIER allows provisioning any new multicast group/channel by modifying only the channel mapping on the ingress routers. This is deemed beneficial for linear IPTV video broadcasting, in which all receivers behind all egress PE routers receive the IPTV video traffic.

With BIER in an IPTV environment, there is no need for tree building from egress to ingress. Further, any addition of new channels or new egress routers can be controlled directly from the ingress router. When a new channel is added, its multicast group is mapped to a bit string that includes all egress routers, and the ingress router starts sending the new channel to all of them. As can be observed, there is no need for static IGMP provisioning on each egress router whenever a new group/channel is added; instead, it can be controlled from the ingress router itself by configuring the new group-to-bit-string mapping there.
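The ingress-only provisioning described above amounts to maintaining a single group-to-bitstring table at the BFIR. A minimal sketch, with hypothetical group addresses and BFR-ids:

```python
# Minimal sketch (group addresses and BFR-ids are hypothetical) of
# ingress-only channel provisioning: each channel's multicast group maps
# to a bitstring covering the egress routers, and adding a channel
# touches only this table on the ingress router (BFIR).

def bitstring_for(bfr_ids):
    bs = 0
    for bfr_id in bfr_ids:
        bs |= 1 << (bfr_id - 1)
    return bs

all_egress = {1, 2, 3, 4, 5}    # every egress router (BFER) in the domain

channel_map = {
    "232.1.1.1": bitstring_for(all_egress),   # existing linear channel
}

# Provisioning a new channel: a single change, at the ingress only; no
# static IGMP configuration is needed on any egress router.
channel_map["232.1.1.2"] = bitstring_for(all_egress)

assert channel_map["232.1.1.2"] == 0b11111
```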
With BIER in an OTT environment, the edge routers in the CDN domain that terminate the OTT user sessions connect to the ingress BIER routers attached to the content provider domains or to a local cache server, and thereby leverage the scalability benefit that BIER provides. This may rely on MBGP interoperation (or similar) between the egress of one domain and the ingress of the next domain, or some other SDN control plane may prove a more effective and simpler way to deploy BIER. For a single CDN operator this could be well managed in the layer 4 applications that they provide; the initial receiver in a remote domain may actually be an application operated by the CDN, which in turn acts as a source for the ingress BIER router in that remote domain, and by doing so keeps the BIER domains discrete.

3.4. Multi-Service, Converged L3VPN Network

Increasingly, operators deploy single networks for multiple services. For example, a single metro core network could be deployed to provide a residential IPTV retail service, a residential IPTV wholesale service, and a business L3VPN service with multicast. An operator may often desire a single architecture to deliver multicast for all of those services. In some cases, governing regulations may additionally require the same service capabilities for both wholesale and retail multicast services. To meet those requirements, some operators use the multicast architecture defined in [RFC5331].

However, the need to support many L3VPNs, with some of those L3VPNs scaling to hundreds of egress PEs and thousands of C-multicast flows, makes the scaling/efficiency issues described in earlier sections of this document even more prevalent. Additionally, support for tens of millions of BGP multicast A-D and join routes alone could be required in such networks, with all of the consequences that such a scale brings.
With BIER, again, there is no need for tree building from egress to ingress for each L3VPN or for individual or groups of C-multicast flows. As described earlier, any addition of a new IPTV channel or a new egress router can be controlled directly from the ingress router, and there is no flooding concern when implementing [RFC5331].

3.5. Control-Plane Simplification and SDN-Controlled Networks

With the advent of Software-Defined Networking, some operators are looking at various ways to reduce the overall cost of providing networking services, including multicast delivery. Some of the alternatives being considered include minimizing capex through the deployment of network elements with a simplified control-plane function, minimizing operational cost by reducing the number of control protocols required to achieve a particular service, etc. Segment Routing, as described in [RFC8402], provides a solution that could be used to provide a simplified control-plane architecture for unicast traffic. With Segment Routing deployed for unicast, a solution that simplifies the control plane for multicast would thus also be required, or operational and capex cost reductions will not be achieved to their full potential.

With BIER, there is no longer a need to run the control protocols required to build a distribution tree. If L3VPN with multicast, for example, is deployed using [RFC5331] with MPLS in the P-instance, the MPLS control plane would no longer be required. BIER also allows migration of C-multicast flows from a non-BIER to a BIER-based architecture, which simplifies the operation of transitioning the control plane. Finally, for operators who desire a centralized, offloaded control plane, the multicast overlay as well as BIER forwarding could be programmed from a controller.

3.6. Data Center Virtualization/Overlay

Virtual eXtensible Local Area Network (VXLAN) [RFC7348] is a network virtualization overlay technology intended for multi-tenancy data center networks. To emulate a layer 2 flooding domain across the layer 3 underlay, it requires a 1:1 or n:1 mapping between a VXLAN Virtual Network Instance (VNI) and the corresponding IP multicast group. In other words, it requires enabling multicast capability in the underlay; for instance, it requires enabling the PIM-SM [RFC4601] or PIM-BIDIR [RFC5015] multicast routing protocol in the underlay. VXLAN is designed to support a maximum of 16M VNIs. With a mapping ratio of 1:1, this would require 16M multicast groups in the underlay, which would become a significant challenge to both the control plane and the data plane of the data center switches. With a mapping ratio of n:1, it would result in inefficient bandwidth utilization, which is not optimal in data center networks. More importantly, many data center operators regard running multicast in data center networks as an undesirable burden from the perspective of network operation and maintenance. As a result, many VXLAN implementations claim to support the ingress replication capability, since ingress replication eliminates the burden of running multicast in the underlay. Ingress replication is an acceptable choice in small networks where the average number of receivers per multicast flow is not too large. However, in multi-tenant data center networks, especially those in which the NVE functionality is enabled on a large number of physical servers, the average number of NVEs per VN instance can be very large. As a result, the ingress replication scheme would result in serious bandwidth waste in the underlay and a significant replication burden on ingress NVEs.
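The scale pressure described above can be made concrete with a back-of-the-envelope comparison; the numbers below are purely illustrative, not measurements from any deployment:

```python
# Back-of-the-envelope comparison (illustrative numbers only): per-BUM-
# frame copies sent by an ingress NVE, and underlay multicast state,
# under ingress replication, a 1:1 VNI-to-group mapping, and BIER.

nves_per_vni = 500            # NVEs participating in one VN instance
vni_count = 16_000_000        # 24-bit VNI space, as noted above

# Ingress replication: one unicast copy per remote NVE, no underlay state.
ir_copies = nves_per_vni - 1

# 1:1 VNI-to-group mapping: one copy sent, but one underlay group per VNI.
group_state_1to1 = vni_count

# BIER: one copy with a bitstring, and no per-flow state in the underlay;
# transit BFRs replicate along the unicast topology.
bier_copies, bier_state = 1, 0

print(ir_copies, group_state_1to1, bier_copies, bier_state)
```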
With BIER, there is no need to maintain that huge amount of multicast state in the underlay anymore, while the delivery efficiency of overlay BUM traffic is the same as if a stateful multicast protocol such as PIM-SM or PIM-BIDIR were enabled in the underlay.

3.7. Financial Services

Financial services extensively rely on IP multicast to deliver stock market data and its derivatives, and critically require an optimal-latency path (from publisher to subscribers), deterministic convergence (so as to deliver market data derivatives fairly to each client), and secure delivery.

Current multicast solutions, e.g. PIM, mLDP, etc., do not sufficiently address these requirements. The reason is that current solutions are primarily subscriber-driven, i.e. the multicast tree is set up using reverse-path-forwarding techniques, and as a result the chosen path for market data may not be latency-optimal from the publisher to the (market data) subscribers.

As the number of multicast flows grows, the convergence time might increase and become somewhat nondeterministic from the first to the last flow, depending on platforms/implementations. Also, having more protocols in the network increases the variability of ensuring secure delivery of multicast data, thereby undermining overall security.

BIER enables setting up the most optimal path from publisher to subscribers by leveraging the unicast routing relevant to the subscribers. With BIER, multicast convergence is as fast as unicast convergence, and is uniform and deterministic regardless of the number of multicast flows. This makes BIER a well-suited multicast technology for achieving fairness for market derivatives for each subscriber.

3.8. 4K Broadcast Video Services

In a broadcast network environment, the media content is sourced from various content providers across different locations.
4K broadcast video is an evolving service placing enormous demands on network infrastructure in terms of low latency, fast convergence, high throughput, and high bandwidth.

In a typical broadcast satellite network environment, the receivers are the satellite terminal nodes, which receive the content from various sources and feed the data to the satellite. Typically, a multicast group address is assigned to each source. Currently, the receivers can join the sources using either PIM-SM [RFC4601] or PIM-SSM [RFC4607].

In such network scenarios, PIM is normally the multicast routing protocol used to establish the trees between the ingress routers connecting the content media sources and the egress routers connecting the receivers. In PIM-SM mode, the receivers rely on the shared tree to learn the source address and build a source tree, while in PIM-SSM mode, IGMPv3 is used by the receiver to signal the source address to the egress router. In either case, as the number of sources increases, the number of multicast trees in the core also increases, resulting in more multicast state entries in the core and increased convergence time.

With BIER in a 4K broadcast satellite network environment, there is no need to run PIM in the core and no need to maintain any multicast state there. The obvious advantages of BIER are the small amount of multicast state maintained in the core and the faster convergence (typically on par with unicast convergence). The edge router at the content source facility can act as the BFIR, and the edge routers at the receiver facilities can act as BFERs. New content sources or satellite terminal nodes can be added seamlessly to the BIER domain. The group membership from the receivers to the sources can be provisioned either by BGP or by an SDN controller.

3.9. Distributed Storage Cluster

Distributed storage clusters can benefit from dynamically targeted multicast messaging, both for dynamic load-balancing negotiations and for efficient concurrent replication of content to multiple targets.

For example, in the NexentaEdge storage cluster (by Nexenta Systems), a Chunk Put transaction is accomplished with the following steps:

o  The Client multicasts a Chunk Put Request to a multicast group known as a Negotiating Group. This group holds a small number of storage targets that are collectively responsible for providing storage for a stable subset of the chunks to be stored. In NexentaEdge, the group is selected based upon a cryptographic hash of the Object Name or the Chunk payload.

o  Each recipient of the Chunk Put Request unicasts a Chunk Put Response to the Client, indicating when it could accept a transfer of the Chunk.

o  The Client selects a different multicast group (a Rendezvous Group) which targets the set of storage targets selected to hold the Chunk. This is a subset of the Negotiating Group, presumably selected so as to complete the transfer as early as possible.

o  The Client multicasts a Chunk Put Accept message to inform the Negotiating Group of which storage targets have been selected, when the transfer will occur, and over which multicast group.

o  The Client performs the multicast transfer over the Rendezvous Group at the agreed-upon time.

o  Each recipient sends a Chunk Put Ack to positively or negatively acknowledge the chunk transfer.

o  The Client will retry the entire transaction as needed if there are not yet sufficient replicas of the Chunk.
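The group-selection pattern in the steps above can be sketched as follows. The cluster size, group size, and hash-to-group rule are illustrative assumptions for this sketch, not NexentaEdge's actual algorithm; the point is that a Negotiating Group is a deterministic, hash-selected subset of the cluster, and a Rendezvous Group is a further subset of it:

```python
import hashlib

# Sketch (hypothetical sizes and selection rule): the Negotiating Group
# is chosen deterministically from the chunk's cryptographic hash, and
# the Rendezvous Group is a subset of it; each can be expressed as a
# BIER-style bitstring over the storage targets' BFR-ids.

CLUSTER = list(range(1, 65))    # BFR-ids of all storage targets
GROUP_SIZE = 8                  # storage targets per Negotiating Group

def negotiating_group(chunk_payload: bytes):
    """Deterministically select targets from the chunk's payload hash."""
    digest = hashlib.sha256(chunk_payload).digest()
    start = int.from_bytes(digest[:4], "big") % len(CLUSTER)
    return {CLUSTER[(start + i) % len(CLUSTER)] for i in range(GROUP_SIZE)}

def bitstring_for(bfr_ids):
    bs = 0
    for bfr_id in bfr_ids:
        bs |= 1 << (bfr_id - 1)
    return bs

ng = negotiating_group(b"example chunk payload")
rendezvous = set(sorted(ng)[:3])    # e.g. the three earliest responders

assert rendezvous <= ng             # Rendezvous Group is a subset
print(bin(bitstring_for(ng)))
print(bin(bitstring_for(rendezvous)))
```

The same payload always hashes to the same Negotiating Group, so no group-membership signaling is needed to find the targets for a chunk.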
Chunks are retrieved by multicasting a Chunk Get Request to the same Negotiating Group, collecting the Chunk Get Responses, picking one source from those responses, sending a Chunk Get Accept message to identify the selected source, and having the selected storage server unicast the chunk to the requesting client.

Chunks are found by the Object Name, or by having the cryptographic hash of a payload chunk recorded in a "chunk reference" in a metadata chunk. The metadata chunks are found using the Object Name.

The general pattern in use here, which should apply to other cluster applications, is that multicast messages are sent amongst a dynamically selected subset of the entire cluster, which may result in exchanging further messages over a smaller, even more dynamically selected subset.

Currently, the distributed storage application discussed uses MLD-managed IPv6 multicast groups. This in turn requires either a push-based mechanism for dynamically configuring Rendezvous Groups, or pre-provisioning a very large number of potential Rendezvous Groups and dynamically selecting the multicast group that will deliver to the selected set of storage targets.

BIER would eliminate the need for a vast number of multicast groups. The entire cluster can be represented as a single BIER domain using only the default sub-domain. Each Negotiating Group is simply a subset of the whole, deterministically selected by the cryptographic hash of the Object Name or Chunk payload. Each Rendezvous Group is a further subset of the Negotiating Group.

In a simple mapping of the MLD-managed multicast groups, each Negotiating Group could be represented by a short bit string selected by a Set Identifier; the Set Identifier effectively becomes the Negotiating Group. To address the entire Negotiating Group, the bit string is set to all ones.
To later address a subset of the group, a subset bit string is used.

This allows a short, fixed-size BIER header to multicast to a very large storage cluster.

3.10. HTTP-Level Multicast

Scenarios where a number of HTTP-level clients are quasi-synchronously accessing the same HTTP-level resource can benefit from the dynamic multicast group formation enabled by BIER.

For example, in the FLIPS (Flexible IP Services) solution by InterDigital, network attachment points (NAPs) provide a protocol mapping from HTTP to an efficient BIER-compliant transfer along a bit-indexed path between an ingress (here the NAP to which the clients connect) and an egress (here the NAP to which the HTTP-level server connects). This is accomplished with the following steps:

o  At the client NAP, the HTTP request is terminated at the HTTP level at a local HTTP proxy.

o  The HTTP request is published by the client NAP towards the FQDN of the server defined in the HTTP request.

   *  If no local BIER forwarding information to the server (NAP) exists, a path computation entity (PCE) is consulted, which calculates a unicast path to the egress NAP (here the server NAP). The PCE provides the forwarding information to the client NAP, which in turn caches the result.

   *  If BIER forwarding information already exists in the NAP-local cache, it is used instead.

o  Upon arrival of a client NAP request at the server NAP, the server NAP proxy forwards the HTTP request as a well-formed HTTP request locally to the server.

   *  If no client NAP forwarding information exists for the reverse direction, this information is requested from the PCE. Upon arrival of such reverse-direction forwarding information, it is stored in a local table for future use.
o  Upon arrival of any further client NAP request at the server NAP for an HTTP request whose response is still outstanding, the client NAP is added to an internal request table and the request is suppressed from being sent to the server.

   *  If no client NAP forwarding information exists for the reverse direction, this information is requested from the PCE. Upon arrival of such reverse-direction forwarding information, it is stored in a local table for future use.

o  Upon arrival of an HTTP response at the server NAP, the server NAP consults its internal request table for any outstanding HTTP requests for the same resource. The server NAP retrieves the stored BIER forwarding information for the reverse direction for all outstanding HTTP requests found, and determines the path information to all client NAPs through a binary OR over all BIER forwarding identifiers with the same SI field. This newly formed joint BIER multicast response identifier is used to send the HTTP response across the network; this procedure is executed until all requests have been served.

o  Upon arrival of the HTTP response at a client NAP, it is sent by the client NAP proxy to the locally connected client.

A number of solutions exist to manage the necessary updates to locally stored BIER forwarding information for cases of client/server mobility as well as for resilience purposes.

Applications for HTTP-level multicast are manifold. Examples are HTTP-level streaming (HLS) services, provided as an OTT offering, either at the level of end-user clients (connected to BIER-enabled NAPs) or site-level clients. Others are corporate intranet storage cluster solutions that utilize HTTP-level synchronization.
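The reverse-direction aggregation step in the list above (the binary OR over forwarding identifiers sharing an SI field) can be sketched as follows, under the assumption that each client NAP's stored forwarding information is an (SI, bitstring) pair; the values are made up for illustration:

```python
# Sketch of the response-aggregation step: the joint BIER multicast
# response identifier is the bitwise OR of the stored forwarding
# identifiers that share the same SI field.  (SI, bitstring) pairs and
# values are hypothetical.

pending_requests = [    # outstanding client NAP requests for one resource
    (0, 0b0001),        # client NAP A, set identifier (SI) 0
    (0, 0b0100),        # client NAP B, SI 0
    (1, 0b0010),        # client NAP C, SI 1
]

joint = {}
for si, bitstring in pending_requests:
    # Binary OR over all forwarding identifiers with the same SI field.
    joint[si] = joint.get(si, 0) | bitstring

# One multicast send of the HTTP response per SI covers all clients.
assert joint == {0: 0b0101, 1: 0b0010}
```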
   In multi-tenant data centre scenarios such as those outlined in
   Section 3.6, the aforementioned solution can satisfy HTTP-level
   requests for popular services and content in a multicast delivery
   manner.

   BIER enables such a solution through its bitfield representation of
   forwarding information, which is in turn used for ad-hoc multicast
   group formation at the HTTP request level.  While such a solution
   works well in SDN-enabled intra-domain scenarios, BIER would also
   enable the realization of such scenarios in multi-domain scenarios
   over legacy transport networks without relying on SDN-controlled
   infrastructure.

4.  Security Considerations

   There are no security issues introduced by this draft.

5.  IANA Considerations

   There are no IANA considerations introduced by this draft.

6.  Acknowledgments

   The authors would like to thank IJsbrand Wijnands, Greg Shepherd,
   and Christian Martin for their contributions.

   The authors would also like to thank Anoop Ghanwani for his thorough
   review and comments.

7.  Contributing Authors

   Dirk Trossen
   InterDigital Inc

   Email: dirk.trossen@interdigital.com

8.  References

8.1.  Normative References

   [I-D.ietf-bier-mvpn]
              Rosen, E., Sivakumar, M., Wijnands, I., Aldrin, S.,
              Dolganow, A., and T. Przygienda, "Multicast VPN Using
              BIER", draft-ietf-bier-mvpn-01 (work in progress), July
              2015.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC8279]  Wijnands, IJ., Ed., Rosen, E., Ed., Dolganow, A.,
              Przygienda, T., and S. Aldrin, "Multicast Using Bit Index
              Explicit Replication (BIER)", RFC 8279,
              DOI 10.17487/RFC8279, November 2017,
              <https://www.rfc-editor.org/info/rfc8279>.

8.2.  Informative References

   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I.
              Kouvelas, "Protocol Independent Multicast - Sparse Mode
              (PIM-SM): Protocol Specification (Revised)", RFC 4601,
              DOI 10.17487/RFC4601, August 2006,
              <https://www.rfc-editor.org/info/rfc4601>.

   [RFC4607]  Holbrook, H. and B. Cain, "Source-Specific Multicast for
              IP", RFC 4607, DOI 10.17487/RFC4607, August 2006,
              <https://www.rfc-editor.org/info/rfc4607>.

   [RFC4664]  Andersson, L., Ed. and E. Rosen, Ed., "Framework for
              Layer 2 Virtual Private Networks (L2VPNs)", RFC 4664,
              DOI 10.17487/RFC4664, September 2006,
              <https://www.rfc-editor.org/info/rfc4664>.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano,
              "Bidirectional Protocol Independent Multicast (BIDIR-
              PIM)", RFC 5015, DOI 10.17487/RFC5015, October 2007,
              <https://www.rfc-editor.org/info/rfc5015>.

   [RFC5331]  Aggarwal, R., Rekhter, Y., and E. Rosen, "MPLS Upstream
              Label Assignment and Context-Specific Label Space",
              RFC 5331, DOI 10.17487/RFC5331, August 2008,
              <https://www.rfc-editor.org/info/rfc5331>.

   [RFC6513]  Rosen, E., Ed. and R. Aggarwal, Ed., "Multicast in MPLS/
              BGP IP VPNs", RFC 6513, DOI 10.17487/RFC6513, February
              2012, <https://www.rfc-editor.org/info/rfc6513>.

   [RFC7348]  Mahalingam, M., Dutt, D., Duda, K., Agarwal, P., Kreeger,
              L., Sridhar, T., Bursell, M., and C. Wright, "Virtual
              eXtensible Local Area Network (VXLAN): A Framework for
              Overlaying Virtualized Layer 2 Networks over Layer 3
              Networks", RFC 7348, DOI 10.17487/RFC7348, August 2014,
              <https://www.rfc-editor.org/info/rfc7348>.

   [RFC7432]  Sajassi, A., Ed., Aggarwal, R., Bitar, N., Isaac, A.,
              Uttaro, J., Drake, J., and W. Henderickx, "BGP MPLS-Based
              Ethernet VPN", RFC 7432, DOI 10.17487/RFC7432, February
              2015, <https://www.rfc-editor.org/info/rfc7432>.

   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
              Decraene, B., Litkowski, S., and R. Shakir, "Segment
              Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
              July 2018, <https://www.rfc-editor.org/info/rfc8402>.
Authors' Addresses

   Nagendra Kumar
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC 27709
   US

   Email: naikumar@cisco.com

   Rajiv Asati
   Cisco
   7200 Kit Creek Road
   Research Triangle Park, NC 27709
   US

   Email: rajiva@cisco.com

   Mach(Guoyi) Chen
   Huawei

   Email: mach.chen@huawei.com

   Xiaohu Xu
   Huawei

   Email: xuxiaohu@huawei.com

   Andrew Dolganow
   Nokia
   750D Chai Chee Rd
   06-06 Viva Business Park 469004
   Singapore

   Email: andrew.dolganow@nokia.com

   Tony Przygienda
   Juniper Networks
   1194 N. Mathilda Ave
   Sunnyvale, CA 95089
   USA

   Email: prz@juniper.net

   Arkadiy Gulko
   Thomson Reuters
   195 Broadway
   New York, NY 10007
   USA

   Email: arkadiy.gulko@thomsonreuters.com

   Dom Robinson
   id3as-company Ltd
   UK

   Email: Dom@id3as.co.uk

   Vishal Arya
   DirecTV Inc
   2230 E Imperial Hwy
   CA 90245
   USA

   Email: varya@directv.com

   Caitlin Bestler
   Nexenta Systems
   451 El Camino Real
   Santa Clara, CA
   US

   Email: caitlin.bestler@nexenta.com