<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE rfc SYSTEM "rfc2629.dtd" []>

<rfc category="info"
     docName="draft-kumar-bier-use-cases-00" ipr="trust200902">

<?xml-stylesheet type='text/xsl' href='rfc2629.xslt' ?>

<?rfc toc="yes" ?>
<?rfc symrefs="yes" ?>
<?rfc sortrefs="yes"?>
<?rfc iprnotified="no" ?>
<?rfc strict="yes" ?>

<front>
  <title>BIER Use Cases</title>

  <author initials="N." surname="Kumar" fullname="Nagendra Kumar">
    <organization>Cisco</organization>
    <address>
      <postal>
        <street>7200 Kit Creek Road</street>
        <city>Research Triangle Park</city> <region>NC</region> <code>27709</code>
        <country>US</country>
      </postal>
      <email>naikumar@cisco.com</email>
    </address>
  </author>

  <author initials="R." surname="Asati" fullname="Rajiv Asati">
    <organization>Cisco</organization>
    <address>
      <postal>
        <street>7200 Kit Creek Road</street>
        <city>Research Triangle Park</city> <region>NC</region> <code>27709</code>
        <country>US</country>
      </postal>
      <email>rajiva@cisco.com</email>
    </address>
  </author>

  <author fullname="Mach(Guoyi) Chen" initials="M." surname="Chen">
    <organization>Huawei</organization>
    <address>
      <postal>
	<street/>
	<city/>
	<code/>
	<country/>
      </postal>
      <email>mach.chen@huawei.com</email>
    </address>
  </author>

  <author fullname="Xiaohu Xu" initials="X." surname="Xu">
    <organization>Huawei</organization>
    <address>
      <postal>
	<street/>
	<code/>
	<country/>
      </postal>
      <email>xuxiaohu@huawei.com</email>
    </address>
  </author>

  <date/>
  <abstract>
    <t>
      Bit Index Explicit Replication (BIER) is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast related per-flow state.  BIER also does not require any explicit tree-building protocol for its operation.  A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR), and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR router adds a BIER header to the packet.  The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to.  The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.
    </t>
    <t>
      This document describes some of the use cases for BIER.
    </t>
  </abstract>
</front>

<middle>
  <section title="Introduction">
    <t>
      Bit Index Explicit Replication (BIER) <xref target="I-D.wijnands-bier-architecture"/> is an architecture that provides optimal multicast forwarding through a "BIER domain" without requiring intermediate routers to maintain any multicast related per-flow state.  BIER also does not require any explicit tree-building protocol for its operation.  A multicast data packet enters a BIER domain at a "Bit-Forwarding Ingress Router" (BFIR), and leaves the BIER domain at one or more "Bit-Forwarding Egress Routers" (BFERs). The BFIR router adds a BIER header to the packet.  The BIER header contains a bit-string in which each bit represents exactly one BFER to forward the packet to.  The set of BFERs to which the multicast packet needs to be forwarded is expressed by setting the bits that correspond to those routers in the BIER header.
    </t>
    <t>
      The obvious advantage of BIER is that there is no per-flow multicast state in the core of the network, and no tree-building protocol that sets up trees on demand as users join a multicast flow. In that sense, BIER is potentially applicable to many services where multicast is used and is not limited to the examples described in this draft. This document describes a few use cases where BIER provides benefits over existing mechanisms.
    </t>
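    <t>
      The forwarding operation described above can be illustrated with a small sketch. The following Python fragment is purely illustrative (the neighbor names, bit assignments, and forwarding bit mask values are hypothetical): it shows how a BIER router replicates a packet by ANDing the packet's bit string with a per-neighbor Forwarding Bit Mask (F-BM) and then clears the forwarded bits so that no BFER receives a duplicate copy.
    </t>
    <figure>
      <artwork><![CDATA[
# Illustrative sketch of BIER replication (hypothetical topology).
# Each BFER owns one bit position; the router keeps, per neighbor,
# an F-BM covering the BFERs reachable via that neighbor.

F_BM = {             # neighbor -> F-BM (hypothetical values)
    "nbr1": 0b0011,  # BFERs at bit positions 1 and 2 via nbr1
    "nbr2": 0b0100,  # BFER at bit position 3 via nbr2
}

def bier_forward(bitstring, f_bm):
    """Replicate a packet towards each neighbor whose F-BM
    intersects the packet's bit string."""
    copies = []
    for nbr, mask in f_bm.items():
        if bitstring & mask:
            # The copy carries only the bits reachable via nbr.
            copies.append((nbr, bitstring & mask))
            # Clear those bits so no other copy serves them.
            bitstring &= ~mask
    return copies

# BFIR set bits 1 and 3: one copy per neighbor, each carrying
# only the bits of the BFERs behind that neighbor.
bier_forward(0b0101, F_BM)
]]></artwork>
    </figure>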
  </section>

  <section title="Specification of Requirements">
    <t>The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
      NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and
      "OPTIONAL" in this document are to be interpreted as described
      in <xref target="RFC2119"/>.</t>
  </section>

  <section title="BIER Use Cases">

    <section title="Multicast in L3VPN Networks">
      <t>
	The Multicast L3VPN architecture <xref target='RFC6513'/> describes several different profiles for transporting L3 multicast across a provider's network, each with its own trade-offs (see Section 2.1 of <xref target='RFC6513'/>). When using a "Multidirectional Inclusive" "Provider Multicast Service Interface" (MI-PMSI), an efficient tree is built per VPN, but traffic is flooded to egress PEs that are part of the VPN yet have not joined a particular C-multicast flow. This problem can be solved with a "Selective" PMSI (S-PMSI), which builds a special tree for only those PEs that have joined the C-multicast flow for that specific VPN. The more S-PMSIs, the less bandwidth is wasted due to flooding, but the more state is created in the provider's network. Network operators thus face a typical problem: finding the right balance between the amount of state carried in the network and how much flooding (wasted bandwidth) is acceptable. Much of the complexity of L3VPN multicast comes from providing different profiles to accommodate these trade-offs.
      </t>
      <t>
	With BIER there is no trade-off between state and flooding. Since the receiver information is explicitly carried within the packet, there is no need to build S-PMSIs to deliver multicast to a subset of the VPN egress PEs.
      </t>
      <t>
	MI-PMSIs and S-PMSIs are also used to provide the VPN context to the egress PE router that receives the multicast packet, and in some MVPN profiles the egress PE must also be able to determine which ingress PE forwarded the packet. The target VPN is determined based on the PMSI on which the packet is received. This means at least one PMSI is required per VPN, or per VPN and ingress PE, so the amount of state created in the network is proportional to the number of VPNs and ingress PEs. Creating PMSI state per VPN can be avoided by applying the procedures documented in <xref target='RFC5331'/>. However, these procedures have seen little adoption or implementation because of the excessive flooding they would cause: *all* VPN multicast packets would be forwarded to *all* PEs that have one or more VPNs attached.
      </t>
      <t>
	With BIER, the destination PEs are identified in the multicast packet, so there is no flooding concern when implementing <xref target='RFC5331'/>. For that reason there is no need to create multiple BIER domains per VPN; the VPN context can be carried in the multicast packet using the procedures defined in <xref target='RFC5331'/>. See also <xref target="I-D.rosen-l3vpn-mvpn-bier"/> for more information.
      </t>
      <t>
	With BIER, only a few MVPN profiles remain relevant, which reduces operational cost and makes interoperability among different vendors easier.
      </t>
    </section>

    <section title="IPTV Services">
      <t>
	IPTV is a service well known for allowing both live and on-demand delivery of media traffic over IP. In a typical IPTV environment, the egress routers connecting to the receivers build a tree towards the ingress router connecting to the IPTV servers. The egress routers rely on IGMP/MLD (static or dynamic) to learn about the receivers' interest in one or more multicast groups/channels. Interestingly, BIER allows provisioning any new multicast group/channel by modifying the channel mapping only on the ingress routers. This is particularly beneficial for linear IPTV video broadcasting, in which every receiver behind every egress PE router receives the IPTV video traffic.
      </t>
      <t>
	With BIER, there is no need for tree building from egress to ingress. Furthermore, the addition of a new channel or a new egress router can be controlled directly from the ingress router. When a new channel is added, its multicast group is mapped to a bit string that includes all egress routers; the ingress router then starts sending the new channel and delivers it to all of them. Hence there is no need for static IGMP provisioning on each egress router whenever a new channel/stream is added. Instead, this can be controlled from the ingress router itself by configuring the group-to-bit-string mapping there.
      </t>
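      <t>
	The ingress-controlled provisioning described above can be sketched as follows. The Python fragment is illustrative only (the egress router names, bit assignments, and group address are hypothetical); it shows that adding a channel is just a local mapping change on the ingress router: the new group is mapped to a bit string covering all egress routers, with no per-egress provisioning.
      </t>
      <figure>
	<artwork><![CDATA[
# Illustrative: channel provisioning on a BIER ingress router.
# Bit positions of the egress routers (hypothetical assignment).
EGRESS_BITS = {"pe1": 0b001, "pe2": 0b010, "pe3": 0b100}

channel_map = {}  # multicast group -> BIER bit string

def add_channel(group, egress_bits=EGRESS_BITS):
    """Map a new channel to the bit string covering all egress
    routers.  Only the ingress router's state changes."""
    bitstring = 0
    for bit in egress_bits.values():
        bitstring |= bit
    channel_map[group] = bitstring
    return bitstring

# A new linear broadcast channel reaches every egress router.
add_channel("232.1.1.1")
]]></artwork>
      </figure>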
    </section>

    <section title="Data center Virtualization/Overlay">
      <t>
	Virtual eXtensible Local Area Network (VXLAN) <xref target='RFC7348'/> is a network virtualization overlay technology intended for multi-tenant data center networks. To emulate a Layer 2 flooding domain across the Layer 3 underlay, it requires a mapping between the VXLAN Network Identifier (VNI) and an IP multicast group in a ratio of 1:1 or n:1. In other words, multicast must be enabled in the underlay, for instance by running the PIM-SM <xref target='RFC4601'/> or BIDIR-PIM <xref target='RFC5015'/> multicast routing protocol. VXLAN is designed to support a maximum of 16M VNIs. With a 1:1 mapping, this would require 16M multicast groups in the underlay, which would be a significant challenge for both the control plane and the data plane of data center switches. With an n:1 mapping, the result is inefficient bandwidth utilization, which is not optimal in data center networks. More importantly, many data center operators consider running multicast in data center networks an unaffordable burden from a network operation and maintenance perspective.
      </t>
      <t>
	As a result, many VXLAN implementations claim to support ingress replication instead, since ingress replication eliminates the burden of running multicast in the underlay. Ingress replication is an acceptable choice in small networks where the average number of receivers per multicast flow is not too large. However, in multi-tenant data center networks, especially those in which the NVE functionality is enabled on a large number of physical servers, the average number of NVEs per VN instance can be very large. The ingress replication scheme then results in serious bandwidth waste in the underlay and a significant replication burden on ingress NVEs.
      </t>
      <t>
	With BIER, there is no longer any need to maintain this huge amount of multicast state in the underlay, while the delivery efficiency for overlay Broadcast, Unknown unicast, and Multicast (BUM) traffic is the same as if a stateful multicast protocol such as PIM-SM or BIDIR-PIM were enabled in the underlay.
      </t>
    </section>
  </section>

  <section title="Security Considerations">
    <t>
      There are no security issues introduced by this draft.
    </t>
  </section>

  <section title="IANA Considerations">
    <t>
      There are no IANA considerations introduced by this draft.
    </t>
  </section>

  <section title="Acknowledgments">
    <t>
      The authors would like to thank IJsbrand Wijnands, Greg Shepherd and Christian Martin for their contribution.
    </t>
  </section>
</middle>
<back>
  <references title='Normative References'>
    <?rfc include='reference.RFC.2119' ?>
    <?rfc include="reference.I-D.draft-rosen-l3vpn-mvpn-bier-01.xml"?>
    <?rfc include="reference.I-D.draft-wijnands-bier-architecture-01.xml"?>
  </references>
  <references title='Informative References'>
    <?rfc include='reference.RFC.7348' ?>
    <?rfc include='reference.RFC.4601' ?>
    <?rfc include='reference.RFC.5015' ?>
    <?rfc include='reference.RFC.6513' ?>
    <?rfc include='reference.RFC.5331' ?>
  </references>

</back>
</rfc>
