SAM Research Group                                           M. Waehlisch
Internet-Draft                                       link-lab & FU Berlin
Intended status: Informational                               T.C. Schmidt
Expires: September 10, 2010                                   HAW Hamburg
                                                                S. Venaas
                                                            cisco Systems
                                                           March 09, 2010


A Common API for Transparent Hybrid Multicast
draft-waehlisch-sam-common-api-02

Abstract

Group communication services are most efficiently implemented on the lowest layer available. However, as the deployment status of multicast technologies largely varies throughout the Internet, globally operational group solutions are frequently forced to use a stable upper-layer protocol controlled by the application itself. This document describes a common multicast API that is suitable for transparent underlay and overlay communication. It proposes abstract naming and addressing by multicast URIs and discusses mapping mechanisms between different namespaces and distribution technologies. Additionally, it describes the application of this API for building gateways that interconnect current multicast domains throughout the Internet.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as “work in progress.”

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This Internet-Draft will expire on September 10, 2010.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the BSD License.



Table of Contents

1.  Introduction
2.  Terminology
3.  Overview
    3.1.  Objectives and Reference Scenarios
    3.2.  Group Communication Stack & API
    3.3.  Mapping
    3.4.  Naming and Addressing
4.  Hybrid Multicast API
    4.1.  Abstract Data Types
    4.2.  Send/Receive Calls
    4.3.  Socket Options
    4.4.  Service Calls
5.  Functional Details
    5.1.  Mapping
    5.2.  URI Scheme
6.  IANA Considerations
7.  Security Considerations
8.  Acknowledgements
9.  Informative References
Appendix A.  Practical Example of the API
Appendix B.  Deployment Use Cases for Hybrid Multicast
    B.1.  DVMRP
    B.2.  PIM-SM
    B.3.  PIM-SSM
    B.4.  BIDIR-PIM
Appendix C.  Change Log
Authors' Addresses





1.  Introduction

Currently, group application programmers need to choose the distribution technology required at runtime. There is no common communication interface that abstracts multicast subscriptions from the underlying deployment. The standard multicast socket options [RFC3493], [RFC3678] are bound to an IP version and do not distinguish between naming and addressing of multicast identifiers. Group communication, however, is commonly implemented in different flavors and on different layers (e.g., IP vs. application layer multicast), and may be based on different technologies on the same tier (e.g., IPv4 vs. IPv6).

Multicast application development should be decoupled from the technological deployment throughout the infrastructure. It requires a common multicast API that offers calls to transmit and receive multicast data independent of the supporting layer and the underlying technological details. For inter-technology transmissions, a consistent view on multicast states is needed as well. This document describes an abstract group communication API and the core functions necessary for transparent operations. Specific implementation guidelines with respect to operating systems or programming languages are out-of-scope of this document.

In contrast to the standard multicast socket interface, the API introduced in this document abstracts naming and addressing. Using a multicast address in the current socket API predefines the corresponding routing layer. In this memo, the multicast address used for joining a group denotes an application layer data stream that is identified by a multicast URI, without an association with the underlying distribution technology. A Group Name can be mapped to varying routing identifiers.

The aim of this common API is twofold: it decouples group communication from the distribution technology deployed in the network, and it supports group names and addresses drawn from arbitrary namespaces.

Multicast technologies may be various P2P-based, IPv4 or IPv6 network layer multicast, or implemented by some other application service. Corresponding namespaces may be IP addresses, overlay hashes, other application layer group identifiers, e.g., <sip:*@peanuts.org>, or names defined by the applications.

This document also proposes and discusses mapping mechanisms between different namespaces and forwarding technologies. Additionally, the multicast API provides internal interfaces to access current multicast states at the host. Multiple multicast protocols may run in parallel on a single host. These protocols may interact to provide a gateway function that bridges data between different domains. The application of this API at gateways operating between current multicast instances throughout the Internet is described, as well.




2.  Terminology

This document uses the terminology as defined for the multicast protocols [RFC2710], [RFC3376], [RFC3810], [RFC4601], and [RFC4604]. In addition, the following terms will be used.

Group Address:
A Group Address is a routing identifier. It represents a technological identifier and thus reflects the distribution technology in use. Multicast packet forwarding is based on this ID.
Group Name:
A Group Name is an application identifier that is used by applications to manage (e.g., join/leave and send/receive) a multicast group. The Group Name does not imply any distribution technologies but represents a logical identifier.
Multicast Namespace:
A Multicast Namespace is a collection of designators (i.e., names or addresses) for groups that share a common syntax. Typical instances of namespaces are IPv4 or IPv6 multicast addresses, overlay group ids, group names defined on the application layer (e.g., SIP or Email), or some human readable strings.
Multicast Domain:
A Multicast Domain accommodates nodes and routers of a common, single multicast forwarding technology and is bound to a single namespace.
Interface:
An Interface is a forwarding instance of a distribution technology on a given node. For example, the IP interface 192.168.1.1 at an IPv4 host.
Inter-domain Multicast Gateway:
An Inter-domain Multicast Gateway (IMG) is an entity that interconnects different multicast domains. Its objective is to forward data between these domains, e.g., between IP layer and overlay multicast.




3.  Overview




3.1.  Objectives and Reference Scenarios

The default use case addressed in this memo targets applications that participate in a group by using a common identifier taken from some common namespace. Programmers should be able to use this identifier transparently in their programs without the need to consider the deployment status of the target domains. Aided by gateways and, where available, by a node-specific multicast middleware, applications shall be enabled to establish group communication, even if resident in domains that are not connected by a common multicast service technology.

This document covers the following two general scenarios:

  1. Multicast domains running the same multicast technology but remaining isolated, possibly only connected by network layer unicast.
  2. Multicast domains running different multicast technologies, but hosting nodes that are members of the same multicast group.


                                        +-------+         +-------+
                                        | Member|         | Member|
                                        |  Foo  |         |   G   |
                                        +-------+         +-------+
                                              \            /
                                            ***  ***  ***  ***
                                           *   **   **   **   *
                                          *                    *
                                           *   MCast Tec A    *
                                          *                    *
                                           *   **   **   **   *
                                            ***  ***  ***  ***
   +-------+          +-------+                      |
   | Member|          | Member|                  +-------+
   |   G   |          |  Foo  |                  |  IMG  |
   +-------+          +-------+                  +-------+
       |                |                            |
       ***  ***  ***  ***                  ***  ***  ***  ***
      *   **   **   **   *                *   **   **   **   *
     *                    *  +-------+   *                    *
      *   MCast Tec A    * --|  IMG  |--  *   MCast Tec B    *    +-------+
     *                    *  +-------+   *                    * - | Member|
      *   **   **   **   *                *   **   **   **   *    |   G   |
       ***  ***  ***  ***                  ***  ***  ***  ***     +-------+

 Figure 1: Reference scenarios for hybrid multicast, interconnecting group members from isolated homogeneous and heterogeneous domains. 

It is assumed throughout the document that the domain composition, as well as the node attachment to a specific technology, remain unchanged during a multicast session.




3.2.  Group Communication Stack & API

Multicast applications may use a group communication stack to deliver and receive multicast data. This group communication stack performs two tasks: it transparently sends and receives group data via the distribution technology available at the host, and it bridges data between different multicast technologies and domains (cf. the hybrid function described below).

The group communication API consists of three parts: (1) Send/Receive Calls, (2) Socket Options, and (3) Service Calls. (1) provides a minimal API to initiate a multicast socket and to send and receive multicast data in a technology-transparent fashion. (2) allows for the configuration of the multicast socket, i.e., setting the maximum path length and explicitly associating interfaces. (3) returns internal multicast states per interface, such as the multicast groups under subscription.

The general procedure to initiate multicast communication is the following:

  1. An application opens a multicast socket.
  2. An application subscribes/leaves/sends to a logical group identifier.
  3. A function maps the logical group ID (Group Name) to a technical group ID (Group Address).
  4. The technical group ID is allocated or revised if already in use.

The multicast socket describes a group communication channel composed of one or multiple interfaces. A socket may be created without explicit interface association by the application, which leaves the choice of the underlying forwarding technology to the group communication stack. However, an application may also bind the socket to one or multiple dedicated interfaces, which predefines the forwarding technology and the namespace(s) of the Group Address(es).

Applications are not required to maintain states for Group Addresses. The group communication stack accounts for the mapping of the Group Name to the Group Address(es) and vice versa. Multicast data passed to the application will be augmented by the corresponding Group Name. Multiple multicast subscriptions thus can be conducted on a single multicast socket without the need for Group Name encoding at the application side.
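The following sketch, written in the same Java-like notation as the example in Appendix A, illustrates this behavior; MulticastSocket, URI, and Message stand for the abstract types of Section 4.1, and the out-parameters of receive() follow the abstract signatures of Section 4.2.

  //Open a socket without binding it to a specific interface;
  //the group communication stack selects the technology.
  MulticastSocket m = new MulticastSocket();

  //Subscribe to two groups on the same socket, using Group Names only.
  m.join(new URI("ipv4://224.1.2.3:5000"));
  m.join(new URI("sip://news@cnn.com"));

  //Receive data; the stack returns the Group Name together with each
  //message, so the application need not track Group Addresses itself.
  URI g;
  Message msg;
  m.receive(g, msg);   //g identifies which of the two groups delivered msg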

Hosts may support several multicast protocols. The group communication stack discovers available multicast-enabled communication interfaces. It provides a minimal hybrid function that bridges data between different interfaces and multicast domains. Details of service discovery are out-of-scope of this document.

The extended multicast functions can be implemented by a middleware, for example.



*-------*     *-------*
| App 1 |     | App 2 |
*-------*     *-------*
    |             |
*---------------------*         ---|
|   Middleware        |            |
*---------------------*            |
     |          |                  |
*---------*     |                  |
| Overlay |     |                   \  Group Communication
*---------*     |                   /  Stack
     |          |                  |
     |          |                  |
*---------------------*            |
|   Underlay          |            |
*---------------------*         ---|
 Figure 2: The middleware covers underlay and overlay for the application 




3.3.  Mapping

A mapping is required between a Group Name and the Group Address space, as well as between Group Addresses in different namespaces.

Two (or more) identifiers in different namespaces may belong to

  a. the same multicast channel (i.e., the same technical ID), or
  b. different multicast channels (i.e., different technical IDs).

This decision can be taken based on invertible mappings. However, the applicability of such functions depends on the cardinality of the namespaces and thus does not hold in general: a large identifier space (e.g., IPv6) obviously cannot be mapped one-to-one into a smaller set (e.g., IPv4).

A mapping can be realized by embedding smaller in larger namespaces or by selecting an arbitrary, unused ID in the target space. The relation between logical and technical ID is stored by a mapping service (e.g., a DHT). The middleware thus queries the mapping service first, and creates a new technical group ID only if there is no identifier available for the namespace in use. The Group Name is associated with one or more Group Addresses, which may belong to different namespaces. Depending on the scope of the mapping service, it ensures a consistent use of the technical ID in a local or global domain.
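A minimal sketch of this lookup-or-create step, in the Java-like notation of Appendix A; the names mappingService, lookup(), register(), and allocateGroupAddress() are illustrative placeholders, not part of the API defined in this document:

  //Map a Group Name to a Group Address within a target namespace.
  URI resolveGroupAddress(URI groupName, Namespace ns) {
    //First ask the mapping service (e.g., a DHT) for an existing mapping.
    URI groupAddress = mappingService.lookup(groupName, ns);
    if (groupAddress == null) {
      //No technical ID exists yet: allocate an unused ID in the target
      //namespace and register it, so all members resolve consistently.
      groupAddress = allocateGroupAddress(groupName, ns);
      mappingService.register(groupName, ns, groupAddress);
    }
    return groupAddress;
  }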

All group members subscribe to the same Group Name within the same namespace.




3.4.  Naming and Addressing

The Group Name is used by applications to identify groups. It hides the technology employed to distribute data. In contrast to this, multicast forwarding operates on Group Addresses. Although both identifiers may be identical in symbols, they carry different meanings. They may also belong to different namespaces. The namespace of the Group Address reflects the routing technology, while the namespace of the Group Name represents the context in which the application operates.

A multicast socket (IPv4/v6 interface) can be used by multiple logical multicast IDs from different namespaces (IPv4-group address, IPv6-group address). In practice, a library that implements the defined API would provide high-level data types to the application similar to the current socket API (e.g., InetAddress in Java). Using this data type would implicitly determine the namespace.

To reflect namespace-specific treatment for applications, identifiers in API calls are represented by URIs. An implementation of the API may provide convenience functions that detect the namespace of a Group Name (e.g., InetAddress instead of Inet6Address and Inet4Address). Details of automatic identification are out-of-scope of this document.
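As an illustration, such a convenience function might simply inspect the URI scheme. The sketch below follows the notation of Appendix A; the Namespace type and its values are purely illustrative assumptions:

  //Detect the namespace of a Group Name from its URI scheme.
  Namespace detectNamespace(URI groupName) {
    switch (groupName.getScheme()) {
      case "ipv4": return Namespace.IPV4;
      case "ipv6": return Namespace.IPV6;
      case "sip":  return Namespace.SIP;
      case "dht":  return Namespace.OVERLAY;
      default:     return Namespace.APPLICATION_DEFINED;
    }
  }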




4.  Hybrid Multicast API




4.1.  Abstract Data Types

URI
is any kind of Group Address or Group Name that follows the syntax defined in Section 5.2 (URI Scheme). For example, ipv4://224.1.2.3:5000 and sip://news@cnn.com.
Interface
denotes the interface and thus the layer and instance on which the corresponding call will be effective. This may be left unspecified to leave the decision to the group communication stack.
SocketHandle
references an instance of a multicast socket.
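In an object-oriented language binding, these abstract types could surface roughly as follows; this is only an illustrative sketch, not a normative mapping:

  //Possible high-level bindings of the abstract data types (sketch only).
  class URI {            //Group Name or Group Address, syntax of Section 5.2
    String scheme, group, instantiation, port, secCredentials;
  }
  class Interface {      //one forwarding instance of a technology on a node
    String technology;   //e.g., "ipv4", "ipv6", "sip", "dht"
    String address;      //e.g., "192.168.1.1"
  }
  class SocketHandle {   //opaque reference to a multicast socket instance
  }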



4.2.  Send/Receive Calls

init(out SocketHandle h, [in enum Interface i])
This call initiates a multicast socket and provides the application programmer with a corresponding handle. If no interfaces are assigned in the call, a default interface will be selected and associated with the socket. The call may return an error code in the case of failures, e.g., due to a non-operational middleware.
join(in SocketHandle h, in URI g, [in Interface i])
This operation initiates a group subscription. Depending on the interfaces that are associated with the socket, this may result in an IGMP/MLD report or an overlay subscription.
leave(in SocketHandle h, in URI g, [in Interface i])
This operation results in an unsubscription for the given address.
send(in SocketHandle h, in URI g, in Message msg)
This call passes multicast data for a Group Name g from the application to the multicast socket.
receive(in SocketHandle h, out URI g, out Message msg)
This call passes multicast data and the corresponding Group Name g to the application.
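Taken together, the calls above might be combined as follows. The sketch reuses the abstract signatures literally (including out-parameters) and is not tied to a concrete language binding; the Message payload is an assumed convenience type:

  SocketHandle h;
  init(h);                                 //socket on the default interface

  URI g = new URI("ipv4://224.1.2.3:5000");
  join(h, g);                              //subscribe via IGMP, MLD, or overlay

  send(h, g, new Message("hello group"));  //transmit data to the Group Name

  URI from;
  Message msg;
  receive(h, from, msg);                   //data plus the originating Group Name

  leave(h, g);                             //unsubscribe again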



4.3.  Socket Options

getInterfaces(out enum Interface i)
This call returns a list of all available multicast communication interfaces at the current host.
addInterface(in SocketHandle h, in Interface i)
This call adds a distribution channel to the socket. This may be an overlay or underlay interface, e.g., IPv6 or DHT. Multiple interfaces of the same technology may be associated with the socket.
delInterface(in SocketHandle h, in Interface i)
This call removes an interface from the socket.
setTTL(in SocketHandle h)
This function configures the maximum hop count that a multicast message sent via socket h is allowed to traverse.
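A short sketch of how these options might be combined on one socket. Indexing the interface list and passing the hop-count value as an additional argument to setTTL() are assumptions made for the sake of the example:

  enum Interface ifs;
  getInterfaces(ifs);           //discover all multicast-capable interfaces

  SocketHandle h;
  init(h);
  addInterface(h, ifs[0]);      //e.g., a native IPv6 interface
  addInterface(h, ifs[1]);      //e.g., an overlay (DHT) interface
  setTTL(h, 10);                //limit messages to 10 hops (value assumed)
  delInterface(h, ifs[1]);      //drop the overlay channel again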



4.4.  Service Calls

groupSet(out enum URI g, in Interface i)
This operation returns all registered multicast groups. The information can be provided by group management or routing protocols. The return values distinguish between sender and listener states.
neighborSet(out enum URI g, in Interface i)
This function can be invoked to get the set of multicast routing neighbors.
designatedHost(out Bool b, in URI g)
This function returns true if the host has the role of a designated forwarder or querier. Such information is provided by almost all multicast protocols to handle packet duplication if multiple multicast instances serve the same subnet.
updateListener(out URI g, in Interface i)
This upcall is invoked to inform a group service about a change of listener states for a group. This is the result of new receiver subscriptions or leaves. The group service may call groupSet() to obtain updated information.
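For illustration, an Inter-domain Multicast Gateway (IMG) might combine these calls roughly as sketched below; imgSocket is an assumed socket handle bound to the overlay interface, and the bridging logic is heavily simplified:

  //Invoked by the stack when listener states change on interface i.
  updateListener(URI g, Interface i) {
    enum URI groups;
    groupSet(groups, i);        //re-read the subscription states

    Bool duty;
    designatedHost(duty, g);
    if (duty) {                 //only the designated IMG acts, avoiding
      join(imgSocket, g);       //duplicate forwarding into the overlay
    }
  }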



5.  Functional Details

In this section, we describe the functional details of the API and the middleware.

TODO




5.1.  Mapping

Group Name to Group Address, SSM/ASM TODO




5.2.  URI Scheme

Multicast Names and Multicast Addresses are described based on a URI scheme. The scheme defines a subset of the URI specified in [RFC3986] and follows the guidelines in [RFC4395].

The multicast URI is defined as follows:

scheme "://" group "@" instantiation ":" port "/" sec-credentials

The parts of the URI are defined as follows:

scheme
refers to the specification of the assigned identifier [RFC3986].
group
identifies the group.
instantiation
identifies the entity that generates the instance of the group (e.g., a SIP domain or a source in SSM).
port
identifies a specific application at an instance of a group.
sec-credentials
used to implement security credentials (e.g., to authorize a multicast group access).
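For illustration, the example identifiers from Section 4.1 decompose into the generic URI components of [RFC3986] as follows; reading the group as the userinfo part and the instantiation as the host part is an interpretation of the grammar above, not a normative statement. The sketch uses the standard java.net.URI parser:

  import java.net.URI;

  URI u = new URI("sip://news@cnn.com");
  u.getScheme();    //"sip"        -> scheme
  u.getUserInfo();  //"news"       -> group
  u.getHost();      //"cnn.com"    -> instantiation
  u.getPort();      //-1, i.e., the port part is omitted here

  URI v = new URI("ipv4://224.1.2.3:5000");
  v.getScheme();    //"ipv4"       -> scheme
  v.getHost();      //"224.1.2.3"  -> group; no instantiation given
  v.getPort();      //5000         -> port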

TODO




6.  IANA Considerations

This document makes no request of IANA.




7.  Security Considerations

This draft introduces neither additional messages nor novel protocol operations. TODO




8.  Acknowledgements

We would like to thank the HAMcast-team (Dominik Charousset, Gabriel Hege, Fabian Holler, Alexander Knauf, Sebastian Meiling, and Sebastian Woelke) at the HAW Hamburg for fruitful discussions.

This work is partially supported by the German Federal Ministry of Education and Research within the HAMcast project, which is part of G-Lab.




9. Informative References

[I-D.ietf-mboned-auto-multicast] Thaler, D., Talwar, M., Aggarwal, A., Vicisano, L., and T. Pusateri, “Automatic IP Multicast Without Explicit Tunnels (AMT),” draft-ietf-mboned-auto-multicast-10 (work in progress), March 2010.
[RFC1075] Waitzman, D., Partridge, C., and S. Deering, “Distance Vector Multicast Routing Protocol,” RFC 1075, November 1988.
[RFC2119] Bradner, S., “Key words for use in RFCs to Indicate Requirement Levels,” BCP 14, RFC 2119, March 1997.
[RFC2710] Deering, S., Fenner, W., and B. Haberman, “Multicast Listener Discovery (MLD) for IPv6,” RFC 2710, October 1999.
[RFC3376] Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A. Thyagarajan, “Internet Group Management Protocol, Version 3,” RFC 3376, October 2002.
[RFC3493] Gilligan, R., Thomson, S., Bound, J., McCann, J., and W. Stevens, “Basic Socket Interface Extensions for IPv6,” RFC 3493, February 2003.
[RFC3678] Thaler, D., Fenner, B., and B. Quinn, “Socket Interface Extensions for Multicast Source Filters,” RFC 3678, January 2004.
[RFC3810] Vida, R. and L. Costa, “Multicast Listener Discovery Version 2 (MLDv2) for IPv6,” RFC 3810, June 2004.
[RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, “Uniform Resource Identifier (URI): Generic Syntax,” STD 66, RFC 3986, January 2005.
[RFC4395] Hansen, T., Hardie, T., and L. Masinter, “Guidelines and Registration Procedures for New URI Schemes,” BCP 35, RFC 4395, February 2006.
[RFC4601] Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, “Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised),” RFC 4601, August 2006.
[RFC4604] Holbrook, H., Cain, B., and B. Haberman, “Using Internet Group Management Protocol Version 3 (IGMPv3) and Multicast Listener Discovery Protocol Version 2 (MLDv2) for Source-Specific Multicast,” RFC 4604, August 2006.
[RFC5015] Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, “Bidirectional Protocol Independent Multicast (BIDIR-PIM),” RFC 5015, October 2007.



Appendix A.  Practical Example of the API

  -- Application above middleware:

  //Initialize multicast socket; the middleware selects all available
  //interfaces
  MulticastSocket m = new MulticastSocket();

  URI mcAddressv4 = new URI("ipv4://224.1.2.3:5000");
  m.join(mcAddressv4);

  URI mcAddressv6 = new URI("ipv6://[FF02:0:0:0:0:0:0:3]:6000");
  m.join(mcAddressv6);

  URI mcAddressSIP = new URI("sip://news@cnn.com");
  m.join(mcAddressSIP);

  -- Middleware:

  join(URI mcAddress) {
    //Select interfaces in use
    for (Interface iface : this.interfaces) {
      switch (iface.type) {
        case "ipv6":
          //... map logical ID to routing address
          Inet6Address rtAddressIPv6 = new Inet6Address();
          mapNametoAddress(mcAddress, rtAddressIPv6);
          iface.join(rtAddressIPv6);
          break;
        case "ipv4":
          //... map logical ID to routing address
          Inet4Address rtAddressIPv4 = new Inet4Address();
          mapNametoAddress(mcAddress, rtAddressIPv4);
          iface.join(rtAddressIPv4);
          break;
        case "sip":
          //... map logical ID to routing address
          SIPAddress rtAddressSIP = new SIPAddress();
          mapNametoAddress(mcAddress, rtAddressSIP);
          iface.join(rtAddressSIP);
          break;
        case "dht":
          //... map logical ID to routing address
          DHTAddress rtAddressDHT = new DHTAddress();
          mapNametoAddress(mcAddress, rtAddressDHT);
          iface.join(rtAddressDHT);
          break;
        //...
      }
    }
  }




Appendix B.  Deployment Use Cases for Hybrid Multicast

This section describes the application of the defined API to implement an IMG.
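Common to all four cases below is a simple relay loop in the group communication stack of the IMG. The following sketch, in the notation of Appendix A and Section 4, shows the underlay-to-overlay direction; nativeIf and overlayIf are assumed interface handles, and duplicate suppression between the two domains is omitted for brevity:

  //IMG relay: forward data arriving in the native domain into the overlay.
  SocketHandle img;
  init(img);
  addInterface(img, nativeIf);   //e.g., a PIM-SM or DVMRP domain
  addInterface(img, overlayIf);  //e.g., a DHT-based overlay

  while (true) {
    URI g;
    Message msg;
    receive(img, g, msg);        //data arrives tagged with its Group Name
    send(img, g, msg);           //the stack maps g to the other domain's
  }                              //Group Address and forwards the data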




B.1.  DVMRP

The following procedure describes a transparent mapping of a DVMRP-based any source multicast service to another many-to-many multicast technology.

An arbitrary DVMRP [RFC1075] router will not be informed about new receivers, but will learn about new sources immediately. The concept of DVMRP does not provide any central multicast instance. Thus, the IMG can be placed anywhere inside the multicast region, but requires DVMRP neighbor connectivity. The group communication stack used by the IMG is enhanced by a DVMRP implementation. New sources in the underlay will be advertised based on the DVMRP flooding mechanism and received by the IMG. Based on this, the updateSender() call is triggered. The relay agent initiates a corresponding join in the native network and forwards the received source data towards the overlay routing protocol. Depending on the group states, the data will be distributed to overlay peers.

DVMRP establishes source specific multicast trees. Therefore, a graft message is only visible for DVMRP routers on the path from the new receiver subnet to the source, but in general not for an IMG. To overcome this problem, data of multicast senders will be flooded in the overlay as well as in the underlay. Hence, an IMG has to initiate an all-group join to the overlay using the namespace extension of the API. Each IMG is initially required to forward the received overlay data to the underlay, independent of native multicast receivers. Subsequent prunes may limit unwanted data distribution thereafter.




B.2.  PIM-SM

The following procedure describes a transparent mapping of a PIM-SM-based any source multicast service to another many-to-many multicast technology.

The Protocol Independent Multicast Sparse Mode (PIM-SM) [RFC4601] establishes rendezvous points (RPs). These entities receive listener and source subscriptions of a domain. To be continuously updated, an IMG has to be co-located with an RP. Whenever PIM register messages are received, the IMG must internally signal a new multicast source using updateSender(). Subsequently, the IMG joins the group and a shared tree between the RP and the sources will be established, which may change to a source-specific tree after a sufficient amount of data has been delivered. Source traffic will be forwarded to the RP based on the IMG join, even if there are no further receivers in the native multicast domain. Designated routers of a PIM domain send receiver subscriptions towards the PIM-SM RP. The reception of such messages invokes the updateListener() call at the IMG, which initiates a join towards the overlay routing protocol. Overlay multicast data arriving at the IMG will then transparently be forwarded in the underlay network and distributed through the RP instance.




B.3.  PIM-SSM

The following procedure describes a transparent mapping of a PIM-SSM-based source specific multicast service to another one-to-many multicast technology.

PIM Source Specific Multicast (PIM-SSM) is defined as part of PIM-SM and admits source-specific joins (S,G) according to the source-specific host group model [RFC4604]. A multicast distribution tree can be established without the assistance of a rendezvous point.

Sources are not advertised within a PIM-SSM domain. Consequently, an IMG cannot anticipate the local join inside a sender domain and deliver a priori the multicast data to the overlay instance. If an IMG of a receiver domain initiates a group subscription via the overlay routing protocol, relaying multicast data fails, as data are not available at the overlay instance. The IMG instance of the receiver domain, thus, has to locate the IMG instance of the source domain to trigger the corresponding join. In the sense of PIM-SSM, the signaling should not be flooded in underlay and overlay.

One solution could be to intercept the subscription at both source and receiver sites: To monitor multicast receiver subscriptions (updateListener()) in the underlay, the IMG is placed on the path towards the source, e.g., at a domain border router. This router intercepts join messages and extracts the unicast source address S, initializing an IMG-specific join to S via regular unicast. Multicast data arriving at the IMG of the sender domain can then be distributed via the overlay. Discovering the IMG of a multicast sender domain may be implemented analogously to AMT [I-D.ietf-mboned-auto-multicast] by anycast. Consequently, the source address S of the group (S,G) should be built based on an anycast prefix. The corresponding IMG anycast address for a source domain is then derived from the prefix of S.




B.4.  BIDIR-PIM

The following procedure describes a transparent mapping of a BIDIR-PIM-based any source multicast service to another many-to-many multicast technology.

Bidirectional PIM (BIDIR-PIM) [RFC5015] is a variant of PIM-SM. In contrast to PIM-SM, the protocol pre-establishes bidirectional shared trees per group, connecting multicast sources and receivers. The rendezvous points are virtualized in BIDIR-PIM as an address that identifies the on-tree directions (up and down). However, routers with the best link towards the (virtualized) rendezvous point address are selected as designated forwarders for a link-local domain and represent the actual distribution tree. The IMG is to be placed at the RP-link, where the rendezvous point address is located. As source data in either case will be transmitted to the rendezvous point address, the BIDIR-PIM instance of the IMG receives the data and can internally signal new senders towards the stack via updateSender(). The first receiver subscription for a new group within a BIDIR-PIM domain needs to be transmitted to the RP to establish the first branching point. Using the updateListener() invocation, an IMG will thereby be informed about group requests from its domain, which are then delegated to the overlay.




Appendix C.  Change Log

Changes since draft-waehlisch-sam-common-api-01

  1. TODO



Authors' Addresses

  Matthias Waehlisch
  link-lab & FU Berlin
  Hoenower Str. 35
  Berlin 10318
  Germany
Email:  mw@link-lab.net
URI:  http://www.inf.fu-berlin.de/~waehl
  
  Thomas C. Schmidt
  HAW Hamburg
  Berliner Tor 7
  Hamburg 20099
  Germany
Email:  schmidt@informatik.haw-hamburg.de
URI:  http://inet.cpt.haw-hamburg.de/members/schmidt
  
  Stig Venaas
  cisco Systems
  Tasman Drive
  San Jose, CA 95134
  USA
Email:  stig@cisco.com