2 SAM Research Group M. Waehlisch 3 Internet-Draft link-lab & FU Berlin 4 Intended status: Informational T C. Schmidt 5 Expires: September 8, 2011 HAW Hamburg 6 S. Venaas 7 cisco Systems 8 March 07, 2011 10 A Common API for Transparent Hybrid Multicast 11 draft-waehlisch-sam-common-api-06 13 Abstract 15 Group communication services exist in a large variety of flavors, and 16 technical implementations at different protocol layers. Multicast 17 data distribution is most efficiently performed on the lowest 18 available layer, but a heterogeneous deployment status of multicast 19 technologies throughout the Internet requires an adaptive service 20 binding at runtime. Today, it is difficult to write an application 21 that runs everywhere and at the same time makes use of the most 22 efficient multicast service available in the network. Facing 23 robustness requirements, developers are frequently forced to use a 24 stable, upper-layer protocol controlled by the application itself. 25 This document describes a common multicast API that is suitable for 26 transparent communication in underlay and overlay, and grants access 27 to the different multicast flavors.
It proposes an abstract naming 28 by multicast URIs and discusses mapping mechanisms between different 29 namespaces and distribution technologies. Additionally, it describes 30 the application of this API for building gateways that interconnect 31 current multicast domains throughout the Internet. 33 Status of this Memo 35 This Internet-Draft is submitted in full conformance with the 36 provisions of BCP 78 and BCP 79. 38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF). Note that other groups may also distribute 40 working documents as Internet-Drafts. The list of current Internet- 41 Drafts is at http://datatracker.ietf.org/drafts/current/. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on September 8, 2011. 50 Copyright Notice 52 Copyright (c) 2011 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (http://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 68 1.1. Use Cases for the Common API . . . . . . . . . . . . . . . 5 69 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 70 3. Overview . . . . . . . . . . . . . . . . . . . . . . 
. . . . . 7 71 3.1. Objectives and Reference Scenarios . . . . . . . . . . . . 7 72 3.2. Group Communication API & Protocol Stack . . . . . . . . . 8 73 3.3. Naming and Addressing . . . . . . . . . . . . . . . . . . 10 74 3.4. Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 11 75 4. Common Multicast API . . . . . . . . . . . . . . . . . . . . . 12 76 4.1. Notation . . . . . . . . . . . . . . . . . . . . . . . . . 12 77 4.2. Abstract Data Types . . . . . . . . . . . . . . . . . . . 12 78 4.2.1. Multicast URI . . . . . . . . . . . . . . . . . . . . 12 79 4.2.2. Interface . . . . . . . . . . . . . . . . . . . . . . 13 80 4.2.3. Membership Events . . . . . . . . . . . . . . . . . . 13 81 4.3. Group Management Calls . . . . . . . . . . . . . . . . . . 14 82 4.3.1. Create . . . . . . . . . . . . . . . . . . . . . . . . 14 83 4.3.2. Delete . . . . . . . . . . . . . . . . . . . . . . . . 14 84 4.3.3. Join . . . . . . . . . . . . . . . . . . . . . . . . . 14 85 4.3.4. Leave . . . . . . . . . . . . . . . . . . . . . . . . 15 86 4.3.5. Source Register . . . . . . . . . . . . . . . . . . . 15 87 4.3.6. Source Deregister . . . . . . . . . . . . . . . . . . 16 88 4.4. Send and Receive Calls . . . . . . . . . . . . . . . . . . 16 89 4.4.1. Send . . . . . . . . . . . . . . . . . . . . . . . . . 16 90 4.4.2. Receive . . . . . . . . . . . . . . . . . . . . . . . 17 91 4.5. Socket Options . . . . . . . . . . . . . . . . . . . . . . 17 92 4.5.1. Get Interfaces . . . . . . . . . . . . . . . . . . . . 17 93 4.5.2. Add Interface . . . . . . . . . . . . . . . . . . . . 17 94 4.5.3. Delete Interface . . . . . . . . . . . . . . . . . . . 18 95 4.5.4. Set TTL . . . . . . . . . . . . . . . . . . . . . . . 18 96 4.5.5. Get TTL . . . . . . . . . . . . . . . . . . . . . . . 18 98 4.6. Service Calls . . . . . . . . . . . . . . . . . . . . . . 19 99 4.6.1. Group Set . . . . . . . . . . . . . . . . . . . . . . 19 100 4.6.2. Neighbor Set . . . . . . . . . . . . . . . . . . . . . 
19 101 4.6.3. Children Set . . . . . . . . . . . . . . . . . . . . . 20 102 4.6.4. Parent Set . . . . . . . . . . . . . . . . . . . . . . 20 103 4.6.5. Designated Host . . . . . . . . . . . . . . . . . . . 21 104 4.6.6. Enable Membership Events . . . . . . . . . . . . . . . 21 105 4.6.7. Disable Membership Events . . . . . . . . . . . . . . 22 106 5. Functional Details . . . . . . . . . . . . . . . . . . . . . . 22 107 5.1. Namespaces . . . . . . . . . . . . . . . . . . . . . . . . 22 108 5.2. Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 23 109 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 23 110 7. Security Considerations . . . . . . . . . . . . . . . . . . . 23 111 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 23 112 9. Informative References . . . . . . . . . . . . . . . . . . . . 23 113 Appendix A. C Signatures . . . . . . . . . . . . . . . . . . . . 25 114 Appendix B. Practical Example of the API . . . . . . . . . . . . 26 115 Appendix C. Deployment Use Cases for Hybrid Multicast . . . . . . 27 116 C.1. DVMRP . . . . . . . . . . . . . . . . . . . . . . . . . . 28 117 C.2. PIM-SM . . . . . . . . . . . . . . . . . . . . . . . . . . 28 118 C.3. PIM-SSM . . . . . . . . . . . . . . . . . . . . . . . . . 29 119 C.4. BIDIR-PIM . . . . . . . . . . . . . . . . . . . . . . . . 29 120 Appendix D. Change Log . . . . . . . . . . . . . . . . . . . . . 30 121 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 31 123 1. Introduction 125 Currently, group application programmers need to make the choice of 126 the distribution technology that the application will require at 127 runtime. There is no common communication interface that abstracts 128 multicast transmission and subscriptions from the deployment state at 129 runtime. The standard multicast socket options [RFC3493], [RFC3678] 130 are bound to an IP version and do not distinguish between naming and 131 addressing of multicast identifiers. 
Group communication, however, 132 is commonly implemented in different flavors such as any source (ASM) 133 vs. source specific multicast (SSM), on different layers (e.g., IP 134 vs. application layer multicast), and may be based on different 135 technologies on the same tier as with IPv4 vs. IPv6. It is the 136 objective of this document to provide universal access to group 137 services. 139 Multicast application development should be decoupled from 140 technological deployment throughout the infrastructure. It requires 141 a common multicast API that offers calls to transmit and receive 142 multicast data independent of the supporting layer and the underlying 143 technological details. For inter-technology transmissions, a 144 consistent view on multicast states is needed, as well. This 145 document describes an abstract group communication API and core 146 functions necessary for transparent operations. Specific 147 implementation guidelines with respect to operating systems or 148 programming languages are out-of-scope of this document. 150 In contrast to the standard multicast socket interface, the API 151 introduced in this document abstracts naming from addressing. Using 152 a multicast address in the current socket API predefines the 153 corresponding routing layer. In this specification, the multicast 154 name used for joining a group denotes an application layer data 155 stream that is identified by a multicast URI, independent of its 156 binding to a specific distribution technology. Such a group name can 157 be mapped to variable routing identifiers. 159 The aim of this common API is twofold: 161 o Enable any application programmer to implement group-oriented data 162 communication independent of the underlying delivery mechanisms. 163 In particular, allow for a late binding of group applications to 164 multicast technologies that makes applications efficient, but 165 robust with respect to deployment aspects.
167 o Allow for flexible namespace support in group addressing, and 168 thereby separate naming and addressing/routing schemes from the 169 application design. This abstraction not only decouples 170 programs from specific aspects of underlying protocols, but may 171 also open application design to specifically flavored group 172 services. 174 Multicast technologies may be of various P2P kinds, IPv4 or IPv6 175 network layer multicast, or implemented by some other application 176 service. Corresponding namespaces may be IP addresses or DNS naming, 177 overlay hashes or other application layer group identifiers like 178 , but also names independently defined by the 179 applications. Common namespaces are introduced later in this 180 document, but follow an open concept suitable for further extensions. 182 This document also proposes and discusses mapping mechanisms between 183 different namespaces and forwarding technologies. Additionally, the 184 multicast API provides internal interfaces to access current 185 multicast states at the host. Multiple multicast protocols may run 186 in parallel on a single host. These protocols may interact to 187 provide a gateway function that bridges data between different 188 domains. The application of this API at gateways operating between 189 current multicast instances throughout the Internet is described, as 190 well. 192 1.1. Use Cases for the Common API 194 Four generic use cases can be identified that require an abstract 195 common API for multicast services: 197 Application Programming Independent of Technologies: Application 198 programmers are provided with group primitives that remain 199 independent of multicast technologies and their deployment in target 200 domains. They are thus enabled to develop programs once that run 201 in every deployment scenario.
The employment of group names in 202 the form of abstract meta data types allows applications to remain 203 namespace-agnostic in the sense that the resolution of namespaces 204 and name-to-address mappings may be delegated to a system service 205 at runtime. Thereby, the complexity is minimized as developers 206 need not care about how data is distributed in groups, while the 207 system service can take advantage of extended information about the 208 network environment as acquired at startup. 210 Global Identification of Groups: Groups can be identified 211 independent of technological instantiations and beyond deployment 212 domains. Taking advantage of the abstract naming, an application 213 is thus enabled to match data received from different interface 214 technologies (e.g., IPv4, IPv6, or overlays) as belonging to the same 215 group. This not only increases flexibility (an application may, 216 for instance, combine heterogeneous multipath streams), but also 217 simplifies the design and implementation of gateways and 218 translators. 220 Simplified Service Deployment through Generic Gateways: The API 221 allows for an implementation of abstract gateway functions with 222 mappings to specific technologies residing at a system level. 223 Such generic gateways may provide a simple bridging service and 224 facilitate an inter-domain deployment of multicast. 226 Mobility-agnostic Group Communication: Group naming and management as 227 foreseen in the API remain independent of locators. Naturally, 228 applications stay unaware of any mobility-related address changes. 229 Handover-initiated re-addressing is delegated to the mapping 230 services at the system level and may be designed to smoothly 231 interact with mobility management solutions provided at the 232 network or transport layer. 234 2. Terminology 236 This document uses the terminology as defined for the multicast 237 protocols [RFC2710], [RFC3376], [RFC3810], [RFC4601], [RFC4604].
In 238 addition, the following terms will be used. 240 Group Address: A Group Address is a routing identifier. It 241 represents a technological specifier and thus reflects the 242 distribution technology in use. Multicast packet forwarding is 243 based on this ID. 245 Group Name: A Group Name is an application identifier that is used 246 by applications to manage communication in a multicast group 247 (e.g., join/leave and send/receive). The Group Name does not 248 predefine any distribution technologies, even if it syntactically 249 corresponds to an address, but represents a logical identifier. 251 Multicast Namespace: A Multicast Namespace is a collection of 252 designators (i.e., names or addresses) for groups that share a 253 common syntax. Typical instances of namespaces are IPv4 or IPv6 254 multicast addresses, overlay group ids, group names defined on the 255 application layer (e.g., SIP or Email), or some human-readable 256 strings. 258 Multicast Domain: A Multicast Domain hosts nodes and routers of a 259 common, single multicast forwarding technology and is bound to a 260 single namespace. 262 Interface: An Interface is a forwarding instance of a distribution 263 technology on a given node. For example, the IP interface 264 192.168.1.1 at an IPv4 host. 266 Inter-domain Multicast Gateway: An Inter-domain Multicast Gateway 267 (IMG) is an entity that interconnects different multicast domains. 268 Its objective is to forward data between these domains, e.g., 269 between IP layer and overlay multicast. 271 3. Overview 273 3.1. Objectives and Reference Scenarios 275 The default use case addressed in this document targets 276 applications that participate in a group by using some common 277 identifier taken from some common namespace.
This group name is 278 typically learned at runtime from user interaction like the selection 279 of an IPTV channel, from dynamic session negotiations like in the 280 Session Initiation Protocol (SIP), but may as well have been 281 predefined for an application as a common group name. Technology- 282 specific system functions then transparently map the group name to 283 group addresses such that 285 o programmers are enabled to process group names in their programs 286 without the need to consider technological mappings to designated 287 deployments in target domains; 289 o applications are enabled to identify packets that belong to a 290 logically named group, independent of the interface technology 291 used for sending and receiving packets. The latter shall also 292 hold for multicast gateways. 294 This document refers to a reference scenario that covers the 295 following two hybrid deployment cases displayed in Figure 1: 297 1. Multicast domains running the same multicast technology but 298 remaining isolated, possibly only connected by network layer 299 unicast. 301 2. Multicast domains running different multicast technologies, but 302 hosting nodes that are members of the same multicast group. 304 +-------+ +-------+ 305 | Member| | Member| 306 | Foo | | G | 307 +-------+ +-------+ 308 \ / 309 *** *** *** *** 310 * ** ** ** * 311 * * 312 * MCast Tec A * 313 * * 314 * ** ** ** * 315 *** *** *** *** 316 +-------+ +-------+ | 317 | Member| | Member| +-------+ 318 | G | | Foo | | IMG | 319 +-------+ +-------+ +-------+ 320 | | | 321 *** *** *** *** *** *** *** *** 322 * ** ** ** * * ** ** ** * 323 * * +-------+ * * 324 * MCast Tec A * --| IMG |-- * MCast Tec B * +-------+ 325 * * +-------+ * * - | Member| 326 * ** ** ** * * ** ** ** * | G | 327 *** *** *** *** *** *** *** *** +-------+ 329 Figure 1: Reference scenarios for hybrid multicast, interconnecting 330 group members from isolated homogeneous and heterogeneous domains. 
332 It is assumed throughout the document that the domain composition, as 333 well as the node attachment to a specific technology remain unchanged 334 during a multicast session. 336 3.2. Group Communication API & Protocol Stack 338 The group communication API consists of four parts. Two parts 339 combine the essential communication functions, while the remaining 340 two offer optional extensions for an enhanced management: 342 Group Management Calls provide the minimal API to instantiate a 343 multicast socket and manage group membership. 345 Send/Receive Calls provide the minimal API to send and receive 346 multicast data in a technology-transparent fashion. 348 Socket Options provide extension calls for an explicit configuration 349 of the multicast socket like setting hop limits or associated 350 interfaces. 352 Service Calls provide extension calls that grant access to internal 353 multicast states of an interface such as the multicast groups 354 under subscription or the multicast forwarding information base. 356 Multicast applications that use the common API require assistance by 357 a group communication stack. This protocol stack serves two needs: 359 o It provides system-level support to transfer the abstract 360 functions of the common API, including namespace support, into 361 protocol operations at interfaces. 363 o It bridges data distribution between different multicast 364 technologies. 366 A general initiation of a multicast communication in this setting 367 proceeds as follows: 369 1. An application opens an abstract multicast socket. 371 2. The application subscribes/leaves/(de)registers to a group using 372 a logical group identifier. 374 3. An intrinsic function of the stack maps the logical group ID 375 (Group Name) to a technical group ID (Group Address). This 376 function may make use of deployment-specific knowledge such as 377 available technologies and group address management in its 378 domain. 380 4. 
Packet distribution proceeds to and from one or several 381 multicast-enabled interfaces. 383 The multicast socket describes a group communication channel composed 384 of one or multiple interfaces. A socket may be created without 385 explicit interface association by the application, which leaves the 386 choice of the underlying forwarding technology to the group 387 communication stack. However, an application may also bind the 388 socket to one or multiple dedicated interfaces, which predefines the 389 forwarding technology and the namespace(s) of the Group Address(es). 391 Applications are not required to maintain mapping states for Group 392 Addresses. The group communication stack accounts for the mapping of 393 the Group Name to the Group Address(es) and vice versa. Multicast 394 data passed to the application will be augmented by the corresponding 395 Group Name. Multiple multicast subscriptions thus can be conducted 396 on a single multicast socket without the need for Group Name encoding 397 at the application side. 399 Hosts may support several multicast protocols. The group 400 communication stack discovers available multicast-enabled 401 communication interfaces. It provides a minimal hybrid function that 402 bridges data between different interfaces and multicast domains. 403 Details of service discovery are out-of-scope of this document. 405 The extended multicast functions can be implemented by a middleware 406 as conceptually visualized in Figure 2. 408 *-------* *-------* 409 | App 1 | | App 2 | 410 *-------* *-------* 411 | | 412 *---------------------* ---| 413 | Middleware | | 414 *---------------------* | 415 | | | 416 *---------* | | 417 | Overlay | | \ Group Communication 418 *---------* | / Stack 419 | | | 420 | | | 421 *---------------------* | 422 | Underlay | | 423 *---------------------* ---| 425 Figure 2: A middleware for offering uniform access to multicast in 426 underlay and overlay 428 3.3. 
Naming and Addressing 430 Applications use Group Names to identify groups. Names can uniquely 431 determine a group in a global communication context and hide 432 technological deployment for data distribution from the application. 433 In contrast, multicast forwarding operates on Group Addresses. Even 434 though both identifiers may be identical in symbols, they carry 435 different meanings. They may also belong to different namespaces. 436 The namespace of a Group Address reflects a routing technology, while 437 the namespace of a Group Name represents the context in which the 438 application operates. 440 URIs [RFC3986] are a common way to represent namespace-specific 441 identifiers in applications in the form of an abstract meta-data 442 type. Throughout this document, any kind of Group Name follows a URI 443 notation with the syntax defined in Section 4.2.1. Examples are 444 ip://224.1.2.3:5000 for a canonical IPv4 ASM group, and 445 sip://news@cnn.com for an application-specific naming with service 446 instantiator and default port selection. 448 An implementation of the group communication middleware can provide 449 convenience functions that detect the namespace of a Group Name and 450 use it to optimize service instantiation. In practice, such a 451 library would provide support for high-level data types to the 452 application, similar to the current socket API (e.g., InetAddress in 453 Java). Using this data type could implicitly determine the 454 namespace. Details of automatic namespace identification are out-of- 455 scope of this document. 457 3.4. Mapping 459 All group members subscribe to the same Group Name taken from a 460 common namespace and thereby identify the group in a technology- 461 agnostic way. 463 Group Names require a mapping to addresses prior to service 464 instantiation at an Interface. Similarly, a mapping is needed at 465 gateways to translate between Group Addresses from different 466 namespaces.
Some namespaces facilitate a canonical transformation to 467 default address spaces. For example, ip://224.1.2.3:5000 has an 468 obvious correspondence to 224.1.2.3 in the IPv4 multicast address 469 space. Note that in this example the multicast URI can be completely 470 recovered from any data packet received from this group. 472 However, mapping in general can be more complex and need not be 473 invertible. Mapping functions can be stateless in some contexts, but 474 may require states in others. The application of such functions 475 depends on the cardinality of the namespaces, the structure of 476 address spaces, and possible address collisions. For example, it is 477 not obvious how to map a large identifier space (e.g., IPv6) to a 478 smaller, collision-prone set like IPv4. 480 Two (or more) Multicast Addresses from different namespaces may 481 belong to 483 a. the same logical group (i.e., same Multicast Name) 485 b. different multicast channels (i.e., different technical IDs). 487 This decision can be based on invertible mappings where they exist; 488 as noted above, such functions are not available in general. 493 A mapping can be realized by embedding smaller namespaces in larger 494 ones, or by selecting an arbitrary, unused ID in the target space. The 495 relation between logical and technical ID is maintained by mapping 496 functions which can be stateless or stateful. The middleware thus 497 queries the mapping service first, and creates a new technical group 498 ID only if there is no identifier available for the namespace in use. 499 The Group Name is associated with one or more Group Addresses, which 500 may belong to different namespaces. Depending on the scope of the 501 mapping service, it ensures a consistent use of the technical ID in a 502 local or global domain. 504 4. Common Multicast API 506 4.1.
Notation 508 The following description of the common multicast API is given in 509 pseudo syntax. Variables that are passed to function calls are 510 declared by "in", return values are declared by "out". A list of 511 elements is denoted by <>. 513 The corresponding C signatures are defined in Appendix A. 515 4.2. Abstract Data Types 517 4.2.1. Multicast URI 519 Multicast Names and Multicast Addresses used in this API follow a 520 URI scheme that defines a subset of the generic URI specified in 521 [RFC3986] and is compliant with the guidelines in [RFC4395]. 523 The multicast URI is defined as follows: 525 scheme "://" group "@" instantiation ":" port "/" sec-credentials 527 The parts of the URI are defined as follows: 529 scheme refers to the specification of the assigned identifier 530 [RFC3986] which takes the role of the namespace. 532 group identifies the group uniquely within the namespace given in 533 scheme. 535 instantiation identifies the entity that generates the instance of 536 the group (e.g., a SIP domain or a source in SSM) using the 537 namespace given in scheme. 539 port identifies a specific application at an instance of a group. 541 sec-credentials are used to implement security credentials (e.g., to 542 authorize multicast group access). 544 4.2.2. Interface 546 The interface denotes the layer and instance on which the 547 corresponding call will be effective. In agreement with [RFC3493] we 548 identify an interface by an identifier, which is a positive integer 549 starting at 1. 551 Properties of an interface are stored in the following struct: 553 struct if_prop { 554 unsigned int if_index; /* 1, 2, ... */ 555 char *if_name; /* "eth0", "eth1:1", "lo", ... */ 556 char *if_addr; /* "1.2.3.4", "abc123", ... */ 557 char *if_tech; /* "ip", "overlay", ...
*/ 558 }; 560 The following function retrieves all available interfaces from the 561 system: 563 getInterfaces(out Int num_ifs, out Interface <ifs>); 565 It extends the functions for Interface Identification in [RFC3493] 566 (cf., Section 4) and can be implemented by: 568 struct if_prop *if_prop(void); 570 4.2.3. Membership Events 572 A membership event is triggered by a multicast state change, which is 573 observed by the current node. It is related to a specific Group Name 574 and may be receiver or source oriented. 576 event_type { 577 join_event, 578 leave_event, 579 new_source_event 580 }; 582 event { 583 event_type event, 584 Uri group_name 585 }; 587 An event will be created by the middleware and passed to applications 588 that are registered for events. 590 4.3. Group Management Calls 592 4.3.1. Create 594 The create call initiates a multicast socket and provides the 595 application programmer with a corresponding handle. If no interfaces 596 are assigned in the call, the default interface will be 597 selected and associated with the socket. The call may return an 598 error code in the case of failures, e.g., due to a non-operational 599 middleware. 601 createMSocket(in Interface <if>, 602 out Socket s, out Int error); 604 The if argument denotes a list of interfaces (if_indexes) that will 605 be associated with the multicast socket. This parameter is optional. 607 On success a multicast socket identifier is returned, otherwise NULL. 609 4.3.2. Delete 611 The delete call removes the multicast socket. 613 deleteMSocket(in Socket s, out Int error); 615 The s argument identifies the multicast socket for destruction. 617 On success the value 0 is returned, otherwise -1. 619 4.3.3. Join 621 The join call initiates a subscription for the given group. 622 Depending on the interfaces that are associated with the socket, this 623 may result in an IGMP/MLD report or overlay subscription, for 624 example.
626 join(in Socket s, in Uri group_name, out Int error); 628 The s argument identifies the multicast socket. 630 The group_name argument identifies the group. 632 On success the value 0 is returned, otherwise -1. 634 4.3.4. Leave 636 The leave call results in an unsubscription for the given Group Name. 638 leave(in Socket s, in Uri group_name, out Int error); 640 The s argument identifies the multicast socket. 642 The group_name identifies the group. 644 On success the value 0 is returned, otherwise -1. 646 4.3.5. Source Register 648 The srcRegister call registers a source for a Group on all active 649 interfaces of the socket s. This call may assist group distribution 650 in some technologies, for example the creation of sub-overlays. Not 651 all multicast technologies require this call. 653 srcRegister(in Socket s, in Uri group_name, 654 in Int num_ifs, in Interface <ifs>, 655 out Int error); 657 The s argument identifies the multicast socket. 659 The group_name argument identifies the multicast group to which a 660 source intends to send data. 662 The num_ifs argument holds the number of elements in the ifs array. 663 This parameter is optional. 665 The ifs argument points to the list of interface indexes for which 666 the source registration failed. If num_ifs was 0 on output, a NULL 667 pointer is returned. This parameter is optional. 669 If source registration succeeded for all interfaces associated with 670 the socket, the value 0 is returned, otherwise -1. 672 4.3.6. Source Deregister 674 The srcDeregister call indicates that a source no longer intends to 675 send data to the multicast group. This call may remain without 676 effect in some multicast technologies. 678 srcDeregister(in Socket s, in Uri group_name, 679 in Int num_ifs, in Interface <ifs>, 680 out Int error); 682 The s argument identifies the multicast socket. 684 The group_name argument identifies the multicast group to which a 685 source has stopped sending multicast data.
687 The num_ifs argument holds the number of elements in the ifs array. 689 The ifs argument points to the list of interfaces for which the 690 source deregistration failed. If num_ifs was 0 on output, a NULL 691 pointer is returned. 693 If source deregistration succeeded for all interfaces associated with 694 the socket, the value 0 is returned, otherwise -1. 696 4.4. Send and Receive Calls 698 4.4.1. Send 700 The send call passes multicast data for a Multicast Name from the 701 application to the multicast socket. 703 send(in Socket s, in Uri group_name, 704 in Size msg_len, in Msg msg_buf, 705 out Int error); 707 The s argument identifies the multicast socket. 709 The group_name argument identifies the group to which data will be 710 sent. 712 The msg_len argument holds the length of the message to be sent. 714 The msg_buf argument passes the multicast data to the multicast 715 socket. 717 On success the value 0 is returned, otherwise -1. 719 4.4.2. Receive 721 The receive call passes multicast data and the corresponding Group 722 Name to the application. 724 receive(in Socket s, in Uri group_name, 725 out Size msg_len, out Msg msg_buf, 726 out Int error); 728 The s argument identifies the multicast socket. 730 The group_name argument identifies the multicast group for which data 731 was received. 733 The msg_len argument holds the length of the received message. 735 The msg_buf argument points to the payload of the received multicast 736 data. 738 On success the value 0 is returned, otherwise -1. 740 4.5. Socket Options 742 The following calls configure an existing multicast socket. 744 4.5.1. Get Interfaces 746 The getInterfaces call returns an array of all available multicast 747 communication interfaces associated with the multicast socket. 749 getInterfaces(in Socket s, out Int num_ifs, 750 out Interface <ifs>, out Int error); 752 The s argument identifies the multicast socket. 754 The num_ifs argument holds the number of interfaces in the ifs list.
756 The ifs argument points to an array of interface index identifiers. 758 On success the value 0 or larger is returned, otherwise -1. 760 4.5.2. Add Interface 762 The addInterface call adds a distribution channel to the socket. 763 This may be an overlay or underlay interface, e.g., IPv6 or DHT. 764 Multiple interfaces of the same technology may be associated with the 765 socket. 767 addInterface(in Socket s, in Interface if, 768 out Int error); 770 The s and if arguments identify a multicast socket and interface, 771 respectively. 773 On success the value 0 is returned, otherwise -1. 775 4.5.3. Delete Interface 777 The delInterface call removes the interface if from the multicast 778 socket. 780 delInterface(in Socket s, in Interface if, 781 out Int error); 783 The s and if arguments identify a multicast socket and interface, 784 respectively. 786 On success the value 0 is returned, otherwise -1. 788 4.5.4. Set TTL 790 The setTTL call configures the maximum hop count a multicast message 791 sent on the socket is allowed to traverse. 793 setTTL(in Socket s, in Int h, 794 in Int num_ifs, in Interface <ifs>, 795 out Int error); 797 The s and h arguments identify a multicast socket and the maximum hop 798 count, respectively. 800 The num_ifs argument holds the number of interfaces in the ifs list. 801 This parameter is optional. 803 The ifs argument points to an array of interface index identifiers. 804 This parameter is optional. 806 On success the value 0 is returned, otherwise -1. 808 4.5.5. Get TTL 810 The getTTL call returns the maximum hop count a multicast message is 811 allowed to traverse for the socket. 813 getTTL(in Socket s, 814 out Int h, out Int error); 816 The s argument identifies a multicast socket. 818 The h argument holds the maximum number of hops associated with 819 socket s. 821 On success the value 0 is returned, otherwise -1. 823 4.6. Service Calls 825 4.6.1. Group Set 827 The groupSet call returns all multicast groups registered at a given 828 interface.
This information can be provided by group management 829 states or routing protocols. The return values distinguish between 830 sender and listener states. 832 groupSet(in Interface if, out Int num_groups, 833 out GroupSet <groupSet>, out Int error); 835 struct GroupSet { 836 uri group_name; /* registered multicast group */ 837 int type; /* 0 = listener state, 1 = sender state, 838 2 = sender & listener state */ 839 } 841 The if argument identifies the interface for which states are 842 maintained. 844 The num_groups argument holds the number of groups in the groupSet 845 array. 847 The groupSet argument points to a list of group states. 849 On success the value 0 is returned, otherwise -1. 851 4.6.2. Neighbor Set 853 The neighborSet function returns the set of neighboring nodes for a 854 given interface as seen by the multicast routing protocol. 856 neighborSet(in Interface if, out Int num_neighbors, 857 out Uri <neighbor_address>, out Int error); 859 The if argument identifies the interface for which neighbors are 860 inquired. 862 The num_neighbors argument holds the number of addresses in the 863 neighbor_address array. 865 The neighbor_address argument points to a list of neighboring nodes 866 on a successful return. 868 On success the value 0 is returned, otherwise -1. 870 4.6.3. Children Set 872 The childrenSet function returns the set of child nodes that receive 873 multicast data from a specified interface for a given group. For a 874 common multicast router, this call retrieves the multicast forwarding 875 information base per interface. 877 childrenSet(in Interface if, in Uri group_name, 878 out Int num_children, out Uri <child_address>, 879 out Int error); 881 The if argument identifies the interface for which children are 882 inquired. 884 The group_name argument defines the multicast group for which 885 distribution is considered. 887 The num_children argument holds the number of addresses in the 888 child_address array.
890 The child_address argument points to a list of neighboring nodes on a 891 successful return. 893 On success the value 0 is returned, otherwise -1. 895 4.6.4. Parent Set 897 The parentSet function returns the set of neighbors from which the 898 current node receives multicast data at a given interface for the 899 specified group. 901 parentSet(in Interface if, in Uri group_name, 902 out Int num_parents, out Uri <parent_address>, 903 out Int error); 905 The if argument identifies the interface for which parents are 906 inquired. 908 The group_name argument defines the multicast group for which 909 distribution is considered. 911 The num_parents argument holds the number of addresses in the 912 parent_address array. 914 The parent_address argument points to a list of neighboring nodes on 915 a successful return. 917 On success the value 0 is returned, otherwise -1. 919 4.6.5. Designated Host 921 The designatedHost function inquires whether the host has the role of 922 a designated forwarder or querier. Such information is 923 provided by almost all multicast protocols to prevent packet 924 duplication, if multiple multicast instances serve on the same 925 subnet. 927 designatedHost(in Interface if, in Uri group_name, 928 out Int return); 930 The if argument identifies the interface for which designated 931 forwarding is inquired. 933 The group_name argument specifies the group for which the host may 934 attain the role of designated forwarder. 936 The function returns 1 if the host is a designated forwarder or 937 querier, otherwise 0. The return value -1 indicates an error. 939 4.6.6. Enable Membership Events 941 The enableEvents function registers an application at the middleware 942 to inform the application about a group change. This is the result 943 of new receiver subscriptions or leaves as well as observed source 944 changes. The group service may call other service calls to 945 get additional information.
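The membership-event machinery of Sections 4.2.3 and 4.6.6 can be sketched in C. The following block is an illustrative mock: the handler-registration signature, the deliver_event hook, and all bodies are assumptions about one possible middleware realization, not part of the API (the API's enableEvents() itself takes no arguments).

```c
/* Mirrors the event structures of Section 4.2.3; callback registration
 * and dispatch are illustrative assumptions about middleware behaviour. */
enum event_type { join_event, leave_event, new_source_event };

struct event {
    enum event_type type;
    const char *group_name; /* Uri, as a plain string for brevity */
};

/* The handler an application might register when enabling events. */
typedef void (*event_handler)(const struct event *ev);

static event_handler registered_handler;
static int events_enabled;
static int events_seen;
static enum event_type last_type;

static void mock_enableEvents(event_handler h) {
    registered_handler = h;
    events_enabled = 1;
}

static void mock_disableEvents(void) {
    events_enabled = 0;
}

/* Called by the mock middleware when it observes a state change. */
static void deliver_event(enum event_type t, const char *group) {
    if (events_enabled && registered_handler) {
        struct event ev = { t, group };
        registered_handler(&ev);
    }
}

/* Example application handler that records delivered events. */
static void count_handler(const struct event *ev) {
    events_seen++;
    last_type = ev->type;
}
```

After mock_disableEvents(), deliver_event() silently drops state changes, matching the behaviour described in Section 4.6.7.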
947 void enableEvents(); 949 After this function is called, the middleware starts to pass membership 950 events to the application. Each event includes an event type 951 identifier and a Group Name (cf., Section 4.2.3). 953 4.6.7. Disable Membership Events 955 The disableEvents function deactivates the information about group 956 state changes. 958 void disableEvents(); 960 On success the middleware will not pass membership events to the 961 application. 963 5. Functional Details 965 In this section, we describe specific functions of the API and the 966 associated system middleware in detail. 968 5.1. Namespaces 970 Namespace identifiers in URIs are placed in the scheme element and 971 characterize the syntax and semantics of the group identifier. They 972 enable the use of convenience functions and high-level data types 973 while processing URIs. When used in names, they may facilitate a 974 default mapping and a recovery of names from addresses. When used in 975 addresses, they characterize the address type. 977 Compliant with the URI concept, namespace schemes can be added. 978 Examples of schemes and functions currently foreseen include 980 IP This namespace is comprised of regular IP node naming, i.e., DNS 981 names and addresses taken from any version of the Internet 982 Protocol. A processor dealing with the IP namespace is required 983 to determine the syntax (DNS name, IP address version) of the 984 group expression. 986 OLM This namespace covers address strings immediately valid in an 987 overlay network. A processor handling those strings need not be 988 aware of the address generation mechanism, but may pass these 989 values directly to a corresponding overlay. 991 SIP The SIP namespace is an example of an application-layer scheme 992 that bears inherent group functions (conferencing). SIP 993 conference URIs may be directly exchanged and interpreted at the 994 application, and mapped to group addresses on the system level to 995 generate a corresponding multicast group.
997 Opaque This namespace transparently carries strings without further 998 syntactical information, meaning, or an associated resolution 999 mechanism. 1001 5.2. Mapping 1003 Group Name to Group Address, SSM/ASM TODO 1005 6. IANA Considerations 1007 This document makes no request of IANA. 1009 7. Security Considerations 1011 This draft introduces neither additional messages nor novel 1012 protocol operations. 1014 8. Acknowledgements 1016 We would like to thank the HAMcast team, Dominik Charousset, Gabriel 1017 Hege, Fabian Holler, Alexander Knauf, Sebastian Meiling, and 1018 Sebastian Woelke, at the HAW Hamburg for many fruitful discussions 1019 and for their continuous critical feedback while implementing the API 1020 and a hybrid multicast middleware. 1022 This work is partially supported by the German Federal Ministry of 1023 Education and Research within the HAMcast project, which is part of 1024 G-Lab. 1026 9. Informative References 1028 [I-D.ietf-mboned-auto-multicast] 1029 Thaler, D., Talwar, M., Aggarwal, A., Vicisano, L., and T. 1030 Pusateri, "Automatic IP Multicast Without Explicit Tunnels 1031 (AMT)", draft-ietf-mboned-auto-multicast-10 (work in 1032 progress), March 2010. 1034 [RFC1075] Waitzman, D., Partridge, C., and S. Deering, "Distance 1035 Vector Multicast Routing Protocol", RFC 1075, 1036 November 1988. 1038 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1039 Requirement Levels", BCP 14, RFC 2119, March 1997. 1041 [RFC2710] Deering, S., Fenner, W., and B. Haberman, "Multicast 1042 Listener Discovery (MLD) for IPv6", RFC 2710, 1043 October 1999. 1045 [RFC3376] Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A. 1046 Thyagarajan, "Internet Group Management Protocol, Version 1047 3", RFC 3376, October 2002. 1049 [RFC3493] Gilligan, R., Thomson, S., Bound, J., McCann, J., and W. 1050 Stevens, "Basic Socket Interface Extensions for IPv6", 1051 RFC 3493, February 2003. 1053 [RFC3678] Thaler, D., Fenner, B., and B.
Quinn, "Socket Interface 1054 Extensions for Multicast Source Filters", RFC 3678, 1055 January 2004. 1057 [RFC3810] Vida, R. and L. Costa, "Multicast Listener Discovery 1058 Version 2 (MLDv2) for IPv6", RFC 3810, June 2004. 1060 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1061 Resource Identifier (URI): Generic Syntax", STD 66, 1062 RFC 3986, January 2005. 1064 [RFC4395] Hansen, T., Hardie, T., and L. Masinter, "Guidelines and 1065 Registration Procedures for New URI Schemes", BCP 35, 1066 RFC 4395, February 2006. 1068 [RFC4601] Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, 1069 "Protocol Independent Multicast - Sparse Mode (PIM-SM): 1070 Protocol Specification (Revised)", RFC 4601, August 2006. 1072 [RFC4604] Holbrook, H., Cain, B., and B. Haberman, "Using Internet 1073 Group Management Protocol Version 3 (IGMPv3) and Multicast 1074 Listener Discovery Protocol Version 2 (MLDv2) for Source- 1075 Specific Multicast", RFC 4604, August 2006. 1077 [RFC5015] Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, 1078 "Bidirectional Protocol Independent Multicast (BIDIR- 1079 PIM)", RFC 5015, October 2007. 1081 Appendix A. C Signatures 1083 This section describes the C signatures of the common multicast API, 1084 which is defined in Section 4. 
1086 int createMSocket(uint32_t *ifs); 1088 int deleteMSocket(int s); 1090 int join(int s, const uri group_name); 1092 int leave(int s, const uri group_name); 1094 int srcRegister(int s, const uri group_name, 1095 uint_t num_ifs, uint_t *ifs); 1097 int srcDeregister(int s, const uri group_name, 1098 uint_t num_ifs, uint_t *ifs); 1100 int send(int s, const uri group_name, 1101 size_t msg_len, const void *buf); 1103 int receive(int s, const uri group_name, 1104 size_t msg_len, msg *msg_buf); 1106 int getInterfaces(int s, uint_t num_ifs, uint_t *ifs); 1108 int addInterface(int s, uint32_t iface); 1110 int delInterface(int s, uint32_t iface); 1112 int setTTL(int s, int h, uint_t num_ifs, uint_t *ifs); 1114 int getTTL(int s, int *h); 1116 int groupSet(uint32_t iface, uint_t *num_groups, 1117 struct groupSet *groupSet); 1119 struct groupSet { 1120 uri group_name; /* registered multicast group */ 1121 int type; /* 0 = listener state, 1 = sender state, 1122 2 = sender & listener state */ 1123 }; 1124 int neighborSet(uint32_t iface, uint_t *num_neighbors, 1125 const uri *neighbor_address); 1127 int childrenSet(uint32_t iface, const uri group_name, 1128 uint_t *num_children, const uri *child_address); 1130 int parentSet(uint32_t iface, const uri group_name, uint_t *num_parents, 1131 const uri *parent_address); 1133 int designatedHost(uint32_t iface, const uri group_name); 1135 Appendix B. Practical Example of the API 1136 -- Application above middleware: 1138 //Initialize multicast socket; 1139 //the middleware selects all available interfaces 1140 MulticastSocket m = new MulticastSocket(); 1142 m.join(URI("ip://233.252.0.1:5000")); 1143 m.join(URI("ip://[FF0E:0:0:0:0:DB8:0:3]:6000")); 1144 m.join(URI("sip://news@cnn.com")); 1146 -- Middleware: 1148 join(URI mcAddress) { 1149 //Select interfaces in use 1150 for all this.interfaces { 1151 switch (interface.type) { 1152 case "ipv6": 1153 //...
map logical ID to routing address 1154 Inet6Address rtAddressIPv6 = new Inet6Address(); 1155 mapNametoAddress(mcAddress,rtAddressIPv6); 1156 interface.join(rtAddressIPv6); 1157 case "ipv4": 1158 //... map logical ID to routing address 1159 Inet4Address rtAddressIPv4 = new Inet4Address(); 1160 mapNametoAddress(mcAddress,rtAddressIPv4); 1161 interface.join(rtAddressIPv4); 1162 case "sip-session": 1163 //... map logical ID to routing address 1164 SIPAddress rtAddressSIP = new SIPAddress(); 1165 mapNametoAddress(mcAddress,rtAddressSIP); 1166 interface.join(rtAddressSIP); 1167 case "dht": 1168 //... map logical ID to routing address 1169 DHTAddress rtAddressDHT = new DHTAddress(); 1170 mapNametoAddress(mcAddress,rtAddressDHT); 1171 interface.join(rtAddressDHT); 1172 //... 1173 } 1174 } 1175 } 1177 Appendix C. Deployment Use Cases for Hybrid Multicast 1179 This section describes the application of the defined API to 1180 implement an IMG. 1182 C.1. DVMRP 1184 The following procedure describes a transparent mapping of a DVMRP- 1185 based any source multicast service to another many-to-many multicast 1186 technology. 1188 An arbitrary DVMRP [RFC1075] router will not be informed about new 1189 receivers, but will learn about new sources immediately. The concept 1190 of DVMRP does not provide any central multicast instance. Thus, the 1191 IMG can be placed anywhere inside the multicast region, but requires 1192 DVMRP neighbor connectivity. The group communication stack used by 1193 the IMG is enhanced by a DVMRP implementation. New sources in the 1194 underlay will be advertised based on the DVMRP flooding mechanism and 1195 received by the IMG. Based on this, the event "new_source_event" is 1196 created and passed to the application. The relay agent initiates a 1197 corresponding join in the native network and forwards the received 1198 source data towards the overlay routing protocol. Depending on the 1199 group states, the data will be distributed to overlay peers.
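The IMG relay behaviour described above can be condensed into a small C sketch: a flooded DVMRP source advertisement yields a native join and enables forwarding of the received source data towards the overlay. The function and variable names below are assumptions for illustration only, not part of the API or of DVMRP.

```c
#include <string.h>

/* Illustrative IMG relay skeleton for the DVMRP case above.
 * All identifiers here are assumptions for demonstration. */
static char native_group[64];
static int overlay_forwarding;

/* Invoked when the IMG's DVMRP instance learns a new source;
 * corresponds to creating a "new_source_event" (Section 4.2.3). */
static void img_on_new_source(const char *group_name) {
    /* relay agent initiates a corresponding join in the native network */
    strncpy(native_group, group_name, sizeof native_group - 1);
    native_group[sizeof native_group - 1] = '\0';
    /* ... and forwards received source data towards the overlay */
    overlay_forwarding = 1;
}
```

The same skeleton applies to the PIM-SM case below, with the DVMRP flooding hook replaced by the reception of PIM register messages at the RP.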
1201 DVMRP establishes source specific multicast trees. Therefore, a 1202 graft message is visible only to DVMRP routers on the path from the 1203 new receiver subnet to the source, but in general not to an IMG. To 1204 overcome this problem, data of multicast senders will be flooded in 1205 the overlay as well as in the underlay. Hence, an IMG has to 1206 initiate an all-group join to the overlay using the namespace 1207 extension of the API. Each IMG is initially required to forward the 1208 received overlay data to the underlay, independent of native 1209 multicast receivers. Subsequent prunes may limit unwanted data 1210 distribution thereafter. 1212 C.2. PIM-SM 1214 The following procedure describes a transparent mapping of a PIM-SM- 1215 based any source multicast service to another many-to-many multicast 1216 technology. 1218 The Protocol Independent Multicast Sparse Mode (PIM-SM) [RFC4601] 1219 establishes rendezvous points (RP). These entities receive listener 1220 and source subscriptions of a domain. To be continuously updated, an 1221 IMG has to be co-located with an RP. Whenever PIM register messages 1222 are received, the IMG must internally signal a new multicast source 1223 using the event "new_source_event". Subsequently, the IMG joins the 1224 group and a shared tree between the RP and the sources will be 1225 established, which may change to a source specific tree after a 1226 sufficient amount of data has been delivered. Source traffic will be 1227 forwarded to the RP based on the IMG join, even if there are no 1228 further receivers in the native multicast domain. Designated routers 1229 of a PIM-domain send receiver subscriptions towards the PIM-SM RP. 1231 The reception of such messages initiates the event "join_event" at 1232 the IMG, which initiates a join towards the overlay routing protocol.
1233 Overlay multicast data arriving at the IMG will then transparently be 1234 forwarded in the underlay network and distributed through the RP 1235 instance. 1237 C.3. PIM-SSM 1239 The following procedure describes a transparent mapping of a PIM-SSM- 1240 based source specific multicast service to another one-to-many 1241 multicast technology. 1243 PIM Source Specific Multicast (PIM-SSM) is defined as part of PIM-SM 1244 and admits source specific joins (S,G) according to the source 1245 specific host group model [RFC4604]. A multicast distribution tree 1246 can be established without the assistance of a rendezvous point. 1248 Sources are not advertised within a PIM-SSM domain. Consequently, an 1249 IMG cannot anticipate the local join inside a sender domain and 1250 deliver a priori the multicast data to the overlay instance. If an 1251 IMG of a receiver domain initiates a group subscription via the 1252 overlay routing protocol, relaying multicast data fails, as data are 1253 not available at the overlay instance. The IMG instance of the 1254 receiver domain, thus, has to locate the IMG instance of the source 1255 domain to trigger the corresponding join. In the sense of PIM-SSM, 1256 the signaling should not be flooded in underlay and overlay. 1258 One solution could be to intercept the subscription at both source 1259 and receiver sites: To monitor multicast receiver subscriptions 1260 ("join_event" or "leave_event") in the underlay, the IMG is placed on 1261 the path towards the source, e.g., at a domain border router. This 1262 router intercepts join messages and extracts the unicast source 1263 address S, initializing an IMG-specific join to S via regular 1264 unicast. Multicast data arriving at the IMG of the sender domain can 1265 be distributed via the overlay. Discovering the IMG of a multicast 1266 sender domain may be implemented analogously to AMT 1267 [I-D.ietf-mboned-auto-multicast] by anycast.
Consequently, the 1268 source address S of the group (S,G) should be built based on an 1269 anycast prefix. The corresponding IMG anycast address for a source 1270 domain is then derived from the prefix of S. 1272 C.4. BIDIR-PIM 1274 The following procedure describes a transparent mapping of a BIDIR- 1275 PIM-based any source multicast service to another many-to-many 1276 multicast technology. 1278 Bidirectional PIM [RFC5015] is a variant of PIM-SM. In contrast to 1279 PIM-SM, the protocol pre-establishes bidirectional shared trees per 1280 group, connecting multicast sources and receivers. The rendezvous 1281 points are virtualized in BIDIR-PIM as an address to identify on-tree 1282 directions (up and down). However, routers with the best link 1283 towards the (virtualized) rendezvous point address are selected as 1284 designated forwarders for a link-local domain and represent the 1285 actual distribution tree. The IMG is to be placed at the RP-link, 1286 where the rendezvous point address is located. As source data in 1287 either case will be transmitted to the rendezvous point address, the 1288 BIDIR-PIM instance of the IMG receives the data and can internally 1289 signal new senders towards the stack via the "new_source_event". The 1290 first receiver subscription for a new group within a BIDIR-PIM domain 1291 needs to be transmitted to the RP to establish the first branching 1292 point. Using the "join_event", an IMG will thereby be informed about 1293 group requests from its domain, which are then delegated to the 1294 overlay. 1296 Appendix D. Change Log 1298 The following changes have been made from 1299 draft-waehlisch-sam-common-api-05 1301 1. Description of the Common API using pseudo syntax added 1303 2. C signatures of the Common API moved to appendix 1305 3. updateSender() and updateListener() calls replaced by events 1307 4. Function destroyMSocket renamed as deleteMSocket.
1309 The following changes have been made from 1310 draft-waehlisch-sam-common-api-04 1312 1. updateSender() added. 1314 The following changes have been made from 1315 draft-waehlisch-sam-common-api-03 1317 1. Use cases added for illustration. 1319 2. Service calls added for inquiring on the multicast distribution 1320 system. 1322 3. Namespace examples added. 1324 4. Clarifications and editorial improvements. 1326 The following changes have been made from 1327 draft-waehlisch-sam-common-api-02 1329 1. Renamed init() to createMSocket(). 1331 2. Added calls srcRegister()/srcDeregister(). 1333 3. Rephrased API calls in C-style. 1335 4. Cleanup of code in "Practical Example of the API". 1337 5. Partial reorganization of the document. 1339 6. Many editorial improvements. 1341 The following changes have been made from 1342 draft-waehlisch-sam-common-api-01 1344 1. Document restructured to clarify the realm of document overview 1345 and specific contributions such as naming and addressing. 1347 2. A clear separation of naming and addressing was drawn. Multicast 1348 URIs have been introduced. 1350 3. Clarified and adapted the API calls. 1352 4. Introduced Socket Option calls. 1354 5. Deployment use cases moved to an appendix. 1356 6. Simple programming example added. 1358 7. Many editorial improvements. 1360 Authors' Addresses 1362 Matthias Waehlisch 1363 link-lab & FU Berlin 1364 Hoenower Str. 35 1365 Berlin 10318 1366 Germany 1368 Email: mw@link-lab.net 1369 URI: http://www.inf.fu-berlin.de/~waehl 1370 Thomas C. Schmidt 1371 HAW Hamburg 1372 Berliner Tor 7 1373 Hamburg 20099 1374 Germany 1376 Email: schmidt@informatik.haw-hamburg.de 1377 URI: http://inet.cpt.haw-hamburg.de/members/schmidt 1379 Stig Venaas 1380 cisco Systems 1381 Tasman Drive 1382 San Jose, CA 95134 1383 USA 1385 Email: stig@cisco.com