2 SAM Research Group M. Waehlisch 3 Internet-Draft link-lab & FU Berlin 4 Intended status: Informational T. C. Schmidt 5 Expires: September 12, 2011 HAW Hamburg 6 S. Venaas 7 cisco Systems 8 March 11, 2011 10 A Common API for Transparent Hybrid Multicast 11 draft-irtf-samrg-common-api-01 13 Abstract 15 Group communication services exist in a large variety of flavors, and 16 technical implementations at different protocol layers. Multicast 17 data distribution is most efficiently performed on the lowest 18 available layer, but a heterogeneous deployment status of multicast 19 technologies throughout the Internet requires an adaptive service 20 binding at runtime. Today, it is difficult to write an application 21 that runs everywhere and at the same time makes use of the most 22 efficient multicast service available in the network. Facing 23 robustness requirements, developers are frequently forced to use a 24 stable, upper layer protocol controlled by the application itself. 25 This document describes a common multicast API that is suitable for 26 transparent communication in underlay and overlay, and grants access 27 to the different multicast flavors.
It proposes an abstract naming 28 by multicast URIs and discusses mapping mechanisms between different 29 namespaces and distribution technologies. Additionally, it describes 30 the application of this API for building gateways that interconnect 31 current multicast domains throughout the Internet. 33 Status of this Memo 35 This Internet-Draft is submitted in full conformance with the 36 provisions of BCP 78 and BCP 79. 38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF). Note that other groups may also distribute 40 working documents as Internet-Drafts. The list of current Internet- 41 Drafts is at http://datatracker.ietf.org/drafts/current/. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on September 12, 2011. 50 Copyright Notice 52 Copyright (c) 2011 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (http://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 68 1.1. Use Cases for the Common API . . . . . . . . . . . . . . . 5 69 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 70 3. Overview . . . . . . . . . . . . . . . . . . . . . . 
. . . . . 7 71 3.1. Objectives and Reference Scenarios . . . . . . . . . . . . 7 72 3.2. Group Communication API & Protocol Stack . . . . . . . . . 8 73 3.3. Naming and Addressing . . . . . . . . . . . . . . . . . . 10 74 3.4. Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 11 75 4. Common Multicast API . . . . . . . . . . . . . . . . . . . . . 12 76 4.1. Notation . . . . . . . . . . . . . . . . . . . . . . . . . 12 77 4.2. Abstract Data Types . . . . . . . . . . . . . . . . . . . 12 78 4.2.1. Multicast URI . . . . . . . . . . . . . . . . . . . . 12 79 4.2.2. Interface . . . . . . . . . . . . . . . . . . . . . . 13 80 4.2.3. Membership Events . . . . . . . . . . . . . . . . . . 13 81 4.3. Group Management Calls . . . . . . . . . . . . . . . . . . 14 82 4.3.1. Create . . . . . . . . . . . . . . . . . . . . . . . . 14 83 4.3.2. Delete . . . . . . . . . . . . . . . . . . . . . . . . 14 84 4.3.3. Join . . . . . . . . . . . . . . . . . . . . . . . . . 14 85 4.3.4. Leave . . . . . . . . . . . . . . . . . . . . . . . . 14 86 4.3.5. Source Register . . . . . . . . . . . . . . . . . . . 15 87 4.3.6. Source Deregister . . . . . . . . . . . . . . . . . . 15 88 4.4. Send and Receive Calls . . . . . . . . . . . . . . . . . . 16 89 4.4.1. Send . . . . . . . . . . . . . . . . . . . . . . . . . 16 90 4.4.2. Receive . . . . . . . . . . . . . . . . . . . . . . . 16 91 4.5. Socket Options . . . . . . . . . . . . . . . . . . . . . . 17 92 4.5.1. Get Interfaces . . . . . . . . . . . . . . . . . . . . 17 93 4.5.2. Add Interface . . . . . . . . . . . . . . . . . . . . 17 94 4.5.3. Delete Interface . . . . . . . . . . . . . . . . . . . 17 95 4.5.4. Set TTL . . . . . . . . . . . . . . . . . . . . . . . 18 96 4.5.5. Get TTL . . . . . . . . . . . . . . . . . . . . . . . 18 98 4.6. Service Calls . . . . . . . . . . . . . . . . . . . . . . 18 99 4.6.1. Group Set . . . . . . . . . . . . . . . . . . . . . . 19 100 4.6.2. Neighbor Set . . . . . . . . . . . . . . . . . . . . . 
19 101 4.6.3. Children Set . . . . . . . . . . . . . . . . . . . . . 20 102 4.6.4. Parent Set . . . . . . . . . . . . . . . . . . . . . . 20 103 4.6.5. Designated Host . . . . . . . . . . . . . . . . . . . 21 104 4.6.6. Enable Membership Events . . . . . . . . . . . . . . . 21 105 4.6.7. Disable Membership Events . . . . . . . . . . . . . . 21 106 5. Functional Details . . . . . . . . . . . . . . . . . . . . . . 21 107 5.1. Namespaces . . . . . . . . . . . . . . . . . . . . . . . . 22 108 5.2. Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 22 109 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 110 7. Security Considerations . . . . . . . . . . . . . . . . . . . 22 111 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 23 112 9. Informative References . . . . . . . . . . . . . . . . . . . . 23 113 Appendix A. C Signatures . . . . . . . . . . . . . . . . . . . . 24 114 Appendix B. Practical Example of the API . . . . . . . . . . . . 25 115 Appendix C. Deployment Use Cases for Hybrid Multicast . . . . . . 26 116 C.1. DVMRP . . . . . . . . . . . . . . . . . . . . . . . . . . 27 117 C.2. PIM-SM . . . . . . . . . . . . . . . . . . . . . . . . . . 27 118 C.3. PIM-SSM . . . . . . . . . . . . . . . . . . . . . . . . . 28 119 C.4. BIDIR-PIM . . . . . . . . . . . . . . . . . . . . . . . . 28 120 Appendix D. Change Log . . . . . . . . . . . . . . . . . . . . . 29 121 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 31 123 1. Introduction 125 Currently, group application programmers need to make the choice of 126 the distribution technology that the application will require at 127 runtime. There is no common communication interface that abstracts 128 multicast transmission and subscriptions from the deployment state at 129 runtime. The standard multicast socket options [RFC3493], [RFC3678] 130 are bound to an IP version and do not distinguish between naming and 131 addressing of multicast identifiers. 
Group communication, however, 132 is commonly implemented in different flavors such as any source (ASM) 133 vs. source specific multicast (SSM), on different layers (e.g., IP 134 vs. application layer multicast), and may be based on different 135 technologies on the same tier as with IPv4 vs. IPv6. It is the 136 objective of this document to provide universal access to group 137 services. 139 Multicast application development should be decoupled from 140 technological deployment throughout the infrastructure. It requires 141 a common multicast API that offers calls to transmit and receive 142 multicast data independent of the supporting layer and the underlying 143 technological details. For inter-technology transmissions, a 144 consistent view of multicast states is needed as well. This 145 document describes an abstract group communication API and core 146 functions necessary for transparent operations. Specific 147 implementation guidelines with respect to operating systems or 148 programming languages are out-of-scope of this document. 150 In contrast to the standard multicast socket interface, the API 151 introduced in this document abstracts naming from addressing. Using 152 a multicast address in the current socket API predefines the 153 corresponding routing layer. In this specification, the multicast 154 name used for joining a group denotes an application layer data 155 stream that is identified by a multicast URI, independent of its 156 binding to a specific distribution technology. Such a group name can 157 be mapped to variable routing identifiers. 159 The aim of this common API is twofold: 161 o Enable any application programmer to implement group-oriented data 162 communication independent of the underlying delivery mechanisms. 163 In particular, allow for a late binding of group applications to 164 multicast technologies that makes applications efficient, but 165 robust with respect to deployment aspects.
167 o Allow for flexible namespace support in group addressing, and 168 thereby separate naming and addressing/routing schemes from the 169 application design. This abstraction not only decouples 170 programs from specific aspects of underlying protocols, but may 171 also open application design to extend to specifically flavored group 172 services. 174 Multicast technologies may be of various P2P kinds, IPv4 or IPv6 175 network layer multicast, or implemented by some other application 176 service. Corresponding namespaces may be IP addresses or DNS naming, 177 overlay hashes, or other application layer group identifiers, 178 but also names independently defined by the 179 applications. Common namespaces are introduced later in this 180 document, but follow an open concept suitable for further extensions. 182 This document also proposes and discusses mapping mechanisms between 183 different namespaces and forwarding technologies. Additionally, the 184 multicast API provides internal interfaces to access current 185 multicast states at the host. Multiple multicast protocols may run 186 in parallel on a single host. These protocols may interact to 187 provide a gateway function that bridges data between different 188 domains. The application of this API at gateways operating between 189 current multicast instances throughout the Internet is described as 190 well. 192 1.1. Use Cases for the Common API 194 Four generic use cases can be identified that require an abstract 195 common API for multicast services: 197 Application Programming Independent of Technologies Application 198 programmers are provided with group primitives that remain 199 independent of multicast technologies and their deployment in 200 target domains. They are thus enabled to develop programs once 201 that run in every deployment scenario.
The employment of group 202 names in the form of abstract meta-data types allows applications 203 to remain namespace-agnostic in the sense that the resolution of 204 namespaces and name-to-address mappings may be delegated to a 205 system service at runtime. Thereby, the complexity is minimized, 206 as developers need not care about how data is distributed in 207 groups, while the system service can take advantage of extended 208 information about the network environment as acquired at startup. 210 Global Identification of Groups Groups can be identified 211 independent of technological instantiations and beyond deployment 212 domains. Taking advantage of the abstract naming, an application 213 is thus enabled to recognize data received via different interface 214 technologies (e.g., IPv4, IPv6, or overlays) as belonging to the same 215 group. This not only increases flexibility (an application may, 216 for instance, combine heterogeneous multipath streams) but also 217 simplifies the design and implementation of gateways and 218 translators. 220 Simplified Service Deployment through Generic Gateways The API 221 allows for an implementation of abstract gateway functions with 222 mappings to specific technologies residing at a system level. 223 Such generic gateways may provide a simple bridging service and 224 facilitate an inter-domain deployment of multicast. 226 Mobility-agnostic Group Communication Group naming and management as 227 foreseen in the API remain independent of locators. Naturally, 228 applications stay unaware of any mobility-related address changes. 229 Handover-initiated re-addressing is delegated to the mapping 230 services at the system level and may be designed to smoothly 231 interact with mobility management solutions provided at the 232 network or transport layer. 234 2. Terminology 236 This document uses the terminology as defined for the multicast 237 protocols [RFC2710], [RFC3376], [RFC3810], [RFC4601], [RFC4604].
In 238 addition, the following terms will be used. 240 Group Address: A Group Address is a routing identifier. It 241 represents a technological specifier and thus reflects the 242 distribution technology in use. Multicast packet forwarding is 243 based on this ID. 245 Group Name: A Group Name is an application identifier that is used 246 by applications to manage communication in a multicast group 247 (e.g., join/leave and send/receive). The Group Name does not 248 predefine any distribution technologies, even if it syntactically 249 corresponds to an address, but represents a logical identifier. 251 Multicast Namespace: A Multicast Namespace is a collection of 252 designators (i.e., names or addresses) for groups that share a 253 common syntax. Typical instances of namespaces are IPv4 or IPv6 254 multicast addresses, overlay group IDs, group names defined on the 255 application layer (e.g., SIP or Email), or some human-readable 256 strings. 258 Multicast Domain: A Multicast Domain hosts nodes and routers of a 259 common, single multicast forwarding technology and is bound to a 260 single namespace. 262 Interface: An Interface is a forwarding instance of a distribution 263 technology on a given node. For example, the IP interface 264 192.168.1.1 at an IPv4 host. 266 Inter-domain Multicast Gateway: An Inter-domain Multicast Gateway 267 (IMG) is an entity that interconnects different multicast domains. 268 Its objective is to forward data between these domains, e.g., 269 between IP layer and overlay multicast. 271 3. Overview 273 3.1. Objectives and Reference Scenarios 275 The default use case addressed in this document targets 276 applications that participate in a group by using some common 277 identifier taken from some common namespace.
This group name is 278 typically learned at runtime from user interaction like the selection 279 of an IPTV channel, from dynamic session negotiations like in the 280 Session Initiation Protocol (SIP), but may as well have been 281 predefined for an application as a common group name. Technology- 282 specific system functions then transparently map the group name to 283 group addresses such that 285 o programmers are enabled to process group names in their programs 286 without the need to consider technological mappings to designated 287 deployments in target domains; 289 o applications are enabled to identify packets that belong to a 290 logically named group, independent of the interface technology 291 used for sending and receiving packets. The latter shall also 292 hold for multicast gateways. 294 This document refers to a reference scenario that covers the 295 following two hybrid deployment cases displayed in Figure 1: 297 1. Multicast domains running the same multicast technology but 298 remaining isolated, possibly only connected by network layer 299 unicast. 301 2. Multicast domains running different multicast technologies, but 302 hosting nodes that are members of the same multicast group.

304        +-------+       +-------+
305        | Member|       | Member|
306        |  Foo  |       |   G   |
307        +-------+       +-------+
308             \             /
309            ***  ***  ***  ***
310           *   **  **  **    *
311          *                   *
312          *    MCast Tec A    *
313          *                   *
314           *   **  **  **    *
315            ***  ***  ***  ***

316    +-------+       +-------+                              |
317    | Member|       | Member|                          +-------+
318    |   G   |       |  Foo  |                          |  IMG  |
319    +-------+       +-------+                          +-------+
320        |               |                                  |
321       ***  ***  ***  ***                 ***  ***  ***  ***
322      *   **  **  **    *                *   **  **  **    *
323     *                   *  +-------+   *                   *
324     *    MCast Tec A    *--|  IMG  |-- *    MCast Tec B    *  +-------+
325     *                   *  +-------+   *                   * -| Member|
326      *   **  **  **    *                *   **  **  **    *  |   G   |
327       ***  ***  ***  ***                 ***  ***  ***  ***  +-------+

329 Figure 1: Reference scenarios for hybrid multicast, interconnecting 330 group members from isolated homogeneous and heterogeneous domains.
332 It is assumed throughout the document that the domain composition, as 333 well as the node attachment to a specific technology, remains unchanged 334 during a multicast session. 336 3.2. Group Communication API & Protocol Stack 338 The group communication API consists of four parts. Two parts 339 combine the essential communication functions, while the remaining 340 two offer optional extensions for enhanced management: 342 Group Management Calls provide the minimal API to instantiate a 343 multicast socket and manage group membership. 345 Send/Receive Calls provide the minimal API to send and receive 346 multicast data in a technology-transparent fashion. 348 Socket Options provide extension calls for an explicit configuration 349 of the multicast socket like setting hop limits or associated 350 interfaces. 352 Service Calls provide extension calls that grant access to internal 353 multicast states of an interface such as the multicast groups 354 under subscription or the multicast forwarding information base. 356 Multicast applications that use the common API require assistance from 357 a group communication stack. This protocol stack serves two needs: 359 o It provides system-level support to transfer the abstract 360 functions of the common API, including namespace support, into 361 protocol operations at interfaces. 363 o It bridges data distribution between different multicast 364 technologies. 366 A general initiation of a multicast communication in this setting 367 proceeds as follows: 369 1. An application opens an abstract multicast socket. 371 2. The application subscribes/leaves/(de)registers to a group using 372 a logical group identifier. 374 3. An intrinsic function of the stack maps the logical group ID 375 (Group Name) to a technical group ID (Group Address). This 376 function may make use of deployment-specific knowledge such as 377 available technologies and group address management in its 378 domain. 380 4.
Packet distribution proceeds to and from one or several 381 multicast-enabled interfaces. 383 The multicast socket describes a group communication channel composed 384 of one or multiple interfaces. A socket may be created without 385 explicit interface association by the application, which leaves the 386 choice of the underlying forwarding technology to the group 387 communication stack. However, an application may also bind the 388 socket to one or multiple dedicated interfaces, which predefines the 389 forwarding technology and the namespace(s) of the Group Address(es). 391 Applications are not required to maintain mapping states for Group 392 Addresses. The group communication stack accounts for the mapping of 393 the Group Name to the Group Address(es) and vice versa. Multicast 394 data passed to the application will be augmented by the corresponding 395 Group Name. Multiple multicast subscriptions can thus be conducted 396 on a single multicast socket without the need for Group Name encoding 397 at the application side. 399 Hosts may support several multicast protocols. The group 400 communication stack discovers available multicast-enabled 401 communication interfaces. It provides a minimal hybrid function that 402 bridges data between different interfaces and multicast domains. 403 Details of service discovery are out-of-scope of this document. 405 The extended multicast functions can be implemented by a middleware 406 as conceptually visualized in Figure 2.

408     *-------*    *-------*
409     | App 1 |    | App 2 |
410     *-------*    *-------*
411         |            |
412     *---------------------*  ---|
413     |      Middleware     |     |
414     *---------------------*     |
415         |            |          |
416     *---------*      |          |
417     | Overlay |      |           \ Group Communication
418     *---------*      |           / Stack
419         |            |          |
420         |            |          |
421     *---------------------*     |
422     |       Underlay      |     |
423     *---------------------*  ---|

425 Figure 2: A middleware for offering uniform access to multicast in 426 underlay and overlay 428 3.3.
Naming and Addressing 430 Applications use Group Names to identify groups. Names can uniquely 431 determine a group in a global communication context and hide 432 the technology used for data distribution from the application. 433 In contrast, multicast forwarding operates on Group Addresses. Even 434 though both identifiers may be identical in symbols, they carry 435 different meanings. They may also belong to different namespaces. 436 The namespace of a Group Address reflects a routing technology, while 437 the namespace of a Group Name represents the context in which the 438 application operates. 440 URIs [RFC3986] are a common way to represent namespace-specific 441 identifiers in applications in the form of an abstract meta-data 442 type. Throughout this document, any kind of Group Name follows a URI 443 notation with the syntax defined in Section 4.2.1. Examples are 444 ip://224.1.2.3:5000 for a canonical IPv4 ASM group, and 445 sip://news@cnn.com for application-specific naming with a service 446 instantiator and default port selection. 448 An implementation of the group communication middleware can provide 449 convenience functions that detect the namespace of a Group Name and 450 use it to optimize service instantiation. In practice, such a 451 library would provide support for high-level data types to the 452 application, similar to the current socket API (e.g., InetAddress in 453 Java). Using this data type could implicitly determine the 454 namespace. Details of automatic namespace identification are out-of- 455 scope of this document. 457 3.4. Mapping 459 All group members subscribe to the same Group Name taken from a 460 common namespace and thereby identify the group in a technology- 461 agnostic way. 463 Group Names require a mapping to addresses prior to service 464 instantiation at an Interface. Similarly, a mapping is needed at 465 gateways to translate between Group Addresses from different 466 namespaces.
Some namespaces facilitate a canonical transformation to 467 default address spaces. For example, ip://224.1.2.3:5000 has an 468 obvious correspondence to 224.1.2.3 in the IPv4 multicast address 469 space. Note that in this example the multicast URI can be completely 470 recovered from any data packet received from this group. 472 However, mapping in general can be more complex and need not be 473 invertible. Mapping functions can be stateless in some contexts, but 474 may require states in others. The application of such functions 475 depends on the cardinality of the namespaces, the structure of 476 address spaces, and possible address collisions. For example, it is 477 not obvious how to map a large identifier space (e.g., IPv6) to a 478 smaller, collision-prone set like IPv4. 480 Two (or more) Multicast Addresses from different namespaces may 481 belong to 483 a. the same logical group (i.e., same Group Name) 485 b. different multicast channels (i.e., different Group Addresses). 487 A mapping can be realized by embedding smaller namespaces in larger 488 ones, or by selecting an arbitrary, unused ID in the target space. The 489 relation between logical and technical ID is maintained by mapping 490 functions, which can be stateless or stateful. The middleware thus 491 queries the mapping service first, and creates a new technical group 492 ID only if there is no identifier available for the namespace in use. 493 The Group Name is associated with one or more Group Addresses, which 494 belong to different namespaces. Depending on the scope of the 495 mapping service, it ensures a consistent use of the technical ID in a 496 local or global domain. 498 4. Common Multicast API 500 4.1. Notation 502 The common multicast API is described below in 503 pseudo syntax. Variables that are passed to function calls are 504 declared by "in"; return values are declared by "out". A list of 505 elements is denoted by <>.
507 The corresponding C signatures are defined in Appendix A. 509 4.2. Abstract Data Types 511 4.2.1. Multicast URI 513 Multicast Names and Multicast Addresses used in this API follow a 514 URI scheme that defines a subset of the generic URI specified in 515 [RFC3986] and is compliant with the guidelines in [RFC4395]. 517 The multicast URI is defined as follows: 519 scheme "://" group "@" instantiation ":" port "/" sec-credentials 521 The parts of the URI are defined as follows: 523 scheme refers to the specification of the assigned identifier 524 [RFC3986] which takes the role of the namespace. 526 group identifies the group uniquely within the namespace given in 527 scheme. 529 instantiation identifies the entity that generates the instance of 530 the group (e.g., a SIP domain or a source in SSM) using the 531 namespace given in scheme. 533 port identifies a specific application at an instance of a group. 535 sec-credentials is used to implement security credentials (e.g., to 536 authorize multicast group access). 538 4.2.2. Interface 540 The interface denotes the layer and instance on which the 541 corresponding call will be effective. In agreement with [RFC3493], we 542 identify an interface by an identifier, which is a positive integer 543 starting at 1. 545 Properties of an interface are stored in the following struct:

547 struct if_prop {
548     unsigned int if_index;   /* 1, 2, ... */
549     char *if_name;           /* "eth0", "eth1:1", "lo", ... */
550     char *if_addr;           /* "1.2.3.4", "abc123", ... */
551     char *if_tech;           /* "ip", "overlay", ... */
552 };

554 The following function retrieves all available interfaces from the 555 system:

557 getInterfaces(out Int num_ifs, out Interface <ifs>);

559 It extends the functions for Interface Identification in [RFC3493] 560 (cf., Section 4) and can be implemented by:

562 struct if_prop *if_prop(void);

564 4.2.3. Membership Events 566 A membership event is triggered by a multicast state change, which is 567 observed by the current node.
It is related to a specific Group Name 568 and may be receiver- or source-oriented.

570 event_type {
571     join_event,
572     leave_event,
573     new_source_event
574 };

576 event {
577     event_type event,
578     Uri group_name
579 };

581 An event will be created by the middleware and passed to applications 582 that are registered for events. 584 4.3. Group Management Calls 586 4.3.1. Create 588 The create call initiates a multicast socket and provides the 589 application programmer with a corresponding handle. If no interfaces 590 are assigned in the call, the default interface will be 591 selected and associated with the socket. The call may return an 592 error code in the case of failures, e.g., due to a non-operational 593 middleware.

595 createMSocket(in Interface <ifs>,
596               out Socket s);

598 The ifs argument denotes a list of interfaces (if_indexes) that will 599 be associated with the multicast socket. This parameter is optional. 601 On success a multicast socket identifier is returned, otherwise NULL. 603 4.3.2. Delete 605 The delete call removes the multicast socket. 607 deleteMSocket(in Socket s, out Int error); 609 The s argument identifies the multicast socket for destruction. 611 On success the out parameter error is 0, otherwise -1. 613 4.3.3. Join 615 The join call initiates a subscription for the given group. 616 Depending on the interfaces that are associated with the socket, this 617 may result in an IGMP/MLD report or an overlay subscription, for 618 example. 620 join(in Socket s, in Uri group_name, out Int error); 622 The s argument identifies the multicast socket. 624 The group_name argument identifies the group. 626 On success the out parameter error is 0, otherwise -1. 628 4.3.4. Leave 630 The leave call results in an unsubscription for the given Group Name. 632 leave(in Socket s, in Uri group_name, out Int error); 634 The s argument identifies the multicast socket. 636 The group_name identifies the group.
638 On success the out parameter error is 0, otherwise -1. 640 4.3.5. Source Register 642 The srcRegister call registers a source for a Group on all active 643 interfaces of the socket s. This call may assist group distribution 644 in some technologies, for example, the creation of sub-overlays. Not 645 all multicast technologies require this call.

647 srcRegister(in Socket s, in Uri group_name,
648             in Int num_ifs, in Interface <ifs>,
649             out Int error);

651 The s argument identifies the multicast socket. 653 The group_name argument identifies the multicast group to which a 654 source intends to send data. 656 The num_ifs argument holds the number of elements in the ifs array. 657 This parameter is optional. 659 The ifs argument points to the list of interface indexes for which 660 the source registration failed. If num_ifs was 0 on output, a NULL 661 pointer is returned. This parameter is optional. 663 If source registration succeeded for all interfaces associated with 664 the socket, the out parameter error is 0, otherwise -1. 666 4.3.6. Source Deregister 668 The srcDeregister call indicates that a source no longer intends to 669 send data to the multicast group. This call may remain without 670 effect in some multicast technologies.

672 srcDeregister(in Socket s, in Uri group_name,
673               in Int num_ifs, in Interface <ifs>,
674               out Int error);

676 The s argument identifies the multicast socket. 678 The group_name argument identifies the multicast group to which a 679 source has stopped sending multicast data. 681 The num_ifs argument holds the number of elements in the ifs array. 683 The ifs argument points to the list of interfaces for which the 684 source deregistration failed. If num_ifs was 0 on output, a NULL 685 pointer is returned. 687 If source deregistration succeeded for all interfaces associated with 688 the socket, the out parameter error is 0, otherwise -1. 690 4.4. Send and Receive Calls 692 4.4.1.
Send 694 The send call passes multicast data for a Multicast Name from the 695 application to the multicast socket. 697 send(in Socket s, in Uri group_name, 698 in Size msg_len, in Msg msg_buf, 699 out Int error); 701 The s argument identifies the multicast socket. 703 The group_name argument identifies the group to which data will be 704 sent. 706 The msg_len argument holds the length of the message to be sent. 708 The msg_buf argument passes the multicast data to the multicast 709 socket. 711 On success the out parameter error is 0, otherwise -1. 713 4.4.2. Receive 715 The receive call passes multicast data and the corresponding Group 716 Name to the application. 718 receive(in Socket s, out Uri group_name, 719 out Size msg_len, out Msg msg_buf, 720 out Int error); 722 The s argument identifies the multicast socket. 724 The group_name argument identifies the multicast group for which data 725 was received. 727 The msg_len argument holds the length of the received message. 729 The msg_buf argument points to the payload of the received multicast 730 data. 732 On success the out parameter error is 0, otherwise -1. 734 4.5. Socket Options 736 The following calls configure an existing multicast socket. 738 4.5.1. Get Interfaces 740 The getInterface call returns an array of all available multicast 741 communication interfaces associated with the multicast socket. 743 getInterfaces(in Socket s, out Int num_ifs, 744 out Interface , out Int error); 746 The s argument identifies the multicast socket. 748 The num_ifs argument holds the number of interfaces in the ifs list. 750 The ifs argument points to an array of interface index identifiers. 752 On success the out parameter error is 0, otherwise -1. 754 4.5.2. Add Interface 756 The addInterface call adds a distribution channel to the socket. 757 This may be an overlay or underlay interface, e.g., IPv6 or DHT. 758 Multiple interfaces of the same technology may be associated with the 759 socket. 
   addInterface(in Socket s, in Interface if,
                out Int error);

The s and if arguments identify a multicast socket and an interface,
respectively.

On success, the value 0 is returned; otherwise -1.

4.5.3.  Delete Interface

The delInterface call removes the interface if from the multicast
socket.

   delInterface(in Socket s, in Interface if,
                out Int error);

The s and if arguments identify a multicast socket and an interface,
respectively.

On success, the out parameter error is 0; otherwise -1.

4.5.4.  Set TTL

The setTTL call configures the maximum hop count a multicast message
is allowed to traverse for the socket.

   setTTL(in Socket s, in Int h,
          in Int num_ifs, in Interface ifs[],
          out Int error);

The s and h arguments identify a multicast socket and the maximum hop
count, respectively.

The num_ifs argument holds the number of interfaces in the ifs list.
This parameter is optional.

The ifs argument points to an array of interface index identifiers.
This parameter is optional.

On success, the out parameter error is 0; otherwise -1.

4.5.5.  Get TTL

The getTTL call returns the maximum hop count a multicast message is
allowed to traverse for the socket.

   getTTL(in Socket s,
          out Int h, out Int error);

The s argument identifies a multicast socket.

The h argument holds the maximum number of hops associated with
socket s.

On success, the out parameter error is 0; otherwise -1.

4.6.  Service Calls

4.6.1.  Group Set

The groupSet call returns all multicast groups registered at a given
interface.  This information can be provided by group management
states or routing protocols.  The return values distinguish between
sender and listener states.
   struct GroupSet {
       uri group_name;   /* registered multicast group */
       int type;         /* 0 = listener state, 1 = sender state,
                            2 = sender & listener state */
   };

   groupSet(in Interface if, out Int num_groups,
            out GroupSet groupSet[], out Int error);

The if argument identifies the interface for which states are
maintained.

The num_groups argument holds the number of groups in the groupSet
array.

The groupSet argument points to a list of group states.

On success, the out parameter error is 0; otherwise -1.

4.6.2.  Neighbor Set

The neighborSet function returns the set of neighboring nodes for a
given interface as seen by the multicast routing protocol.

   neighborSet(in Interface if, out Int num_neighbors,
               out Uri neighbor_address[], out Int error);

The if argument identifies the interface for which neighbors are
inquired.

The num_neighbors argument holds the number of addresses in the
neighbor_address array.

The neighbor_address argument points to a list of neighboring nodes
on a successful return.

On success, the out parameter error is 0; otherwise -1.

4.6.3.  Children Set

The childrenSet function returns the set of child nodes that receive
multicast data from a specified interface for a given group.  For a
common multicast router, this call retrieves the multicast forwarding
information base per interface.

   childrenSet(in Interface if, in Uri group_name,
               out Int num_children, out Uri child_address[],
               out Int error);

The if argument identifies the interface for which children are
inquired.

The group_name argument defines the multicast group for which
distribution is considered.

The num_children argument holds the number of addresses in the
child_address array.

The child_address argument points to a list of child nodes on a
successful return.

On success, the out parameter error is 0; otherwise -1.

4.6.4.  Parent Set

The parentSet function returns the set of neighbors from which the
current node receives multicast data at a given interface for the
specified group.

   parentSet(in Interface if, in Uri group_name,
             out Int num_parents, out Uri parent_address[],
             out Int error);

The if argument identifies the interface for which parents are
inquired.

The group_name argument defines the multicast group for which
distribution is considered.

The num_parents argument holds the number of addresses in the
parent_address array.

The parent_address argument points to a list of parent nodes on a
successful return.

On success, the out parameter error is 0; otherwise -1.

4.6.5.  Designated Host

The designatedHost function inquires whether or not the host has the
role of a designated forwarder or querier.  Such information is
provided by almost all multicast protocols to prevent packet
duplication if multiple multicast instances serve the same subnet.

   designatedHost(in Interface if, in Uri group_name,
                  out Int return);

The if argument identifies the interface for which designated
forwarding is inquired.

The group_name argument specifies the group for which the host may
attain the role of designated forwarder.

The function returns 1 if the host is a designated forwarder or
querier, and 0 otherwise.  The return value -1 indicates an error.

4.6.6.  Enable Membership Events

The enableEvents function registers an application at the middleware
to be informed about group changes.  Such changes result from new
receiver subscriptions or leaves, as well as from the observation of
source changes.  The group service may call other service calls to
obtain additional information.

   enableEvents();

After calling this function, the middleware starts to pass membership
events to the application.
Each event includes an event type identifier and a Group Name (cf.
Section 4.2.3).

4.6.7.  Disable Membership Events

The disableEvents function deactivates the information about group
state changes.

   disableEvents();

On success, the middleware will no longer pass membership events to
the application.

5.  Functional Details

In this section, we describe specific functions of the API and the
associated system middleware in detail.

5.1.  Namespaces

Namespace identifiers in URIs are placed in the scheme element and
characterize the syntax and semantics of the group identifier.  They
enable the use of convenience functions and high-level data types
while processing URIs.  When used in names, they may facilitate a
default mapping and a recovery of names from addresses.  When used in
addresses, they characterize the address type.

Compliant with the URI concept, namespace schemes can be added.
Examples of schemes and functions currently foreseen include:

IP      This namespace comprises regular IP node naming, i.e., DNS
        names and addresses taken from any version of the Internet
        Protocol.  A processor dealing with the IP namespace is
        required to determine the syntax (DNS name, IP address
        version) of the group expression.

OLM     This namespace covers address strings immediately valid in an
        overlay network.  A processor handling those strings need not
        be aware of the address generation mechanism, but may pass
        these values directly to a corresponding overlay.

SIP     The SIP namespace is an example of an application-layer
        scheme that bears inherent group functions (conferencing).
        SIP conference URIs may be directly exchanged and interpreted
        at the application layer, and mapped to group addresses on
        the system level to generate a corresponding multicast group.
Opaque  This namespace transparently carries strings without further
        syntactical information, meaning, or associated resolution
        mechanism.

5.2.  Mapping

Group Name to Group Address, SSM/ASM TODO

6.  IANA Considerations

This document makes no request of IANA.

7.  Security Considerations

This draft introduces neither additional messages nor novel protocol
operations.

8.  Acknowledgements

We would like to thank the HAMcast team at the HAW Hamburg (Dominik
Charousset, Gabriel Hege, Fabian Holler, Alexander Knauf, Sebastian
Meiling, and Sebastian Woelke) for many fruitful discussions and for
their continuous critical feedback while implementing the API and a
hybrid multicast middleware.

This work is partially supported by the German Federal Ministry of
Education and Research within the HAMcast project, which is part of
G-Lab.

9.  Informative References

[I-D.ietf-mboned-auto-multicast]
           Thaler, D., Talwar, M., Aggarwal, A., Vicisano, L., and T.
           Pusateri, "Automatic IP Multicast Without Explicit Tunnels
           (AMT)", draft-ietf-mboned-auto-multicast-10 (work in
           progress), March 2010.

[RFC1075]  Waitzman, D., Partridge, C., and S. Deering, "Distance
           Vector Multicast Routing Protocol", RFC 1075,
           November 1988.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2710]  Deering, S., Fenner, W., and B. Haberman, "Multicast
           Listener Discovery (MLD) for IPv6", RFC 2710,
           October 1999.

[RFC3376]  Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A.
           Thyagarajan, "Internet Group Management Protocol,
           Version 3", RFC 3376, October 2002.

[RFC3493]  Gilligan, R., Thomson, S., Bound, J., McCann, J., and W.
           Stevens, "Basic Socket Interface Extensions for IPv6",
           RFC 3493, February 2003.

[RFC3678]  Thaler, D., Fenner, B., and B. Quinn, "Socket Interface
           Extensions for Multicast Source Filters", RFC 3678,
           January 2004.

[RFC3810]  Vida, R. and L. Costa, "Multicast Listener Discovery
           Version 2 (MLDv2) for IPv6", RFC 3810, June 2004.

[RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
           Resource Identifier (URI): Generic Syntax", STD 66,
           RFC 3986, January 2005.

[RFC4395]  Hansen, T., Hardie, T., and L. Masinter, "Guidelines and
           Registration Procedures for New URI Schemes", BCP 35,
           RFC 4395, February 2006.

[RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
           "Protocol Independent Multicast - Sparse Mode (PIM-SM):
           Protocol Specification (Revised)", RFC 4601, August 2006.

[RFC4604]  Holbrook, H., Cain, B., and B. Haberman, "Using Internet
           Group Management Protocol Version 3 (IGMPv3) and Multicast
           Listener Discovery Protocol Version 2 (MLDv2) for Source-
           Specific Multicast", RFC 4604, August 2006.

[RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano,
           "Bidirectional Protocol Independent Multicast (BIDIR-
           PIM)", RFC 5015, October 2007.

Appendix A.  C Signatures

This section describes the C signatures of the common multicast API,
which is defined in Section 4.
   /* The parameter named "if" in Section 4 appears as "iface" here,
      because "if" is a reserved word in C. */

   int createMSocket(uint32_t *ifaces);

   int deleteMSocket(int s);

   int join(int s, const uri group_name);

   int leave(int s, const uri group_name);

   int srcRegister(int s, const uri group_name,
                   uint_t num_ifs, uint_t *ifs);

   int srcDeregister(int s, const uri group_name,
                     uint_t num_ifs, uint_t *ifs);

   int send(int s, const uri group_name,
            size_t msg_len, const void *msg_buf);

   int receive(int s, uri *group_name,
               size_t *msg_len, msg *msg_buf);

   int getInterfaces(int s, uint_t *num_ifs, uint_t *ifs);

   int addInterface(int s, uint32_t iface);

   int delInterface(int s, uint32_t iface);

   int setTTL(int s, int h, uint_t num_ifs, uint_t *ifs);

   int getTTL(int s, int *h);

   struct groupSet {
       uri group_name;   /* registered multicast group */
       int type;         /* 0 = listener state, 1 = sender state,
                            2 = sender & listener state */
   };

   int groupSet(uint32_t iface, uint_t *num_groups,
                struct groupSet *groupSet);

   int neighborSet(uint32_t iface, uint_t *num_neighbors,
                   const uri *neighbor_address);

   int childrenSet(uint32_t iface, const uri group_name,
                   uint_t *num_children, const uri *child_address);

   int parentSet(uint32_t iface, const uri group_name,
                 uint_t *num_parents, const uri *parent_address);

   int designatedHost(uint32_t iface, const uri group_name);

Appendix B.  Practical Example of the API

-- Application above middleware:

   //Initialize multicast socket;
   //the middleware selects all available interfaces
   MulticastSocket m = new MulticastSocket();

   m.join(URI("ip://233.252.0.3:5000"));
   m.join(URI("ip://[FF02:0:0:0:0:0:0:3]:6000"));
   m.join(URI("sip://news@cnn.com"));

-- Middleware:

   join(URI mcAddress) {
       //Select interfaces in use
       for all this.interfaces {
           switch (interface.type) {
               case "ipv6":
                   //... map logical ID to routing address
                   Inet6Address rtAddressIPv6 = new Inet6Address();
                   mapNametoAddress(mcAddress, rtAddressIPv6);
                   interface.join(rtAddressIPv6);
                   break;
               case "ipv4":
                   //... map logical ID to routing address
                   Inet4Address rtAddressIPv4 = new Inet4Address();
                   mapNametoAddress(mcAddress, rtAddressIPv4);
                   interface.join(rtAddressIPv4);
                   break;
               case "sip-session":
                   //... map logical ID to routing address
                   SIPAddress rtAddressSIP = new SIPAddress();
                   mapNametoAddress(mcAddress, rtAddressSIP);
                   interface.join(rtAddressSIP);
                   break;
               case "dht":
                   //... map logical ID to routing address
                   DHTAddress rtAddressDHT = new DHTAddress();
                   mapNametoAddress(mcAddress, rtAddressDHT);
                   interface.join(rtAddressDHT);
                   break;
               //...
           }
       }
   }

Appendix C.  Deployment Use Cases for Hybrid Multicast

This section describes the application of the defined API to
implement an IMG.

C.1.  DVMRP

The following procedure describes a transparent mapping of a DVMRP-
based any-source multicast service to another many-to-many multicast
technology.

An arbitrary DVMRP [RFC1075] router will not be informed about new
receivers, but will learn about new sources immediately.  The concept
of DVMRP does not provide any central multicast instance.  Thus, the
IMG can be placed anywhere inside the multicast region, but it
requires DVMRP neighbor connectivity.  The group communication stack
used by the IMG is enhanced by a DVMRP implementation.  New sources
in the underlay will be advertised based on the DVMRP flooding
mechanism and received by the IMG.  Based on this, the event
"new_source_event" is created and passed to the application.  The
relay agent initiates a corresponding join in the native network and
forwards the received source data towards the overlay routing
protocol.  Depending on the group states, the data will be
distributed to overlay peers.
DVMRP establishes source-specific multicast trees.  Therefore, a
graft message is only visible to DVMRP routers on the path from the
new receiver subnet to the source, but in general not to an IMG.  To
overcome this problem, data of multicast senders will be flooded in
the overlay as well as in the underlay.  Hence, an IMG has to
initiate an all-group join to the overlay using the namespace
extension of the API.  Each IMG is initially required to forward the
received overlay data to the underlay, independent of native
multicast receivers.  Subsequent prunes may limit unwanted data
distribution thereafter.

C.2.  PIM-SM

The following procedure describes a transparent mapping of a PIM-SM-
based any-source multicast service to another many-to-many multicast
technology.

Protocol Independent Multicast - Sparse Mode (PIM-SM) [RFC4601]
establishes rendezvous points (RPs).  These entities receive listener
and source subscriptions of a domain.  To be continuously updated, an
IMG has to be co-located with an RP.  Whenever PIM register messages
are received, the IMG must internally signal a new multicast source
using the event "new_source_event".  Subsequently, the IMG joins the
group, and a shared tree between the RP and the sources will be
established, which may change to a source-specific tree after a
sufficient amount of data has been delivered.  Source traffic will be
forwarded to the RP based on the IMG join, even if there are no
further receivers in the native multicast domain.  Designated routers
of a PIM domain send receiver subscriptions towards the PIM-SM RP.
The reception of such messages initiates the event "join_event" at
the IMG, which initiates a join towards the overlay routing protocol.
Overlay multicast data arriving at the IMG will then transparently be
forwarded in the underlay network and distributed through the RP
instance.

C.3.  PIM-SSM

The following procedure describes a transparent mapping of a PIM-SSM-
based source-specific multicast service to another one-to-many
multicast technology.

PIM Source-Specific Multicast (PIM-SSM) is defined as part of PIM-SM
and admits source-specific joins (S,G) according to the source-
specific host group model [RFC4604].  A multicast distribution tree
can be established without the assistance of a rendezvous point.

Sources are not advertised within a PIM-SSM domain.  Consequently, an
IMG cannot anticipate the local join inside a sender domain and
deliver the multicast data to the overlay instance a priori.  If an
IMG of a receiver domain initiates a group subscription via the
overlay routing protocol, relaying multicast data fails, as data are
not available at the overlay instance.  The IMG instance of the
receiver domain thus has to locate the IMG instance of the source
domain to trigger the corresponding join.  In the sense of PIM-SSM,
the signaling should not be flooded in underlay and overlay.

One solution could be to intercept the subscription at both source
and receiver sites: To monitor multicast receiver subscriptions
("join_event" or "leave_event") in the underlay, the IMG is placed on
the path towards the source, e.g., at a domain border router.  This
router intercepts join messages and extracts the unicast source
address S, initializing an IMG-specific join to S via regular
unicast.  Multicast data arriving at the IMG of the sender domain can
then be distributed via the overlay.  Discovering the IMG of a
multicast sender domain may be implemented analogously to AMT
[I-D.ietf-mboned-auto-multicast] by anycast.
Consequently, the source address S of the group (S,G) should be built
based on an anycast prefix.  The corresponding IMG anycast address
for a source domain is then derived from the prefix of S.

C.4.  BIDIR-PIM

The following procedure describes a transparent mapping of a BIDIR-
PIM-based any-source multicast service to another many-to-many
multicast technology.

Bidirectional PIM (BIDIR-PIM) [RFC5015] is a variant of PIM-SM.  In
contrast to PIM-SM, the protocol pre-establishes bidirectional shared
trees per group, connecting multicast sources and receivers.  The
rendezvous points are virtualized in BIDIR-PIM as an address used to
identify on-tree directions (up and down).  However, routers with the
best link towards the (virtualized) rendezvous point address are
selected as designated forwarders for a link-local domain and
represent the actual distribution tree.  The IMG is to be placed at
the RP link, where the rendezvous point address is located.  As
source data in either case will be transmitted to the rendezvous
point address, the BIDIR-PIM instance of the IMG receives the data
and can internally signal new senders towards the stack via the
"new_source_event".  The first receiver subscription for a new group
within a BIDIR-PIM domain needs to be transmitted to the RP to
establish the first branching point.  Using the "join_event", an IMG
will thereby be informed about group requests from its domain, which
are then delegated to the overlay.

Appendix D.  Change Log

The following changes have been made from
draft-irtf-samrg-common-api-00:

1.  Incorrect pseudocode syntax fixed.

2.  Minor editorial improvements.

The following changes have been made from
draft-waehlisch-sam-common-api-06:

1.  No changes; draft adopted as RG document (previous
    draft-waehlisch-sam-common-api-06, now
    draft-irtf-samrg-common-api-00).

The following changes have been made from
draft-waehlisch-sam-common-api-05:

1.  Description of the Common API using pseudo syntax added.

2.  C signatures of the Common API moved to the appendix.

3.  updateSender() and updateListener() calls replaced by events.

4.  Function destroyMSocket renamed as deleteMSocket.

The following changes have been made from
draft-waehlisch-sam-common-api-04:

1.  updateSender() added.

The following changes have been made from
draft-waehlisch-sam-common-api-03:

1.  Use cases added for illustration.

2.  Service calls added for inquiring on the multicast distribution
    system.

3.  Namespace examples added.

4.  Clarifications and editorial improvements.

The following changes have been made from
draft-waehlisch-sam-common-api-02:

1.  Renamed init() to createMSocket().

2.  Added calls srcRegister()/srcDeregister().

3.  Rephrased API calls in C style.

4.  Cleaned up code in "Practical Example of the API".

5.  Partial reorganization of the document.

6.  Many editorial improvements.

The following changes have been made from
draft-waehlisch-sam-common-api-01:

1.  Document restructured to clarify the realm of the document
    overview and specific contributions such as naming and
    addressing.

2.  A clear separation of naming and addressing was drawn.  Multicast
    URIs have been introduced.

3.  Clarified and adapted the API calls.

4.  Introduced Socket Option calls.

5.  Deployment use cases moved to an appendix.

6.  Simple programming example added.

7.  Many editorial improvements.

Authors' Addresses

   Matthias Waehlisch
   link-lab & FU Berlin
   Hoenower Str. 35
   Berlin  10318
   Germany

   Email: mw@link-lab.net
   URI:   http://www.inf.fu-berlin.de/~waehl

   Thomas C. Schmidt
   HAW Hamburg
   Berliner Tor 7
   Hamburg  20099
   Germany

   Email: schmidt@informatik.haw-hamburg.de
   URI:   http://inet.cpt.haw-hamburg.de/members/schmidt

   Stig Venaas
   cisco Systems
   Tasman Drive
   San Jose, CA  95134
   USA

   Email: stig@cisco.com