2 SAM Research Group M. Waehlisch 3 Internet-Draft link-lab & FU Berlin 4 Intended status: Experimental T. Schmidt 5 Expires: April 13, 2014 HAW Hamburg 6 S. Venaas 7 cisco Systems 8 October 10, 2013 10 A Common API for Transparent Hybrid Multicast 11 draft-irtf-samrg-common-api-10 13 Abstract 15 Group communication services exist in a large variety of flavors and 16 technical implementations at different protocol layers. Multicast 17 data distribution is most efficiently performed on the lowest 18 available layer, but a heterogeneous deployment status of multicast 19 technologies throughout the Internet requires an adaptive service 20 binding at runtime.
Today, it is difficult to write an application 21 that runs everywhere and at the same time makes use of the most 22 efficient multicast service available in the network. Facing 23 robustness requirements, developers are frequently forced to use a 24 stable, upper layer protocol provided by the application itself. 25 This document describes a common multicast API that is suitable for 26 transparent communication in underlay and overlay, and grants access 27 to the different multicast flavors. It proposes an abstract naming 28 by multicast URIs and discusses mapping mechanisms between different 29 namespaces and distribution technologies. Additionally, this 30 document describes the application of this API for building gateways 31 that interconnect current multicast domains throughout the Internet. 32 It reports on an implementation of the programming interface, 33 including a service middleware. This document is a product of the 34 Scalable Adaptive Multicast (SAM) Research Group. 36 Status of This Memo 38 This Internet-Draft is submitted in full conformance with the 39 provisions of BCP 78 and BCP 79. 41 Internet-Drafts are working documents of the Internet Engineering 42 Task Force (IETF). Note that other groups may also distribute 43 working documents as Internet-Drafts. The list of current Internet- 44 Drafts is at http://datatracker.ietf.org/drafts/current/. 46 Internet-Drafts are draft documents valid for a maximum of six months 47 and may be updated, replaced, or obsoleted by other documents at any 48 time. It is inappropriate to use Internet-Drafts as reference 49 material or to cite them other than as "work in progress." 51 This Internet-Draft will expire on April 13, 2014. 53 Copyright Notice 55 Copyright (c) 2013 IETF Trust and the persons identified as the 56 document authors. All rights reserved. 58 This document is subject to BCP 78 and the IETF Trust's Legal 59 Provisions Relating to IETF Documents 60 (http://trustee.ietf.org/license-info) in effect on the date of 61 publication of this document. Please review these documents 62 carefully, as they describe your rights and restrictions with respect 63 to this document. Code Components extracted from this document must 64 include Simplified BSD License text as described in Section 4.e of 65 the Trust Legal Provisions and are provided without warranty as 66 described in the Simplified BSD License. 68 Table of Contents 70 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 71 1.1. Use Cases for the Common API . . . . . . . . . . . . . . 5 72 1.2. Illustrative Examples . . . . . . . . . . . . . . . . . . 6 73 1.2.1. Support of Multiple Underlying Technologies . . . . . 6 74 1.2.2. Support of Multi-Resolution Multicast . . . . . . . . 8 75 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 9 76 3. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 9 77 3.1. Objectives and Reference Scenarios . . . . . . . . . . . 10 78 3.2. Group Communication API and Protocol Stack . . . . . . . 11 79 3.3. Naming and Addressing . . . . . . . . . . . . . . . . . . 13 80 3.4. Namespaces . . . . . . . . . . . . . . . . . . . . . . . 13 81 3.5. Name-to-Address Mapping . . . . . . . . . . . . . . . . . 14 82 3.5.1. Canonical Mapping . . . . . . . . . . . . . . . . . . 14 83 3.5.2. Mapping at End Points . . . . . . . . . . . . . . . . 15 84 3.5.3. Mapping at Inter-domain Multicast Gateways . . . . . 15 85 3.6. A Note on Explicit Multicast (XCAST) . . . . . . . . . . 15 86 3.7. MTU Handling . . . . . . . . . . . . . 
. . . . . . . . . 15 87 4. Common Multicast API . . . . . . . . . . . . . . . . . . . . 16 88 4.1. Notation . . . . . . . . . . . . . . . . . . . . . . . . 16 89 4.2. URI Scheme Definition . . . . . . . . . . . . . . . . . . 17 90 4.2.1. Syntax . . . . . . . . . . . . . . . . . . . . . . . 17 91 4.2.2. Semantic . . . . . . . . . . . . . . . . . . . . . . 17 92 4.2.3. Generic Namespaces . . . . . . . . . . . . . . . . . 18 93 4.2.4. Application-centric Namespaces . . . . . . . . . . . 19 94 4.2.5. Future Namespaces . . . . . . . . . . . . . . . . . . 19 95 4.3. Additional Abstract Data Types . . . . . . . . . . . . . 19 96 4.3.1. Interface . . . . . . . . . . . . . . . . . . . . . . 19 97 4.3.2. Membership Events . . . . . . . . . . . . . . . . . . 20 98 4.4. Group Management Calls . . . . . . . . . . . . . . . . . 20 99 4.4.1. Create . . . . . . . . . . . . . . . . . . . . . . . 20 100 4.4.2. Delete . . . . . . . . . . . . . . . . . . . . . . . 21 101 4.4.3. Join . . . . . . . . . . . . . . . . . . . . . . . . 21 102 4.4.4. Leave . . . . . . . . . . . . . . . . . . . . . . . . 21 103 4.4.5. Source Register . . . . . . . . . . . . . . . . . . . 22 104 4.4.6. Source Deregister . . . . . . . . . . . . . . . . . . 22 105 4.5. Send and Receive Calls . . . . . . . . . . . . . . . . . 22 106 4.5.1. Send . . . . . . . . . . . . . . . . . . . . . . . . 23 107 4.5.2. Receive . . . . . . . . . . . . . . . . . . . . . . . 23 108 4.6. Socket Options . . . . . . . . . . . . . . . . . . . . . 24 109 4.6.1. Get Interfaces . . . . . . . . . . . . . . . . . . . 24 110 4.6.2. Add Interface . . . . . . . . . . . . . . . . . . . . 24 111 4.6.3. Delete Interface . . . . . . . . . . . . . . . . . . 24 112 4.6.4. Set TTL . . . . . . . . . . . . . . . . . . . . . . . 25 113 4.6.5. Get TTL . . . . . . . . . . . . . . . . . . . . . . . 25 114 4.6.6. Atomic Message Size . . . . . . . . . . . . . . . . . 26 115 4.7. Service Calls . . . . . . . . . . . . . . . . . . . . . . 26 116 4.7.1. Group Set . . . . . . . . . . . . . . . . . . . . . . 26 117 4.7.2. Neighbor Set . . . . . . . . . . . . . . . . . . . . 26 118 4.7.3. Children Set . . . . . . . . . . . . . . . . . . . . 27 119 4.7.4. Parent Set . . . . . . . . . . . . . . . . . . . . . 27 120 4.7.5. Designated Host . . . . . . . . . . . . . . . . . . . 28 121 4.7.6. Enable Membership Events . . . . . . . . . . . . . . 28 122 4.7.7. Disable Membership Events . . . . . . . . . . . . . . 28 123 4.7.8. Maximum Message Size . . . . . . . . . . . . . . . . 29 124 5. Implementation . . . . . . . . . . . . . . . . . . . . . . . 29 125 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 29 126 7. Security Considerations . . . . . . . . . . . . . . . . . . . 30 127 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 30 128 9. References . . . . . . . . . . . . . . . . . . . . . . . . . 30 129 9.1. Normative References . . . . . . . . . . . . . . . . . . 31 130 9.2. Informative References . . . . . . . . . . . . . . . . . 32 131 Appendix A. C Signatures . . . . . . . . . . . . . . . . . . . . 33 132 Appendix B. Use Case for the API . . . . . . . . . . . . . . . . 35 133 Appendix C. Deployment Use Cases for Hybrid Multicast . . . . . 37 134 C.1. DVMRP . . . . . . . . . . . . . . . . . . . . . . . . . . 37 135 C.2. PIM-SM . . . . . . . . . . . . . . . . . . . . . . . . . 37 136 C.3. PIM-SSM . . . . . . . . . . . . . . . . . . . . . . . . . 38 137 C.4. BIDIR-PIM . . . . . . . . . . . . . . . . . . . . . . . . 39 138 Appendix D. Change Log . . . . . . . . . 
. . . . . . . . . . . . 39 139 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 42 141 1. Introduction 143 Currently, group application programmers need to make the choice of 144 the distribution technology that the application will require at 145 runtime. There is no common communication interface that abstracts 146 multicast transmission and subscriptions from the deployment state at 147 runtime, nor has the use of DNS for group addresses been established. 148 The standard multicast socket options [RFC3493] [RFC3678] are bound 149 to an IP version by not distinguishing between naming and addressing 150 of multicast identifiers. Group communication, however, is commonly 151 implemented in different flavors such as any source (ASM) vs. source 152 specific multicast (SSM), on different layers (e.g., IP vs. 153 application layer multicast), and may be based on different 154 technologies on the same tier as with IPv4 vs. IPv6. The objective 155 of this document is to provide programmers with universal access to 156 group services. 158 Multicast application development should be decoupled from the 159 technological deployment throughout the infrastructure. It requires 160 a common multicast API that offers calls to transmit and receive 161 multicast data independent of the supporting layer and the underlying 162 technological details. For inter-technology transmissions, a 163 consistent view on multicast states is needed as well. This document 164 describes an abstract group communication API and core functions 165 necessary for transparent operations. Specific implementation 166 guidelines with respect to operating systems or programming languages 167 are out of scope of this document. 169 In contrast to the standard multicast socket interface, the API 170 introduced in this document abstracts naming from addressing. Using 171 a multicast address in the current socket API predefines the 172 corresponding routing layer. In this specification, the multicast 173 name used for joining a group denotes an application layer data 174 stream that is identified by a multicast URI, independent of its 175 binding to a specific distribution technology. Such a group name can 176 be mapped to variable routing identifiers. 178 The aim of this common API is twofold: 180 o Enable any application programmer to implement group-oriented data 181 communication independent of the underlying delivery mechanisms. 182 In particular, allow for a late binding of group applications to 183 multicast technologies that makes applications efficient, but 184 robust with respect to deployment aspects. 186 o Allow for a flexible namespace support in group addressing and 187 thereby separate naming and addressing or routing schemes from 188 the application design. This abstraction not only decouples 189 programs from specific aspects of underlying protocols, but it may 190 open application design to extend to specifically flavored group 191 services. 193 Multicast technologies may be of various peer-to-peer kinds, IPv4 or 194 IPv6 network layer multicast, or implemented by some other 195 application service. Corresponding namespaces may be IP addresses or 196 DNS naming, overlay hashes, or other application layer group 197 identifiers like SIP URIs, but they can also be names 198 independently defined by the applications. Common namespaces are 199 introduced later in this document but follow an open concept suitable 200 for further extensions.
202 This document also discusses mapping mechanisms between different 203 namespaces and forwarding technologies and proposes expressions of 204 defaults for an intended binding. Additionally, the multicast API 205 provides internal interfaces to access current multicast states at 206 the host. Multiple multicast protocols may run in parallel on a 207 single host. These protocols may interact to provide a gateway 208 function that bridges data between different domains. The usage of 209 this API at gateways operating between current multicast instances 210 throughout the Internet is described as well. Finally, a report on 211 an implementation of the programming interface, including a service 212 middleware, is presented. 214 This document represents the consensus of the SAM Research Group. It 215 has been reviewed by the Research Group members active in the 216 specific area of work. In addition, this document has been 217 comprehensively reviewed by people who are not "in" the Research 218 Group but are experts in the area. 220 1.1. Use Cases for the Common API 222 The following generic use cases can be identified that require an 223 abstract common API for multicast services: 225 Application Programming Independent of Technologies: Application 226 programmers are provided with group primitives that remain 227 independent of multicast technologies and their deployment in 228 target domains. They are thus enabled to develop programs once 229 that run in every deployment scenario. The use of Group Names in 230 the form of abstract meta data types allows applications to remain 231 namespace-agnostic in the sense that the resolution of namespaces 232 and name-to-address mappings may be delegated to a system service 233 at runtime. Thereby, the complexity is minimized as developers 234 need not care about how data is distributed in groups, while 235 the system service can take advantage of extended information about the 236 network environment as acquired at startup. 238 Global Identification of Groups: Groups can be identified 239 independent of technological instantiations and beyond deployment 240 domains. Taking advantage of the abstract naming, an application 241 is thus enabled to recognize that data received via different Interface 242 technologies (e.g., IPv4, IPv6, and overlays) belongs to the 243 same group. This not only increases flexibility - an application 244 may for instance combine heterogeneous multipath streams - but 245 also simplifies the design and implementation of gateways. 247 Uniform Access to Multicast Flavors: The URI naming scheme uniformly 248 supports different flavors of group communication such as any 249 source and source specific multicast, and selective broadcast, 250 independent of their service instantiation. The traditional SSM 251 model, for instance, can be supported in manifold ways, either by 252 directly mapping the multicast URI (i.e., "group@instantiation") 253 to an (S,G) state on the IP layer, or by first resolving S for a 254 subsequent group address query, or by transferring this process to 255 any of the various source specific overlay schemes, or by 256 delegating to a plain replication server. The application 257 programmer can invoke any of these underlying mechanisms with the 258 same line of code. 260 Simplified Service Deployment through Generic Gateways: The common 261 multicast API allows for an implementation of abstract gateway 262 functions with mappings to specific technologies residing at a 263 system level.
Generic gateways may provide a simple bridging 264 service and facilitate an inter-domain deployment of multicast. 266 Mobility-agnostic Group Communication: Group naming and management 267 as foreseen in the common multicast API remain independent of 268 locators. Naturally, applications stay unaware of any mobility- 269 related address changes. Handover-initiated re-addressing is 270 delegated to the mapping services at the system level and may be 271 designed to smoothly interact with mobility management solutions 272 provided at the network or transport layer (see [RFC5757] for 273 mobility-related aspects). 275 1.2. Illustrative Examples 277 1.2.1. Support of Multiple Underlying Technologies 279 At a very high level, the common multicast API provides the 280 application programmer with one single interface to manage multicast 281 content independent of the technology underneath. Consider the 282 simple example in Figure 1: a multicast source S is 283 connected via IPv4 and IPv6. It distributes one flow of multicast 284 content (e.g., a movie). Receivers are connected via IPv4/v6 and 285 overlay multicast, respectively. 287 +-------+ +-------+ +-------+ 288 | S | | R1 | | R3 | 289 +-------+ +-------+ +-------+ 290 v6| v4| |v4 |OLM 291 | | / | 292 | ***| *** ***/ ** *** /*** *** *** 293 \* |* ** /** * * /* ** ** * 294 *\ \_______/_______*__v4__+-------+ * / * 295 *\ IPv4/v6 * | R2 |__OLM__ *_/ Overlay Mcast * 296 * \_________________*__v6__+-------+ * * 297 * ** ** ** * * ** ** ** * 298 *** *** *** *** *** *** *** *** 300 Figure 1: Common scenario: Source S sends the same multicast content 301 via different technologies 303 Using the current BSD socket API, the application programmer needs to 304 decide on the IP technologies at coding time. Additional 305 distribution techniques, such as overlay multicast, must be 306 individually integrated into the application. For each technology, 307 the application programmer needs to create a separate socket and 308 initiate a dedicated join or send. As the current socket API does 309 not distinguish between group name and group address, the content 310 will be delivered multiple times to the same receiver (cf., R2). 311 Whenever the source distributes content via a technology that is not 312 supported by the receivers or their Internet Service Provider (cf., 313 R3), a gateway is required. Gateway functions rely on a coherent 314 view of the multicast group states. 316 The common multicast API simplifies programming of multicast 317 applications as it abstracts content distribution from specific 318 technologies. In addition to calls that implement receiving and 319 sending of multicast data, the API provides service calls to grant 320 access to internal multicast states at the host. The API description 321 in this document defines a minimal set of programming interfaces to 322 the system components at the host to operate group communication. It 323 is left to specific implementations to provide additional convenience 324 functions for programmers.
326 The implementation of content distribution for the example shown in 327 Figure 1 may then look like: 329 //Initialize multicast socket 330 MulticastSocket m = new MulticastSocket(); 331 //Associate all available interfaces 332 m.addInterface(getInterfaces()); 333 //Subscribe to multicast group 334 m.join(URI("ham:opaque:news@example.com")); 335 //Send to multicast group 336 m.send(URI("ham:opaque:news@example.com"),message); 338 Send/receive example using the common multicast API 340 The gateway function for R2 can be implemented by service calls that 341 look like: 343 //Initialize multicast socket 344 MulticastSocket m = new MulticastSocket(); 345 //Check (a) host is designated multicast node for this interface 346 // (b) receivers exist 347 for all this.getInterfaces() { 348 if(designatedHost(this.interface) && 349 childrenSet(this.interface, 350 URI("ham:opaque:news@example.com")) != NULL) { 351 m.addInterface(this.interface); 352 } 353 } 354 while(true) { 355 m.send(URI("ham:opaque:news@example.com"),message); 356 } 358 Gateway example using the common multicast API 360 1.2.2. Support of Multi-Resolution Multicast 362 Multi-resolution multicast adjusts the multicast stream to consider 363 heterogeneous end devices. The multicast data (e.g., available at 364 different compression levels) is typically announced using multiple 365 multicast addresses that are unrelated to each other. Using the 366 common API, multi-resolution multicast can be implemented 367 transparently by an operator with the help of Name-to-Address 368 mapping, or by systematic naming in a subscriber-centric perspective. 370 Operator-Centric: An operator deploys a domain-specific mapping. In 371 this case, any multicast receiver (e.g., mobile or DSL user) 372 subscribes to the same multicast name, which will be resolved 373 locally to different multicast addresses. Each 374 Group Address then describes a different level of data quality. 376 Subscriber-Centric: In a subscriber-centric example, the multicast 377 receiver chooses the quality in advance, based on a predefined 378 naming syntax. Consider a layered video stream "blockbuster" 379 available at different qualities Q_i, each of which consists of 380 the base layer plus the sum of EL_j, j <= i enhancement layers. 381 Each individual layer may then be accessible by a name 382 "EL_j.Q_i.blockbuster", j <= i, while a specific quality 383 aggregates the corresponding layers to "Q_i.blockbuster", and the 384 full-size movie may be just called "blockbuster".
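Following the notation of the examples in Section 1.2.1, a subscriber-centric selection of quality levels might then be sketched as follows. This is an illustrative fragment only; the use of the opaque namespace and the layer names are hypothetical and merely follow the naming syntax introduced above:

   //Initialize multicast socket
   MulticastSocket m = new MulticastSocket();
   //Subscribe to quality Q_2, i.e., the base layer plus the
   //enhancement layers EL_1 and EL_2
   m.join(URI("ham:opaque:Q_2.blockbuster"));
   //Alternatively, subscribe to a single enhancement layer only
   m.join(URI("ham:opaque:EL_1.Q_2.blockbuster"));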
386 2. Terminology 388 This document uses the terminology as defined for the multicast 389 protocols [RFC2710], [RFC3376], [RFC3810], [RFC4601], [RFC4604]. In 390 addition, the following terms will be used. 392 Group Address: A Group Address is a routing identifier. It 393 represents a technological specifier and thus reflects the 394 distribution technology in use. Multicast packet forwarding is 395 based on this address. 397 Group Name: A Group Name is an application identifier used by 398 applications to manage communication in a multicast group (e.g., 399 join/leave and send/receive). The Group Name does not predefine 400 any distribution technologies. Even if it syntactically 401 corresponds to an address, it solely represents a logical 402 identifier. 404 Multicast Namespace: A Multicast Namespace is a collection of 405 designators (i.e., names or addresses) for groups that share a 406 common syntax. Typical instances of namespaces are IPv4 or IPv6 407 multicast addresses, overlay group IDs, group names defined on the 408 application layer (e.g., SIP or email), or some human readable 409 string. 411 Interface: An Interface is a forwarding instance of a distribution 412 technology on a given node. For example, the IP Interface 413 192.0.2.1 at an IPv4 host, or an overlay routing interface. 415 Multicast Domain: A Multicast Domain hosts nodes and routers of a 416 common, single multicast forwarding technology and is bound to a 417 single namespace. 419 Inter-domain Multicast Gateway (IMG): An Inter-domain Multicast 420 Gateway (IMG) is an entity that interconnects different Multicast 421 Domains. Its objective is to forward data between these domains, 422 e.g., between an IP layer and overlay multicast. 424 3. Overview 425 3.1. Objectives and Reference Scenarios 427 The default use case addressed in this document targets 428 applications that participate in a group by using some common 429 identifier taken from some common namespace. This Group Name is 430 typically learned at runtime from user interaction like the selection 431 of an IPTV channel, from dynamic session negotiations like in the 432 Session Initiation Protocol (SIP) [RFC3261] or P2PSIP 433 [I-D.ietf-p2psip-sip], but may as well have been predefined for an 434 application as a common Group Name. Technology-specific system 435 functions then transparently map the Group Name to Group Addresses 436 such that 438 o programmers are enabled to process group names in their programs 439 without the need to consider technological mappings that relate to 440 designated deployments in target domains; 442 o applications are enabled to identify packets that belong to a 443 logically named group, independent of the Interface technology 444 used for sending and receiving packets. The latter shall also 445 hold for multicast gateways. 447 This document considers two reference scenarios that cover the 448 following hybrid deployment cases displayed in Figure 2: 450 1. Multicast Domains running the same multicast technology but 451 remaining isolated, possibly only connected by network layer 452 unicast. 454 2. Multicast Domains running different multicast technologies but 455 hosting nodes that are members of the same multicast group. 457 +-------+ +-------+ 458 | Member| | Member| 459 | Foo | | G | 460 +-------+ +-------+ 461 \ / 462 *** *** *** *** 463 * ** ** ** * 464 * * 465 * MCast Tec A * 466 * * 467 * ** ** ** * 468 *** *** *** *** 469 +-------+ +-------+ | 470 | Member| | Member| +-------+ 471 | G | | Foo | | IMG | 472 +-------+ +-------+ +-------+ 473 | | | 474 *** *** *** *** *** *** *** *** 475 * ** ** ** * * ** ** ** * 476 * * +-------+ * * 477 * MCast Tec A * --| IMG |-- * MCast Tec B * +------+ 478 * * +-------+ * * -|Member| 479 * ** ** ** * * ** ** ** * | G | 480 *** *** *** *** *** *** *** *** +------+ 482 Figure 2: Reference scenarios for hybrid multicast, interconnecting 483 group members from isolated homogeneous and heterogeneous domains. 485 3.2. Group Communication API and Protocol Stack 487 The group communication API abstracts the socket concept and consists 488 of four parts.
Two parts combine the essential communication 489 functions, while the remaining two offer optional extensions for an 490 enhanced monitoring and management: 492 Group Management Calls provide the minimal API to instantiate an 493 abstract multicast socket and to manage group membership; 495 Send/Receive Calls provide the minimal API to send and receive 496 multicast data in a technology-transparent fashion; 498 Socket Options provide extension calls for an explicit configuration 499 of the multicast socket such as setting hop limits or associated 500 Interfaces; 502 Service Calls provide extension calls that grant access to internal 503 multicast states of an Interface such as the multicast groups 504 under subscription or the multicast forwarding information base. 506 Multicast applications that use the common API require assistance from 507 a group communication stack. This protocol stack serves two needs: 509 o It provides system-level support to transfer the abstract 510 functions of the common API, including namespace support, into 511 protocol operations at Interfaces; 513 o It provides group communication services across different 514 multicast technologies at the local host. 516 A general initiation of a multicast communication in this setting 517 proceeds as follows: 519 1. An application opens an abstract multicast socket; 520 2. The application subscribes/leaves/(de)registers to a group using 521 a Group Name; 523 3. An intrinsic function of the stack maps the logical group ID 524 (Group Name) to a technical group ID (Group Address). This 525 function may make use of deployment-specific knowledge such as 526 available technologies and group address management in its 527 domain; 529 4. Packet distribution proceeds to and from one or several 530 multicast-enabled Interfaces. 532 The abstract multicast socket describes a group communication channel 533 composed of one or multiple Interfaces. A socket may be created 534 without explicit Interface association by the application, which 535 leaves the choice of the underlying forwarding technology to the 536 group communication stack. However, an application may also bind the 537 socket to one or multiple dedicated Interfaces, which predefines the 538 forwarding technology and the Multicast Namespace(s) of the Group 539 Address(es). 541 Applications are not required to maintain mapping states for Group 542 Addresses. The group communication stack accounts for the mapping of 543 the Group Name to the Group Address(es) and vice versa. Multicast 544 data passed to the application will be augmented by the corresponding 545 Group Name. Multiple multicast subscriptions thus can be conducted 546 on a single multicast socket without the need for Group Name encoding 547 at the application side. 549 Hosts may support several multicast protocols. The group 550 communication stack discovers available multicast-enabled Interfaces. 551 It provides a minimal hybrid function that bridges data between 552 different Interfaces and Multicast Domains. Details of service 553 discovery are out of scope of this document.
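The Interface binding alternatives described above might be sketched as follows, again in the notation of the examples in Section 1.2; the Interface handles ifIPv6 and ifOverlay as well as the group name are hypothetical:

   //Create a socket without Interface association: the stack
   //selects appropriate forwarding technologies (late binding)
   MulticastSocket m = new MulticastSocket();
   m.join(URI("ham:opaque:news@example.com"));

   //Alternatively, bind the socket to dedicated Interfaces, which
   //predefines the forwarding technology and Multicast Namespace(s)
   MulticastSocket n = new MulticastSocket();
   n.addInterface(ifIPv6);
   n.addInterface(ifOverlay);
   n.join(URI("ham:opaque:news@example.com"));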
555 The extended multicast functions can be implemented by a middleware 556 as conceptually presented in Figure 3. 558 *-------* *-------* 559 | App 1 | | App 2 | 560 *-------* *-------* 561 | | 562 *---------------------* ---| 563 | Middleware | | 564 *---------------------* | 565 | | | 566 *---------* | | 567 | Overlay | | \ Group Communication 568 *---------* | / Stack 569 | | | 570 | | | 571 *---------------------* | 572 | Underlay | | 573 *---------------------* ---| 575 Figure 3: Architecture of a group communication stack with a 576 middleware offering uniform access to multicast in underlay and 577 overlay 579 3.3. Naming and Addressing 581 Applications use Group Names to identify groups. Names can uniquely 582 determine a group in a global communication context and hide 583 technological deployment for data distribution from the application. 584 In contrast, multicast forwarding operates on Group Addresses. Even 585 though both identifiers may be identical in symbols, they carry 586 different meanings. They may also belong to different Multicast 587 Namespaces. The Namespace of a Group Address reflects a routing 588 technology, while the Namespace of a Group Name represents the 589 context in which the application operates. 591 URIs [RFC3986] are a common way to represent Namespace-specific 592 identifiers in applications in the form of an abstract meta-data 593 type. Throughout this document, all Group Names follow a URI 594 notation with the syntax defined in Section 4.2. Examples are 595 ham:ip:233.252.0.1:5000 for a canonical IPv4 ASM group at UDP port 596 5000, and ham:sip:news@example.com for an application-specific naming with 597 service instantiator and default port selection. 599 An implementation of the group communication stack can provide 600 convenience functions that detect the Namespace of a Group Name or 601 further optimize service instantiation. In practice, such a library 602 would provide support for high-level data types to the application, 603 similar to some versions of the current socket API (e.g., InetAddress 604 in Java). Using this data type could implicitly determine the 605 Namespace. Details of automatic Namespace identification or service 606 handling are out of scope of this document. 608 3.4. Namespaces 609 Namespace identifiers in URIs are placed in the scheme element and 610 characterize syntax and semantic of the group identifier. They 611 enable the use of convenience functions and high-level data types 612 while processing URIs. When used in names, they may indicate an 613 application context, or facilitate a default mapping and a recovery 614 of names from addresses. When used in addresses, they characterize the 615 address type. 617 Compliant with the URI concept, namespace-schemes can be added. 618 Examples of schemes are generic (see Section 4.2.3) or inherited from 619 applications (see Section 4.2.4).
For a given Group Name, identify an Address that is appropriate 637 for a local distribution instance. 639 b. For a given Group Address, invert the mapping to recover the 640 Group Name. 642 In general, mappings can be complex and do not need to be invertible. 643 A mapping can be realized by embedding smaller namespaces into 644 larger, or by selecting an arbitrary, unused ID in a smaller target 645 namespace. For example, it is not obvious how to map a large 646 identifier space (e.g., IPv6) to a smaller, collision-prone set like 647 IPv4 (see [I-D.venaas-behave-v4v6mc-framework], 648 [I-D.venaas-behave-mcast46], [RFC6219]). Mapping functions can be 649 stateless in some contexts, but may require states in others. The 650 application of such functions depends on the cardinality of the 651 namespaces, the structure of address spaces, and possible address 652 collisions. However, some namespaces facilitate a canonical, 653 invertible transformation to default address spaces. 655 3.5.1. Canonical Mapping 656 Some Multicast Namespaces defined in Section 3.4 can express a 657 canonical default mapping. For example, ham:ip:233.252.0.1:5000 658 indicates the correspondence to 233.252.0.1 in the default IPv4 659 multicast address space at port 5000. This default mapping is bound 660 to a technology and may not always be applicable, e.g., in the case 661 of address collisions. Note that under canonical mapping, the 662 multicast URI can be completely recovered from any data message 663 received within this group. 665 3.5.2. Mapping at End Points 667 Multicast listeners or senders require a Name-to-Address conversion 668 for all technologies they actively run in a group. Even though a 669 mapping applies to the local Multicast Domain only, end points may 670 need to learn a valid Group Address from neighboring nodes, e.g., 671 from a gateway in the collision-prone IPv4 domain. Once set, an end 672 point will always be aware of the Name-to-Address correspondence and 673 thus can autonomously invert the mapping. 675 3.5.3. Mapping at Inter-domain Multicast Gateways 677 Multicast data may arrive at an IMG in one technology, requesting the 678 gateway to re-address packets for another distribution system. At 679 initial arrival, the IMG may not have explicit knowledge of the 680 corresponding Multicast Group Name. To perform a consistent mapping, 681 the group name needs to be acquired. It may have been distributed at 682 source registration, or may have been learned from a neighboring 683 node, details of which are beyond the scope of this document. 685 3.6. A Note on Explicit Multicast (XCAST) 687 In Explicit Multicast (XCAST) [RFC5058], the multicast source 688 explicitly pre-defines the receivers. From a conceptual perspective, 689 XCAST is an additional distribution technology (i.e., a new 690 technology-specific interface) for this API. XCAST requires 691 aggregated knowledge of receivers that is available at the origin of 692 the distribution tree. The instantiation part of the Group Name may 693 refer to such a management instance and tree root, which can be the 694 source or some co-located processor. 696 An implementation of XCAST then requires a topology-dependent mapping 697 of the Group Name to the set of subscribers. Defining details of 698 this multi-destination mapping is out of scope of this document. 700 3.7. MTU Handling 702 This API considers a multi-technology scenario, in which different 703 technologies may have different Maximum Transmission Unit (MTU) 704 sizes.
Even if the MTU size between two hosts has been determined, 705 it may change over time either initiated by the network (e.g., path 706 changes) or by end hosts (e.g., interface change due to mobility). 708 The design of this API is based on the objective of robust 709 communication and easy application development. The MTU handling and 710 the placement of fragmentation is thus guided by the following 711 observations. 713 Application Application programmers need a simple way to transmit 714 packets in a technology-agnostic fashion. For this, it is 715 convenient at the time of coding to rely on a transparent maximum 716 amount of data that can be sent in one message from a socket. A 717 regular program flow should not be distracted by querying and 718 changing MTU sizes. Technically, the configuration of the maximum 719 message size used by the application programmer may change and 720 disrupt communication, when (a) interfaces are added or 721 excluded, or (b) the path MTU changes during transmission and thus 722 disables the corresponding interfaces. 724 Middleware A middleware situated between application and technology 725 interfaces ensures a general ability of packet handling, which 726 relieves the application programmer of implementing fragmentation. A 727 uniform maximum message size shall be guaranteed by the group 728 communication stack (e.g., middleware), which is not allowed to 729 change during runtime. The latter would conflict with a 730 technology-agnostic development. 732 Technology Interfaces Fragmentation requirements depend on the 733 technology in use. Hence, the (technology-bound) interfaces need 734 to cope with MTU sizes that may vary among interfaces and along 735 different paths. 737 The concept of this API also aims at guaranteeing a maximum message 738 size for the application programmer, and at handling fragmentation 739 at the interface level, if needed. Nevertheless, the application 740 programmer should be able to determine the technology-specific atomic 741 message size to optimize data distribution or for other reasons. 743 The uniform maximum message size should take realistic values (e.g., 744 following IP clients) to enable smooth and efficient services. A 745 detailed selection scheme of MTU values is out of scope of this 746 document. 748 4. Common Multicast API 750 4.1. Notation 751 The following description of the common multicast API is expressed in 752 pseudo syntax. Variables that are passed to function calls are 753 declared by "in", return values are declared by "out". A list of 754 elements is denoted by "<>". The pseudo syntax assumes that lists 755 include an attribute which represents the number of elements. 757 The corresponding C signatures are defined in Appendix A. 759 4.2. URI Scheme Definition 761 Multicast Names and Multicast Addresses used in this API are 762 represented by a URI scheme that is specified in the following 763 subsections. A corresponding ham: URI denotes a multicast channel, 764 and may be dereferenced to retrieve data published to that channel. 766 4.2.1. Syntax 768 The syntax of the multicast URI is described by Augmented Backus-Naur 769 Form (ABNF) [RFC5234] and is defined as follows: 771 ham-URI = ham-scheme ":" namespace ":" group [ "@" instantiation ] 772 [ ":" port ] [ "/" sec-credentials ] 774 ham-scheme = "ham" ; hybrid adaptive multicast 775 namespace = ALPHA *( ALPHA / DIGIT / "+" / "-" / "."
) 776 group = "*" / 1*unreserved ; unreserved from [RFC3986] 777 instantiation = 1*unreserved ; unreserved from [RFC3986] 778 port = 1*DIGIT 779 sec-credentials = alg ";" val 780 alg = 1*unreserved ; unreserved from [RFC3986] 781 val = 1*unreserved ; unreserved from [RFC3986] 783 Percent-encoding is applied to distinguish between reserved and 784 unreserved assignments of the same character in the same ham-URI 785 component (cf., [RFC3986]). 787 4.2.2. Semantic 789 The semantics of the different parts of the URI are defined as follows: 791 ham-scheme refers to the specification of the assigned identifier 792 "ham". 794 namespace takes the role of the Multicast Namespace. It defines the 795 syntax of the group and instantiation part of the ham-URI. A 796 basic syntax for these elements is specified in Section 4.2.1. 797 The namespace may further restrict the syntax of designators. 799 Example namespaces are described in Section 4.2.3 and 800 Section 4.2.4. 802 group uniquely identifies the group within the Multicast Namespace 803 given in namespace. The literal "*" describes all members of 804 the Multicast Group. 806 instantiation identifies the entity that generates the instance of 807 the group (e.g., a SIP domain or a source in SSM, a dedicated 808 routing entity or a named processor that accounts for the group 809 communication), using syntax and semantic as defined by the 810 Namespace. This parameter is optional. Note that ambiguities 811 (e.g., identical node addresses in multiple overlay instances) can 812 be distinguished by ports. 814 port identifies a specific application at an instance of a group. 815 This parameter is optional. 817 sec-credentials is used to implement security mechanisms (e.g., to 818 authorize Multicast Group access or authenticate multicast 819 operations). This parameter is optional. The alg-part describes 820 the security algorithm in use. The val-part describes the actual 821 value for authentication, authorization, and accounting. Note 822 that security credentials may carry a distinct technical meaning 823 w.r.t. AAA schemes and may differ between group members. Hence 824 the sec-credentials are not considered part of the Group Name. 826 4.2.3. Generic Namespaces 828 IP This namespace is comprised of regular IP node naming, i.e., DNS 829 names and addresses taken from any version of the Internet 830 Protocol. The syntax of the group and instantiation follows the 831 "host" definition in [RFC3986], Section 3.2.2. A processor 832 dealing with the IP namespace is required to determine the syntax 833 (DNS name, IP address, version) of the group and instantiation 834 expression. 836 SHA-2 This namespace carries address strings compliant to SHA-2 hash 837 digests. The syntax of the group and instantiation follows the 838 "val" definition in [RFC6920], Section 3. A processor handling 839 those strings is required to determine the length of the 840 expressions and passes appropriate values directly to a 841 corresponding overlay. 843 Opaque This namespace transparently carries strings without further 844 syntactical information, meanings, or associated resolution 845 mechanism. The corresponding syntax for the group and 846 instantiation part of the ham-URI is defined in Section 4.2.1.
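For illustration, the following non-normative examples sketch syntactically valid ham-URIs for these generic namespaces; all addresses are documentation values, and the digest is a placeholder:

   ham:ip:233.252.0.1:5000              ; IPv4 ASM group at UDP port 5000
   ham:ip:233.252.0.1@203.0.113.1:5000  ; IPv4 SSM channel with source
   ham:sha-2:<hash-digest>              ; overlay group ID as SHA-2 value
   ham:opaque:blockbuster               ; application-defined string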
848 4.2.4. Application-centric Namespaces 850 SIP The SIP namespace is an example of an application layer scheme 851 that bears inherent group functions (conferencing). SIP 852 conference URIs may be directly exchanged and interpreted at the 853 application, and mapped to group addresses on the system level to 854 generate a corresponding multicast group. The syntax of the group 855 and instantiation is described by the "userinfo" in [RFC3261], 856 Section 25.1. 858 RELOAD This namespace covers address strings immediately valid in a 859 RELOAD [I-D.ietf-p2psip-base] overlay network. A processor 860 handling those strings may pass these values directly to a 861 corresponding overlay that may manage multicast distribution 862 according to [I-D.irtf-samrg-sam-baseline-protocol]. 864 4.2.5. Future Namespaces 866 The concept of the Common Multicast API allows for any namespace that 867 complies with the superset syntax defined in Section 4.2.1. This 868 document specifies a basic set of Multicast namespaces in 869 Section 4.2.3 and Section 4.2.4. If additional namespaces are needed 870 in the future, a registry for these namespaces should be created. 871 All namespaces defined in this document should then also be assigned 872 to the registry. 874 4.3. Additional Abstract Data Types 876 4.3.1. Interface 878 The Interface denotes the layer and instance on which the 879 corresponding call takes effect. In agreement with [RFC3493], we 880 identify an Interface by an identifier, which is a positive integer 881 starting at 1. 883 Properties of an Interface are stored in the following data 884 structure: 886 struct ifProp { 887 UnsignedInt if_index; /* 1, 2, ... */ 888 String *ifName; /* "eth0", "eth1:1", "lo", ... */ 889 String *ifAddr; /* "1.2.3.4", "abc123", ... */ 890 String *ifTech; /* "ip", "overlay", ... */ 891 }; 893 The following function retrieves all available Interfaces from the 894 system: 896 getInterfaces(out Interface<ifs>); 898 It extends the functions for Interface Identification defined in 899 Section 4 of [RFC3493] and can be implemented by: 901 struct ifProp(out IfProp<ifProps>); 903 4.3.2. Membership Events 905 A membership event is triggered by a multicast state change, which is 906 observed by the current node. It is related to a specific Group Name 907 and may be receiver or source oriented. 909 eventType { 910 joinEvent; 911 leaveEvent; 912 newSourceEvent; 913 }; 915 event { 916 EventType event; 917 Uri groupName; 918 Interface if; 919 }; 921 An event will be created by the group communication stack and passed 922 to applications that have registered for events.
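A sketch of how an application might consume such events, in the style of the examples in Section 1.2; the callback registration (onEvent) is hypothetical, while enableEvents is the service call specified in Section 4.7.6:

   //Ask the group communication stack to deliver membership events
   enableEvents();
   //Hypothetical callback invoked by the stack for each event
   onEvent(Event e) {
     if(e.event == joinEvent) {
       //a new receiver subscribed to e.groupName via e.if
     }
     if(e.event == newSourceEvent) {
       //a new source started sending to e.groupName
     }
   }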
924 4.4. Group Management Calls 926 4.4.1. Create 928 The create call initiates a multicast socket and provides the 929 application programmer with a corresponding handle. If no Interfaces 930 are assigned in the call, the default Interface will be 931 selected and associated with the socket. The call returns an error 932 code in the case of failures, e.g., due to a non-operational 933 communication middleware. 935 createMSocket(in Interface<ifs>, 936 out Socket s); 938 The ifs argument denotes a list of Interfaces (if_indexes) that will 939 be associated with the multicast socket. This parameter is optional. 941 On success, a multicast socket identifier is returned, otherwise 942 NULL. 944 4.4.2. Delete 946 The delete call removes the multicast socket. 948 deleteMSocket(in Socket s, out Int error); 950 The s argument identifies the multicast socket for destruction. 952 On success, the out parameter error is 0, otherwise -1. 954 4.4.3. Join 956 The join call initiates a subscription for the given Group Name. 957 Depending on the Interfaces that are associated with the socket, this 958 may result in an IGMP/MLD report or overlay subscription, for 959 example. 961 join(in Socket s, in Uri groupName, out Int error); 963 The s argument identifies the multicast socket. 965 The groupName argument identifies the group. 967 On success, the out parameter error is 0, otherwise -1. 969 4.4.4. Leave 971 The leave call results in an unsubscription for the given Group Name. 973 leave(in Socket s, in Uri groupName, out Int error); 975 The s argument identifies the multicast socket. 977 The groupName identifies the group. 979 On success, the out parameter error is 0, otherwise -1. 981 4.4.5. Source Register 983 The srcRegister call registers a source for a Group on all active 984 Interfaces of the socket s. This call may assist group distribution 985 in some technologies, for example, by the creation of sub-overlays, or it may 986 facilitate a name-to-address mapping. Likewise, it may remain 987 without effect in some multicast technologies. 989 srcRegister(in Socket s, in Uri groupName, 990 out Interface<ifs>, out Int error); 992 The s argument identifies the multicast socket. 994 The groupName argument identifies the multicast group to which a 995 source intends to send data. 997 The ifs argument points to the list of Interface indexes for which 998 the source registration failed. A NULL pointer is returned if the 999 list is empty. This parameter is optional. 1001 If source registration succeeded for all Interfaces associated with 1002 the socket, the out parameter error is 0, otherwise -1. 1004 4.4.6. Source Deregister 1006 The srcDeregister call indicates that a source no longer intends to 1007 send data to the multicast group. This call may remain without 1008 effect in some multicast technologies. 1010 srcDeregister(in Socket s, in Uri groupName, 1011 out Interface<ifs>, out Int error); 1013 The s argument identifies the multicast socket. 1015 The groupName argument identifies the multicast group to which a 1016 source has stopped sending multicast data. 1018 The ifs argument points to the list of Interfaces for which the 1019 source deregistration failed. A NULL pointer is returned if the 1020 list is empty. 1022 If source deregistration succeeded for all Interfaces associated with 1023 the socket, the out parameter error is 0, otherwise -1.
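Taken together, the group management calls support a simple lifecycle, sketched here in the object-style notation of the examples in Section 1.2 (error handling omitted; the group name is illustrative):

   //Create a multicast socket; without Interface arguments, the
   //default Interface is selected (cf., Section 4.4.1)
   MulticastSocket m = new MulticastSocket();
   //Become a receiver of the group
   m.join(URI("ham:opaque:news@example.com"));
   //Additionally announce this node as a source
   m.srcRegister(URI("ham:opaque:news@example.com"));
   //... send and receive multicast data (see Section 4.5) ...
   //Tear down in reverse order
   m.srcDeregister(URI("ham:opaque:news@example.com"));
   m.leave(URI("ham:opaque:news@example.com"));
   //Finally remove the socket
   deleteMSocket(m);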
1025 4.5. Send and Receive Calls 1026 4.5.1. Send 1028 The send call passes multicast data destined for a Multicast Name 1029 from the application to the multicast socket. 1031 It is worth noting that it is the choice of the programmer to send 1032 data via one socket per group or to use a single socket for multiple 1033 groups. 1035 send(in Socket s, in Uri groupName, 1036 in Size msgLen, in Msg msgBuf, 1037 out Int error); 1039 The s argument identifies the multicast socket. 1041 The groupName argument identifies the group to which data will be 1042 sent. 1044 The msgLen argument holds the length of the message to be sent. 1046 The msgBuf argument passes the multicast data to the multicast 1047 socket. 1049 On success, the out parameter error is 0, otherwise -1. A message 1050 that is too long is indicated by an implementation-specific error 1051 code (e.g., EMSGSIZE in C). 1053 4.5.2. Receive 1055 The receive call passes multicast data and the corresponding Group 1056 Name to the application. This may come in a blocking or non-blocking 1057 variant. 1059 It is worth noting that it is the choice of the programmer to receive 1060 data via one socket per group or to use a single socket for multiple 1061 groups. 1063 receive(in Socket s, out Uri groupName, 1064 out Size msgLen, out Msg msgBuf, 1065 out Int error); 1067 The s argument identifies the multicast socket. 1069 The groupName argument identifies the multicast group for which data 1070 was received. 1072 The msgLen argument holds the length of the received message. 1074 The msgBuf argument points to the payload of the received multicast 1075 data. 1077 On success, the out parameter error is 0, otherwise -1. A message 1078 that is too long is indicated by an implementation-specific error 1079 handling (e.g., EMSGSIZE). 1081 4.6. Socket Options 1083 The following calls configure an existing multicast socket. 1085 4.6.1. Get Interfaces 1087 The getInterfaces call returns an array of all available multicast 1088 communication Interfaces associated with the multicast socket. 1090 getInterfaces(in Socket s, 1091 out Interface<ifs>, out Int error); 1093 The s argument identifies the multicast socket. 1095 The ifs argument points to an array of Interface index identifiers. 1097 On success, the out parameter error is 0, otherwise -1. 1099 4.6.2. Add Interface 1101 The addInterface call adds a distribution channel to the socket. 1102 This may be an overlay or underlay Interface, e.g., IPv6 or DHT. 1103 Multiple Interfaces of the same technology may be associated with the 1104 socket. 1106 addInterface(in Socket s, in Interface if, 1107 out Int error); 1109 The s and if arguments identify a multicast socket and Interface, 1110 respectively. 1112 On success, the value 0 is returned, otherwise -1. 1114 4.6.3. Delete Interface 1116 The delInterface call removes the Interface if from the multicast 1117 socket. 1119 delInterface(in Socket s, in Interface if, 1120 out Int error); 1122 The s and if arguments identify a multicast socket and Interface, 1123 respectively. 1125 On success, the out parameter error is 0, otherwise -1. 1127 4.6.4. Set TTL 1129 The setTTL call configures the maximum hop count that a multicast 1130 message from this socket is allowed to traverse. 1132 setTTL(in Socket s, in Int h, 1133 in Interface<ifs>, 1134 out Int error); 1136 The s and h arguments identify a multicast socket and the maximum hop 1137 count, respectively. 1139 The ifs argument points to an array of Interface index identifiers. 1140 This parameter is optional. 1142 On success, the out parameter error is 0, otherwise -1. 1144 4.6.5. Get TTL 1146 The getTTL call returns the maximum hop count a multicast message is 1147 allowed to traverse for the socket. 1149 getTTL(in Socket s, 1150 out Int h, out Int error); 1152 The s argument identifies a multicast socket. 1154 The h argument holds the maximum number of hops associated with 1155 socket s. 1157 On success, the out parameter error is 0, otherwise -1. 1159 4.6.6. Atomic Message Size 1161 The getAtomicMsgSize function returns the maximum message size that 1162 an application is allowed to transmit per socket at once without 1163 fragmentation. This value depends on the interfaces associated with 1164 the socket in use and thus may change during runtime. 1166 getAtomicMsgSize(in Socket s, 1167 out Int return); 1169 On success, the function returns a positive value of appropriate 1170 message size, otherwise -1.
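The socket options might be exercised as in the following sketch, again in the notation of Section 1.2; the Interface handle ifOverlay and the hop limit are illustrative:

   //Inspect the Interfaces currently associated with the socket
   ifs = m.getInterfaces();
   //Associate an additional distribution channel
   m.addInterface(ifOverlay);
   //Limit distribution to at most 10 hops on all Interfaces
   m.setTTL(10);
   //Query the largest message size currently deliverable
   //without fragmentation (may change at runtime)
   size = m.getAtomicMsgSize();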
1172 4.7. Service Calls 1174 4.7.1. Group Set 1176 The groupSet call returns all multicast groups registered at a given 1177 Interface. This information can be provided by group management 1178 states or routing protocols. The return values distinguish between 1179 sender and listener states. 1181 struct GroupSet { 1182 Uri groupName; /* registered multicast group */ 1183 Uint type; /* 0 = listener state, 1 = sender state, 1184 2 = sender & listener state */ 1185 } 1187 groupSet(in Interface if, 1188 out GroupSet<groupSet>, out Int error); 1190 The if argument identifies the Interface for which states are 1191 maintained. 1193 The groupSet argument points to a list of group states. 1195 On success, the out parameter error is 0, otherwise -1. 1197 4.7.2. Neighbor Set 1199 The neighborSet function returns the set of neighboring nodes for a 1200 given Interface as seen by the multicast routing protocol. 1202 neighborSet(in Interface if, 1203 out Uri<neighborsAddresses>, out Int error); 1205 The if argument identifies the Interface for which neighbors are 1206 inquired. 1208 The neighborsAddresses argument points to a list of neighboring nodes 1209 on a successful return. 1211 On success, the out parameter error is 0, otherwise -1. 1213 4.7.3. Children Set 1215 The childrenSet function returns the set of child nodes that receive 1216 multicast data from a specified Interface for a given group. For a 1217 common multicast router, this call retrieves the multicast forwarding 1218 information base per Interface. 1220 childrenSet(in Interface if, in Uri groupName, 1221 out Uri<childrenAddresses>, out Int error); 1223 The if argument identifies the Interface for which children are 1224 inquired. 1226 The groupName argument defines the multicast group for which 1227 distribution is considered. 1229 The childrenAddresses argument points to a list of neighboring nodes 1230 on a successful return. 1232 On success, the out parameter error is 0, otherwise -1. 1234 4.7.4. Parent Set 1236 The parentSet function returns the set of neighbors from which the 1237 current node receives multicast data at a given Interface for the 1238 specified group. 1240 parentSet(in Interface if, in Uri groupName, 1241 out Uri<parentsAddresses>, out Int error); 1243 The if argument identifies the Interface for which parents are 1244 inquired. 1246 The groupName argument defines the multicast group for which 1247 distribution is considered. 1249 The parentsAddresses argument points to a list of neighboring nodes 1250 on a successful return. 1252 On success, the out parameter error is 0, otherwise -1. 1254 4.7.5. Designated Host 1256 The designatedHost function inquires whether this host has the role 1257 of a designated forwarder or querier. Such information 1258 is provided by almost all multicast protocols to prevent packet 1259 duplication, if multiple multicast instances serve on the same 1260 subnet. 1262 designatedHost(in Interface if, in Uri groupName, 1263 out Int return); 1265 The if argument identifies the Interface for which designated 1266 forwarding is inquired. 1268 The groupName argument specifies the group for which the host may 1269 attain the role of designated forwarder. 1271 The function returns 1 if the host is a designated forwarder or 1272 querier, otherwise 0. The return value -1 indicates an error.
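An IMG or monitoring application might combine these service calls as in the following sketch, modeled on the gateway example of Section 1.2.1 (the group name is illustrative):

   //Inspect the local view of the distribution tree per Interface
   for all this.getInterfaces() {
     groups = groupSet(this.interface);
     parents = parentSet(this.interface,
                URI("ham:opaque:news@example.com"));
     children = childrenSet(this.interface,
                URI("ham:opaque:news@example.com"));
     isForwarder = designatedHost(this.interface,
                URI("ham:opaque:news@example.com"));
   }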
4.7.6.  Enable Membership Events

The enableEvents function registers an application at the group
communication stack to receive information about group changes.
State changes are the result of new receiver subscriptions or leaves
as well as of source changes.  Upon receiving an event, the group
service may obtain additional information from further service
calls.

     enableEvents();

After this call, the stack starts to pass membership events to the
application.  Each event includes an event type identifier and a
Group Name (cf., Section 4.3.2).

The multicast protocol does not need to support membership tracking
to enable this feature.  This function can also be implemented in
the middleware layer.

4.7.7.  Disable Membership Events

The disableEvents function deactivates the reporting of group state
changes.

     disableEvents();

On success, the stack no longer passes membership events to the
application.
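A handler for membership events can be sketched in C with the
callback signature from Appendix A.  The event type constants below
are hypothetical; the API only specifies that each event carries an
integer type identifier, an interface, and a Group Name.

     #include <stdio.h>
     #include <stdint.h>

     #include "hamcast.h"  /* assumed header for Appendix A */

     /* hypothetical event type identifiers */
     enum { JOIN_EVENT, LEAVE_EVENT, NEW_SOURCE_EVENT };

     /* matches the MembershipEventCallback typedef */
     static void on_membership_event(int type, uint32_t iface,
                                     const char *group_uri)
     {
         printf("event %d on interface %u for group %s\n",
                type, (unsigned) iface, group_uri);
     }

     int main(void)
     {
         registerEventCallback(on_membership_event);
         enableEvents();   /* start event delivery (Section 4.7.6) */
         /* ... application main loop ... */
         disableEvents();  /* stop event delivery (Section 4.7.7) */
         return 0;
     }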
4.7.8.  Maximum Message Size

The getMaxMsgSize function returns the maximum message size that an
application is allowed to transmit per socket at once.  This value
is statically guaranteed by the group communication stack.

     getMaxMsgSize(out Int return);

On success, the function returns a positive value for the allowed
message size, otherwise -1.

5.  Implementation

A reference implementation of the Common API for Transparent Hybrid
Multicast is available with the HAMcast stack [hamcast-dev] [GC2010]
[LCN2012].  This open-source software supports the multicast API
(C++ and Java libraries) for group application development, the
middleware as a user-space system service, and several multicast-
technology modules.  The middleware is implemented in C++.

This API has been verified and adjusted based on the real-world
experience gathered in the HAMcast project and by additional users
of the stack.

6.  IANA Considerations

This document specifies the "ham" URI scheme and requests IANA
registration in the "Permanent URI Schemes" registry according to
[RFC4395].

   URI scheme name:  ham

   Status:  permanent

   URI scheme syntax:  See Section 4.2.1.

   URI scheme semantics:  See Section 4.2.2.

   Encoding considerations:  See Section 4.2.1.

   Applications/protocols that use this URI scheme name:  The scheme
      is used by multicast applications to access multicast content.

   Interoperability considerations:  None

   Security considerations:  See Section 7.

   Contact:  Matthias Waehlisch, mw@link-lab.net

   Author/Change controller:  IRTF

   References:  As specified in this document.

7.  Security Considerations

This document introduces neither additional messages nor novel
protocol operations.

8.  Acknowledgements

We would like to thank the HAMcast team, Nora Berg, Gabriel Hege,
Fabian Holler, Alexander Knauf, Sebastian Meiling, Sebastian Woelke,
and Sebastian Zagaria, at the HAW Hamburg for many fruitful
discussions and for their continuous critical feedback while
implementing the common multicast API and a hybrid multicast
middleware.  Special thanks to Dominik Charousset of the HAMcast
team for in-depth perspectives on the matter of code.  We gratefully
acknowledge WeeSan, Mario Kolberg, and John Buford for their reviews
and suggestions to improve the document.  We would like to thank the
Name-based socket BoF (in particular Dave Thaler) for clarifying
insights into the question of meta function calls.  We thank
Lisandro Zambenedetti Granville and Tony Li for very careful reviews
of the pre-final versions of this document.  Barry Leiba and Graham
Klyne provided very constructive input to find a suitable URI
scheme.  They are gratefully acknowledged.

This work is partially supported by the German Federal Ministry of
Education and Research within the HAMcast project (see
http://hamcast.realmv6.org), which is part of G-Lab.

9.  References

9.1.  Normative References

[RFC1075]  Waitzman, D., Partridge, C., and S. Deering, "Distance
           Vector Multicast Routing Protocol", RFC 1075,
           November 1988.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2710]  Deering, S., Fenner, W., and B. Haberman, "Multicast
           Listener Discovery (MLD) for IPv6", RFC 2710,
           October 1999.

[RFC3261]  Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston,
           A., Peterson, J., Sparks, R., Handley, M., and E.
           Schooler, "SIP: Session Initiation Protocol", RFC 3261,
           June 2002.

[RFC3376]  Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A.
           Thyagarajan, "Internet Group Management Protocol,
           Version 3", RFC 3376, October 2002.

[RFC3493]  Gilligan, R., Thomson, S., Bound, J., McCann, J., and W.
           Stevens, "Basic Socket Interface Extensions for IPv6",
           RFC 3493, February 2003.

[RFC3678]  Thaler, D., Fenner, B., and B. Quinn, "Socket Interface
           Extensions for Multicast Source Filters", RFC 3678,
           January 2004.

[RFC3810]  Vida, R. and L. Costa, "Multicast Listener Discovery
           Version 2 (MLDv2) for IPv6", RFC 3810, June 2004.

[RFC3986]  Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform
           Resource Identifier (URI): Generic Syntax", STD 66,
           RFC 3986, January 2005.

[RFC4395]  Hansen, T., Hardie, T., and L. Masinter, "Guidelines and
           Registration Procedures for New URI Schemes", BCP 35,
           RFC 4395, February 2006.

[RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
           "Protocol Independent Multicast - Sparse Mode (PIM-SM):
           Protocol Specification (Revised)", RFC 4601, August 2006.

[RFC4604]  Holbrook, H., Cain, B., and B. Haberman, "Using Internet
           Group Management Protocol Version 3 (IGMPv3) and
           Multicast Listener Discovery Protocol Version 2 (MLDv2)
           for Source-Specific Multicast", RFC 4604, August 2006.

[RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano,
           "Bidirectional Protocol Independent Multicast
           (BIDIR-PIM)", RFC 5015, October 2007.

[RFC5058]  Boivie, R., Feldman, N., Imai, Y., Livens, W., and D.
           Ooms, "Explicit Multicast (Xcast) Concepts and Options",
           RFC 5058, November 2007.

[RFC5234]  Crocker, D. and P. Overell, "Augmented BNF for Syntax
           Specifications: ABNF", STD 68, RFC 5234, January 2008.

[RFC6920]  Farrell, S., Kutscher, D., Dannewitz, C., Ohlman, B.,
           Keranen, A., and P. Hallam-Baker, "Naming Things with
           Hashes", RFC 6920, April 2013.

9.2.  Informative References

[GC2010]   Meiling, S., Charousset, D., Schmidt, T., and M.
           Waehlisch, "System-assisted Service Evolution for a
           Future Internet - The HAMcast Approach to Pervasive
           Multicast", Proc. of IEEE GLOBECOM 2010 Workshops,
           MCS 2010, pp. 938-942, Piscataway, NJ, USA: IEEE Press,
           December 2010.

[I-D.ietf-mboned-auto-multicast]
           Bumgardner, G., "Automatic Multicast Tunneling",
           draft-ietf-mboned-auto-multicast-15 (work in progress),
           July 2013.

[I-D.ietf-p2psip-base]
           Jennings, C., Lowekamp, B., Rescorla, E., Baset, S., and
           H. Schulzrinne, "REsource LOcation And Discovery (RELOAD)
           Base Protocol", draft-ietf-p2psip-base-26 (work in
           progress), February 2013.
[I-D.ietf-p2psip-sip]
           Jennings, C., Lowekamp, B., Rescorla, E., Baset, S.,
           Schulzrinne, H., and T. Schmidt, "A SIP Usage for
           RELOAD", draft-ietf-p2psip-sip-11 (work in progress),
           July 2013.

[I-D.irtf-samrg-sam-baseline-protocol]
           Buford, J. and M. Kolberg, "Application Layer Multicast
           Extensions to RELOAD", draft-irtf-samrg-sam-baseline-
           protocol-06 (work in progress), July 2013.

[I-D.venaas-behave-mcast46]
           Venaas, S., Asaeda, H., SUZUKI, S., and T. Fujisaki, "An
           IPv4 - IPv6 multicast translator", draft-venaas-behave-
           mcast46-02 (work in progress), December 2010.

[I-D.venaas-behave-v4v6mc-framework]
           Venaas, S., Li, X., and C. Bao, "Framework for IPv4/IPv6
           Multicast Translation", draft-venaas-behave-v4v6mc-
           framework-03 (work in progress), June 2011.

[LCN2012]  Meiling, S., Schmidt, T., and M. Waehlisch, "Large-Scale
           Measurement and Analysis of One-Way Delay in Hybrid
           Multicast Networks", Proc. of 37th Annual IEEE Conference
           on Local Computer Networks (LCN 2012), Piscataway, NJ,
           USA: IEEE Press, October 2012.

[RFC5757]  Schmidt, T., Waehlisch, M., and G. Fairhurst, "Multicast
           Mobility in Mobile IP Version 6 (MIPv6): Problem
           Statement and Brief Survey", RFC 5757, February 2010.

[RFC6219]  Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The
           China Education and Research Network (CERNET) IVI
           Translation Design and Deployment for the IPv4/IPv6
           Coexistence and Transition", RFC 6219, May 2011.

[hamcast-dev]
           "HAMcast developers", <http://hamcast.realmv6.org>.

Appendix A.  C Signatures

This section describes the C signatures of the common multicast API,
which are defined in Section 4.
   int createMSocket(int* result, size_t num_ifs,
                     const uint32_t* ifs);

   int deleteMSocket(int s);

   int join(int msock, const char* group_uri);

   int leave(int msock, const char* group_uri);

   int srcRegister(int msock,
                   const char* group_uri,
                   size_t num_ifs,
                   uint32_t* ifs);

   int srcDeregister(int msock,
                     const char* group_uri,
                     size_t num_ifs,
                     uint32_t* ifs);

   int send(int msock,
            const char* group_uri,
            size_t buf_len,
            const void* buf);

   int receive(int msock,
               const char* group_uri,
               size_t buf_len,
               void* buf);

   int getInterfaces(int msock,
                     size_t* num_ifs,
                     uint32_t** ifs);

   int addInterface(int msock, uint32_t iface);

   int delInterface(int msock, uint32_t iface);

   int setTTL(int msock, uint8_t value,
              size_t num_ifs, uint32_t* ifs);

   int getTTL(int msock, uint8_t* result);

   int getAtomicMsgSize(int msock);

   typedef struct {
     char* group_uri;  /* registered mcast group */
     int type;         /* 0: listener state,
                          1: sender state,
                          2: sender and listener state */
   } GroupSet;

   int groupSet(uint32_t iface,
                size_t* num_groups,
                GroupSet** groups);

   int neighborSet(uint32_t iface,
                   const char* group_name,
                   size_t* num_neighbors,
                   char** neighbor_uris);

   int childrenSet(uint32_t iface,
                   const char* group_name,
                   size_t* num_children,
                   char** children_uris);

   int parentSet(uint32_t iface,
                 const char* group_name,
                 size_t* num_parents,
                 char** parents_uris);

   int designatedHost(uint32_t iface,
                      const char* group_name);

   typedef void (*MembershipEventCallback)
                (int,          /* event type */
                 uint32_t,     /* interface id */
                 const char*); /* group uri */

   int registerEventCallback(MembershipEventCallback callback);

   int enableEvents();

   int disableEvents();

   int getMaxMsgSize();

Appendix B.  Use Case for the API

For the sake of readability, we demonstrate development with the API
using a high-level, Java-like syntax; error handling is not
considered.

-- Application above the middleware:

   //Initialize multicast socket;
   //the middleware selects all available interfaces
   MulticastSocket m = new MulticastSocket();

   m.join(URI("ham:ip:233.252.0.3:5000"));
   m.join(URI("ham:ip:[FF02:0:0:0:0:0:0:3]:6000"));
   m.join(URI("ham:sip:news@example.com"));

-- Middleware:

   join(URI mcAddress) {
     //Select interfaces in use
     for all this.interfaces {
       switch (interface.type) {
         case "ipv6":
           //... map logical ID to routing address
           Inet6Address rtAddressIPv6 = new Inet6Address();
           mapNametoAddress(mcAddress, rtAddressIPv6);
           interface.join(rtAddressIPv6);
           break;
         case "ipv4":
           //... map logical ID to routing address
           Inet4Address rtAddressIPv4 = new Inet4Address();
           mapNametoAddress(mcAddress, rtAddressIPv4);
           interface.join(rtAddressIPv4);
           break;
         case "sip-session":
           //... map logical ID to routing address
           SIPAddress rtAddressSIP = new SIPAddress();
           mapNametoAddress(mcAddress, rtAddressSIP);
           interface.join(rtAddressSIP);
           break;
         case "dht":
           //... map logical ID to routing address
           DHTAddress rtAddressDHT = new DHTAddress();
           mapNametoAddress(mcAddress, rtAddressDHT);
           interface.join(rtAddressDHT);
           break;
         //...
       }
     }
   }
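A complementary sketch in C, using the signatures from Appendix A,
shows the sender side.  The assumption that an empty interface list
selects all available interfaces follows the middleware behavior
described above; the header name "hamcast.h" is likewise an
assumption.

   #include <stdio.h>
   #include <string.h>

   #include "hamcast.h"  /* assumed header for Appendix A */

   int main(void)
   {
       int msock;

       /* create a socket; an empty interface list is assumed to
          select all available interfaces */
       if (createMSocket(&msock, 0, NULL) != 0) {
           fprintf(stderr, "createMSocket failed\n");
           return 1;
       }

       const char *group = "ham:ip:233.252.0.3:5000";
       join(msock, group);

       /* respect the statically guaranteed maximum message size
          (Section 4.7.8) */
       const char *msg = "hello group";
       int max_size = getMaxMsgSize();
       if (max_size > 0 && strlen(msg) + 1 <= (size_t) max_size) {
           send(msock, group, strlen(msg) + 1, msg);
       }

       leave(msock, group);
       deleteMSocket(msock);
       return 0;
   }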
Appendix C.  Deployment Use Cases for Hybrid Multicast

This section describes the application of the defined API to
implement an IMG.

C.1.  DVMRP

The following procedure describes a transparent mapping of a DVMRP-
based any source multicast service to another many-to-many multicast
technology, e.g., an overlay.

An arbitrary DVMRP [RFC1075] router will not be informed about new
receivers, but will learn about new sources immediately.  The
concept of DVMRP does not provide any central multicast instance.
Thus, the IMG can be placed anywhere inside the multicast region,
but it requires DVMRP neighbor connectivity.  The group
communication stack used by the IMG is therefore enhanced by a DVMRP
implementation.  New sources in the underlay will be advertised
based on the DVMRP flooding mechanism and received by the IMG.
Based on this, the event "new_source_event" is created and passed to
the application.  The relay agent initiates a corresponding join in
the native network and forwards the received source data towards the
overlay routing protocol.  Depending on the group states, the data
will be distributed to overlay peers.

DVMRP establishes source-specific multicast trees.  Therefore, a
graft message is only visible to DVMRP routers on the path from the
new receiver subnet to the source, but in general not to an IMG.  To
overcome this problem, data of multicast senders in the overlay may
become noticeable via the Source Register call, as well as by an IMG
that initiates an all-group join in the overlay using the namespace
extension of the API.  Each IMG is initially required to forward the
data received in the overlay to the underlay, independent of native
multicast receivers.  Subsequent prunes may limit unwanted data
distribution thereafter.

C.2.  PIM-SM

The following procedure describes a transparent mapping of a PIM-SM-
based any source multicast service to another many-to-many multicast
technology, e.g., an overlay.

Protocol Independent Multicast - Sparse Mode (PIM-SM) [RFC4601]
establishes rendezvous points (RPs).  These entities receive
listener subscriptions and source registrations for a domain.  For
continuous updates, an IMG has to be co-located with an RP.
Whenever PIM register messages are received, the IMG must internally
signal a new multicast source using the event "new_source_event".
Subsequently, the IMG joins the group, and a shared tree between the
RP and the sources will be established, which may change to a
source-specific tree after PIM switches to phase three.  Source
traffic will be forwarded to the RP based on the IMG join, even if
there are no further receivers in the native multicast domain.
Designated routers of a PIM domain send receiver subscriptions
towards the PIM-SM RP.  The reception of such messages initiates the
event "join_event" at the IMG, which initiates a join towards the
overlay routing protocol.  Overlay multicast data arriving at the
IMG will then transparently be forwarded in the underlay network and
distributed through the RP instance.
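The IMG behavior sketched above can be expressed with the membership
callback from Appendix A.  The event type constants, the IMG socket,
and the decision logic below are hypothetical; the actual delegation
between the native domain and the overlay depends on the deployment.

   #include <stdint.h>

   #include "hamcast.h"  /* assumed header for Appendix A */

   /* hypothetical event type identifiers */
   enum { JOIN_EVENT, LEAVE_EVENT, NEW_SOURCE_EVENT };

   static int img_socket;  /* multicast socket of the IMG */

   /* relay group state changes between underlay and overlay */
   static void img_event_handler(int type, uint32_t iface,
                                 const char *group_uri)
   {
       (void) iface;  /* interface-specific policy omitted */
       switch (type) {
       case NEW_SOURCE_EVENT:
           /* e.g., a PIM register was received: join the group so
              that source traffic reaches the IMG and can be
              relayed towards the overlay */
           join(img_socket, group_uri);
           break;
       case JOIN_EVENT:
           /* a receiver subscription arrived: delegate the join
              towards the overlay routing protocol */
           join(img_socket, group_uri);
           break;
       case LEAVE_EVENT:
           /* prune-style cleanup could be triggered here */
           break;
       }
   }

   /* at IMG startup:
      registerEventCallback(img_event_handler);
      enableEvents(); */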
C.3.  PIM-SSM

The following procedure describes a transparent mapping of a PIM-
SSM-based source-specific multicast service to another one-to-many
multicast technology, e.g., an overlay.

PIM Source-Specific Multicast (PIM-SSM) is defined as part of PIM-SM
and admits source-specific joins (S,G) according to the source-
specific host group model [RFC4604].  A multicast distribution tree
can be established without the assistance of a rendezvous point.

Sources are not advertised within a PIM-SSM domain.  Consequently,
an IMG cannot anticipate the local join inside a sender domain and
deliver the multicast data a priori to the overlay instance.  If an
IMG of a receiver domain initiates a group subscription via the
overlay routing protocol, relaying multicast data fails, as the data
is not available at the overlay instance.  The IMG instance of the
receiver domain thus has to locate the IMG instance of the source
domain to trigger the corresponding join.  In agreement with the
objectives of PIM-SSM, the signaling should not be flooded in the
underlay and overlay.

A solution is to intercept the subscription at both the source and
receiver sites: To monitor multicast receiver subscriptions
("join_event" or "leave_event") in the underlay, the IMG is placed
on the path towards the source, e.g., at a domain border router.
This router intercepts join messages and extracts the unicast source
address S, initializing an IMG-specific join to S via regular
unicast.  Multicast data arriving at the IMG of the sender domain
can then be distributed via the overlay.  Discovering the IMG of a
multicast sender domain may be implemented analogously to AMT
[I-D.ietf-mboned-auto-multicast] by using anycast.  Consequently,
the source address S of the group (S,G) should be built based on an
anycast prefix.  The corresponding IMG anycast address for a source
domain is then derived from the prefix of S.
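As an illustration, deriving the IMG anycast address from the source
address could look as follows in C for IPv6.  The /64 anycast prefix
length and the fixed IMG interface identifier are purely
illustrative assumptions; an actual deployment defines its own
prefix length and well-known suffix.

   #include <string.h>
   #include <stdint.h>
   #include <netinet/in.h>

   /* derive the IMG anycast address of a source domain from the
      unicast source address S by combining the (assumed) /64
      anycast prefix of S with a well-known IMG suffix */
   static struct in6_addr img_anycast_from_source(struct in6_addr s)
   {
       /* hypothetical, well-known IMG interface identifier */
       static const uint8_t img_iid[8] = { 0, 0, 0, 0, 0, 0, 0, 1 };
       struct in6_addr img = s;  /* keeps the /64 prefix of S */
       memcpy(&img.s6_addr[8], img_iid, sizeof(img_iid));
       return img;
   }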
C.4.  BIDIR-PIM

The following procedure describes a transparent mapping of a BIDIR-
PIM-based any source multicast service to another many-to-many
multicast technology, e.g., an overlay.

Bidirectional PIM (BIDIR-PIM) [RFC5015] is a variant of PIM-SM.  In
contrast to PIM-SM, the protocol pre-establishes bidirectional
shared trees per group, connecting multicast sources and receivers.
The rendezvous points are virtualized in BIDIR-PIM as an address
used to identify on-tree directions (up and down).  Routers with the
best link towards the (virtualized) rendezvous point address are
selected as designated forwarders for a link-local domain and
represent the actual distribution tree.  The IMG is to be placed on
the RP link, where the rendezvous point address is located.  As
source data in either case will be transmitted to the rendezvous
point link, the BIDIR-PIM instance of the IMG receives the data and
can internally signal new senders towards the stack via the
"new_source_event".  The first receiver subscription for a new group
within a BIDIR-PIM domain needs to be transmitted to the RP to
establish the first branching point.  Using the "join_event", an IMG
will thereby be informed about group requests from its domain, which
are then delegated to the overlay.

Appendix D.  Change Log

The following changes have been made from
draft-irtf-samrg-common-api-09:

1.  Clarifying statement about the "ham:" URI added

The following changes have been made from
draft-irtf-samrg-common-api-08:

1.  Redefinition of the URI scheme

The following changes have been made from
draft-irtf-samrg-common-api-07:

1.  Editorial polishing following Tony's review

The following changes have been made from
draft-irtf-samrg-common-api-06:

1.  Editorial comments from Lisandro included

2.  Syntax notation in Sections 4.2.2, 4.2.3, and 4.6.1 improved

3.  Appendix A improved

The following changes have been made from
draft-irtf-samrg-common-api-05:

1.  Added preparations for IRSG review

2.  Fixed error codes

3.  Editorial improvements

4.  Updated references

The following changes have been made from
draft-irtf-samrg-common-api-04:

1.  Added section "A Note on Explicit Multicast (XCAST)"

2.  Added section "MTU Handling"

3.  Added socket option getAtomicMsgSize

4.  Added service call getMaxMsgSize

The following changes have been made from
draft-irtf-samrg-common-api-03:

1.  Added section "Illustrative Example"

2.  Added section "Implementation"

3.  Minor clarifications

The following changes have been made from
draft-irtf-samrg-common-api-02:

1.  Added use case of multicast flavor support

2.  Restructured Section 3

3.  Major update on namespaces and on mapping

4.  C signatures completed

5.  Many clarifications and editorial improvements

The following changes have been made from
draft-irtf-samrg-common-api-01:

1.  Pseudo syntax for list objects changed

2.  Editorial improvements

The following changes have been made from
draft-irtf-samrg-common-api-00:

1.  Incorrect pseudo code syntax fixed

2.  Minor editorial improvements

The following changes have been made from
draft-waehlisch-sam-common-api-06:

1.  No changes; draft adopted as WG document (previously
    draft-waehlisch-sam-common-api-06, now
    draft-irtf-samrg-common-api-00)

The following changes have been made from
draft-waehlisch-sam-common-api-05:

1.  Description of the Common API using pseudo syntax added

2.  C signatures of the Common API moved to the appendix

3.  updateSender() and updateListener() calls replaced by events

4.  Function destroyMSocket renamed as deleteMSocket.

The following changes have been made from
draft-waehlisch-sam-common-api-04:

1.  updateSender() added.

The following changes have been made from
draft-waehlisch-sam-common-api-03:

1.  Use cases added for illustration.

2.  Service calls added for inquiring on the multicast distribution
    system.

3.  Namespace examples added.

4.  Clarifications and editorial improvements.

The following changes have been made from
draft-waehlisch-sam-common-api-02:

1.  Renamed init() to createMSocket().

2.  Added calls srcRegister()/srcDeregister().

3.  Rephrased API calls in C style.

4.  Cleaned up code in "Practical Example of the API".

5.  Partial reorganization of the document.

6.  Many editorial improvements.

The following changes have been made from
draft-waehlisch-sam-common-api-01:

1.  Document restructured to clarify the realm of document overview
    and specific contributions such as naming and addressing.

2.  A clear separation of naming and addressing was drawn.
    Multicast URIs have been introduced.

3.  Clarified and adapted the API calls.

4.  Introduced Socket Option calls.

5.  Deployment use cases moved to an appendix.

6.  Simple programming example added.

7.  Many editorial improvements.
Authors' Addresses

   Matthias Waehlisch
   link-lab & FU Berlin
   Hoenower Str. 35
   Berlin  10318
   Germany

   Email: mw@link-lab.net
   URI:   http://www.inf.fu-berlin.de/~waehl

   Thomas C. Schmidt
   HAW Hamburg
   Berliner Tor 7
   Hamburg  20099
   Germany

   Email: schmidt@informatik.haw-hamburg.de
   URI:   http://inet.cpt.haw-hamburg.de/members/schmidt

   Stig Venaas
   cisco Systems
   Tasman Drive
   San Jose, CA  95134
   USA

   Email: stig@cisco.com