2 SAM Research Group M. Waehlisch 3 Internet-Draft link-lab & FU Berlin 4 Intended status: Informational T C. Schmidt 5 Expires: August 1, 2011 HAW Hamburg 6 S. Venaas 7 cisco Systems 8 January 28, 2011 10 A Common API for Transparent Hybrid Multicast 11 draft-waehlisch-sam-common-api-05 13 Abstract 15 Group communication services exist in a large variety of flavors, and 16 technical implementations at different protocol layers. Multicast 17 data distribution is most efficiently performed on the lowest 18 available layer, but a heterogeneous deployment status of multicast 19 technologies throughout the Internet requires an adaptive service 20 binding at runtime. Today, it is difficult to write an application 21 that runs everywhere and at the same time makes use of the most 22 efficient multicast service available in the network. Facing 23 robustness requirements, developers are frequently forced to use a 24 stable, upper layer protocol controlled by the application itself. 25 This document describes a common multicast API that is suitable for 26 transparent communication in underlay and overlay, and grants access 27 to the different multicast flavors.
It proposes an abstract naming 28 by multicast URIs and discusses mapping mechanisms between different 29 namespaces and distribution technologies. Additionally, it describes 30 the application of this API for building gateways that interconnect 31 current multicast domains throughout the Internet. 33 Status of this Memo 35 This Internet-Draft is submitted in full conformance with the 36 provisions of BCP 78 and BCP 79. 38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF). Note that other groups may also distribute 40 working documents as Internet-Drafts. The list of current Internet- 41 Drafts is at http://datatracker.ietf.org/drafts/current/. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on August 1, 2011. 50 Copyright Notice 52 Copyright (c) 2011 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (http://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 68 1.1. Use Cases for the Common API . . . . . . . . . . . . . . . 5 69 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 70 3. Overview . . . . . . . . . . . . . . . . . . . . . . . . 
. . . 7 71 3.1. Objectives and Reference Scenarios . . . . . . . . . . . . 7 72 3.2. Group Communication API & Protocol Stack . . . . . . . . . 8 73 3.3. Naming and Addressing . . . . . . . . . . . . . . . . . . 10 74 3.4. Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 11 75 4. Common Multicast API . . . . . . . . . . . . . . . . . . . . . 12 76 4.1. Abstract Data Types . . . . . . . . . . . . . . . . . . . 12 77 4.1.1. Multicast URI . . . . . . . . . . . . . . . . . . . . 12 78 4.1.2. Interface . . . . . . . . . . . . . . . . . . . . . . 12 79 4.2. Group Management Calls . . . . . . . . . . . . . . . . . . 13 80 4.2.1. Create . . . . . . . . . . . . . . . . . . . . . . . . 13 81 4.2.2. Destroy . . . . . . . . . . . . . . . . . . . . . . . 13 82 4.2.3. Join . . . . . . . . . . . . . . . . . . . . . . . . . 14 83 4.2.4. Leave . . . . . . . . . . . . . . . . . . . . . . . . 14 84 4.2.5. Source Register . . . . . . . . . . . . . . . . . . . 14 85 4.2.6. Source Deregister . . . . . . . . . . . . . . . . . . 15 86 4.3. Send and Receive Calls . . . . . . . . . . . . . . . . . . 15 87 4.3.1. Send . . . . . . . . . . . . . . . . . . . . . . . . . 15 88 4.3.2. Receive . . . . . . . . . . . . . . . . . . . . . . . 16 89 4.4. Socket Options . . . . . . . . . . . . . . . . . . . . . . 16 90 4.4.1. Get Interfaces . . . . . . . . . . . . . . . . . . . . 16 91 4.4.2. Add Interface . . . . . . . . . . . . . . . . . . . . 16 92 4.4.3. Delete Interface . . . . . . . . . . . . . . . . . . . 17 93 4.4.4. Set TTL . . . . . . . . . . . . . . . . . . . . . . . 17 94 4.5. Service Calls . . . . . . . . . . . . . . . . . . . . . . 17 95 4.5.1. Group Set . . . . . . . . . . . . . . . . . . . . . . 17 96 4.5.2. Neighbor Set . . . . . . . . . . . . . . . . . . . . . 18 97 4.5.3. Children Set . . . . . . . . . . . . . . . . . . . . . 18 98 4.5.4. Parent Set . . . . . . . . . . . . . . . . . . . . . . 19 99 4.5.5. Designated Host . . . . . . . . . . . . . . . . . . . 19 100 4.5.6. 
Update Listener . . . . . . . . . . . . . . . . . . . 20 101 4.5.7. Update Sender . . . . . . . . . . . . . . . . . . . . 20 102 5. Functional Details . . . . . . . . . . . . . . . . . . . . . . 20 103 5.1. Namespaces . . . . . . . . . . . . . . . . . . . . . . . . 20 104 5.2. Mapping . . . . . . . . . . . . . . . . . . . . . . . . . 21 105 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 106 7. Security Considerations . . . . . . . . . . . . . . . . . . . 21 107 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 21 108 9. Informative References . . . . . . . . . . . . . . . . . . . . 22 109 Appendix A. Practical Example of the API . . . . . . . . . . . . 23 110 Appendix B. Deployment Use Cases for Hybrid Multicast . . . . . . 24 111 B.1. DVMRP . . . . . . . . . . . . . . . . . . . . . . . . . . 25 112 B.2. PIM-SM . . . . . . . . . . . . . . . . . . . . . . . . . . 25 113 B.3. PIM-SSM . . . . . . . . . . . . . . . . . . . . . . . . . 26 114 B.4. BIDIR-PIM . . . . . . . . . . . . . . . . . . . . . . . . 26 115 Appendix C. Change Log . . . . . . . . . . . . . . . . . . . . . 27 116 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 28 118 1. Introduction 120 Currently, group application programmers need to choose the 121 distribution technology that the application will require at 122 runtime. There is no common communication interface that abstracts 123 multicast transmission and subscriptions from the deployment state at 124 runtime. The standard multicast socket options [RFC3493], [RFC3678] 125 are bound to an IP version and do not distinguish between naming and 126 addressing of multicast identifiers. Group communication, however, 127 is commonly implemented in different flavors such as any source (ASM) 128 vs. source specific multicast (SSM), on different layers (e.g., IP 129 vs. application layer multicast), and may be based on different 130 technologies on the same tier as with IPv4 vs. IPv6.
It is the 131 objective of this document to provide universal access to group 132 services. 134 Multicast application development should be decoupled from 135 technological deployment throughout the infrastructure. It requires 136 a common multicast API that offers calls to transmit and receive 137 multicast data independent of the supporting layer and the underlying 138 technological details. For inter-technology transmissions, a 139 consistent view of multicast states is needed, as well. This 140 document describes an abstract group communication API and core 141 functions necessary for transparent operations. Specific 142 implementation guidelines with respect to operating systems or 143 programming languages are out-of-scope of this document. 145 In contrast to the standard multicast socket interface, the API 146 introduced in this document abstracts naming from addressing. Using 147 a multicast address in the current socket API predefines the 148 corresponding routing layer. In this specification, the multicast 149 name used for joining a group denotes an application layer data 150 stream that is identified by a multicast URI, independent of its 151 binding to a specific distribution technology. Such a group name can 152 be mapped to variable routing identifiers. 154 The aim of this common API is twofold: 156 o Enable any application programmer to implement group-oriented data 157 communication independent of the underlying delivery mechanisms. 158 In particular, allow for a late binding of group applications to 159 multicast technologies that makes applications efficient, but 160 robust with respect to deployment aspects. 162 o Allow for a flexible namespace support in group addressing, and 163 thereby separate naming and addressing/routing schemes from the 164 application design.
This abstraction not only decouples 165 programs from specific aspects of underlying protocols, but may also 166 open application design to specifically flavored group 167 services. 169 Multicast technologies may be of various P2P kinds, IPv4 or IPv6 170 network layer multicast, or implemented by some other application 171 service. Corresponding namespaces may be IP addresses or DNS naming, 172 overlay hashes or other application layer group identifiers, 173 but also names independently defined by the 174 applications. Common namespaces are introduced later in this 175 document, but follow an open concept suitable for further extensions. 177 This document also proposes and discusses mapping mechanisms between 178 different namespaces and forwarding technologies. Additionally, the 179 multicast API provides internal interfaces to access current 180 multicast states at the host. Multiple multicast protocols may run 181 in parallel on a single host. These protocols may interact to 182 provide a gateway function that bridges data between different 183 domains. The application of this API at gateways operating between 184 current multicast instances throughout the Internet is described, as 185 well. 187 1.1. Use Cases for the Common API 189 Four generic use cases can be identified that require an abstract 190 common API for multicast services: 192 Application Programming Independent of Technologies: Application 193 programmers are provided with group primitives that remain 194 independent of multicast technologies and their deployment in target 195 domains. They are thus enabled to develop programs once that run 196 in every deployment scenario. The employment of group names in 197 the form of abstract meta data types allows applications to remain 198 namespace-agnostic in the sense that the resolution of namespaces 199 and name-to-address mappings may be delegated to a system service 200 at runtime.
Thereby, the complexity is minimized as developers 201 need not care about how data is distributed in groups, while the 202 system service can take advantage of extended information about the 203 network environment as acquired at startup. 205 Global Identification of Groups: Groups can be identified 206 independent of technological instantiations and beyond deployment 207 domains. Taking advantage of the abstract naming, an application 208 is thus enabled to match data received from different interface 209 technologies (e.g., IPv4, IPv6, or overlays) to belong to the same 210 group. This not only increases flexibility (an application may, 211 for instance, combine heterogeneous multipath streams) but also 212 simplifies the design and implementation of gateways and 213 translators. 215 Simplified Service Deployment through Generic Gateways: The API 216 allows for an implementation of abstract gateway functions with 217 mappings to specific technologies residing at a system level. 218 Such generic gateways may provide a simple bridging service and 219 facilitate an inter-domain deployment of multicast. 221 Mobility-agnostic Group Communication: Group naming and management as 222 foreseen in the API remain independent of locators. Naturally, 223 applications stay unaware of any mobility-related address changes. 224 Handover-initiated re-addressing is delegated to the mapping 225 services at the system level and may be designed to smoothly 226 interact with mobility management solutions provided at the 227 network or transport layer. 229 2. Terminology 231 This document uses the terminology as defined for the multicast 232 protocols [RFC2710], [RFC3376], [RFC3810], [RFC4601], [RFC4604]. In 233 addition, the following terms will be used. 235 Group Address: A Group Address is a routing identifier. It 236 represents a technological specifier and thus reflects the 237 distribution technology in use. Multicast packet forwarding is 238 based on this ID.
240 Group Name: A Group Name is an application identifier that is used 241 by applications to manage communication in a multicast group 242 (e.g., join/leave and send/receive). The Group Name does not 243 predefine any distribution technologies, even if it syntactically 244 corresponds to an address, but represents a logical identifier. 246 Multicast Namespace: A Multicast Namespace is a collection of 247 designators (i.e., names or addresses) for groups that share a 248 common syntax. Typical instances of namespaces are IPv4 or IPv6 249 multicast addresses, overlay group ids, group names defined on the 250 application layer (e.g., SIP or Email), or some human readable 251 strings. 253 Multicast Domain: A Multicast Domain hosts nodes and routers of a 254 common, single multicast forwarding technology and is bound to a 255 single namespace. 257 Interface: An Interface is a forwarding instance of a distribution 258 technology on a given node. For example, the IP interface 259 192.168.1.1 at an IPv4 host. 261 Inter-domain Multicast Gateway: An Inter-domain Multicast Gateway 262 (IMG) is an entity that interconnects different multicast domains. 263 Its objective is to forward data between these domains, e.g., 264 between IP layer and overlay multicast. 266 3. Overview 268 3.1. Objectives and Reference Scenarios 270 The default use case addressed in this document targets 271 applications that participate in a group by using some common 272 identifier taken from some common namespace. This group name is 273 typically learned at runtime from user interaction like the selection 274 of an IPTV channel, from dynamic session negotiations like in the 275 Session Initiation Protocol (SIP), but may also have been 276 predefined for an application as a common group name.
Technology- 277 specific system functions then transparently map the group name to 278 group addresses such that 280 o programmers are enabled to process group names in their programs 281 without the need to consider technological mappings to designated 282 deployments in target domains; 284 o applications are enabled to identify packets that belong to a 285 logically named group, independent of the interface technology 286 used for sending and receiving packets. The latter shall also 287 hold for multicast gateways. 289 This document refers to a reference scenario that covers the 290 following two hybrid deployment cases displayed in Figure 1: 292 1. Multicast domains running the same multicast technology but 293 remaining isolated, possibly only connected by network layer 294 unicast. 296 2. Multicast domains running different multicast technologies, but 297 hosting nodes that are members of the same multicast group. 299 +-------+ +-------+ 300 | Member| | Member| 301 | Foo | | G | 302 +-------+ +-------+ 303 \ / 304 *** *** *** *** 305 * ** ** ** * 306 * * 307 * MCast Tec A * 308 * * 309 * ** ** ** * 310 *** *** *** *** 311 +-------+ +-------+ | 312 | Member| | Member| +-------+ 313 | G | | Foo | | IMG | 314 +-------+ +-------+ +-------+ 315 | | | 316 *** *** *** *** *** *** *** *** 317 * ** ** ** * * ** ** ** * 318 * * +-------+ * * 319 * MCast Tec A * --| IMG |-- * MCast Tec B * +-------+ 320 * * +-------+ * * - | Member| 321 * ** ** ** * * ** ** ** * | G | 322 *** *** *** *** *** *** *** *** +-------+ 324 Figure 1: Reference scenarios for hybrid multicast, interconnecting 325 group members from isolated homogeneous and heterogeneous domains. 327 It is assumed throughout the document that the domain composition, as 328 well as the node attachment to a specific technology remain unchanged 329 during a multicast session. 331 3.2. Group Communication API & Protocol Stack 333 The group communication API consists of four parts. 
Two parts 334 combine the essential communication functions, while the remaining 335 two offer optional extensions for an enhanced management: 337 Group Management Calls provide the minimal API to instantiate a 338 multicast socket and manage group membership. 340 Send/Receive Calls provide the minimal API to send and receive 341 multicast data in a technology-transparent fashion. 343 Socket Options provide extension calls for an explicit configuration 344 of the multicast socket like setting hop limits or associated 345 interfaces. 347 Service Calls provide extension calls that grant access to internal 348 multicast states of an interface such as the multicast groups 349 under subscription or the multicast forwarding information base. 351 Multicast applications that use the common API require assistance by 352 a group communication stack. This protocol stack serves two needs: 354 o It provides system-level support to transfer the abstract 355 functions of the common API, including namespace support, into 356 protocol operations at interfaces. 358 o It bridges data distribution between different multicast 359 technologies. 361 A general initiation of a multicast communication in this setting 362 proceeds as follows: 364 1. An application opens an abstract multicast socket. 366 2. The application subscribes/leaves/(de)registers to a group using 367 a logical group identifier. 369 3. An intrinsic function of the stack maps the logical group ID 370 (Group Name) to a technical group ID (Group Address). This 371 function may make use of deployment-specific knowledge such as 372 available technologies and group address management in its 373 domain. 375 4. Packet distribution proceeds to and from one or several 376 multicast-enabled interfaces. 378 The multicast socket describes a group communication channel composed 379 of one or multiple interfaces. 
A socket may be created without 380 explicit interface association by the application, which leaves the 381 choice of the underlying forwarding technology to the group 382 communication stack. However, an application may also bind the 383 socket to one or multiple dedicated interfaces, which predefines the 384 forwarding technology and the namespace(s) of the Group Address(es). 386 Applications are not required to maintain mapping states for Group 387 Addresses. The group communication stack accounts for the mapping of 388 the Group Name to the Group Address(es) and vice versa. Multicast 389 data passed to the application will be augmented by the corresponding 390 Group Name. Multiple multicast subscriptions thus can be conducted 391 on a single multicast socket without the need for Group Name encoding 392 at the application side. 394 Hosts may support several multicast protocols. The group 395 communication stack discovers available multicast-enabled 396 communication interfaces. It provides a minimal hybrid function that 397 bridges data between different interfaces and multicast domains. 398 Details of service discovery are out-of-scope of this document. 400 The extended multicast functions can be implemented by a middleware 401 as conceptually visualized in Figure 2. 403 *-------* *-------* 404 | App 1 | | App 2 | 405 *-------* *-------* 406 | | 407 *---------------------* ---| 408 | Middleware | | 409 *---------------------* | 410 | | | 411 *---------* | | 412 | Overlay | | \ Group Communication 413 *---------* | / Stack 414 | | | 415 | | | 416 *---------------------* | 417 | Underlay | | 418 *---------------------* ---| 420 Figure 2: A middleware for offering uniform access to multicast in 421 underlay and overlay 423 3.3. Naming and Addressing 425 Applications use Group Names to identify groups. Names can uniquely 426 determine a group in a global communication context and hide 427 technological deployment for data distribution from the application. 
428 In contrast, multicast forwarding operates on Group Addresses. Even 429 though both identifiers may be identical in symbols, they carry 430 different meanings. They may also belong to different namespaces. 431 The namespace of a Group Address reflects a routing technology, while 432 the namespace of a Group Name represents the context in which the 433 application operates. 435 URIs [RFC3986] are a common way to represent namespace-specific 436 identifiers in applications in the form of an abstract meta-data 437 type. Throughout this document, any kind of Group Name follows a URI 438 notation with the syntax defined in Section 4.1.1. Examples are 439 ip://224.1.2.3:5000 for a canonical IPv4 ASM group, and 440 sip://news@cnn.com for application-specific naming with service 441 instantiator and default port selection. 443 An implementation of the group communication middleware can provide 444 convenience functions that detect the namespace of a Group Name and 445 use it to optimize service instantiation. In practice, such a 446 library would provide support for high-level data types to the 447 application, similar to the current socket API (e.g., InetAddress in 448 Java). Using this data type could implicitly determine the 449 namespace. Details of automatic namespace identification are out-of- 450 scope of this document. 452 3.4. Mapping 454 All group members subscribe to the same Group Name taken from a 455 common namespace and thereby identify the group in a technology- 456 agnostic way. 458 Group Names require a mapping to addresses prior to service 459 instantiation at an Interface. Similarly, a mapping is needed at 460 gateways to translate between Group Addresses from different 461 namespaces. Some namespaces facilitate a canonical transformation to 462 default address spaces. For example, ip://224.1.2.3:5000 has an 463 obvious correspondence to 224.1.2.3 in the IPv4 multicast address 464 space.
Note that in this example the multicast URI can be completely 465 recovered from any data packet received from this group. 467 However, mapping in general can be more complex and need not be 468 invertible. Mapping functions can be stateless in some contexts, but 469 may require states in others. The application of such functions 470 depends on the cardinality of the namespaces, the structure of 471 address spaces, and possible address collisions. For example, it is 472 not obvious how to map a large identifier space (e.g., IPv6) to a 473 smaller, collision-prone set like IPv4. 475 Two (or more) Multicast Addresses from different namespaces may 476 belong to 478 a. the same logical group (i.e., same Multicast Name) 480 b. different multicast channels (i.e., different technical IDs). 482 This decision can be based on invertible mappings where they exist; 483 as noted above, such functions are not available in general. 488 A mapping can be realized by embedding smaller in larger namespaces 489 or selecting an arbitrary, unused ID in the target space. The 490 relation between logical and technical ID is maintained by mapping 491 functions which can be stateless or stateful. The middleware thus 492 queries the mapping service first, and creates a new technical group 493 ID only if there is no identifier available for the namespace in use. 494 The Group Name is associated with one or more Group Addresses, which 495 belong to different namespaces. Depending on the scope of the 496 mapping service, it ensures a consistent use of the technical ID in a 497 local or global domain. 499 4. Common Multicast API 501 4.1. Abstract Data Types 503 4.1.1.
Multicast URI 505 Multicast Names and Multicast Addresses used in this API follow a 506 URI scheme that defines a subset of the generic URI specified in 507 [RFC3986] and is compliant with the guidelines in [RFC4395]. 509 The multicast URI is defined as follows: 511 scheme "://" group "@" instantiation ":" port "/" sec-credentials 513 The parts of the URI are defined as follows: 515 scheme refers to the specification of the assigned identifier 516 [RFC3986] which takes the role of the namespace. 518 group identifies the group uniquely within the namespace given in 519 scheme. 521 instantiation identifies the entity that generates the instance of 522 the group (e.g., a SIP domain or a source in SSM) using the 523 namespace given in scheme. 525 port identifies a specific application at an instance of a group. 527 sec-credentials is used to implement security credentials (e.g., to 528 authorize multicast group access). 530 4.1.2. Interface 532 The interface denotes the layer and instance on which the 533 corresponding call will be effective. In agreement with [RFC3493] we 534 identify an interface by an identifier, which is a positive integer 535 starting at 1. 537 Properties of an interface are stored in the following struct: 539 struct if_prop { 540 unsigned int if_index; /* 1, 2, ... */ 541 char *if_name; /* "eth0", "eth1:1", "lo", ... */ 542 char *if_addr; /* "1.2.3.4", "abc123" ... */ 543 char *if_tech; /* "ip", "overlay", ... */ 544 }; 546 The following function retrieves all available interfaces from the 547 system: 549 struct if_prop *if_prop(void); 551 It extends the functions for Interface Identification in [RFC3493] 552 (cf., Section 4). 554 4.2. Group Management Calls 556 4.2.1. Create 558 The create call initiates a multicast socket and provides the 559 application programmer with a corresponding handle. If no interfaces 560 are assigned in the call, the default interface will be 561 selected and associated with the socket.
The call may return an 562 error code in the case of failures, e.g., due to a non-operational 563 middleware. 565 int createMSocket(uint32_t *if); 567 The if argument denotes a list of interfaces (if_indexes) that will 568 be associated with the multicast socket. This parameter is optional. 570 On success a multicast socket identifier is returned, otherwise -1. 572 4.2.2. Destroy 574 The destroy call removes the multicast socket. 576 int destroyMSocket(int s); 578 The s argument identifies the multicast socket for destruction. 580 On success the value 0 is returned, otherwise -1. 582 4.2.3. Join 584 The join call initiates a subscription for the given group. 585 Depending on the interfaces that are associated with the socket, this 586 may result in an IGMP/MLD report or an overlay subscription. 588 int join(int s, const uri group_name); 590 The s argument identifies the multicast socket. 592 The group_name argument identifies the group. 594 On success the value 0 is returned, otherwise -1. 596 4.2.4. Leave 598 The leave call results in an unsubscription for the given Group Name. 600 int leave(int s, const uri group_name); 602 The s argument identifies the multicast socket. 604 The group_name identifies the group. 606 On success the value 0 is returned, otherwise -1. 608 4.2.5. Source Register 610 The srcRegister call registers a source for a Group on all active 611 interfaces of the socket s. This call may assist group distribution 612 in some technologies, the creation of sub-overlays, for example. Not 613 all multicast technologies require this call. 615 int srcRegister(int s, const uri group_name, 616 uint_t num_ifs, uint_t *ifs); 618 The s argument identifies the multicast socket. 620 The group_name argument identifies the multicast group to which a 621 source intends to send data. 623 The num_ifs argument holds the number of elements in the ifs array. 624 This parameter is optional.
626 The ifs argument points to the list of interface indexes for which 627 the source registration failed. If num_ifs was 0 on output, a NULL 628 pointer is returned. This parameter is optional. 630 If source registration succeeded for all interfaces associated with 631 the socket, the value 0 is returned, otherwise -1. 633 4.2.6. Source Deregister 635 The srcDeregister call indicates that a source no longer intends to 636 send data to the multicast group. This call may remain without 637 effect in some multicast technologies. 639 int srcDeregister(int s, const uri group_name, 640 uint_t num_ifs, uint_t *ifs); 642 The s argument identifies the multicast socket. 644 The group_name argument identifies the multicast group to which a 645 source has stopped sending multicast data. 647 The num_ifs argument holds the number of elements in the ifs array. 649 The ifs argument points to the list of interfaces for which the 650 source deregistration failed. If num_ifs was 0 on output, a NULL 651 pointer is returned. 653 If source deregistration succeeded for all interfaces associated with 654 the socket, the value 0 is returned, otherwise -1. 656 4.3. Send and Receive Calls 658 4.3.1. Send 660 The send call passes multicast data for a Multicast Name from the 661 application to the multicast socket. 663 int send(int s, const uri group_name, 664 size_t msg_len, const void *buf); 666 The s argument identifies the multicast socket. 668 The group_name argument identifies the group to which data will be 669 sent. 671 The msg_len argument holds the length of the message to be sent. 673 The buf argument passes the multicast data to the multicast socket. 675 On success the value 0 is returned, otherwise -1. 677 4.3.2. Receive 679 The receive call passes multicast data and the corresponding Group 680 Name to the application. 682 int receive(int s, const uri group_name, 683 size_t msg_len, msg *msg_buf); 685 The s argument identifies the multicast socket.
687 The group_name argument identifies the multicast group for which data 688 was received. 690 The msg_len argument holds the length of the received message. 692 The msg_buf argument points to the payload of the received multicast 693 data. 695 On success the value 0 is returned, otherwise -1. 697 4.4. Socket Options 699 The following calls configure an existing multicast socket. 701 4.4.1. Get Interfaces 703 The getInterfaces call returns an array of all available multicast 704 communication interfaces associated with the multicast socket. 706 int getInterfaces(int s, uint_t num_ifs, uint_t *ifs); 708 The s argument identifies the multicast socket. 710 The num_ifs argument holds the number of interfaces in the ifs list. 712 The ifs argument points to an array of interface index identifiers. 714 On success the value 0 or larger is returned, otherwise -1. 716 4.4.2. Add Interface 718 The addInterface call adds a distribution channel to the socket. 719 This may be an overlay or underlay interface, e.g., IPv6 or DHT. 720 Multiple interfaces of the same technology may be associated with the 721 socket. 723 int addInterface(int s, uint32_t if); 725 The s and if arguments identify a multicast socket and interface, 726 respectively. 728 On success the value 0 is returned, otherwise -1. 730 4.4.3. Delete Interface 732 The delInterface call removes the interface if from the multicast 733 socket. 735 int delInterface(int s, uint32_t if); 737 The s and if arguments identify a multicast socket and interface, 738 respectively. 740 On success the value 0 is returned, otherwise -1. 742 4.4.4. Set TTL 744 The setTTL call configures the maximum hop count a multicast message 745 is allowed to traverse for the socket. 747 int setTTL(int s, int h, uint_t num_ifs, uint_t *ifs); 749 The s and h arguments identify a multicast socket and the maximum hop 750 count, respectively. 752 The num_ifs argument holds the number of interfaces in the ifs list. 753 This parameter is optional.
755 The ifs argument points to an array of interface index identifiers. 756 This parameter is optional. 758 On success the value 0 is returned, otherwise -1. 760 4.5. Service Calls 762 4.5.1. Group Set 764 The groupSet call returns all multicast groups registered at a given 765 interface. This information can be provided by group management 766 states or routing protocols. The return values distinguish between 767 sender and listener states. 769 int groupSet(uint32_t if, uint_t *num_groups, 770 struct groupSet *groupSet); 772 struct groupSet { 773 uri group_name; /* registered multicast group */ 774 int type; /* 0 = listener state, 1 = sender state, 775 2 = sender & listener state */ }; 777 The if argument identifies the interface for which states are 778 maintained. 780 The num_groups argument holds the number of groups in the groupSet 781 array. 783 The groupSet argument points to an array of group states. 785 On success the value 0 is returned, otherwise -1. 787 4.5.2. Neighbor Set 789 The neighborSet function returns the set of neighboring nodes for a 790 given interface as seen by the multicast routing protocol. 792 int neighborSet(uint32_t if, uint_t *num_neighbors, 793 const uri *neighbor_address); 795 The if argument identifies the interface for which neighbors are 796 inquired. 798 The num_neighbors argument holds the number of addresses in the 799 neighbor_address array. 801 The neighbor_address argument points to a list of neighboring nodes 802 on a successful return. 804 On success the value 0 is returned, otherwise -1. 806 4.5.3. Children Set 808 The childrenSet function returns the set of child nodes that receive 809 multicast data from a specified interface for a given group. For a 810 common multicast router, this call retrieves the multicast forwarding 811 information base per interface.
813 int childrenSet(uint32_t if, const uri group_name, 814 uint_t *num_children, const uri *child_address); 816 The if argument identifies the interface for which children are 817 inquired. 819 The group_name argument defines the multicast group for which 820 distribution is considered. 822 The num_children argument holds the number of addresses in the 823 child_address array. 825 The child_address argument points to a list of neighboring nodes on a 826 successful return. 828 On success the value 0 is returned, otherwise -1. 830 4.5.4. Parent Set 832 The parentSet function returns the set of neighbors from which the 833 current node receives multicast data at a given interface for the 834 specified group. 836 int parentSet(uint32_t if, const uri group_name, uint_t *num_parents, 837 const uri *parent_address); 839 The if argument identifies the interface for which parents are 840 inquired. 842 The group_name argument defines the multicast group for which 843 distribution is considered. 845 The num_parents argument holds the number of addresses in the 846 parent_address array. 848 The parent_address argument points to a list of neighboring nodes on 849 a successful return. 851 On success the value 0 is returned, otherwise -1. 853 4.5.5. Designated Host 855 The designatedHost function inquires whether the host has the role of 856 a designated forwarder or querier. Such information is 857 provided by almost all multicast protocols to prevent packet 858 duplication when multiple multicast instances serve on the same 859 subnet. 861 int designatedHost(uint32_t if, const uri *group_name); 863 The if argument identifies the interface for which designated 864 forwarding is inquired. 866 The group_name argument specifies the group for which the host may 867 attain the role of designated forwarder. 869 The function returns 1 if the host is a designated forwarder or 870 querier, otherwise 0. The return value -1 indicates an error. 872 4.5.6.
Update Listener 874 The updateListener function is invoked to inform a group service 875 about a change of listener states for a group. This is the result of 876 new receiver subscriptions or leaves. The group service may call 877 groupSet to get updated information. 879 const uri *updateListener(); 881 On success the updateListener function points to the Group Name that 882 experienced a state change, otherwise NULL is returned. 884 4.5.7. Update Sender 886 The updateSender function is invoked to inform a group service about 887 a change of sender states for a group. The group service may call 888 groupSet to get updated information. 890 const uri *updateSender(); 892 On success the updateSender function points to the Group Name that 893 experienced a state change, otherwise NULL is returned. 895 5. Functional Details 897 In this section, we describe specific functions of the API and the 898 associated system middleware in detail. 900 5.1. Namespaces 902 Namespace identifiers in URIs are placed in the scheme element and 903 characterize the syntax and semantics of the group identifier. They 904 enable the use of convenience functions and high-level data types 905 while processing URIs. When used in names, they may facilitate a 906 default mapping and a recovery of names from addresses. When used in 907 addresses, they characterize the address type. 909 Compliant with the URI concept, namespace schemes can be added. 910 Examples of schemes and functions currently foreseen include 912 IP This namespace is comprised of regular IP node naming, i.e., DNS 913 names and addresses taken from any version of the Internet 914 Protocol. A processor dealing with the IP namespace is required 915 to determine the syntax (DNS name, IP address version) of the 916 group expression. 918 OLM This namespace covers address strings immediately valid in an 919 overlay network.
A processor handling those strings need not be 920 aware of the address generation mechanism, but may pass these 921 values directly to a corresponding overlay. 923 SIP The SIP namespace is an example of an application-layer scheme 924 that bears inherent group functions (conferencing). SIP 925 conference URIs may be directly exchanged and interpreted at the 926 application, and mapped to group addresses on the system level to 927 generate a corresponding multicast group. 929 Opaque This namespace transparently carries strings without further 930 syntactic information, meaning, or an associated resolution 931 mechanism. 933 5.2. Mapping 935 Group Name to Group Address, SSM/ASM TODO 937 6. IANA Considerations 939 This document makes no request of IANA. 941 7. Security Considerations 943 This draft introduces neither additional messages nor novel 944 protocol operations. TODO 946 8. Acknowledgements 948 We would like to thank the HAMcast team, Dominik Charousset, Gabriel 949 Hege, Fabian Holler, Alexander Knauf, Sebastian Meiling, and 950 Sebastian Woelke, at the HAW Hamburg for many fruitful discussions 951 and for their continuous critical feedback while implementing the API 952 and a hybrid multicast middleware. 954 This work is partially supported by the German Federal Ministry of 955 Education and Research within the HAMcast project, which is part of 956 G-Lab. 958 9. Informative References 960 [I-D.ietf-mboned-auto-multicast] 961 Thaler, D., Talwar, M., Aggarwal, A., Vicisano, L., and T. 962 Pusateri, "Automatic IP Multicast Without Explicit Tunnels 963 (AMT)", draft-ietf-mboned-auto-multicast-10 (work in 964 progress), March 2010. 966 [RFC1075] Waitzman, D., Partridge, C., and S. Deering, "Distance 967 Vector Multicast Routing Protocol", RFC 1075, 968 November 1988. 970 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 971 Requirement Levels", BCP 14, RFC 2119, March 1997. 973 [RFC2710] Deering, S., Fenner, W., and B.
Haberman, "Multicast 974 Listener Discovery (MLD) for IPv6", RFC 2710, 975 October 1999. 977 [RFC3376] Cain, B., Deering, S., Kouvelas, I., Fenner, B., and A. 978 Thyagarajan, "Internet Group Management Protocol, Version 979 3", RFC 3376, October 2002. 981 [RFC3493] Gilligan, R., Thomson, S., Bound, J., McCann, J., and W. 982 Stevens, "Basic Socket Interface Extensions for IPv6", 983 RFC 3493, February 2003. 985 [RFC3678] Thaler, D., Fenner, B., and B. Quinn, "Socket Interface 986 Extensions for Multicast Source Filters", RFC 3678, 987 January 2004. 989 [RFC3810] Vida, R. and L. Costa, "Multicast Listener Discovery 990 Version 2 (MLDv2) for IPv6", RFC 3810, June 2004. 992 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 993 Resource Identifier (URI): Generic Syntax", STD 66, 994 RFC 3986, January 2005. 996 [RFC4395] Hansen, T., Hardie, T., and L. Masinter, "Guidelines and 997 Registration Procedures for New URI Schemes", BCP 35, 998 RFC 4395, February 2006. 1000 [RFC4601] Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas, 1001 "Protocol Independent Multicast - Sparse Mode (PIM-SM): 1002 Protocol Specification (Revised)", RFC 4601, August 2006. 1004 [RFC4604] Holbrook, H., Cain, B., and B. Haberman, "Using Internet 1005 Group Management Protocol Version 3 (IGMPv3) and Multicast 1006 Listener Discovery Protocol Version 2 (MLDv2) for Source- 1007 Specific Multicast", RFC 4604, August 2006. 1009 [RFC5015] Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano, 1010 "Bidirectional Protocol Independent Multicast (BIDIR- 1011 PIM)", RFC 5015, October 2007. 1013 Appendix A. 
Practical Example of the API 1014 -- Application above middleware: 1016 //Initialize multicast socket; 1017 //the middleware selects all available interfaces 1018 MulticastSocket m = new MulticastSocket(); 1020 m.join(URI("ip://233.252.0.1:5000")); 1021 m.join(URI("ip://[FF02:0:0:0:0:0:0:3]:6000")); 1022 m.join(URI("sip://news@cnn.com")); 1024 -- Middleware: 1026 join(URI mcAddress) { 1027 //Select interfaces in use 1028 for all this.interfaces { 1029 switch (interface.type) { 1030 case "ipv6": 1031 //... map logical ID to routing address 1032 Inet6Address rtAddressIPv6 = new Inet6Address(); 1033 mapNametoAddress(mcAddress,rtAddressIPv6); 1034 interface.join(rtAddressIPv6); break; 1035 case "ipv4": 1036 //... map logical ID to routing address 1037 Inet4Address rtAddressIPv4 = new Inet4Address(); 1038 mapNametoAddress(mcAddress,rtAddressIPv4); 1039 interface.join(rtAddressIPv4); break; 1040 case "sip-session": 1041 //... map logical ID to routing address 1042 SIPAddress rtAddressSIP = new SIPAddress(); 1043 mapNametoAddress(mcAddress,rtAddressSIP); 1044 interface.join(rtAddressSIP); break; 1045 case "dht": 1046 //... map logical ID to routing address 1047 DHTAddress rtAddressDHT = new DHTAddress(); 1048 mapNametoAddress(mcAddress,rtAddressDHT); 1049 interface.join(rtAddressDHT); break; 1050 //... 1051 } 1052 } 1053 } 1055 Appendix B. Deployment Use Cases for Hybrid Multicast 1057 This section describes the application of the defined API to 1058 implement an IMG. 1060 B.1. DVMRP 1062 The following procedure describes a transparent mapping of a DVMRP- 1063 based any source multicast service to another many-to-many multicast 1064 technology. 1066 An arbitrary DVMRP [RFC1075] router will not be informed about new 1067 receivers, but will learn about new sources immediately. The concept 1068 of DVMRP does not provide any central multicast instance. Thus, the 1069 IMG can be placed anywhere inside the multicast region, but requires 1070 DVMRP neighbor connectivity.
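The sender-state bookkeeping that drives updateSender() in this scenario can be sketched as follows. This is an illustrative mock, not part of the API: apart from the updateSender() contract and the groupSet type encoding (0 = listener, 1 = sender, 2 = both), every name here is invented.

```c
#include <assert.h>
#include <string.h>

#define MAX_GROUPS 8

/* Per-group state, type encoded as in struct groupSet:
 * 0 = listener, 1 = sender, 2 = sender & listener. */
struct group_state { char name[64]; int type; };

static struct group_state table[MAX_GROUPS];
static int n_groups;
static const char *last_changed;

/* Look up the recorded state for a Group Name; -1 if unknown. */
int group_state_of(const char *group_uri)
{
    for (int i = 0; i < n_groups; i++)
        if (strcmp(table[i].name, group_uri) == 0)
            return table[i].type;
    return -1;
}

/* Invoked when a flooded DVMRP source advertisement reaches the IMG. */
void on_dvmrp_source(const char *group_uri)
{
    for (int i = 0; i < n_groups; i++) {
        if (strcmp(table[i].name, group_uri) == 0) {
            if (table[i].type == 0)
                table[i].type = 2;   /* listener -> sender & listener */
            last_changed = table[i].name;
            return;
        }
    }
    if (n_groups < MAX_GROUPS) {
        strncpy(table[n_groups].name, group_uri,
                sizeof table[n_groups].name - 1);
        table[n_groups].type = 1;    /* fresh sender state */
        last_changed = table[n_groups].name;
        n_groups++;
    }
}

/* Mirrors the updateSender() contract: the Group Name whose sender
 * state changed most recently, or NULL if none did. */
const char *update_sender(void) { return last_changed; }
```

On this basis, the IMG would react to update_sender() by joining the group natively and forwarding the source data to the overlay, as the text describes.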
The group communication stack used by 1071 the IMG is enhanced by a DVMRP implementation. New sources in the 1072 underlay will be advertised based on the DVMRP flooding mechanism and 1073 received by the IMG. Based on this, the updateSender() call is 1074 triggered. The relay agent initiates a corresponding join in the 1075 native network and forwards the received source data towards the 1076 overlay routing protocol. Depending on the group states, the data 1077 will be distributed to overlay peers. 1079 DVMRP establishes source specific multicast trees. Therefore, a 1080 graft message is only visible to DVMRP routers on the path from the 1081 new receiver subnet to the source, but in general not to an IMG. To 1082 overcome this problem, data of multicast senders will be flooded in 1083 the overlay as well as in the underlay. Hence, an IMG has to 1084 initiate an all-group join to the overlay using the namespace 1085 extension of the API. Each IMG is initially required to forward the 1086 received overlay data to the underlay, independent of native 1087 multicast receivers. Subsequent prunes may limit unwanted data 1088 distribution thereafter. 1090 B.2. PIM-SM 1092 The following procedure describes a transparent mapping of a PIM-SM- 1093 based any source multicast service to another many-to-many multicast 1094 technology. 1096 The Protocol Independent Multicast Sparse Mode (PIM-SM) [RFC4601] 1097 establishes rendezvous points (RP). These entities receive listener 1098 and source subscriptions of a domain. To be continuously updated, an 1099 IMG has to be co-located with an RP. Whenever PIM register messages 1100 are received, the IMG must signal internally a new multicast source 1101 using updateSender(). Subsequently, the IMG joins the group and a 1102 shared tree between the RP and the sources will be established, which 1103 may change to a source specific tree after a sufficient amount of 1104 data has been delivered.
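The dispatch logic of an RP-co-located IMG can be pictured as a simple mapping from PIM control messages to the two update calls of the API. The enum names below are purely illustrative; only the rule itself (register messages signal new sources, receiver subscriptions signal new listeners) is taken from the text.

```c
#include <assert.h>

/* PIM control messages visible to an IMG co-located with the RP. */
enum pim_msg  { PIM_REGISTER, PIM_JOIN, PIM_PRUNE };
enum api_call { CALL_NONE, CALL_UPDATE_SENDER, CALL_UPDATE_LISTENER };

/* Map a received PIM message to the API call the IMG has to issue:
 * a register announces a new source, a join announces a new listener. */
enum api_call dispatch_pim(enum pim_msg m)
{
    switch (m) {
    case PIM_REGISTER: return CALL_UPDATE_SENDER;
    case PIM_JOIN:     return CALL_UPDATE_LISTENER;
    default:           return CALL_NONE;
    }
}
```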
Source traffic will be forwarded to the RP 1105 based on the IMG join, even if there are no further receivers in the 1106 native multicast domain. Designated routers of a PIM-domain send 1107 receiver subscriptions towards the PIM-SM RP. The reception of such 1108 messages invokes the updateListener() call at the IMG, which 1109 initiates a join towards the overlay routing protocol. Overlay 1110 multicast data arriving at the IMG will then transparently be 1111 forwarded in the underlay network and distributed through the RP 1112 instance. 1114 B.3. PIM-SSM 1116 The following procedure describes a transparent mapping of a PIM-SSM- 1117 based source specific multicast service to another one-to-many 1118 multicast technology. 1120 PIM Source Specific Multicast (PIM-SSM) is defined as part of PIM-SM 1121 and admits source specific joins (S,G) according to the source 1122 specific host group model [RFC4604]. A multicast distribution tree 1123 can be established without the assistance of a rendezvous point. 1125 Sources are not advertised within a PIM-SSM domain. Consequently, an 1126 IMG cannot anticipate the local join inside a sender domain and 1127 deliver a priori the multicast data to the overlay instance. If an 1128 IMG of a receiver domain initiates a group subscription via the 1129 overlay routing protocol, relaying multicast data fails, as data are 1130 not available at the overlay instance. The IMG instance of the 1131 receiver domain thus has to locate the IMG instance of the source 1132 domain to trigger the corresponding join. In the sense of PIM-SSM, 1133 the signaling should not be flooded in underlay and overlay. 1135 One solution could be to intercept the subscription at both source 1136 and receiver sites: To monitor multicast receiver subscriptions 1137 (updateListener()) in the underlay, the IMG is placed on path towards 1138 the source, e.g., at a domain border router.
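The interception step can be sketched in two small helpers: extracting the unicast source address S from an intercepted (S,G) join, and deriving from S's prefix the anycast address under which the source-domain IMG may be reached. The /24 anycast prefix, the fixed host part, and all names are assumptions for illustration, not part of the API.

```c
#include <assert.h>
#include <stdint.h>

/* An intercepted source-specific join carries the channel (S,G). */
struct ssm_join { uint32_t source; uint32_t group; };

/* Extract the unicast source address S from an intercepted join. */
uint32_t source_of(const struct ssm_join *j) { return j->source; }

/* Derive a hypothetical IMG anycast address from the prefix of S,
 * here assuming a /24 anycast prefix and a well-known host part. */
uint32_t img_anycast_of(uint32_t s, uint8_t host)
{
    return (s & 0xFFFFFF00u) | host;
}
```

A receiver-domain IMG would then unicast its join to the derived address, analogous to AMT relay discovery.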
This router intercepts 1139 join messages and extracts the unicast source address S, initializing 1140 an IMG-specific join to S via regular unicast. Multicast data 1141 arriving at the IMG of the sender domain can be distributed via the 1142 overlay. Discovering the IMG of a multicast sender domain may be 1143 implemented analogously to AMT [I-D.ietf-mboned-auto-multicast] by 1144 anycast. Consequently, the source address S of the group (S,G) 1145 should be built based on an anycast prefix. The corresponding IMG 1146 anycast address for a source domain is then derived from the prefix 1147 of S. 1149 B.4. BIDIR-PIM 1151 The following procedure describes a transparent mapping of a BIDIR- 1152 PIM-based any source multicast service to another many-to-many 1153 multicast technology. 1155 Bidirectional PIM [RFC5015] is a variant of PIM-SM. In contrast to 1156 PIM-SM, the protocol pre-establishes bidirectional shared trees per 1157 group, connecting multicast sources and receivers. The rendezvous 1158 points are virtualized in BIDIR-PIM as an address to identify on-tree 1159 directions (up and down). However, routers with the best link 1160 towards the (virtualized) rendezvous point address are selected as 1161 designated forwarders for a link-local domain and represent the 1162 actual distribution tree. The IMG is to be placed at the RP-link, 1163 where the rendezvous point address is located. As source data in 1164 either case will be transmitted to the rendezvous point address, the 1165 BIDIR-PIM instance of the IMG receives the data and can internally 1166 signal new senders towards the stack via updateSender(). The first 1167 receiver subscription for a new group within a BIDIR-PIM domain needs 1168 to be transmitted to the RP to establish the first branching point. 1169 Using the updateListener() invocation, an IMG will thereby be 1170 informed about group requests from its domain, which are then 1171 delegated to the overlay. 1173 Appendix C.
Change Log 1175 The following changes have been made from 1176 draft-waehlisch-sam-common-api-03 1178 1. Use cases added for illustration. 1180 2. Service calls added for inquiring on the multicast distribution 1181 system. 1183 3. Namespace examples added. 1185 4. Clarifications and editorial improvements. 1187 The following changes have been made from 1188 draft-waehlisch-sam-common-api-02 1190 1. Renamed init() to createMSocket(). 1192 2. Added calls srcRegister()/srcDeregister(). 1194 3. Rephrased API calls in C-style. 1196 4. Cleaned up code in "Practical Example of the API". 1198 5. Partial reorganization of the document. 1200 6. Many editorial improvements. 1202 The following changes have been made from 1203 1. Document restructured to clarify the realm of document overview 1204 and specific contributions such as naming and addressing. 1206 2. A clear separation of naming and addressing was drawn. Multicast 1207 URIs have been introduced. 1209 3. Clarified and adapted the API calls. 1211 4. Introduced Socket Option calls. 1213 5. Deployment use cases moved to an appendix. 1215 6. Simple programming example added. 1217 7. Many editorial improvements. 1219 Authors' Addresses 1221 Matthias Waehlisch 1222 link-lab & FU Berlin 1223 Hoenower Str. 35 1224 Berlin 10318 1225 Germany 1227 Email: mw@link-lab.net 1228 URI: http://www.inf.fu-berlin.de/~waehl 1230 Thomas C. Schmidt 1231 HAW Hamburg 1232 Berliner Tor 7 1233 Hamburg 20099 1234 Germany 1236 Email: schmidt@informatik.haw-hamburg.de 1237 URI: http://inet.cpt.haw-hamburg.de/members/schmidt 1238 Stig Venaas 1239 cisco Systems 1240 Tasman Drive 1241 San Jose, CA 95134 1242 USA 1244 Email: stig@cisco.com