INTERNET DRAFT                                                   M. Ohta
draft-ohta-static-multicast-02.txt         Tokyo Institute of Technology
                                                            J. Crowcroft
                                               University College London
                                                               June 1999

                            Static Multicast

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   The current IP Multicast model appears to achieve a level of
   simplicity by extending the IP unicast addressing model
   (historically the classful A, B, and C net numbers) away from the
   mask and longest match schemes of CIDR, with a new classful address
   space, class D.  The routing systems have also been built in a
   deceptively simple way, in one of three manners: either broadcast
   and prune (DVMRP, dense mode PIM), destination list based tree
   computation (MOSPF) or single centered trees (current sparse mode
   PIM and CBT).  The multicast service creates the illusion, for an
   application writer, of a spectrum that one can "tune in to".
   Due to this view, many have seen the multicast pilot service, the
   Mbone, as a worldwide Ethernet, where simple distributed algorithms
   can be used to allocate "wavelengths" and advertise them through
   "broadcast" on a channel (the session directory) associated with the
   spectrum.

   These three pieces of the picture have tempted people to construct a
   distributed architecture for a number of next level services that
   cannot work at more than a modest scale, since they ignore the basic
   spirit of location independence for senders and receivers of IP
   packets, whether unicast or multicast.  The problem is that many of
   these services attempt to group activities at the source, when it is
   only at join time that user grouping becomes apparent (if you like,
   multicast usage is a good example of very late binding).  These
   services include Address Allocation and Session Creation,
   Advertisement and Discovery.

   This memo proposes approaches to solve some current multicast
   problems rather statically, with a DNS and URL based approach, and
   to avoid the misguided pitfalls of trying to use address allocation
   to implement traffic aggregation for different sources, or
   aggregation of multicast route policy control through control of
   such aggregated sources.

   Note that a minor level of aggregation occurs in applications which
   source cumulative layered data (e.g. audio/video/game data - ref
   vic/rat/rlc) - this memo is orthogonal to such an approach, which in
   any case only results in a small constant factor reduction in state.

   A lot of the additional pieces of IP multicast baggage are
   associated with multimedia conferencing on the Mbone - however,
   commercial Internet use of multicast includes many other
   applications, and for these, SDR may not be the best directory
   model.

1. Introduction

   Multicast and related applications have traditionally been developed
   in the Routing and Transport areas.
   Naturally, designers have tried to solve many problems using
   techniques familiar to those working in the routing and transport
   areas, that is, with flooding or multicast.

   Of course, global flooding or multicast does not scale very well,
   which means that scalable solutions relying on these techniques
   alone are often impossible in the worldwide Internet.

   An attempt to reduce the scalability requirement by localizing the
   multicast and flooding area through TTL or administrative scoping
   (intra-site, intra-provider multicast, etc.) works only in a
   small-scale experiment like the Mbone.  In the real Internet,
   senders and receivers of multicast communication, in general, may be
   using different providers and are distributed beyond AS boundaries.

   As a result, there was a hope that address aggregation and unicast
   area topology report aggregation could solve the multicast
   scalability problems in the same way that they bailed out the
   unicast Internet from its problems with the limitations of router
   memory and the capacity needed for route update reports:

   a) Unicast addresses refer to a location.  Multicast addresses,
      however, are logical addresses; they refer to sets of members who
      may be anywhere, and may be sent to by sources which are
      themselves in more than one of many places.  This means that for
      unrelated multicast groups (and we anticipate that, in general,
      groups are related only when they belong to a single application,
      and that there are far more unrelated groups than related ones),
      there is no meaningful allocation at session creation time of a
      mask/prefix style multicast address, either for the destination
      group or for the sources.
   b) To control the amount of state and routing control messages, the
      Internet has divided the routing systems into autonomous
      systems/regions, which can run their own routing, and need only
      report summarized information at the edge to another region.
      This serves two purposes in the unicast world:

      1/ Routing protocols can be deployed that are different in
         different areas (this may be applied recursively).

      2/ Summarization can be applied at "min-cut" points in the
         topology, and only reachability information needs to be
         exported/imported across borders.

   Note that autonomous system boundaries are merely for the
   operational purpose of easy policy description.  The boundary itself
   does not contribute, as a protocol issue, to reducing the amount of
   routing information, which can be accomplished with multi-layered
   OSPF without BGP.

   With multicast, while one could define inter-working boundaries and
   functions as the IDMR WG has, the principal goal of scaling the
   reports at a border cannot be achieved in a location independent
   manner (in the sense that, without moving all the receivers to a
   particular region, no aggregation is feasible).

   As a result of this confusion, intra-domain multicast protocols,
   which are expected to operate within a single AS, have been
   developed that scale poorly, even though there was no known
   inter-domain multicast protocol which solves the scalability
   problem.

   It has been shown [MANOLO] that aggregation of multicast routing
   table entries, the number of which is a major scalability problem
   for IP multicast, is, in general, impossible.

   The impossibility proof assumes nothing about QoS.  That is,
   multicast QoS flow state can be aggregated exactly as well (or as
   badly) as multicast best effort communication state.
   RSVP may be extended to aggregate RSVP requests of strongly
   interrelated flows, for example, for streams with layered encoding,
   which may or may not share a single multicast address; the latter
   case may result in a small constant factor reduction in routing
   table entries.

   There may be a counter argument that broadcast/prune within a region
   (== big ether) and sparse mode in other regions can overcome the
   problem for clumpy distributions of receivers.  However, forwarding
   for "sparse mode in other regions" needs a routing table entry of
   its own.  Moreover, even within the region, "broadcast/prune" scales
   worse than the theoretical lower bound of PIM-SM/CBT [MANOLO].

   The impossibility of multicast routing entry aggregation applies to
   the entire Internet and to any small part of it, such as a single
   subnet.  Still, in a small part of the Internet containing 4 parties
   (hosts, subnets, subdomains, etc.), there are only 2^4 = 16 possible
   multicast forwarding patterns, so that, in a sense, multicast
   routing table aggregation is possible, if multicast addresses can be
   assigned according to the distribution pattern of the receivers.
   So, there may still be some misunderstanding that in multi-access
   link layers, such as Ethernet or ATM, where link local addresses can
   be assigned dynamically, multicast forwarding state may be
   aggregated.  However, at the boundary, a full routing table look-up
   based on the IP addresses is necessary.  Worse, if the part contains
   16 parties, the possible patterns of receivers number 2^16 = 65536,
   so that virtually no aggregation is possible.  MPLS hype is, as
   usual, exactly as good (or as bad) as using some multi-access link,
   and does not help here.

   Thus, it is now necessary to thoroughly reconsider the architecture
   of multicast.  Given a theoretical lower bound on multicast routing
   table entries, now is the time to find a multicast algorithm that
   achieves that lower bound.
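   The counting behind the 16 and 65536 figures above is simple
   arithmetic; the following sketch (purely illustrative, not part of
   any protocol) makes it explicit:

```python
# Each of n parties (hosts, subnets, subdomains, ...) either receives
# a given multicast or does not, so the number of distinct receiver
# sets -- and hence of potentially distinct forwarding patterns a
# border router may have to represent -- is 2**n.

def forwarding_patterns(n_parties):
    """Number of distinct receiver subsets among n parties."""
    return 2 ** n_parties

print(forwarding_patterns(4))   # 16: aggregation is conceivable
print(forwarding_patterns(16))  # 65536: virtually no aggregation
```

   The exponential growth is the whole argument: the number of patterns
   outruns any fixed-length address prefix almost immediately.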
   It is also meaningful to make the multicast architecture independent
   of the unicast address hierarchy.

   Fortunately, some problems can easily be solved for many common
   cases using techniques available in other areas without scalability
   problems.

   Since the legacy multicast architecture was constructed carefully on
   the assumption that routing table aggregation is possible, it is
   necessary to change some of it to deploy new techniques.  To solve
   hard scalability problems, it is necessary to recognize that all the
   details of all the protocols are tightly interrelated.

   The multicast problems identified in this memo as better solved in
   the internet or application areas are:

   Multicast Address Allocation

      There was a proposal to allocate multicast addresses dynamically
      along the unicast address hierarchy.  Such an allocation policy
      was expected to enhance the possibility of aggregation.  However,
      as shown in the next section, it is impossible to aggregate a
      class D multicast routing table using simple mask and longest
      match type approaches.  Thus, while it is still possible to
      aggregate multicast address allocations, it is not meaningful.

      There is a misunderstanding that multicast addresses are a scarce
      resource that must be assigned dynamically.  But, first of all,
      dynamic assignment does not mean efficient assignment.  Secondly,
      as a multicast routing table cannot be aggregated, the limitation
      on routing table size in many of today's routers is such that we
      simply will run out of memory on a router before we run out of
      addresses, or even use a significant piece of the class D address
      space: considering the current global unicast routing table size,
      2^16 global multicast addresses are more than enough.

      Given these arguments, we can see that, on performance grounds,
      it is meaningful to allocate multicast addresses statically
      through the DNS.
   Multicast Core/RP Location

      CBT and PIM-SM were developed as intra-domain multicast protocols
      designed to be independent of the underlying unicast routing
      protocols.  Naturally, they achieve the lower bound of spatial
      routing table size complexity.  However, CBT and PIM-SM are not
      totally independent of the unicast routing architecture, since
      they depend on flooding within an AS to locate the core or
      rendez-vous point.  While this scales a little better than static
      assignment, it is still fairly bad.  On the other hand, it is
      straightforward to use DNS to map from a multicast DNS name to
      the multicast address, core and RP.  This solution was not an
      option while dynamic multicast address assignment was a MUST and
      DNS dynamic update was not possible.  However, this is now
      rectified, since DNS dynamic update is being implemented.

   Multicast Session Announcement

      The announcement of multicast sessions can be performed over a
      special multicast channel.  However, this approach does not scale
      as the number of multicast channels increases.  Of course, it is
      possible to introduce a hierarchy of multicast session
      announcement channels.  The complex structure of the real world
      makes the relationships between session announcements a complex
      problem.  In such a system, users would join a session directory
      hierarchy by joining a group at some level, following the
      hierarchy, short-cuts or links, changing between several
      multicast groups to reach the final destination multicast for the
      session they seek.  But, as is shown later, multicasting costs
      routing table entries and associated protocol processing power in
      the routers over which the multicast data flows.  Hence it is
      desirable to constrain the number of multicast channels to be as
      small as possible.
      If, instead, we use the WWW as an EPG (Electronic Program Guide)
      and embed SDP or SMIL information in RTP URLs, it can be used as
      a multicast session declaration with an arbitrarily complex
      structure, including hierarchy, short-cuts or links, and we can
      use search techniques on this static data more easily.

   Of course, neither DNS nor WWW scales automatically: they must
   continue to scale anyway, a lot of effort has been and will continue
   to be spent to make them scale better, more dynamically and more
   securely, and their servers are also becoming more capable (caching
   etc.).  DNS will be used for unicast name to address lookup for the
   foreseeable future, just as WWW will be the preferred way to
   retrieve information.

2. Meaningless Aggregation of Multicast Addresses

   It is, in general, impossible to aggregate Internet standard
   multicast routing table entries.

   The minimum amount of state in each multicast router must be
   proportional to the number of multicast data flows running over it.

   The locations of the receivers differ from one multicast application
   to another.  Multicast forwarding must be performed over a tree from
   each source (or from the core/RP) to the receiver set for each and
   every application.  The sources are different too.  Thus, the tree
   differs from multicast to multicast.

   It is possible to aggregate multicast address allocation by making
   multicast location dependent with, say, a root domain.  Then it is
   possible to aggregate routing table entries toward the root domain.
   For some central set of agencies (traditional broadcast TV/radio) it
   might be possible to site their feeds at the same places in the
   Internet.  But this is antithetical to the arbitrary growth allowed
   by the random siting/evolution of content providers today, even in
   the Web.  Sheer numbers preclude building unicast pipes from each
   source to a central set of sites.
   However, it is still impossible to aggregate routing table entries
   toward the receivers.  The distribution pattern of the receivers is
   unrelated to the location of the root domain.  That is, a separate
   routing table entry is necessary for each multicast application.

   A group of multicast receivers sharing a root domain may still have
   weak relationships, in that most of them do not have any members in
   domains far from the root domain.  Then, it is possible to share a
   default routing table entry that forwards nothing.  But such an
   entry is meaningless, because no data packet will ever be forwarded
   by that entry, and we still need unaggregated routing table entries
   for each multicast running over the multicast routers.

   Alternatively, it is possible to assign multicast addresses
   aggregated according to the statically or dynamically detected
   distribution pattern of the receiver hosts, areas or domains.
   However, even with 32 receiver hosts, areas or domains, we need 32
   bits for the aggregation prefix of the multicast addresses, which is
   too many for IPv4.  Even the IPv6 address space does not help a lot
   (96 receivers is not a great step forward!).  Moreover, as the
   multicast membership changes dynamically, the multicast address
   itself must change dynamically.

   In other words, if we stay in line with the current model of
   Internet standard multicast, it is impossible to aggregate multicast
   routing table entries, and it is meaningless to try to aggregate
   multicast address assignment.

   It is, of course, meaningful and necessary to delegate multicast
   address allocation hierarchically.

3. The Difficulty of (Multicast) Address Assignment

   Compared to the administrative effort for unicast address assignment
   by IANA, InterNIC, RIPE, APNIC and all the country NICs, and the
   development of the policies they use, it is trivially easy to
   develop a DHCP protocol.
   The difficulty with DHCP lay in the fact that the clients cannot be
   reached by their IP addresses.  In the absence of this bootstrap
   problem, it is trivial to develop a DHCP-like dynamic multicast
   address assignment protocol for clients whose unicast addresses are
   already established.  It could be as simple as a new option field of
   DHCP.

   However, such a use of DHCP is meaningless, unless an administrator
   of the DHCP server has been delegated a block of addresses and
   establishes a policy on how to assign them to clients.  We argue
   that the DHCP-like mechanism for multicast is a misdirected
   solution.

   Basically, multicast address assignment is not a protocol issue.

4. Recycling the Unicast Policy, Mechanism and Established Address
   Assignment for Multicast Policy, Mechanism and Address Assignment

   If a rather static allocation of multicast addresses is acceptable,
   it is possible to reuse the policy, mechanism, address assignment
   and protocol of unicast address assignment for multicast addresses.

   For example, if we decide to use 225.0.0.0/8 for the static
   allocation, it is trivial to delegate the authority over multicast
   address 225.1.2.3 to the administrator of 3.2.1.in-addr.arpa, that
   is, the administrator of 1.2.3.0/24.

   We can simply define that the multicast DNS name should be looked up
   as:

      3.2.192.225.in-addr.arpa.    CNAME  mcast.3.2.192.in-addr.arpa.

      mcast.3.2.192.in-addr.arpa.  PTR    bbc.com.

      bbc.com.                     A      225.192.2.3

   Additionally, if we construct applications that check the reverse
   mapping, unauthorized use of multicast addresses will be
   automatically rejected, which is what we are doing today with
   unicast addresses.

   Note that the administrator of 3.2.192.in-addr.arpa is not
   necessarily the final person to be delegated the address, but can
   further delegate the authority over mcast.3.2.192.in-addr.arpa. to
   someone else.
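   The delegation rule above is mechanical enough to compute.  Here is
   a minimal sketch, assuming the 225.0.0.0/8 convention described
   above (the helper names are ours, not part of any standard):

```python
def reverse_name(addr):
    """Reverse-map DNS name of an IPv4 address,
    e.g. 225.192.2.3 -> 3.2.192.225.in-addr.arpa."""
    return ".".join(reversed(addr.split("."))) + ".in-addr.arpa"

def delegated_zone(mcast_addr):
    """Zone delegated to the administrator of the matching unicast /24
    under the scheme above: 225.X.Y.Z -> mcast.Z.Y.X.in-addr.arpa."""
    first, *rest = mcast_addr.split(".")
    if first != "225":
        raise ValueError("sketch covers only the 225.0.0.0/8 proposal")
    return "mcast." + ".".join(reversed(rest)) + ".in-addr.arpa"

print(reverse_name("225.192.2.3"))    # 3.2.192.225.in-addr.arpa
print(delegated_zone("225.192.2.3"))  # mcast.3.2.192.in-addr.arpa
```

   With these names in place, the CNAME in the example simply points
   reverse_name(addr) at delegated_zone(addr), and a reverse-mapping
   check walks the same chain.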
   It should also be noted that, while the delegation uses the existing
   policy, mechanism, assignment and protocol, this does not mean that
   the multicast address must be used within the unicast routing domain
   of the unicast address block.

   Just as MX servers or name servers can be located anywhere in the
   Internet regardless of the location of the hosts under the DNS
   domain they are serving, multicast channels can be used anywhere in
   the world.

   The assignment policy automatically assures global uniqueness.

   However, it is still possible to have multicast addresses with local
   scopes, as long as they share globally unique well-known DNS names,
   which is what we are doing for intra-subnet multicast with
   IANA-assigned well-known names [IANA].

5. Core/RP Location

   Locating the core of CBT or the rendez-vous point of PIM-SM through
   DNS is straightforward:

      bbc.com.  A    225.192.2.3
                RVP  london-station.bbc.com.

   or

      bbc.com.  A     225.192.2.3
                CORE  london-station.bbc.com.

   Again, just as MX servers or name servers can be located anywhere in
   the Internet regardless of the location of the hosts under the DNS
   domain they are serving, a core or rendez-vous point can be located
   anywhere in the world.

   CORE and RVP RRs have exactly the same syntax as the PTR RR.  Their
   query type values are .  Neither the current CBT nor PIM-SM allows a
   single multicast group to have multiple cores or rendez-vous points,
   though future extensions may.  Nevertheless, at the DNS level, a
   single node may have multiple CORE or RVP RRs.  That is, the
   following DNS node is a valid node:

      bbc.com.  A    225.192.2.3
                RVP  london-station.bbc.com.
                RVP  wales-station.bbc.com.

6. Session Announcement
   The proposal is essentially to use an RTP URL combined with SDP,
   like:

      rtp://london-station.bbc.com/?t=2873397496+2873404696&
      m=audio+3456+RTP/AVP+0&m=video+2232+RTP/AVP+31

   The URL contains all the information necessary to establish a
   session, including the domain name (or multicast address), port
   number(s), RTP payload types and optional QoS requirements.

   Then, users surfing the WWW can actively search for, or randomly
   encounter, some multicast or unicast RTP URL.

   If the user clicks on the anchor of such a URL, the user will be
   asked whether he wants to receive (which should be the default for
   multicast), send data, or both (which should be the default for
   unicast).  He will also be asked for the source or destination of
   the data, with an appropriate default (the TV in his living room),
   and the multicast session begins, if necessary with RSVP.

7. Policy Considerations

   Some people wrongly believe that the separation of multicast routing
   into the classical intra- and inter-domain functions is necessary
   for multicast policy control.

   Multicast policy is controlled by controlling the forwarding
   direction of multicast control or data packets, which is in turn
   controlled by the unicast routing table.

   If separate control of multicast and unicast policy is desired, what
   is necessary is to instantiate two sets of unicast routing tables,
   one used for normal unicast routing and the other consulted by
   multicast protocols.

   While there may be intra- and inter-domain unicast routing and
   inter-domain multicast forwarding policy, we don't need a novel
   protocol to disseminate policies in some multicast-specific way for
   inter- and intra-domain multicast policy routing.  Forwarding is
   forwarding.

8. References
   [MANOLO] Sola, M., Ohta, M. and T. Maeno, "Scalability of Internet
            Multicast Protocols", Proceedings of INET'98,
            http://www.isoc.org/inet98/proceedings/6d/6d_3.htm, July
            1998.

   [IANA]   For now, http://www.iana.org/

   [CBT]    Ballardie, A., "Core Based Trees (CBT version 2) Multicast
            Routing", RFC 2189, September 1997.

   [PIM]    Estrin, D., Farinacci, D., Helmy, A., Thaler, D., Deering,
            S., Handley, M., Jacobson, V., Liu, C., Sharma, P. and L.
            Wei, "Protocol Independent Multicast-Sparse Mode (PIM-SM):
            Protocol Specification", RFC 2117, June 1997.

9. Security Considerations

   For routers to merge multiple JOIN requests, which may contain
   different (forged or wrong) Cores/RPs for the same group, and to
   forward the JOIN to the true Core/RP, the routers must be able to
   look up the Core/RP from the group address by themselves, with some
   (weak or strong) security.

   The Core/RP information cannot be flooded in advance, for the
   obvious scalability reason; hence the look-up must be on demand.

   Because of this, e-mail, WWW or SDR cannot be used to look up the
   Core/RP, and DNS is the easiest way.

10. Authors' Addresses

   Masataka Ohta
   Computer Center
   Tokyo Institute of Technology
   2-12-1, O-okayama, Meguro-ku
   Tokyo 152, JAPAN

   Phone: +81-3-5734-3299
   Fax: +81-3-5734-3415
   EMail: mohta@necom830.hpcl.titech.ac.jp

   Jon Crowcroft
   Dept. of Computer Science
   University College London
   London WC1E 6BT, UK

   EMail: j.crowcroft@cs.ucl.ac.uk