Internet Engineering Task Force                               R. Perlman
INTERNET DRAFT                                          Sun Microsystems
                                                                 C-Y Lee
                                                                  Nortel
                                                            A. Ballardie
                                                     Research Consultant
                                                            J. Crowcroft
                                                                     UCL
                                                             August 1998

             A Design for Simple, Low-Overhead Multicast

Status of This Memo

   This document is an Internet Draft. Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups. Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time. It is not appropriate to use Internet
   Drafts as reference material, or to cite them other than as a
   ``working draft'' or ``work in progress.''

   Please check the I-D abstract listing contained in each Internet
   Draft directory to learn the current status of this or any other
   Internet Draft.

Abstract

   This paper describes how much of the complexity and overhead of
   multicast can be eliminated if a slightly different approach is
   taken. Approaches like this have been proposed in the past, but the
   design has not been carried through to completion. This paper
   describes the approach and then compares it with other approaches.

1.0 Introduction

   The basic idea is that a multicast group is created by generating:

   - a name, suitable for lookup by humans,

   - a multicast address (i.e., a class D IP multicast address),

   - a distinguished node known as the "core".

   Endnodes look up the group through some sort of directory, e.g., DNS
   or SDR. The look-up is based on the name, but the information
   retrieved is the multicast address and the core address.

   Endnodes then join the group by sending a special join message
   towards the core, creating state in the routers along the path. The
   result is a tree of shortest paths from the core to each member,
   where only the routers along the tree need to have any state about
   the group.
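   As a purely illustrative picture of the directory step (the names,
   addresses, and lookup interface below are hypothetical, and Python
   is used only as sketch notation):

      # Illustrative sketch only: the directory (e.g., DNS or SDR) maps
      # a human-readable group name to (multicast address, core address).
      from collections import namedtuple

      GroupRecord = namedtuple("GroupRecord", ["mcast_addr", "core_addr"])

      directory = {
          # hypothetical entry: class D address plus the core's unicast address
          "example-staff-meeting": GroupRecord("224.5.6.7", "192.0.2.1"),
      }

      def lookup(name):
          # Look-up is by name; the result is everything a joiner needs.
          return directory[name]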
1.1 Very Simplified Review of PIM-SM and BGMP

   Originally there was the idea, presented in CBT, of forming a
   multicast group by choosing a distinguished node, the "core", and
   having all members join by sending special join messages towards the
   core. The routers along the path keep state about which ports are in
   the group. If a router along the path of the join already has state
   about that group, the join does not proceed further. Instead that
   router just "grafts" the new limb onto the tree. The result is a
   tree of shortest paths from the core, with only the routers along
   the path knowing anything about that group.

   Later modifications included:

   - "optimizing" the tree by forming a tree for each sender. The way
   this is done, each node could independently decide whether the
   volume of traffic from a particular source justified joining a
   per-source tree. The result was that there were two possible trees
   for traffic from a particular source for group M, the shared tree
   and the source tree. To prevent loops, the shared tree had to be
   unidirectional, i.e., to send to the shared tree, you'd have to
   unicast to the core.

   - the assumption that routers have to be able to figure out, solely
   based on the multicast address, which node is the core. This
   resulted in a protocol whereby "core-capable" routers continuously
   advertise themselves, all routers keep track of the current set of
   live core-capable routers, and there is some sort of hash to map a
   multicast address to one of the set of core-capable routers. This
   advertisement protocol is confined to within a domain.

   - Because the core-capable advertisements (mercifully) do not travel
   through the entire Internet, but are instead confined to a domain,
   there needed to be a protocol whereby routers could know the mapping
   of multicast address to core, even in other domains. The proposal
   (BGMP) was to have each domain acquire (somehow; this is a current
   area of research) a set of addresses. These addresses would be
   advertised through the interdomain routing protocol. Therefore they
   had to be allocated in blocks, so as not to place too much of a
   burden on the interdomain routing protocol.

   We use the term "aggregatable" to mean that the routing protocol can
   summarize large numbers of addresses in a small amount of space.

2.0 The Design

   The design we propose is most similar in spirit to the original idea
   of CBT, because it does not have:

   - per-source trees

   - the necessity for routers to know the mapping between multicast
   addresses and cores

2.1 Address Allocation

   The only constraint on the multicast addresses in our proposal is
   that they be unique in time and space. There is no need for the
   addresses to be aggregatable, since they will not be advertised in
   routing protocols. It is far simpler to allocate addresses, and you
   can make do with a smaller number of them, if there is no further
   constraint on the addresses than that they be unique in time (and
   space).

   The way to allocate addresses is to have a bunch of address
   allocation servers sprinkled throughout the Internet, each with a
   block of addresses. Anyone who wants to form a group finds any one
   of the address allocation servers, asks for an address, and gives an
   amount of time, say 1 day. After 1 day (plus a handwave for clock
   skew, etc.) that address can be reallocated to someone else.
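   The bookkeeping at one such server is simple. Here is a minimal
   sketch (Python used as illustration; the request/response formats,
   server discovery, and what happens when the block is exhausted are
   all outside the scope of this draft):

      import time

      class AddressAllocationServer:
          # Sketch of a server that leases addresses from its block.
          def __init__(self, block):
              self.free = list(block)    # addresses not currently leased
              self.leases = {}           # address -> lease expiry time

          def allocate(self, lifetime, skew=3600):
              # Recycle leases that expired at least `skew` seconds ago
              # (the "handwave for clock skew" mentioned above).
              now = time.time()
              for addr, expiry in list(self.leases.items()):
                  if now > expiry + skew:
                      del self.leases[addr]
                      self.free.append(addr)
              addr = self.free.pop()     # a real server would request a
              self.leases[addr] = now + lifetime  # new block if it ran out
              return addr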
   There can be a hierarchy of these servers. At the top, there might
   be 10 of them, each with 1/10 of the address space. Anyone could ask
   one of them for a single address, or for a block of addresses. If
   you want to be a server handing out addresses, you can request a
   block of addresses, and then clients can request individual
   addresses from you. If you run out, you can ask for more.

   Having a hierarchy with lots of servers, rather than a few servers
   with millions of addresses each, has the following advantages:

   - no performance bottlenecks

   - it's likely a server will be nearby

   - servers don't have to keep track of millions of individual
   addresses, each one with a different timer as to when it can be
   reallocated.

   Aggregation wastes addresses: if you overestimate your needs, the
   surplus sits idle, and if you don't overestimate, you will run out
   and have to acquire more blocks, which will cause fragmentation of
   the space (more intervals to have to advertise). Since in our scheme
   the addresses will not get advertised in any routing protocol, there
   is no necessity for them to have any topological significance or any
   other constraint that would waste addresses. Therefore there will
   effectively be more addresses, and less likelihood that they will
   "run out".

   If we are worried about running out, even though in our proposal
   addresses do not need to be aggregatable, then we could reserve some
   to be limited in scope, say within a domain. If you wanted to create
   a group where you know that all members are in that domain, you'd
   get one of the "local" addresses. Since the same block of local
   addresses can be used simultaneously within different domains, this
   yields many more usable addresses. Routers at the domain boundary
   should set up filters to make sure that packets with local addresses
   don't "leak" out. This concept is known as "administratively scoped
   addresses".

2.2 Creating the Multicast Group

   There should be a "multicast group creation" utility.

   To create a group you pick a name that humans will recognize the
   group by. You input that into the multicast group creation utility,
   along with information about how long the group will live, and a
   logical choice for the core address. If no core is specified, the
   default is to have the node that created the group be the core. The
   utility finds an address allocation server, gets an address, and
   then registers the group and core address in the directory.

2.3 Joining a Group

   To join a group, you browse the directory to find the appropriate
   name, and then tell a "multicast join" utility that you'd like to
   join that group. The utility looks up the multicast address and core
   address.

   Then it sends a special control packet, known as a "join" message,
   from your node. This message gets sent in the same direction as a
   unicast message to the core address would be sent, but it creates
   state in the routers along the path, recording which ports are in
   the group. If a router receives a join for multicast address M, and
   it already has state for M, then it merely adds the arriving port to
   its set of ports for M, and does not forward the join further.

   The result is a tree of shortest paths from the core to each member.
   Each router on the tree has a database of (M, {ports}) that tells
   it, for group M, which ports are in the tree.
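   The per-router join handling just described fits in a few lines. The
   sketch below is illustrative only; the packet format and the
   routing-table lookup `next_port_toward` are assumptions, not
   something this draft specifies:

      class MulticastRouter:
          # Sketch of per-group state: multicast address M -> set of ports.
          def __init__(self, next_port_toward, send_join):
              self.state = {}                           # M -> {ports}
              self.next_port_toward = next_port_toward  # unicast routing lookup
              self.send_join = send_join                # emit a join on a port

          def receive_join(self, M, core_addr, in_port):
              if M in self.state:
                  # Already on the tree for M: graft the new limb here
                  # and do not forward the join any further.
                  self.state[M].add(in_port)
              else:
                  # Create state and forward the join towards the core.
                  # (The port towards the core is part of the tree, which
                  # is what makes the tree bidirectional; see section 2.4.)
                  out_port = self.next_port_toward(core_addr)
                  self.state[M] = {in_port, out_port}
                  self.send_join(M, core_addr, out_port)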
2.4 Transmitting to Multicast Group M

   If you are a member of the group, you simply transmit an IP packet
   with destination address M. A router that receives a packet with
   multicast address M checks its multicast database. If it has no
   state for M, or the port the packet arrived on is not in its set of
   ports for M, it drops the packet. Otherwise, it forwards the packet
   onto all the other ports listed in its database for M. The result is
   a bidirectional tree.

   If you are not a member of the group but want to transmit to the
   group, you unicast to the core.

2.5 Interdomain Groups

   Everything described above works just as well in the interdomain
   case. You join a group by sending a join message to the core
   address. No matter what domain the core is in, it has an IP address,
   and IP unicast routing already knows how to reach it.

2.6 Backbones

   There might be concern that a backbone ISP would need to keep state
   for billions of groups. It is extremely unlikely that so many groups
   would exist, especially over such a wide geographic area. However,
   if there really is that concern, the solution is tunnels between
   boundary routers in the backbone. A tunnel can be thought of as a
   point-to-point link between the two routers. Traffic is sent across
   the tunnel by adding an additional IP header, with the destination
   address in the outer header being the address of the other tunnel
   endpoint.

   The simplest strategy is to assume a full mesh of tunnels between
   every pair of boundary routers in that backbone. So for instance, if
   there are 100 boundary routers for that backbone, each would
   maintain 99 tunnels, one to each of the 99 other backbone boundary
   routers. Each of these tunnels is treated as a "port".

   When receiving a join, backbone boundary router R1 checks its
   routing database to see which backbone boundary router leads to the
   core address, say R7. It then forwards the join along the tunnel it
   maintains between itself and R7. In its multicast group state, it
   makes an entry for the multicast address M, and the set of ports
   (including the tunnel) included in the tree for M.

   In this way the interior nodes of the backbone do not keep any state
   about individual groups.

   It is not necessary to maintain all the tunnels. They can be formed
   on an as-needed basis. In fact, there is no reason for a tunnel to
   be "maintained" at all. If R1 determines that multicast M should be
   forwarded on a tunnel to R7, all it has to do is remember "R7", and
   when it receives a packet with destination address M, it tunnels it
   to R7.
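   Continuing the illustrative MulticastRouter sketch from section 2.3,
   the forwarding rule of section 2.4 is equally small, and nothing in
   it changes when a "port" happens to be a tunnel (the `send`
   primitive is assumed):

      def forward_data(self, M, in_port, packet):
          # Forwarding rule: accept a packet only from a port on the
          # tree for M, then flood it to every other port on that tree.
          ports = self.state.get(M)
          if ports is None or in_port not in ports:
              return                  # no state for M, or off-tree port: drop
          for port in ports - {in_port}:
              # A "port" may be a physical link or a tunnel to another
              # boundary router; the logic is identical either way.
              self.send(packet, port)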
2.7 Per-Source Trees

   Don't bother. The most logical thing to optimize is the overhead to
   the network of delivering a packet through the tree. The ideal tree
   is a minimum-weight spanning tree, but no proposals have attempted
   to calculate one. It is most likely too difficult to do so, though a
   simple algorithm for creating a minimum-weight or near-minimum-
   weight tree would certainly be a welcome advance in IP multicast.

   A per-source tree does not use less overhead to deliver the message
   than the tree rooted at the core. The only respect in which a source
   tree is more optimal than the shared core tree is the delay from the
   time the source transmits to the time everyone receives the message.
   It is unlikely that an application so delay-sensitive that it would
   work with a per-source tree but not with the shared core tree would
   work at all if there are members located any significant distance
   away from the source.

   If there were an application that happened to be in a topology where
   all members are close enough if per-source trees are used, but not
   close enough if the shared tree is used, then it is certainly
   possible to explicitly form multiple trees in that case. But this is
   an extremely rare scenario, and we should not require the network to
   keep n times as much state for all groups, with a mind-bogglingly
   complex protocol for switching from shared to per-source trees, for
   the convenience of the very rare case.

2.8 Detecting Failure of the Tree

   This can be done with a simple scheme of keep-alive messages, sent
   when there has been no traffic for some amount of time. That way
   failures can be detected and corrected when they occur.

   If a router stops receiving keep-alives from the port away from the
   core, it removes that port from the tree. If it stops receiving
   keep-alives from the port towards the core, it rejoins the tree. If
   a member stops receiving keep-alives, it rejoins the tree.

2.9 Detecting Failure of the Core

   Ideally, the node selected as the core should be robust, or a
   redundant server can be installed as a hardware backup core.

   There is no single best way to deal with core failures, because the
   choice depends on engineering tradeoffs among:

   - speed of recovery after a core failure

   - state in routers

   - bandwidth use

   To give an example of the tradeoffs, consider an extreme example of
   an application for which every packet must be delivered without
   delay. Such an application cannot tolerate waiting for unicast
   routing to notice a link failure and reroute, or for a reaction to
   the lack of receipt of keep-alive messages. In this case, the ideal
   solution is to consciously create multiple groups and transmit every
   message on all of the trees. This uses a lot of bandwidth, since all
   data is sent on multiple trees, but there is no other way to
   accommodate an application that must have that degree of
   availability.

   For applications that can tolerate some amount of time after a
   failure to rebuild a new tree, there can be a backup group created,
   and nodes would join the backup group if the core for the original
   group is unreachable.

   In the case where it is reasonable to have state in the routers, the
   backup group could be formed proactively, but not used for data
   unless the first group fails. The routers would have to maintain
   state about two groups, and perhaps incur a modest bandwidth
   overhead for keep-alives to maintain the health of the backup tree,
   but there would be no need to send all data on both trees, as there
   is in the first example application.

   Decisions about whether to create multiple groups proactively can be
   made on a per-application basis.

3.0 FAQs and Answers

3.1 What if the core is an endnode? Endnodes can't forward packets.

   A tree is a tree. It doesn't matter who the core is. Once the tree
   is formed, the traffic pattern is the same no matter which node is
   the core. If an endnode has only a single link to the network, it
   will never forward traffic.
   If it receives traffic on that link, it does not have any "other"
   links to forward the traffic to, so the single-link core just
   receives or generates multicast like any other endnode would.

3.2 How should the core be chosen for an optimal tree?

   If the core is a member of the group, the tree will be reasonably
   optimal, and certainly from the viewpoint of how expensive it will
   be for the network to deliver the packet, the core tree is as likely
   to be good as any per-source tree.

3.3 IP is already deployed in the endnodes. How can we change all those
    kernels?

   There is no need to change the kernels. This can be deployed as an
   application-layer process which sends the special IP packet that is
   the join message. There is no need to modify IGMP; in fact, IGMP is
   not needed for this proposal. IGMP is still useful, as is, for
   dense-mode multicast, and this proposal does not require removing or
   modifying IGMP. So there are no difficult kernel modifications to
   the IP stack as a result of this proposal.

3.4 Won't BGMP allow policy control, somehow?

   There is a belief that since BGP allows routing policies, BGMP will
   somehow allow policies, though exactly what it is supposed to do and
   how it would do it is not well thought out at this point.

   We claim that each border router can be individually configured with
   policies which it can individually enforce, and accomplish anything
   that might have been accomplished with BGMP. This can be done with
   the same sorts of packet filters that are already used in firewalls.

4.0 Security/Policy Considerations

   This section discusses various problems we might be trying to solve,
   and proposed solutions.

4.1 Hiding Data from Unauthorized Receivers

   A motivation, perhaps, is to limit delivery to those receivers that
   have "paid their bill".

   One method of doing this is to try to ensure that the routing
   infrastructure does not deliver data to unauthorized receivers. This
   is not the right way to do it. The right way to hide data from
   unauthorized receivers is to encrypt the data, making sure that only
   authorized receivers are given the key. Then there is no reason to
   have the routing infrastructure attempt to prevent unauthorized
   nodes from receiving the transmitted data, since the data will do
   them no good (it will be encrypted).

   There has recently been work on key distribution for multicast, and
   there are schemes for efficiently changing the shared group key
   periodically, or when a new member joins (if it is not allowed to
   see previous data), or when a member leaves (if it is not allowed to
   see data after it leaves). These schemes are interesting, but rather
   than describing them here, for the purposes of this paper we can
   assume that distributing a shared secret key to all authorized
   receivers is a solved problem.

   The basic idea is that there is a group moderator that a member has
   to check in with, first authenticating and then proving somehow that
   it is authorized to join the group. Then the group moderator gives
   that new member the key.

   In the recent work on multicast key distribution, key changes are
   made very efficient by having a hierarchy of keys and giving each
   member log(n) keys, so that changing the group key involves only
   log(n) work, rather than having to individually contact each member
   and give it a new group key.
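   To make the log(n) arithmetic concrete, here is an illustrative
   sketch of such a key hierarchy (a binary key tree in heap-style
   numbering; this is a generic rendering of the idea, not any specific
   published scheme):

      import math

      def keys_held_by_member(i, n):
          # In a binary key tree over n members, member i holds the keys
          # on the path from its leaf to the root: about log2(n) + 1 keys.
          depth = math.ceil(math.log2(n))
          node = 2 ** depth + i       # leaf number in heap-style numbering
          path = []
          while node >= 1:
              path.append(node)       # each node number names one key
              node //= 2
          return path

      # Example: with n = 1024 members, each member holds 11 keys, and a
      # member's departure requires replacing only those 11 keys rather
      # than contacting all 1024 members individually.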
4.2 Preventing Unauthorized Transmitters from Cluttering the Group with
    Data

   The basic solution is to have authorized transmitters "sign" the
   packet (this need not be a public-key signature), and to have
   various filter points reject unauthorized packets.

   There are three possible scenarios:

   a) all authorized group members are trusted to transmit, or at least
   not to transmit when they should not.

   In this case, the single shared group secret key will suffice. Each
   packet can include a MIC (message integrity code), for instance a
   keyed message digest computed with the group secret. Anyone who
   knows the group secret can verify the MIC and discard packets that
   fail the check. If routers don't check the MIC, then receivers will
   receive the spam, but will be able to recognize it as bogus and
   discard it. If some selected routers are given the shared key (say,
   the firewall at the entrance to your domain), then they can discard
   bogus packets before the packets clutter the bandwidth of your
   domain.

   b) there are "receiver-only" members that are not trusted to refrain
   from transmitting, but they need to be able to recognize bogus
   packets injected by unauthorized transmitters.

   In this case, we can't base the MIC on a secret key known to the
   receivers. Instead we'd use public key technology. All the members,
   and any routers that will be doing filtering, are given the
   "authorized transmitter public key". All authorized transmitters are
   given the authorized transmitter private key. They digitally sign
   packets, and receivers and selected routers can recognize and
   discard bogus packets.

   c) there are authorized transmitters, but you don't want them to be
   able to impersonate each other.

   In this case, we can't have them all use the same private key.
   Instead, we'd still have a single group public key that would need
   to be distributed to all members and selected routers, but each
   authorized transmitter would use an individual private key to sign
   packets. There would be a group moderator that knows the group
   private key. It would authorize a node to be a transmitter by
   signing a certificate authorizing that transmitter's key to transmit
   packets to this group.

   In all of these cases, packets can be filtered, either at the
   receivers or earlier. If routers do not have the bandwidth to check
   every packet for digital signatures, then they can "spot check", and
   only start paying close attention to a particular multicast group's
   messages if they start seeing bogus packets. Also, if there are
   multiple routers, they can share the load, and there are several
   variants of this:

   - each router checks a random percentage of packets. In this way,
   most bogus packets will have been discarded before they reach the
   receivers.

   - the routers along the path coordinate as to which packets each
   will handle. For instance, there can be a simple hash function, and
   each of the k routers on the path takes the packets that hash to one
   of the values mod k.

   - at the entrance to the domain, a bit in the packet is flipped,
   indicating that the packet has not yet been verified.
   Then any router in the domain that has spare cycles can check
   packets that have the bit set, and either discard the packet (if it
   is bogus) or clear the bit so that other routers (and receivers)
   need not waste cycles testing the packet. (This assumes we are
   trying to protect ourselves from people outside our domain, so we
   trust all the nodes within the domain.)
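   For scenario (a), the MIC is just a keyed message digest over the
   packet. A minimal sketch follows, using HMAC with SHA-256 purely as
   one concrete choice of keyed digest; the draft itself does not
   mandate an algorithm:

      import hmac, hashlib

      MIC_LEN = 32  # bytes, for HMAC-SHA-256

      def add_mic(group_secret, payload):
          # Sender: append a keyed digest computed with the shared group
          # secret (both arguments are bytes).
          return payload + hmac.new(group_secret, payload,
                                    hashlib.sha256).digest()

      def check_mic(group_secret, packet):
          # Receiver, or a filtering router that holds the group secret:
          # return the payload if the MIC verifies, else None (drop).
          payload, mic = packet[:-MIC_LEN], packet[-MIC_LEN:]
          expected = hmac.new(group_secret, payload,
                              hashlib.sha256).digest()
          return payload if hmac.compare_digest(mic, expected) else None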
5.0 Overhead Compared to Other Schemes

   Address allocation is much simpler. There is no need for addresses
   to be aggregatable. There is no reason to have BGMP pass multicast
   address blocks around in the interdomain protocol. The only routing
   is based on the core address, which is a unicast IP address that IP
   already needs to be able to route towards. Passing multicast address
   ranges around through BGMP uses bandwidth, CPU, and memory in the
   routers, not to mention pages of specifications, implementation
   effort, and more things to read and understand. With our proposal
   none of that is necessary.

   There is no reason, as in PIM-SM, to have core-capable routers
   advertise themselves, and to have some sort of hash scheme so that
   every router can determine, based on the multicast address, which of
   the core-capable routers would be the core for that multicast
   address. It is expensive to have all core-capable routers
   advertising all the time. It is very complex to have an algorithm
   whereby a router from that set is chosen in an identical manner by
   all routers, and to hope that the same core will be chosen at all
   times by all routers, even if the set of core-capable routers is
   different when one router makes the decision than when another
   router does.

   In PIM-SM, the core is basically a router chosen at random, and
   there is no reason to believe it will be conveniently located near
   the other members of the group. Therefore the shared tree is likely
   to be very suboptimal, leading to the desire for per-source trees.
   In our scheme, since the core will be a member of the group or a
   node consciously chosen by whoever created the group, the shared
   tree is likely to be as good a tree as any of the per-source trees,
   certainly according to the metric of the total cost to the network
   of delivering the data.

   Using a single tree per group saves the network a factor of k in
   state, assuming an average of k transmitters per group.

   Another possible scheme is Dave Cheriton's Express model, where
   every group is a combination of multicast address and source
   address. The multicast address, in other words, is unique to the
   source. Then address allocation becomes even simpler than in our
   scheme. However, the problem with this model is that for something
   like a conference call, every member needs to know all the possible
   transmitters so it can join a separate group for each possible
   transmitter. If a new member joins the group, somehow all the other
   members have to be alerted in order to join the new tree. Also, this
   model requires the network to keep k times as much state, i.e., a
   separate tree for every possible transmitter.

6.0 Summary

   Although this proposal is similar to CBT and PIM-SM, it differs
   because:

   - routers do not need to figure out which core goes with which
   multicast address

   - the core is a member of the group, or explicitly chosen for that
   group, so the shared tree will be fairly good

   - it uses a single shared tree per group, though extra groups can be
   formed in the rare cases where a shared tree is not good enough
   (i.e., don't bother with dynamically formed per-source trees)

   - the shared tree can be bidirectional (and therefore more
   efficient)

   - there is no need for a protocol for core-capable routers to
   advertise themselves

   - there is no need for an algorithm that hashes a multicast address
   to a choice among the set of core-capable routers

   - there is no need to pass multicast address ranges in the
   interdomain routing protocol

7.0 Acknowledgments

   We would like to thank Brad Cain and Ross Callon for their comments.
   Tony Ballardie would like to thank British Telecom Plc and 3Com
   Corporation for assisting with funding his work.

References

   [STATIC_MCAST]  M. Ohta and J. Crowcroft, "Static Multicast",
                   Internet-Draft, March 1998.

   [DNS_RP]        D. Farinacci, "DNS-Based RP Placement Scheme",
                   presentation in the MBONED WG, 40th IETF Meeting.

   [Cheriton]      IDMR mailing list discussion.

   [IGMP]          Cain, Deering, and Thyagarajan, "Internet Group
                   Management Protocol, Version 3", work in progress.

   [CBT]           Ballardie, Cain, and Zhang, "Core Based Tree
                   Multicast Routing", Internet-Draft, March 1998.

   [PIMSM]         Estrin, Farinacci, Helmy, Thaler, Deering, Handley,
                   Jacobson, Liu, Sharma, and Wei, "Protocol
                   Independent Multicast-Sparse Mode (PIM-SM)
                   Specification", RFC 2117, June 1997.

   [BGMP]          Thaler, Estrin, and Meyer, "Border Gateway Multicast
                   Protocol Specification", work in progress.

   [MASC]          Estrin, Handley, Kumar, and Thaler, "Multicast
                   Address Set Claim Protocol", work in progress.

Authors' Addresses

   Radia Perlman
   Sun Microsystems Laboratories
   2 Elizabeth Drive
   Chelmsford, MA 01824
   Radia.Perlman@sun.com

   Cheng-Yin Lee
   Nortel (Northern Telecom), Ltd.
   PO Box 3511, Station C
   Ottawa, ON K1Y 4H7, Canada
   leecy@nortel.ca

   Tony Ballardie
   Research Consultant
   aballardie@acm.org

   Jon Crowcroft
   Department of Computer Science
   University College London
   Gower Street
   London, WC1E 6BT
   UK
   J.Crowcroft@cs.ucl.ac.uk