Internet-Draft                                       Grenville Armitage
                                                               Bellcore
                                                       April 21st, 1997

             VENUS - Very Extensive Non-Unicast Service

Status of this Memo

This document was submitted to the IETF Internetworking over NBMA
(ION) WG. Publication of this document does not imply acceptance by
the ION WG of any ideas expressed within. Comments should be
submitted to the ion@nexen.com mailing list.

Distribution of this memo is unlimited.

This memo is an internet draft. Internet Drafts are working documents
of the Internet Engineering Task Force (IETF), its Areas, and its
Working Groups. Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six
months. Internet Drafts may be updated, replaced, or obsoleted by
other documents at any time. It is not appropriate to use Internet
Drafts as reference material or to cite them other than as a "working
draft" or "work in progress".

Please check the 1id-abstracts.txt listing contained in the
internet-drafts shadow directories on ds.internic.net (US East
Coast), nic.nordu.net (Europe), ftp.isi.edu (US West Coast), or
munnari.oz.au (Pacific Rim) to learn the current status of any
Internet Draft.

Abstract

The MARS model (RFC 2022) provides a solution to intra-LIS IP
multicasting over ATM, establishing and managing the use of ATM
pt-mpt SVCs for IP multicast packet forwarding.
Inter-LIS multicast
forwarding is achieved using Mrouters, in a manner similar to the way
the `Classical IP over ATM' model uses Routers to interconnect LISs
for unicast traffic. The development of unicast IP shortcut
mechanisms (e.g. NHRP) has led some people to request the development
of a multicast equivalent. There are a number of different
approaches. This document focuses exclusively on the problems
associated with extending the MARS model to cover multiple clusters
or clusters spanning more than one subnet. It describes a
hypothetical solution, dubbed the `Very Extensive Non-Unicast
Service' (VENUS), and shows how complex such a service would be. It
is also noted that VENUS ultimately has the look and feel of a
single, large cluster using a distributed MARS. This document is
being issued to help focus ION efforts towards alternative solutions
for establishing ATM level multicast connections between LISs.

1. Introduction

The classical model of the Internet running over an ATM cloud
consists of multiple Logical IP Subnets (LISs) interconnected by IP
Routers [1]. The evolving IP Multicast over ATM solution (the `MARS
model' [2]) retains the classical model. The LIS becomes a `MARS
Cluster', and Clusters are interconnected by conventional IP
Multicast routers (Mrouters).

The development of NHRP [3], a protocol for discovering and managing
unicast forwarding paths that bypass IP routers, has led to some
calls for an IP multicast equivalent. Unfortunately, the IP multicast
service is a rather different beast from the IP unicast service. This
document aims to explain how much of what has been learned during the
development of NHRP must be carefully scrutinized before being
re-applied to the multicast scenario. Indeed, the service provided by
the MARS and MARS Clients in [2] is almost orthogonal to the IP
unicast service over ATM.

For the sake of discussion, let's call this hypothetical multicast
shortcut discovery protocol the `Very Extensive Non-Unicast Service'
(VENUS). A `VENUS Domain' is defined as the set of hosts from two or
more participating Logical IP Subnets (LISs). A multicast shortcut
connection is a point to multipoint SVC whose leaf nodes are
scattered around the VENUS Domain. (It will be noted in section 2
that a VENUS Domain might consist of a single MARS Cluster spanning
multiple LISs, or multiple MARS Clusters.)

VENUS faces a number of fundamental problems. The first is the
explosion of the scope over which individual IP/ATM interfaces must
track and react to IP multicast group membership changes. Under the
classical IP routing model Mrouters act as aggregation points for
multicast traffic flows in and out of Clusters [4]. They also act as
aggregators of group membership change information - only the IP/ATM
interfaces within each Cluster need to know the specific identities
of their local (intra-cluster) group members at any given time.
However, once sources within a VENUS Domain establish shortcut
connections, the data and signaling plane aggregation of Mrouters is
lost. In order for all possible sources throughout a VENUS Domain to
manage their outgoing pt-mpt SVCs, they must be kept aware of
MARS_JOINs and MARS_LEAVEs occurring in every MARS Cluster that makes
up the VENUS Domain. The net effect is that a VENUS Domain looks very
similar to a single, large distributed MARS Cluster.

A second problem is the impact that shortcut connections will have on
IP level Inter Domain Multicast Routing (IDMR) protocols. Multicast
groups have many sources and many destinations scattered amongst the
participating Clusters.
IDMR protocols assume that they can calculate
efficient inter-Cluster multicast trees by aggregating individual
sources or group members in any given Cluster (subnet) behind the
Mrouter serving that Cluster. If sources are able to simply bypass an
Mrouter, we introduce a requirement that the existence of each and
every shortcut connection be propagated into the IDMR decision making
processes. The IDMR protocols may need to adapt when a source's
traffic bypasses its local Mrouter(s) and is injected into Mrouters
at more distant points on the IP-level multicast distribution tree.
(This issue has been looked at in [7], focusing on building
forwarding trees within networks where the termination points are
small in number and sparsely distributed. VENUS introduces tougher
requirements by assuming that multicast group membership may be dense
across the region of interest.)

This document will focus primarily on the internal problems of a
VENUS Domain, and leave the IDMR interactions for future analysis.

2. What does it mean to `shortcut'?

Before going further it is worth considering both the definition of
the Cluster, and two possible definitions of `shortcut'.

2.1 What is a Cluster?

In [2] a MARS Cluster is defined as the set of IP/ATM interfaces that
are willing to engage in direct, ATM level pt-mpt SVCs to perform IP
multicast packet forwarding. Each IP/ATM interface (a MARS Client)
must keep state information regarding the ATM addresses of each leaf
node (recipient) of each pt-mpt SVC it has open. In addition, each
MARS Client receives MARS_JOIN and MARS_LEAVE messages from the MARS
whenever Clients around the Cluster need to update their pt-mpt SVCs
for a given IP multicast group.
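
As a rough illustration only (none of the following appears in [2];
the class and handler names are invented for this sketch), the
per-group leaf-node state a MARS Client keeps, and its reaction to
MARS_JOIN/MARS_LEAVE messages, can be pictured as:

```python
# Hypothetical sketch of MARS Client state: for each group it sends to,
# the client tracks the set of ATM addresses currently attached as leaf
# nodes on its outgoing pt-mpt SVC, updating the set as MARS_JOIN and
# MARS_LEAVE messages arrive from the MARS.

class MarsClientState:
    def __init__(self):
        # IP group address -> set of ATM addresses on that group's SVC
        self.svc_leaves = {}

    def on_mars_join(self, group, atm_addr):
        # New member within the SVC's scope: attach it as a leaf
        # (in a real client this would drive an ADD_PARTY at the ATM level).
        if group in self.svc_leaves:
            self.svc_leaves[group].add(atm_addr)

    def on_mars_leave(self, group, atm_addr):
        # Departed member: drop the corresponding leaf (DROP_PARTY).
        if group in self.svc_leaves:
            self.svc_leaves[group].discard(atm_addr)

client = MarsClientState()
client.svc_leaves["224.1.2.3"] = {"atm:A", "atm:B"}
client.on_mars_join("224.1.2.3", "atm:C")
client.on_mars_leave("224.1.2.3", "atm:A")
print(sorted(client.svc_leaves["224.1.2.3"]))  # ['atm:B', 'atm:C']
```

The point of the sketch is simply that this state must exist in every
sender, which is why every sender must see every membership change
within its SVC's scope.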

It is worth noting that no MARS Client has any concept of how big its
local cluster is - this knowledge is kept only by the MARS that a
given Client is registered with.

Fundamentally, the Cluster (and the MARS model as a whole) is a
response to the requirement that any multicast IP/ATM interface using
pt-mpt SVCs must itself add and drop leaf nodes as group membership
changes. This means that some mechanism, spanning all possible group
members within the scopes of these pt-mpt SVCs, is required to
collect group membership information and distribute it in a timely
fashion to those interfaces. This is the MARS Cluster, with certain
scaling limits described in [4].

2.2 LIS/Cluster boundary `shortcut'

The currently popular definition of `shortcut' is based on the
existence of unicast LIS boundaries. It is tied to the notion that
LIS boundaries have physical routers, and cutting through a LIS
boundary means bypassing a router. Intelligently bypassing routers
that sit at the edges of LISs has been the goal of NHRP. Discovering
the ATM level identity of an IP endpoint in a different LIS allows a
direct SVC to be established, thus shortcutting the logical IP
topology (and very real routers) along the unicast path from source
to destination.

For simplicity of early adoption, RFC 2022 recommends that a
Cluster's scope be made equivalent to that of a LIS. Under these
circumstances the `Classical IP' routing model places Mrouters at
LIS/Cluster boundaries, and multicast shortcutting must involve
bypassing the same physical routing entities as unicast shortcutting.
Each MARS Cluster would be independent and contain only those IP/ATM
interfaces that had been assigned to the same LIS.

As a consequence, a VENUS Domain covering the hosts in a number of
LIS/Clusters would have to co-ordinate each individual MARS from each
LIS/Cluster (to ensure group membership updates from around the VENUS
Domain were propagated correctly).

2.3 Big Cluster, LIS boundary `shortcut'

The MARS model's fundamental definition of a Cluster was deliberately
created to be independent of unicast terminology. Although not
currently well understood, it is possible to build a single MARS
Cluster that encompasses the members of multiple LISs. As expected,
inter-LIS unicast traffic would pass through (or bypass, if using
NHRP) routers on the LIS boundaries. Also as expected, each IP/ATM
interface, acting as a MARS Client, would forward its IP multicast
packets directly to intra-cluster group members. However, because the
direct intra-cluster SVCs would exist between hosts from the
different LISs making up the cluster, this could be considered a
`shortcut' of the unicast LIS boundaries.

This approach immediately brings up the problem of how the IDMR
protocols will react. Mrouters only need to exist at the edges of
Clusters. In the case of a single Cluster spanning multiple LISs,
each LIS becomes hidden behind the Mrouter at the Cluster's edge.
This is arguably not a big problem if the Cluster is a stub on an
IDMR protocol's multicast distribution tree, and if there is only a
single Mrouter in or out of the Cluster. Problems arise when two or
more Mrouters are attached to the edges of the Cluster, and the
Cluster is used for transit multicast traffic. Each Mrouter's
interface is assigned a unicast identity (e.g. that of the unicast
router containing the Mrouter).
IDMR protocols that filter packets
based on the correctness of the upstream source may be confused at
receiving IP multicast packets directly from another Mrouter in the
same cluster but notionally `belonging' to a LIS multiple unicast IP
hops away.

Adjusting the packet filtering algorithms of Mrouters is something
that needs to be addressed by any multicast shortcut scheme. It has
been noted before, and a solution proposed, in [7]. For the sake of
argument this document will assume the problem solvable. (However, it
is important that any solution scales well under general topologies
and group membership densities.)

A multi-LIS MARS Cluster can be considered a simple VENUS Domain.
Since it is a single Cluster it can be scaled using the distributed
MARS solutions currently being developed within the IETF [5,6].

3. So what must VENUS look like?

A number of functions that occur in the MARS model are fundamental to
the problem of managing root controlled, pt-mpt SVCs. The initial
setup of the forwarding SVC by any one MARS Client requires a
query/response exchange with the Client's local MARS, establishing
who the current group members are (i.e. what leaf nodes should be on
the SVC). Following SVC establishment comes the management phase -
MARS Clients need to be kept informed of group membership changes
within the scopes of their SVCs, so that leaf nodes may be added or
dropped as appropriate.

For intra-cluster multicasting the current MARS approach is our
solution for these two phases.

For the rest of this document we will focus on what VENUS would look
like when a VENUS Domain spans multiple MARS Clusters. Under such
circumstances VENUS is a mechanism co-ordinating the MARS entities of
each participating cluster.
Each MARS is kept up to date with
sufficient domain-wide information to support both phases of client
operation (SVC establishment and SVC management) when the SVC's
endpoints are outside the immediate scope of a client's local MARS.
Inside a VENUS Domain a MARS Client is supplied information on group
members from all participating clusters.

The following subsections look at the problems associated with each
of these phases independently. To a first approximation the problems
identified are independent of the possible inter-MARS mechanisms. The
reader may assume the MARS in any cluster has some undefined
mechanism for communicating with the MARSs of clusters immediately
adjacent to its own cluster (i.e. connected by a single Mrouter hop).

3.1 SVC establishment - answering a MARS_REQUEST.

The SVC establishment phase contains a number of inter-related
problems.

First, the target of a MARS_REQUEST (an IP multicast group) is an
abstract entity. Let us assume that VENUS does not require every MARS
to know the entire list of group members across the participating
clusters. In this case, each time a MARS_REQUEST is received by a
MARS from a local client, the MARS must construct a sequence of
MARS_MULTIs based on locally held information (on intra-cluster
members) and remotely solicited information.

So how does it solicit this information? Unlike the unicast
situation, there is no definite, single direction in which to route a
MARS_REQUEST across the participating clusters. The only `right'
approach is to send the MARS_REQUEST to all clusters, since group
members may exist anywhere and everywhere. Let us allow one obvious
optimization - the MARS_REQUEST is propagated along the IP multicast
forwarding tree that has been established for the target group by
whatever IDMR protocol is running at the time.
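
As a sketch only (the topology, member lists, and function are
invented here, not part of any defined VENUS mechanism), the limited
discovery walk can be pictured as the MARS_REQUEST travelling out
along the IDMR forwarding tree, consuming the requesting client's
leaf-node budget cluster by cluster. Note how a naive depth-first
budget exhausts itself on whichever branch is walked first:

```python
# Hypothetical walk of the IDMR forwarding tree away from the requesting
# client's cluster, collecting group members until the client's leaf-node
# limit is reached. Clusters beyond the cut-off would be reached through
# their Mrouters instead.

def collect_members(cluster, tree, members, limit):
    """Return up to `limit` group members found from `cluster` outwards."""
    found = list(members.get(cluster, []))[:limit]
    budget = limit - len(found)
    for child in tree.get(cluster, []):
        if budget <= 0:
            break  # curtail propagation: remaining branches stay unexplored
        downstream = collect_members(child, tree, members, budget)
        found.extend(downstream)
        budget -= len(downstream)
    return found

tree = {"A": ["B", "C"], "B": ["D"]}      # forwarding tree away from cluster A
members = {"B": ["b1", "b2"], "C": ["c1"], "D": ["d1", "d2"]}
print(collect_members("A", tree, members, 4))   # ['b1', 'b2', 'd1', 'd2']
```

With a limit of 4, the walk never reaches cluster C at all, even
though C holds a member - exactly the kind of uneven curtailment the
following paragraphs discuss.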

As noted in [4] there are various reasons why a Cluster's scope
should be kept limited. Some of these (MARS Client or ATM NIC
limitations) imply that the VENUS discovery process must not return
more group members in the MARS_MULTIs than the requesting MARS Client
can handle. This presents VENUS with the interesting problem of
propagating the original MARS_REQUEST outwards, but curtailing its
propagation once a sufficient number of group members have been
identified. Viewed from a different perspective, this means that the
scope of shortcut achievable by any given MARS Client may depend
greatly on the shape of the IP forwarding tree away from its location
(and the density of group members within clusters along the tree) at
the time the request was issued.

How might we limit the number of group members returned to a given
MARS Client? Adding a limit TLV to the MARS_REQUEST itself is
trivial. At first glance it might appear that when the limit is being
reached we could summarize the next cluster along the tree by the ATM
address of the Mrouter into that cluster. The net effect would be
that the MARS Client establishes a shortcut to many hosts inside
closer clusters, and passes its traffic to more distant clusters
through the distant Mrouter. However, this approach only works
passably well for a very simplistic multicast topology (e.g. a linear
concatenation of clusters).

In a more general topology the IP multicast forwarding tree away from
the requesting MARS Client will branch a number of times, requiring
the MARS_REQUEST to be replicated along each branch. Ensuring that
the total number of returned group members does not exceed the
client's limit becomes rather more difficult to do efficiently.
(VENUS could simply halve the limit value each time it split a
MARS_REQUEST, but this might cause group member discovery on one
branch to end prematurely while all the group members along another
branch are discovered without reaching the subdivided limit.)

Now consider this decision making process scattered across all the
clients in all participating clusters. Clients may have different
limits on how many group members they can handle - leading to
situations where different sources can shortcut to different
(sub)sets of the group members scattered across the participating
clusters (because the IP multicast forwarding trees from senders in
different clusters may result in different discovery paths being
taken by their MARS_REQUESTs).

Finally, when the MARS_REQUEST passes a cluster where the target
group is MCS supported, VENUS must ensure that the ATM address of the
MCS is collected rather than the addresses of the actual group
members. (To do otherwise would violate the remote cluster's
intra-cluster decision to use an MCS. The shortcut in this case must
be content to directly reach the remote cluster's MCS.)

(A solution to part of this problem would be to ensure that a VENUS
Domain never has more MARS Clients throughout than the clients are
capable of adding as leaf nodes. This may or may not appeal to
people's desire for generality in a VENUS solution. It also would
appear to beg the question of why the problem of multiple-LIS
multicasting isn't solved simply by creating a single big MARS
Cluster.)

3.2 SVC management - tracking group membership changes.

Once a client's pt-mpt SVC is established, it must be kept up to
date.
The consequence of this is simple, and potentially
devastating: the MARS_JOINs and MARS_LEAVEs from every MARS Client in
every participating cluster must be propagated to every possible
sender in every participating cluster (this applies to groups that
are VC Mesh supported - groups that are MCS supported in some or all
participating clusters introduce complications described below).
Unfortunately, the consequent signaling load (as all the
participating MARSs start broadcasting their MARS_JOIN/LEAVE
activity) is not localized to clusters containing MARS Clients who
have established shortcut SVCs. Since the IP multicast model is Any
to Multipoint, and you can never know where there may be source MARS
Clients, the JOINs and LEAVEs must be propagated everywhere, always,
just in case. (This is simply a larger scale version of sending JOINs
and LEAVEs to every cluster member over ClusterControlVC, and for
exactly the same reason.)

The use of MCSs in some clusters instead of VC Meshes significantly
complicates the situation, as does the initial scoping of a client's
shortcut during the SVC establishment phase (described in the
preceding section).

In Clusters where MCSs are supporting certain groups, MARS_JOINs or
MARS_LEAVEs are only propagated to MARS Clients when an MCS comes or
goes. However, it is not clear how to effectively accommodate the
current MARS_MIGRATE functionality (which allows a previously VC Mesh
based group to be shifted to an MCS within the scope of a single
cluster). If an MCS starts up within a single Cluster, it is possible
to shift all the intra-cluster senders to the MCS using MARS_MIGRATE
as currently described in the MARS model.
However, MARS Clients in
remote clusters that have shortcut SVCs into the local cluster also
need some signal to shift (otherwise they will continue to send their
packets directly to the group members in the local cluster).

This is a non-trivial requirement, since we only want to force the
remote MARS Clients to drop some of their leaf nodes (the ones to
clients within the Cluster that now has an MCS), add the new MCS as a
leaf node, and leave all their other leaf nodes untouched (the
cut-through connections to other clusters). Simply broadcasting the
MARS_MIGRATE around all participating clusters would certainly not
work. VENUS needs a new control message with semantics of "replace
leaf nodes {x, y, z} with leaf node {a}, and leave the rest alone".
Such a message is easy to define, but harder to use.

Another issue for SVC management is that the scope over which a MARS
Client needs to receive JOINs and LEAVEs needs to respect the
Client's limited capacity for handling leaf nodes on its SVC. If the
MARS Client initially issued a MARS_REQUEST and indicated it could
handle 1000 leaf nodes, it is not clear how to ensure that subsequent
joins of new members won't exceed that limit. Furthermore, if the SVC
establishment phase decided that the SVC would stop at a particular
Mrouter (due to leaf node limits being reached), the Client probably
should not be receiving direct MARS_JOIN or MARS_LEAVE messages
pertaining to activity in the cluster `behind' this Mrouter. (To do
otherwise could lead to multiple copies of the source client's
packets reaching group members inside the remote cluster - one
version through the Mrouter, and another on the direct SVC connection
that the source client would establish after receiving a subsequent,
global MARS_JOIN regarding a host inside the remote cluster.)
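
The effect of the hypothetical "replace some leaves" message on a
remote client's SVC can be sketched as follows (the function name and
message structure are invented here; no such message is defined in
[2] or elsewhere):

```python
# Hedged sketch: a remote MARS Client, told that cluster members
# {x, y, z} are now reached via MCS {a}, drops only those leaves and
# adds the MCS, leaving its shortcut leaves into other clusters intact.

def apply_leaf_replacement(svc_leaves, replaced, replacement):
    """Return the new leaf set after a {replaced} -> {replacement} shift."""
    updated = set(svc_leaves) - set(replaced)  # DROP_PARTY the migrated leaves
    updated.add(replacement)                   # ADD_PARTY the cluster's MCS
    return updated

leaves = {"atm:x", "atm:y", "atm:z", "atm:other-cluster"}
new_leaves = apply_leaf_replacement(
    leaves, {"atm:x", "atm:y", "atm:z"}, "atm:mcs-a")
print(sorted(new_leaves))   # ['atm:mcs-a', 'atm:other-cluster']
```

The set arithmetic is trivial; the hard part, as noted above, is
knowing which remote clients must receive the message at all.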

Another scenario involves the density of group members along the IDMR
multicast tree increasing with time after the initial MARS_REQUEST is
answered. Subsequent JOINs from Cluster members may dictate that a
`closer' Mrouter be used to aggregate the source's outbound traffic
(so as not to exceed the source's leaf node limitations). How to
dynamically shift between terminating on hosts within a Cluster, and
terminating on a cluster's edge Mrouter, is an open question.

To complicate matters further, this scoping of the VENUS domain-wide
propagation of MARS_JOINs and MARS_LEAVEs needs to be on a
per-source-cluster basis, at least. If MARS Clients within the same
cluster have different leaf node limits, the problem worsens. Under
such circumstances, one client may have been able to establish a
shortcut SVC directly into a remote cluster while a second client -
in the same source cluster - may have been forced to terminate its
shortcut on the remote cluster's Mrouter. The first client obviously
needs to know about group membership changes in the remote cluster,
whilst the second client does not. Propagating these JOIN/LEAVE
messages on ClusterControlVC in the source cluster will not work -
the MARS for the source cluster will need to explicitly send copies
of the JOIN/LEAVE messages only to those MARS Clients whose prior SVC
establishment phase indicates they need them. Propagation of messages
indicating a VC Mesh to MCS transition within clusters may also need
to take account of the leaf node limitations of MARS Clients. The
scaling characteristics of this problem are left to the reader's
imagination.

It was noted in the previous section that a VENUS Domain could be
limited to ensure there are never more MARS Clients than any one
client's leaf node limit. This would certainly avoid the need for
complicated MARS_JOIN/LEAVE propagation mechanisms.
However, it begs
the question of how different the VENUS Domain then becomes from a
single, large MARS Cluster.

4. What is the value in bypassing Mrouters?

This is a good question, since the whole aim of developing a shortcut
connection mechanism is predicated on the assumption that bypassing
IP level entities is always a `win'. However, this is arguably not
true for multicast.

The most important observation to be made about shortcut connection
scenarios is that they increase the exposure of any given IP/ATM
interface to externally generated SVCs. If there are a potential 1000
senders in a VENUS Domain, then you (as a group member) open yourself
up to a potential demand for 1000 instances of your re-assembly
engine (and 1000 distinct incoming SVCs, when you get added as a leaf
node to each sender's pt-mpt SVC, which your local switch port must
be able to support).

It should be no surprise that the ATM level scaling limits applicable
to a single MARS Cluster [4] will also apply to a VENUS Domain. Again
we're up against the question of why you'd bypass an Mrouter. As
noted in [4], Mrouters perform a useful function of data path
aggregation - 100 senders in one cluster become 1 pt-mpt SVC out of
the Mrouter into the next cluster along the tree. They also hide MARS
signaling activity - individual group membership changes in one
cluster are hidden from IP/ATM interfaces in surrounding clusters.
The loss of these benefits must be factored into any network designed
to utilize multicast shortcut connections.

(For the sake of completeness, it must be noted that extremely poor
mismatches of IP and ATM topologies may make Mrouter bypass
attractive if it improves the use of the underlying ATM cloud. There
may also be benefits in removing the additional
re-assembly/segmentation latencies of having packets pass through an
Mrouter.
However, a VENUS Domain ascertained to be small enough to
avoid the scaling limits in [4] might just as well be constructed as
a single large MARS Cluster. A large cluster also avoids a
topological mismatch between IP Mrouters and ATM switches.)

5. Relationship to Distributed MARS protocols.

The ION working group is looking closely at the development of
distributed MARS architectures. An outline of some issues is provided
in [5,6]. As noted earlier in this document, the problem space looks
very similar to that faced by our hypothetical VENUS Domain. For
example, in the load-sharing distributed MARS model:

- The Cluster is partitioned into sub-clusters.

- Each Active MARS is assigned a particular sub-cluster, and uses
  its own sub-ClusterControlVC to propagate JOIN/LEAVE messages to
  members of its sub-cluster.

- The MARS_REQUEST from any sub-cluster member must return
  information from all the sub-clusters, so as to ensure that all of
  a group's members across the cluster are identified.

- Group membership changes in any one sub-cluster must be
  immediately propagated to all the other sub-clusters.

There is a clear analogy to be made between a distributed MARS
Cluster and a VENUS Domain made up of multiple single-MARS Clusters.
The information that must be shared between sub-clusters in a
distributed MARS scenario is similar to the information that must be
shared between Clusters in a VENUS Domain.

The distributed MARS problem is slightly simpler than that faced by
VENUS:

- There are no Mrouters (IDMR nodes) within the scope of the
  distributed Cluster.

- In a distributed MARS Cluster an MCS supported group uses the
  same MCS across all the sub-clusters (unlike the VENUS Domain,
  where complete generality makes it necessary to cope with mixtures
  of MCS and VC Mesh based Clusters).

6. Conclusion.

This document has described a hypothetical multicast shortcut
connection scheme, dubbed the `Very Extensive Non-Unicast Service'
(VENUS). The two phases of multicast support - SVC establishment and
SVC management - are shown to be essential whether the scope is a
Cluster or a wider VENUS Domain. It has been shown that once the
potential scope of a pt-mpt SVC at establishment phase has been
expanded, the scope of the SVC management mechanism must similarly be
expanded. This means timely tracking and propagation of group
membership changes across the entire scope of a VENUS Domain.

It has also been noted that there is little difference in result
between a VENUS Domain and a large MARS Cluster. Both suffer from the
same fundamental scaling limitations, and both can be arranged to
provide shortcut of unicast routing boundaries. However, a completely
general multi-cluster VENUS solution ends up being more complex. It
needs to deal with bypassed Mrouter boundaries, and dynamically
changing group membership densities along multicast distribution
trees established by the IDMR protocols in use.

No solutions have been presented. This document's role is to provide
context for future developments.

Security Considerations

Security considerations are not addressed in this document.

Author's Address

Grenville Armitage
Bellcore, 445 South Street
Morristown, NJ, 07960
USA

Email: gja@bellcore.com

References

[1] Laubach, M., "Classical IP and ARP over ATM", RFC 1577,
Hewlett-Packard Laboratories, December 1993.

[2] G. Armitage, "Support for Multicast over UNI 3.0/3.1 based ATM
Networks", Bellcore, RFC 2022, November 1996.

[3] J. Luciani, et al., "NBMA Next Hop Resolution Protocol (NHRP)",
INTERNET DRAFT, draft-ietf-rolc-nhrp-11.txt, February 1997.

[4] G. Armitage, "Issues affecting MARS Cluster Size", Bellcore,
RFC 2121, March 1997.

[5] G.
Armitage, "Redundant MARS architectures and SCSP", Bellcore,
INTERNET DRAFT, draft-armitage-ion-mars-scsp-02.txt, November 1996.

[6] J. Luciani, G. Armitage, J. Halpern, "Server Cache
Synchronization Protocol (SCSP) - NBMA", INTERNET DRAFT,
draft-ietf-ion-scsp-01.txt, March 1997.

[7] Y. Rekhter, D. Farinacci, "Support for Sparse Mode PIM over
ATM", Cisco Systems, INTERNET DRAFT, draft-ietf-rolc-pim-atm-00.txt,
April 1996.