Internet Draft                                                 S. Berson
Expiration: March 1997                                               ISI
File: draft-ietf-issll-atm-support-01.ps                       L. Berger
                                                            FORE Systems

              IP Integrated Services with RSVP over ATM

                          September 24, 1996

Status of Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   To learn the current status of any Internet-Draft, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ds.internic.net (US East Coast), nic.nordu.net
   (Europe), ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific
   Rim).

Abstract

   This draft describes a method for providing IP Integrated Services
   with RSVP over ATM switched virtual circuits (SVCs).  It provides an
   overall approach to the problem as well as a specific method for
   running over today's ATM networks.  The problem has two parts.  This
   draft provides guidelines for using ATM VCs with QoS as part of an
   Integrated Services Internet.  A related draft [12] describes service
   mappings between IP Integrated Services and ATM services.

Authors' Note

   The postscript version of this document contains figures that are not
   included in the text version, so it is best to use the postscript
   version.  Figures will be converted to ASCII in a future version.

Table of Contents

   1.
Introduction ...................................................3
      1.1 Terms ......................................................4
      1.2 Assumptions ................................................5
   2. Policy .........................................................6
      2.1 Implementation Guidelines ..................................7
   3. Data VC Management .............................................7
      3.1 Heterogeneity ..............................................7
      3.2 Multicast Data Distribution ................................11
      3.3 Receiver Transitions .......................................12
      3.4 Multicast End-Point Identification .........................13
      3.5 Reservation to VC Mapping ..................................14
      3.6 Dynamic QoS ................................................15
   4. Tear down old VC ...............................................16
   5. Activate timer .................................................16
      5.1 Implementation Guidelines ..................................22
   6. Security .......................................................23
   7. Future Work ....................................................23
   8. Authors' Addresses .............................................24

1. Introduction

   The Internet currently has one class of service normally referred to
   as "best effort."  This service is typified by first-come, first-
   served scheduling at each hop in the network.  Best effort service
   has worked well for electronic mail, World Wide Web (WWW) access,
   file transfer (e.g. ftp), etc.  For real-time traffic such as voice
   and video, the current Internet has performed well only across
   unloaded portions of the network.  In order to provide guaranteed
   quality real-time traffic, new classes of service and a QoS
   signalling protocol are being introduced in the Internet [13,16,15],
   while retaining the existing best effort service.
   The QoS signalling protocol is RSVP [5,17], the Resource ReSerVation
   Protocol.

   ATM is rapidly becoming an important link layer technology.  One of
   the important features of ATM technology is the ability to request a
   point-to-point Virtual Circuit (VC) with a specified Quality of
   Service (QoS).  An additional feature of ATM technology is the
   ability to request point-to-multipoint VCs with a specified QoS.
   Point-to-multipoint VCs allow leaf nodes to be added and removed from
   the VC dynamically, and so provide a mechanism for supporting IP
   multicast.  It is only natural that RSVP and the Internet Integrated
   Services (IIS) model would like to utilize the QoS properties of any
   underlying link layer, including ATM.

   Classical IP over ATM [14] has solved part of this problem,
   supporting IP unicast best effort traffic over ATM.  Classical IP
   over ATM is based on a Logical IP Subnetwork (LIS), which is a
   separately administered IP sub-network.  Hosts within a LIS
   communicate using the ATM network, while hosts from different
   sub-nets communicate only by going through an IP router (even though
   it may be possible to open a direct VC between the two hosts over the
   ATM network).  Classical IP over ATM provides an Address Resolution
   Protocol (ATMARP) for ATM edge devices to resolve IP addresses to
   native ATM addresses.  For any pair of IP/ATM edge devices (i.e.
   hosts or routers), a single VC is created on demand and shared for
   all traffic between the two devices.  A second part of the RSVP and
   IIS over ATM problem, IP multicast, is close to being solved with
   MARS [1], the Multicast Address Resolution Server.  MARS complements
   ATMARP by allowing an IP address to resolve into a list of native ATM
   addresses, rather than just a single address.

   A key remaining issue for IP over ATM is the integration of RSVP
   signalling and ATM signalling in support of the Internet Integrated
   Services (IIS) model.
   There are two main areas involved in supporting the IIS model, QoS
   translation and VC management.  QoS translation concerns mapping a
   QoS from the IIS model to a proper ATM QoS, while VC management
   concentrates on how many VCs are needed and which traffic flows are
   routed over which VCs.  Mapping of IP QoS to ATM QoS is the subject
   of a companion draft [12].

   This draft concentrates on VC management (and we assume in this draft
   that the QoS for a single reserved flow can be acceptably translated
   to an ATM QoS).  Two types of VCs need to be managed: data VCs, which
   handle the actual data traffic, and control VCs, which handle the
   RSVP signalling traffic.  Several VC management schemes for both data
   and control VCs are described in this draft.  For each scheme, there
   are two major issues - (1) heterogeneity and (2) dynamic behavior.
   Heterogeneity refers to how requests for different QoS's are handled,
   while dynamic behavior refers to how changes in QoS and changes in
   multicast group membership are handled.  These schemes will be
   evaluated in terms of the following metrics - (1) the number of VCs
   needed to implement the scheme, (2) the bandwidth wasted due to
   duplicate packets, and (3) flexibility in handling heterogeneity and
   dynamic behavior.

   The general issues related to running RSVP [5,17] over ATM have been
   covered in several papers including [2,3,10].  This document will
   review key issues that must be addressed by any RSVP over ATM UNI
   solution.  It will discuss advantages and disadvantages of different
   methods for running RSVP over ATM.  It will also provide specific
   guidelines to implementors using ATM UNI 3.x and 4.0.  These
   guidelines are intended to provide a baseline set of functionality,
   while allowing for more sophisticated approaches.
   We expect some vendors to also provide some of the more sophisticated
   approaches described below, and some networks to make use only of
   such approaches.

1.1 Terms

   The terms "reservation" and "flow" are used in many contexts, often
   with different meanings.  These terms are used in this document with
   the following meanings:

   o  Reservation is used in this document to refer to an RSVP initiated
      request for resources.  Resource requests may be made based on
      RSVP sessions and RSVP reservation styles.  RSVP styles dictate
      whether the reserved resources are used by one sender or shared by
      multiple senders.  See [5] for details of each.  Each request is
      referred to in this document as an RSVP reservation, or simply a
      reservation.

   o  Flow is used to refer to the data traffic associated with a
      particular reservation.  The specific meaning of flow is RSVP
      style dependent.  For shared style reservations, there is one flow
      per session.  For distinct style reservations, there is one flow
      per sender (per session).

1.2 Assumptions

   The following assumptions are made:

   o  Support for IPv4 and IPv6 best effort in addition to QoS

   o  Use of RSVP with policy control as the signalling protocol

   o  UNI 3.x and 4.0 ATM services

   o  VC initiation by sub-net senders

1.2.1 IPv4 and IPv6

   Currently IPv4 is the standard protocol of the Internet, which now
   provides only best effort service.  We assume that best effort
   service will continue to be supported while new types of service are
   introduced according to the IP Integrated Services model.  We also
   assume that IPv6 will be supported as well as IPv4.

1.2.2 RSVP and Policy

   We assume RSVP, described in [17], as the Internet signalling
   protocol.  The reader is assumed to be familiar with [17].

   IP Integrated Services discriminates between users by providing some
   users better service at the expense of others.
   Policy determines how preferential services are allocated while
   allowing network operators maximum flexibility to provide value-added
   services for the marketplace.  Mechanisms need to be provided to
   enforce access policies.  These mechanisms may include such things as
   permissions and/or billing.

   For scaling reasons, policies based on bilateral agreements between
   neighboring providers are considered.  The bilateral model has
   scaling properties similar to multicast while maintaining no global
   information.  Policy control is currently being developed for RSVP
   (see [8] for details).

1.2.3 ATM

   We assume ATM as defined by UNI 3.x and 4.0.  ATM provides both
   point-to-point and point-to-multipoint Virtual Circuits (VCs) with a
   specified Quality of Service (QoS).  ATM provides both Permanent
   Virtual Circuits (PVCs) and Switched Virtual Circuits (SVCs).  In the
   Permanent Virtual Circuit (PVC) environment, PVCs are typically used
   as point-to-point link replacements, so the Integrated Services
   support issues are similar to those of point-to-point links.  This
   draft describes schemes for supporting Integrated Services using
   SVCs.

1.2.4 VC Initiation

   There is an apparent mismatch between RSVP and ATM.  Specifically,
   RSVP control is receiver oriented while ATM control is sender
   oriented.  This initially may seem like a major issue, but really is
   not.  While RSVP reservation (RESV) requests are generated at the
   receiver, actual allocation of resources takes place at the sub-net
   sender.

   For data flows, this means that sub-net senders will establish all
   QoS VCs and the sub-net receiver must be able to accept incoming QoS
   VCs.  These restrictions are consistent with RSVP version 1
   processing rules and allow senders to use different flow-to-VC
   mappings and even different QoS renegotiation techniques without
   interoperability problems.
   All RSVP over ATM approaches that have VCs initiated and controlled
   by the sub-net senders will interoperate.  Figure 1 shows this model
   of data flow VC initiation.

   [Figure goes here]
                  Figure 1: Data Flow VC Initiation

   Receivers initiating VCs via the reverse path provided by
   point-to-point VCs is for further study.

2. Policy

   RSVP allows for local policy control [8] as well as admission
   control.  Thus a user can request a reservation with a specific QoS
   and with a policy object that, for example, offers to pay for the
   additional costs of setting up a new reservation.  The policy module
   at the entry to a service provider can decide how to satisfy that
   request - either by merging the request with an existing reservation
   or by creating a new reservation for this (and perhaps other) users.
   This policy can be on a per user-provider basis, where a user and a
   provider have an agreement on the type of service offered, or on a
   provider-provider basis, where two providers have such an agreement.
   With the ability to do local policy control, service providers can
   offer services best suited to their own resources and their
   customers' needs.

   Policy is expected to be provided as a generic API which will return
   values indicating what action should be taken for a specific
   reservation request.  The API is expected to have access to the
   reservation tables with the QoS for each reservation.  The RSVP
   Policy and Integrity objects will be passed to the policy() call.
   Four possible return values are expected.  The request can be
   rejected.  The request can be accepted as is.  The request can be
   accepted but at a different QoS.  The request can cause a change of
   QoS of an existing reservation.
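   For illustration only, the four possible outcomes could be modeled as
   follows.  This is a sketch, not a defined interface: the names
   PolicyResult and apply_policy are hypothetical, and QoS values are
   assumed to be simple comparable quantities.

```python
from enum import Enum, auto

class PolicyResult(Enum):
    """Hypothetical outcomes of the generic policy() call (sketch)."""
    REJECT = auto()           # request is refused outright
    ACCEPT = auto()           # request is accepted as is
    ACCEPT_MODIFIED = auto()  # accepted, but at a different QoS
    MODIFY_EXISTING = auto()  # causes a QoS change to an existing reservation

def apply_policy(result, requested_qos, modified_qos=None):
    """Map a policy outcome to the QoS handed to admission control.

    Returns the QoS to reserve, or None if the request is rejected.
    """
    if result is PolicyResult.REJECT:
        return None
    if result is PolicyResult.ACCEPT:
        return requested_qos
    # Both remaining outcomes install a QoS other than the one requested.
    return modified_qos
```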
   The information returned from this call will be used to call the
   admission control interface.

2.1 Implementation Guidelines

   Currently, the contents of policy data objects are not specified, so
   specifics of policy implementation are not defined at this time.

3. Data VC Management

   This section describes issues and methods for management of VCs
   associated with QoS data flows.  When establishing and maintaining
   VCs, the sub-net sender will need to deal with several complicating
   factors including multiple QoS reservations, requests for QoS
   changes, ATM short-cuts, and several multicast issues.

   There are several aspects of running RSVP over ATM that are
   particular to multicast sessions.  These issues result from the
   nature of ATM connections.  The key issues are heterogeneity, data
   distribution, receiver transitions, and end-point identification.

3.1 Heterogeneity

   Heterogeneity occurs when receivers request different QoS's within a
   single session.  This means that the amount of requested resources
   differs on a per next hop basis.  A related type of heterogeneity
   occurs due to best-effort receivers.  In any IP multicast group, it
   is possible that some receivers will request QoS (via RSVP) and some
   receivers will not.  Both types of heterogeneity are shown in Figure
   2.  In shared media, like Ethernet, receivers that have not requested
   resources can typically be given service identical to those that have
   without complications.  This is not the case with ATM.  In ATM
   networks, any additional end-points of a VC must be explicitly added.
   There may be costs associated with adding the best-effort receiver,
   and there might not be adequate resources.  An RSVP over ATM solution
   will need to support heterogeneous receivers even though ATM does not
   currently provide such support directly.
   [Figure goes here]
              Figure 2: Types of Multicast Receivers

   There are multiple models for supporting RSVP heterogeneity over ATM.
   Section 3.1.1 examines the multiple VCs per RSVP reservation (or
   "full heterogeneity") model, where a single reservation can be
   forwarded onto several VCs, each with a different QoS.  Section 3.1.2
   presents a limited heterogeneity model, where exactly one QoS VC is
   used along with a best effort VC.  Section 3.1.3 examines the VC per
   RSVP reservation (or "single VC") model, where each RSVP reservation
   is mapped to a single ATM VC.  Section 3.1.4 describes the
   aggregation model, allowing aggregation of multiple RSVP reservations
   into a single VC.  Further study is being done on the aggregation
   model.

3.1.1 Many VCs per RSVP reservation

   We define the "full heterogeneity" model as providing a separate VC
   for each distinct QoS for a multicast session, including best effort
   and one or more QoS's.  This is shown in Figure 3, where S1 is a
   sender, R1-R3 are receivers, r1-r4 are IP routers, and s1-s2 are ATM
   switches.  Receivers R1 and R3 make reservations with different QoS
   while R2 is a best effort receiver.  Three point-to-multipoint VCs
   are created for this situation, each with the requested QoS.  Note
   that any leaf requesting QoS 1 or QoS 2 would be added to the
   existing QoS VC.

   [Figure goes here]
                   Figure 3: Full heterogeneity

   Note that while full heterogeneity gives users exactly what they
   request, it requires more resources of the network than other
   possible approaches.  In Figure 3, three copies of each packet are
   sent on the link from r1 to s1.  Two copies of each packet are then
   sent from s1 to s2.  The exact amount of bandwidth used for duplicate
   traffic depends on the network topology and group membership.
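   The bookkeeping behind full heterogeneity can be sketched as follows.
   The class and method names are hypothetical illustrations (a real
   implementation would drive ATM signalling to add and drop leaves);
   here best effort is represented as just another QoS value.

```python
class FullHeterogeneityMap:
    """One point-to-multipoint VC per distinct QoS in a session (sketch)."""

    def __init__(self):
        self.vcs = {}  # QoS value -> set of receiver leaves on that VC

    def add_receiver(self, qos, receiver):
        """Add a leaf to the VC for its QoS, creating the VC on first use."""
        self.vcs.setdefault(qos, set()).add(receiver)

    def copies_on_shared_link(self):
        """Each distinct QoS VC carries its own copy of every packet."""
        return len(self.vcs)
```

   With R1 at QoS 1, R2 on best effort, and R3 at QoS 2 as in Figure 3,
   three VCs exist, so three copies of each packet cross the shared
   r1-s1 link; a later leaf requesting an existing QoS joins that VC
   without adding a fourth copy.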
3.1.2 Two VCs per RSVP reservation

   We define the "limited heterogeneity" model as the case where the
   receivers of a multicast session are limited to using either best
   effort service or a single alternate quality of service.  The
   alternate QoS can be chosen either by higher level protocols or by
   dynamic renegotiation of QoS as described below.

   [Figure goes here]
                  Figure 4: Limited heterogeneity

   In order to support limited heterogeneity, each ATM edge device
   participating in a session would need at most two VCs.  One VC would
   be a point-to-multipoint best effort service VC serving all best
   effort service IP destinations for this RSVP session.  The other VC
   would be a point-to-multipoint QoS VC serving all IP destinations for
   this RSVP session that have an RSVP reservation established.  This is
   shown in Figure 4, where there are three receivers: R2 requesting
   best effort service, while R1 and R3 request distinct reservations.
   Whereas in Figure 3 R1 and R3 each have a separate VC, so each
   receives precisely the resources requested, in Figure 4 R1 and R3
   share the same VC (using the maximum of the R1 and R3 QoS) across the
   ATM network.  Note that though the VC, and hence the QoS, for R1 and
   R3 is the same within the ATM cloud, the reservation outside the ATM
   cloud (from router r4 to receiver R3) uses the QoS actually requested
   by R3.

   As with full heterogeneity, a disadvantage of the limited
   heterogeneity scheme is that each packet will need to be duplicated
   at the network layer and one copy sent into each of the two VCs.
   Again, the exact amount of excess traffic will depend on the network
   topology and group membership.  Looking at Figure 4, there are two
   VCs going from router r1 to switch s1, so two copies of every packet
   will traverse the r1-s1 link.
   Another disadvantage of limited heterogeneity is that a reservation
   request can be rejected even when the resources are available.  This
   occurs when a new receiver requests a larger QoS.  If any of the
   existing QoS VC end-points cannot upgrade to the new QoS, then the
   new reservation fails even though the resources exist for the new
   receiver.

3.1.3 Single VC per RSVP Reservation

   An even simpler approach for mapping RSVP reservations onto VCs is to
   have a single VC for each RSVP reservation.  This ATM VC can be
   point-to-point or point-to-multipoint as appropriate.  In this
   approach even the best-effort receivers use the RSVP triggered QoS
   VC.  The QoS VC is sized to handle the maximum of the requested
   resources of all the receivers of a session.  While this approach is
   simple to implement, providing better than best-effort service may
   actually be the opposite of what the user desires, since in providing
   ATM QoS there may be charges incurred or resources wrongfully
   allocated.  There are two specific problems.  The first problem is
   that a user making a small or no reservation would share the QoS
   VC's resources without making (and perhaps paying for) an RSVP
   reservation.  The second problem is that a receiver may not receive
   any data.  This may occur when there are insufficient resources to
   add a receiver.  The rejected user would not be added to the single
   VC and would not even receive traffic on a best effort basis.

3.1.4 Aggregation

   The last scheme is the multiple RSVP reservations per VC (or
   aggregation) model.  With this model, large VCs could be set up
   between IP routers and hosts in an ATM network.  These VCs could be
   managed much like IP Integrated Services (IIS) point-to-point links
   (e.g. T-1, DS-3) are managed now.  Traffic from multiple sources over
   multiple RSVP sessions might be multiplexed on the same VC.
   This approach has a number of advantages.  First, there is typically
   no signalling latency, as VCs would be in existence when the traffic
   started flowing, so no time is wasted in setting up VCs.  Second, the
   heterogeneity problem over ATM has been reduced to a solved problem.
   Finally, the dynamic QoS problem for ATM has also been reduced to a
   solved problem.  This approach can be used with point-to-point and
   point-to-multipoint VCs.  The problem with the aggregation approach
   is that the choice of what QoS to use for which of the VCs is
   difficult, but it is made easier since the VCs can be changed as
   needed.  The advantages of this scheme make it an item for high
   priority study.

3.1.5 Implementation Guidelines

   Multiple options for mapping reservations onto VCs have been
   discussed.  The key issue to be addressed is providing the requested
   QoS downstream.  Currently, the aggregation approach is an item for
   high priority study, so RSVP over ATM implementations should use one
   of the other approaches.

   The current RSVP specification addresses heterogeneous requests, but
   not within an ATM specific context.  The current processing rules and
   traffic control interface describe a model where the largest
   requested reservation for a specific outgoing interface is used in
   resource allocation, and traffic is delivered at the higher rate to
   all next-hops.  The simplest approach for RSVP over ATM will be to
   emulate this model, even though it may be undesirable in certain
   circumstances.  So, RSVP over ATM implementations **should/must**
   [Note 1] be able to support heterogeneity in QoS requests by
   providing the largest requested QoS to all next hops using a single
   QoS VC, as described in sections 3.1.2 and 3.1.3.  Implementations
   may also support heterogeneity through some other mechanism, e.g.,
   using multiple appropriately sized VCs.
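   Emulating the existing traffic control model over ATM amounts to a
   simple merge: take the largest reservation requested for the outgoing
   interface and add every next hop to one QoS VC at that level.  A
   sketch, assuming comparable scalar QoS values and a hypothetical
   function name:

```python
def merged_vc_qos(next_hop_requests):
    """Emulate the RSVP traffic control model over a single QoS VC.

    next_hop_requests maps next hop -> requested QoS.  Returns the QoS
    level for the single VC (the largest request) and the list of next
    hops that join it, or (None, []) if nothing is reserved.
    """
    if not next_hop_requests:
        return None, []
    level = max(next_hop_requests.values())   # largest requested reservation
    leaves = sorted(next_hop_requests)        # every next hop joins the VC
    return level, leaves
```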
   The other type of heterogeneity to be addressed is best-effort
   receivers.  Two possible approaches for handling best-effort
   receivers are using a single QoS VC, as described in section 3.1.3,
   or using two VCs, as described in section 3.1.2.  Unfortunately,
   neither of these approaches is the right answer for all cases.  For
   some networks, e.g. LANs, it is likely that the single VC approach
   will be desired.  In other networks, e.g. public WANs, it is likely
   that the multiple VC approach will be desired.  Each sub-network
   sender (router or host) may choose how traffic is mapped onto VCs.
   For this reason, baseline RSVP over ATM implementations
   **should/must** [Note 2] support best-effort multicast receivers
   using either the single QoS VC or the limited heterogeneity approach.
   Implementations should support both approaches and provide the
   ability to select which method is actually used, but are not required
   to do so.

3.2 Multicast Data Distribution

   Two models are planned for IP multicast data distribution over ATM.
   In one model, senders establish point-to-multipoint VCs to all ATM
   attached destinations, and data is then sent over these VCs.  This
   model is often called "multicast mesh" or "VC mesh" mode
   distribution.  In the second model, senders send data over
   point-to-point VCs to a central point, and the central point relays
   the data onto point-to-multipoint VCs that have been established to
   all receivers of the IP multicast group.  This model is often
   referred to as "multicast server" mode distribution.  Figure 5 shows
   data flow for both modes of IP multicast data distribution.

   _________________________
   [Note 1] The working group must decide if this is a requirement or a
   recommendation.
   [Note 2] The working group must decide if this is a requirement or a
   recommendation.
   RSVP over ATM solutions must ensure that IP multicast data is
   distributed with appropriate QoS.

   [Figure goes here]
          Figure 5: IP Multicast Data Distribution Over ATM

3.2.1 Implementation Guidelines

   In the Classical IP context, multicast server support is provided via
   MARS [1].  MARS does not currently provide a way to communicate QoS
   requirements to a MARS multicast server.  Therefore, RSVP over ATM
   implementations **must/should** [Note 3] support "mesh-mode"
   distribution for RSVP controlled multicast flows.

3.3 Receiver Transitions

   When setting up a point-to-multipoint VC there will be a time when
   some receivers have been added to a QoS VC and some have not.  During
   such transition times it is possible to start sending data on the
   newly established VC.  The issue is when to start sending data on the
   new VC.  If data is sent on both the new VC and the old VC, then data
   will be delivered with the proper QoS to some receivers and with the
   old QoS to all receivers, so the QoS receivers would get duplicate
   data.  If data is sent just on the new QoS VC, the receivers that
   have not yet been added will lose information.  So, the issue comes
   down to whether to send on one or both of the new QoS VC and the old
   VC.  In one case duplicate information will be received, in the other
   some information may not be received.  This issue needs to be
   considered for three cases: when establishing the first QoS VC, when
   establishing a VC to support a QoS change, and when adding a new
   end-point to an already established QoS VC.

   The first two cases are very similar.  In both, it is possible to
   send data on the partially completed new VC, and the issue of
   duplicate versus lost information is the same.

   _________________________
   [Note 3] The working group must decide if this is a requirement or a
   recommendation.
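   For these two cases, a conservative sub-net sender can resolve the
   duplicate-versus-lost trade-off by withholding data from the new VC
   until every intended leaf has been added, at the cost of delaying use
   of the new QoS.  A sketch of the state tracking, using hypothetical
   names:

```python
class PendingQosVc:
    """Track a point-to-multipoint QoS VC under construction (sketch).

    Data is withheld until every intended leaf has been added, so during
    setup no receiver loses data and none receives duplicates.
    """

    def __init__(self, intended_leaves):
        self.intended = set(intended_leaves)  # leaves the VC must reach
        self.added = set()                    # leaves already signalled in

    def leaf_added(self, leaf):
        """Record a successful ADD PARTY for one leaf."""
        self.added.add(leaf)

    def ready_to_send(self):
        """True once every intended leaf is on the VC."""
        return self.intended <= self.added
```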
The last case is when an end-point must be added to an existing QoS
VC.  In this case the end-point must be both added to the QoS VC and
dropped from a best-effort VC.  The issue is which to do first.  If
the add is requested first, then the end-point may get duplicate
information.  If the drop is requested first, then the end-point may
lose information.

3.3.1 Implementation Guidelines

In order to ensure predictable behavior and delivery of data to all
receivers, data can only be sent on a new VC once all parties have
been added.  This ensures that all data is delivered exactly once to
all receivers.  This approach does not quite apply to the last case.
In the last case, the add should be completed first, then the drop.
This means that receivers must be prepared to receive some duplicate
packets at times of QoS setup.

3.4 Multicast End-Point Identification

One basic issue is how to identify the ATM end-points participating
in an IP multicast group.  The ATM end-points will be IP multicast
receivers and/or next-hops.  Both QoS and best-effort end-points
must be identified.  RSVP next-hop information will provide QoS
end-points, but not best-effort end-points.

Another issue is identifying end-points of multicast traffic handled
by non-RSVP capable next-hops.  In this case a PATH message travels
through a non-RSVP egress router on the way to the next-hop RSVP
node.  When the next-hop RSVP node sends a RESV message, it may
arrive at the source over a different route than the data is using.
The source will get the RESV message, but will not know which egress
router needs the QoS.  For unicast sessions, there is no problem
since the ATM end-point will be the IP next-hop router.
Unfortunately, multicast routing may not be able to uniquely
identify the IP next-hop router.  So it is possible that a multicast
end-point cannot be identified.
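Since RSVP itself only identifies the QoS end-points, the best-effort
receivers fall out as a set difference once a full membership list is
obtained from some other mechanism.  A minimal sketch of this
comparison, with illustrative names only:

```python
# Minimal sketch: once a full end-point list is obtained (e.g. from
# MARS or a multicast routing protocol), the best-effort receivers
# are simply the group members with no matching RSVP reservation.
def best_effort_receivers(all_endpoints, qos_endpoints):
    """Group members that RSVP did not identify as QoS end-points."""
    return set(all_endpoints) - set(qos_endpoints)
```

For example, if the group has members r1, r2, and r3, and RSVP has
identified only r2 as a QoS end-point, r1 and r3 are the best-effort
receivers.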
3.4.1 Implementation Guidelines

In the most common case, MARS will be used to identify all
end-points of a multicast group.  In the router-to-router case, a
multicast routing protocol may provide all next-hops for a
particular multicast group.  In either case, RSVP over ATM
implementations must obtain a full list of end-points, both QoS and
non-QoS, using the appropriate mechanisms.  The full list can be
compared against the RSVP identified end-points to determine the
list of best-effort receivers.

There is no straightforward solution to uniquely identifying
end-points of multicast traffic handled by non-RSVP next-hops.  The
preferred solution is to use multicast routing protocols that
support unique end-point identification.  In cases where such
routing protocols are unavailable, all IP routers that will be used
to support RSVP over ATM should support RSVP.

3.5 Reservation to VC Mapping

There is a basic need to map from IP and RSVP to ATM Virtual
Circuits (VCs).  LAN Emulation [7], Classical IP [14] and, more
recently, NHRP [9] discuss mapping IP traffic onto ATM SVCs, but
they only cover a single QoS class, i.e., best-effort traffic.  When
QoS is introduced, VC mapping must be revisited.  For RSVP
controlled QoS flows, one issue is which VCs to use for QoS data
flows.

In the Classical IP over ATM and current NHRP models, a single
point-to-point VC is used for all traffic between two ATM attached
hosts (routers and end-stations).  It is likely that such a single
VC will not be adequate or optimal when supporting data flows with
multiple QoS types.  RSVP's basic purpose is to install support for
flows with multiple QoS types, so it is essential for any RSVP over
ATM solution to address VC usage for QoS data flows.  RSVP
reservation styles will also need to be taken into account in any VC
usage strategy.
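The simplest mapping consistent with the above is one VC per RSVP
reservation, keyed by session and filter spec.  The sketch below is
illustrative only; the class and callback names are assumptions, not
part of any specified API:

```python
# A minimal sketch of per-reservation VC usage: each RSVP
# reservation gets its own VC, keyed by session and filter spec, so
# the reserved resources are dedicated to the associated QoS flow.
class ReservationVCMap:
    def __init__(self, open_vc):
        self.open_vc = open_vc   # callback into the ATM signalling layer
        self.vcs = {}            # (session, filter_spec) -> VC handle

    def vc_for(self, session, filter_spec, qos):
        key = (session, filter_spec)
        if key not in self.vcs:
            # No VC yet for this reservation: open one with its QoS.
            self.vcs[key] = self.open_vc(session, qos)
        return self.vcs[key]
```

A reservation-style-aware strategy (e.g. aggregating wildcard-filter
reservations onto a shared VC) would replace the key function but keep
the same table structure.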
There are multiple options for mapping flows onto VCs.  The key
issue to be addressed is providing the requested QoS downstream.
This can be done by mapping each reservation into a single VC or
through more aggregated schemes as discussed in Section 3.1.4.

3.5.1 Minimum Implementation

While it is possible to send multiple flows and multiple distinct
reservations (FF) over a single VC, implementation of such
approaches is a matter for further study.  So, baseline RSVP over
ATM implementations **may/must** [Note 4] allow for the use of a
single VC to support each RSVP reservation.  By using independent
VCs per reservation, delivery of requested resources to the
associated QoS data flow can be assured.  This approach does not
preclude support for multiple flows per VC.

[Note 4] The working group must decide if this is a requirement or a
suggestion.  The appropriate wording will be used based on the
result.

3.6 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources
that are requested may change at any time.  There are several common
reasons for a change of reservation QoS.  First, an existing
receiver can request a new larger (or smaller) QoS.  Second, a
sender may change its traffic specification (TSpec), which can
trigger a change in the reservation requests of the receivers.
Third, a new sender can start sending to a multicast group with a
larger traffic specification than existing senders, triggering
larger reservations.  Finally, a new receiver can make a reservation
that is larger than existing reservations.  If the merge node for
the larger reservation is an ATM edge device, a new larger
reservation must be set up across the ATM network.
Since ATM service, as currently defined in UNI 3.x and UNI 4.0, does
not allow renegotiating the QoS of a VC, dynamically changing a
reservation means creating a new VC with the new QoS and tearing
down the established VC.  Tearing down a VC and setting up a new VC
in ATM are complex operations that involve a non-trivial amount of
processor time, and may have a substantial latency.

There are several options for dealing with this mismatch in service.
A specific approach will need to be a part of any RSVP over ATM
solution.

3.6.1 Implementation Guidelines

The proposed approach for supporting changes in RSVP reservations is
to attempt to replace an existing VC with a new, appropriately sized
VC.  During setup of the replacement VC, the old VC is left in place
unmodified, to minimize interruption of QoS data delivery.  Once the
replacement VC is established, data transmission is shifted to the
new VC, and the old VC is then closed.

If setup of the replacement VC fails, then the old QoS VC should
continue to be used.  When the new reservation is greater than the
old reservation, the reservation request should be answered with an
error.  When the new reservation is less than the old reservation,
the request should be treated as if the modification was successful.
While leaving the larger allocation in place is suboptimal, it
maximizes delivery of service to the user.  Implementations should
retry replacing the too-large VC after some appropriate elapsed
time.

One additional issue is that only one QoS change can be processed at
a time per reservation.  If the (RSVP) requested QoS is changed
while the first replacement VC is still being set up, then the
replacement VC is released and the whole VC replacement process is
restarted.
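The replacement procedure and its failure handling described above can
be sketched as follows.  This is a hedged illustration only: the
exception names and the setup_vc/teardown callbacks stand in for an
ATM signalling interface and are not part of any specified API.

```python
class SetupFailed(Exception):
    """Raised by the (assumed) ATM signalling layer on VC setup failure."""

class ReservationError(Exception):
    """Reported back to RSVP when a larger reservation cannot be met."""

class Flow:
    def __init__(self, endpoints, vc, qos):
        self.endpoints, self.vc, self.qos = endpoints, vc, qos

def change_reservation(flow, new_qos, setup_vc, teardown):
    old_vc = flow.vc
    try:
        # The old VC stays in place, unmodified, while the
        # replacement VC is set up.
        new_vc = setup_vc(flow.endpoints, new_qos)
    except SetupFailed:
        if new_qos > flow.qos:
            # Larger request failed: answer the reservation with an error.
            raise ReservationError("insufficient resources")
        # Smaller request failed: keep the (too large) old VC and treat
        # the modification as successful; retry shrinking it later.
        return old_vc
    flow.vc, flow.qos = new_vc, new_qos   # shift data to the new VC
    teardown(old_vc)                      # then close the old VC
    return new_vc
```

Note that the old VC is torn down only after data has been remapped,
so QoS delivery is never interrupted by a failed replacement attempt.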
To limit the number of changes and to avoid excessive signalling
load, implementations may limit the number of changes that will be
processed in a given period.  One implementation approach would have
each ATM edge device configured with a time parameter tau (which can
change over time) that gives the minimum amount of time the edge
device will wait between successive changes of the QoS of a
particular VC.  Thus, if the QoS of a VC is changed at time t, all
messages that would change the QoS of that VC that arrive before
time t+tau would be queued.  If several messages changing the QoS of
a VC arrive during the interval, redundant messages can be
discarded.  At time t+tau, the remaining change(s) of QoS, if any,
can be executed.

The sequence of events for a single VC would be:

1. Wait if timer is active

2. Establish VC with new QoS

3. Remap data traffic to new VC

4. Tear down old VC

5. Activate timer

There is an interesting interaction between heterogeneous
reservations and dynamic QoS.  In the case where a RESV message is
received from a new next-hop and the requested resources are larger
than any existing reservation, both dynamic QoS and heterogeneity
need to be addressed.  A key issue is whether to first add the new
next-hop or to change to the new QoS.  This is a fairly
straightforward special case.  Since the older, smaller reservation
does not support the new next-hop, the dynamic QoS process should be
initiated first.  Since the new QoS is only needed by the new
next-hop, it should be the first end-point of the new VC.  This way,
signalling is minimized when the set-up to the new next-hop fails.

3.7 Short-Cuts

Short-cuts [9] allow ATM attached routers and hosts to directly
establish point-to-point VCs across LIS boundaries, i.e., the VC
end-points are on different IP sub-nets.
The ability for short-cuts and RSVP to interoperate has been raised
as a general question.  The area of concern is the ability to handle
asymmetric short-cuts, specifically how RSVP can handle the case
where a downstream short-cut does not have a matching upstream
short-cut.  In this case, which is shown in Figure 6, PATH and RESV
messages follow different paths.

[Figure goes here]
Figure 6: Asymmetric RSVP Message Forwarding With ATM Short-Cuts

Examination of RSVP shows that the protocol already includes
mechanisms that will support short-cuts.  The mechanism is the same
one used to support RESV messages arriving at the wrong router or
the wrong interface.  The key aspect of this mechanism is that RSVP
only processes messages that arrive at the proper interface, and
forwards messages that arrive on the wrong interface.  The proper
interface is indicated in the NHOP object of the message.  So,
existing RSVP mechanisms will support asymmetric short-cuts.

The short-cut model of VC establishment still poses several issues
when running with RSVP.  The major issues are dealing with
established best-effort short-cuts, when to establish short-cuts,
and QoS-only short-cuts.  These issues will need to be addressed by
RSVP implementations.

3.7.1 Implementation Guidelines

The key issue to be addressed by the baseline RSVP over ATM solution
is when to establish a short-cut for a QoS data flow.  The proposed
approach is to simply follow best-effort traffic.  When a short-cut
has been established for best-effort traffic to a destination or
next-hop, that same end-point should be used when setting up RSVP
triggered VCs for QoS traffic to the same destination or next-hop.
This will happen naturally when PATH messages are forwarded over the
best-effort short-cut.
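The "follow best-effort traffic" rule above reduces to a simple
lookup.  A hedged sketch, in which the short-cut table and its key are
assumptions for illustration:

```python
# Illustrative sketch of the Section 3.7.1 guideline: a QoS VC simply
# follows the best-effort short-cut.  `shortcut_table` is an assumed
# mapping from destination/next-hop to the ATM end-point of an
# established best-effort short-cut.
def qos_vc_endpoint(dest, shortcut_table, hop_by_hop_next_hop):
    # If a best-effort short-cut exists, the QoS VC targets the same
    # ATM end-point; otherwise the QoS VC follows the hop-by-hop path.
    return shortcut_table.get(dest, hop_by_hop_next_hop)
```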
Note that in this approach, when best-effort short-cuts are never
established, RSVP triggered QoS short-cuts will also never be
established.

3.8 VC Teardown

RSVP can identify from either explicit messages or timeouts when a
data VC is no longer needed.  Therefore, data VCs set up to support
RSVP controlled flows should only be released at the direction of
RSVP.  VCs must not be timed out due to inactivity by either the VC
initiator or the VC receiver.  This conflicts with VCs timing out as
described in RFC 1755 [11], Section 3.4, on VC Teardown.  RFC 1755
recommends tearing down a VC that is inactive for a certain length
of time; twenty minutes is recommended.  This timeout is typically
implemented at both the VC initiator and the VC receiver.  When this
timeout occurs for an RSVP initiated VC, a valid VC with QoS will be
torn down unexpectedly.  While this behavior is acceptable for
best-effort traffic, it is important that RSVP controlled VCs not be
torn down.  If there is no choice about the VC being torn down, the
RSVP daemon must be notified, so a reservation failure message can
be sent.  The RSVP daemon must also be notified whenever a VC is
torn down without direction from RSVP.

3.8.1 Implementation Guidelines

For VCs initiated at the request of RSVP, the configurable
inactivity timer mentioned in [11] must be set to "infinite".
Setting the inactivity timer value at the VC initiator should not be
problematic, since the proper value can be relayed internally at the
originator.

Setting the inactivity timer at the VC receiver is more difficult.
To properly set the timer, it is necessary to identify an incoming
VC setup as RSVP initiated.  We propose to make this identification
as part of the negotiation of encapsulation.
Specifically, the B-LLI IE in the SETUP message would indicate that
the associated VC is controlled by an internet layer signalling
protocol and should not be timed out.

The format of the B-LLI IE is [Note 5]:

[Note 5] This will be defined in a future version.

4. RSVP Control VC Management

One last important issue is providing a data path for the RSVP
messages themselves.  There are two main types of messages in RSVP,
PATH and RESV.  PATH messages are sent to a multicast address, while
RESV messages are sent to a unicast address.  Other RSVP messages
are handled similarly to either PATH or RESV [Note 6].  So, ATM VCs
used for RSVP signalling messages need to provide both unicast and
multicast functionality.

[Note 6] This can be slightly more complicated for RERR messages.

There are several different approaches for how to assign VCs to use
for RSVP signalling messages.  The main approaches are:

o use same VC as data

o single VC per session

o single point-to-multipoint VC multiplexed among sessions

o multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to
assign VCs for RSVP signalling.  One issue is the number of
additional VCs needed for RSVP signalling.  Related to this issue is
the degree of multiplexing on the RSVP VCs.  In general, more
multiplexing means fewer VCs.  An additional issue is the latency in
dynamically setting up new RSVP signalling VCs.  A final issue is
complexity of implementation.  The remainder of this section
discusses the issues and tradeoffs among these different approaches
and suggests guidelines for when to use which alternative.

4.1 Mixed data and control traffic

In this scheme, RSVP signalling messages are sent on the same VCs as
the data traffic.
The main advantage of this scheme is that no additional VCs are
needed beyond what is needed for the data traffic.  An additional
advantage is that there is no ATM signalling latency for PATH
messages (which follow the same routing as the data messages).
However, there can be a major problem when data traffic on a VC is
nonconforming.  With nonconforming traffic, RSVP signalling messages
may be dropped.  While RSVP is resilient to a moderate level of
dropped messages, excessive drops would lead to repeated tearing
down and re-establishing of QoS VCs, a very undesirable behavior for
ATM.  Due to these problems, this is not a good choice for carrying
RSVP signalling messages, even though the number of VCs needed for
this scheme is minimized.

One variation of this scheme is to use the best-effort data path for
signalling traffic.  In this scheme, there is no issue with
nonconforming traffic, but there is an issue with congestion in the
ATM network.

RSVP provides some resiliency to message loss due to congestion, but
RSVP control messages should be offered a preferred class of
service.  A related variation of this scheme that is promising but
requires further study is to have a packet scheduling algorithm
(before entering the ATM network) that gives priority to the RSVP
signalling traffic.  This can be difficult to do at the IP layer.

4.2 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each RSVP
reservation.  This scheme results in twice the minimum number of
VCs, but means that RSVP signalling messages have the advantage of a
separate VC.  This separate VC means that RSVP signalling messages
have their own traffic contract, and compliant signalling messages
are not subject to dropping due to other noncompliant traffic (such
as can happen with the scheme in Section 4.1).
The advantage of this scheme is its simplicity: whenever a data VC
is created, a separate RSVP signalling VC is created.  The
disadvantage of the extra VC is that extra ATM signalling needs to
be done.

In short, this scheme requires twice the minimum number of VCs and
adds latency, but is quite simple.

4.3 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP
signalling VC for each unique ingress router and unique set of
egress routers.  This scheme allows multiplexing of RSVP signalling
traffic that shares the same ingress router and the same egress
routers.  This can save on the number of VCs by multiplexing, but
there are problems when the destinations of the multiplexed
point-to-multipoint VCs are changing.  Several alternatives exist in
these cases that have applicability in different situations.  First,
when the egress routers change, the ingress router can check if it
already has a point-to-multipoint RSVP signalling VC for the new
list of egress routers.  If the RSVP signalling VC already exists,
then the RSVP signalling traffic can be switched to this existing
VC.  If no such VC exists, one approach would be to create a new VC
with the new list of egress routers.  Other approaches include
modifying the existing VC to add an egress router, or using a
separate new VC for the new egress routers.  When a destination
drops out of a group, an alternative would be to keep sending on the
existing VC even though some traffic is wasted.

The number of VCs used in this scheme is a function of traffic
patterns across the ATM network, but is always less than the number
used with the single RSVP VC per data VC scheme.  In addition,
existing best-effort data VCs could be used for RSVP signalling.
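The ingress router's check described above can be sketched as a table
keyed by the (unordered) set of egress routers.  The names below are
assumptions for illustration, not a specified interface:

```python
# Sketch of the Section 4.3 check at the ingress router: key the
# point-to-multipoint signalling VCs by their set of egress routers,
# and reuse an existing VC whenever the set matches.
class SignallingVCTable:
    def __init__(self):
        self.vcs = {}   # frozenset(egress routers) -> VC handle

    def vc_for(self, egress_routers, create_vc):
        key = frozenset(egress_routers)   # order of routers is irrelevant
        if key not in self.vcs:
            # No matching VC: one option is to create a new VC for
            # this exact egress set (other options: modify the
            # existing VC, or use a separate VC for the new routers).
            self.vcs[key] = create_vc(key)
        return self.vcs[key]
```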
Reusing best-effort VCs saves on the number of VCs at the cost of a
higher probability of RSVP signalling packet loss.  One possible
place where this scheme will work well is in the core of the
network, where there is the most opportunity to take advantage of
the savings due to multiplexing.  The exact savings depend on the
patterns of traffic and the topology of the ATM network.

4.4 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are used
for a single point-to-multipoint data VC.  This scheme allows
multiplexing of RSVP signalling traffic but requires the same
traffic to be sent on each of several VCs.  This scheme is quite
flexible and allows a large amount of multiplexing.  Since
point-to-point VCs can set up a reverse channel at the same time as
setting up the forward channel, this scheme could save substantially
on signalling cost.  In addition, signalling traffic could share
existing best-effort VCs.  Sharing existing best-effort VCs reduces
the total number of VCs needed, but might cause signalling traffic
drops if there is congestion in the ATM network.

This point-to-point scheme would work well in the core of the
network, where there is much opportunity for multiplexing.  Also in
the core of the network, RSVP VCs can stay permanently established,
either as Permanent Virtual Circuits (PVCs) or as long-lived
Switched Virtual Circuits (SVCs).  The number of VCs in this scheme
will depend on traffic patterns, but in the core of a network would
be approximately n(n-1)/2, where n is the number of IP nodes in the
network (e.g., 10 core nodes would need about 45 such VCs).  In the
core of the network, this will typically be small compared to the
total number of VCs.

4.5 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP VCs.
Three solutions have been covered in Section 4.1 and in the shared
best-effort VC variations in Sections 4.3 and 4.4.  For other RSVP
VC schemes, a QoS (possibly best effort) will be needed.  What QoS
to use partially depends on the expected level of multiplexing that
is being done on the VCs, and the expected reliability of
best-effort VCs.  Since RSVP signalling is infrequent (typically
every 30 seconds), only a relatively small QoS should be needed.
This is important since using a larger QoS risks the VC setup being
rejected for lack of resources.  Falling back to best effort when a
QoS call is rejected is possible, but if the ATM network is
congested, there will likely be problems with RSVP packet loss on
the best-effort VC as well.  Additional experimentation is needed in
this area.

4.6 Implementation Guidelines

Implementations **will/should** [Note 7], at a minimum, be able to
send RSVP control messages over the best-effort data path; see
Figure 7.  The specific best-effort paths that will be used by RSVP
are: for unicast, the same VC used to reach the unicast destination;
and for multicast, the same VC that is used for best-effort traffic
destined to the IP multicast group.  Note that there may be another
best-effort VC that is used to carry session data traffic.

[Figure goes here]
Figure 7: RSVP Control Message VC Usage

An issue with this approach is that best-effort VCs may not provide
the reliability that RSVP needs.  However, RSVP allows for a certain
amount of packet loss without any loss of state synchronization.
In all cases, RSVP control traffic should be offered a preferred
class of service.

5. Encapsulation

Since RSVP is a signalling protocol used to control flows of IP data
packets, encapsulation for both RSVP packets and associated IP data
packets must be defined.
There are two encapsulation options for running IP over ATM: RFC
1483 and LANE.  The first option is described in RFC 1483 [6] and is
currently used for "Classical" IP over ATM and NHRP.

The second option is LAN Emulation, as described in [7].  LANE
encapsulation does not currently include a QoS signalling interface.
If LANE encapsulation is needed, LANE QoS signalling would first
need to be defined by the ATM Forum.  It is possible that LANE 2.0
will include the required QoS support.

5.1 Implementation Guidelines

While it is possible to use different encapsulations for RSVP
packets and associated IP data packets, there is no apparent benefit
in doing so.  So, the same encapsulation must be used for both.

The choice of encapsulation options is clear.  Currently LANE
doesn't have a QoS control interface, and there is no way to
communicate QoS requirements to the LANE BUS.  Since QoS control is
needed to make RSVP over ATM useful, RFC 1483 encapsulation must be
used by RSVP over ATM.

[Note 7] The working group must decide if this is a requirement or a
recommendation.

6. Security

The same considerations stated in [5] and [11] apply to this
document.  There are no additional security issues raised in this
document.

7. Future Work

We have described a set of schemes for deploying RSVP over IP over
ATM.  There are a number of other issues that are subjects of
continuing research.  These issues (and others) are covered in [3],
and are briefly repeated here.

A major issue is providing policy control for ATM VC creation.
There is work going on in the RSVP working group [8] on defining an
architecture for policy support.  Further work is needed in defining
an API and policy objects.  As this area is critical to deployment,
progress will need to be made in this area.
NHRP provides advantages in allowing short-cuts across two or more
LISs.  Short-cutting router hops can lead to more efficient data
delivery.  Work on NHRP is ongoing, but it currently provides only a
unicast delivery service.  Further study is needed to determine how
NHRP can be used with RSVP and ATM.  Future work depends on the
development of NHRP for multicast.

Furthermore, when using RSVP it may be desirable to establish
multiple short-cut VCs, to use these VCs for specific QoS flows, and
to use the hop-by-hop path for other QoS and non-QoS flows.  The
current NHRP specification [9] does not preclude such an approach,
but neither does it explicitly support it.  We believe that explicit
support of flow-based short-cuts would improve RSVP over ATM
solutions.  We also believe that such support may require the
ability to include flow information in the NHRP request.

There is work in the ION working group on MultiCast Server (MCS)
architectures for MARS.  An MCS provides savings in the number of
VCs in certain situations.  When using a multicast server, the
sub-network sender could establish a point-to-point VC with a
specific QoS to the server, but there is no current mechanism to
relay QoS requirements to the MCS.  Future work includes providing
RSVP and ATM support over MARS MCSs.

Unicast ATM VCs are inherently bi-directional and have the
capability of supporting a "reverse channel".  By using the reverse
channel for unicast VCs, the number of VCs used can potentially be
reduced.  Future work includes examining how the reverse VCs can be
used most effectively.

Current work in the ATM Forum and ITU promises additional advantages
for RSVP and ATM, including renegotiating QoS parameters and
variegated VCs.
QoS renegotiation would be particularly beneficial, since the only
option available today for changing VC QoS parameters is replacing
the VC.  It is important to keep current with changes in ATM, and to
keep this document up-to-date.

Scaling of the number of sessions is an issue.  The key ATM-related
implication of a large number of sessions is the number of VCs and
the associated (buffer and queue) memory.  The approach to solving
this problem is aggregation, either at the RSVP layer or at the
ISSLL layer (or both).

This document describes approaches that can be used with ATM UNI
4.0, but does not make use of the available leaf-initiated join, or
LIJ, capability.  The use of LIJ may be useful in addressing scaling
issues.  The coordination of RSVP with LIJ remains a research issue.

Lastly, it is likely that LANE 2.0 will provide some QoS support
mechanisms, including proper QoS allocation for multicast traffic.
It is important to track developments, and to develop suitable RSVP
over ATM LANE support at the appropriate time.

8. Authors' Addresses

Steven Berson
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: +1 310 822 1511
EMail: berson@isi.edu

Lou Berger
FORE Systems
6905 Rockledge Drive
Suite 800
Bethesda, MD 20817

Phone: +1 301 571 2534
EMail: lberger@fore.com

REFERENCES

[1] Armitage, G., "Support for Multicast over UNI 3.0/3.1 based ATM
Networks," Internet Draft, February 1996.

[2] Berson, S., "'Classical' RSVP and IP over ATM," INET '96, July
1996.

[3] Borden, M., Crawley, E., Krawczyk, J., Baker, F., and Berson,
S., "Issues for RSVP and Integrated Services over ATM," Internet
Draft, February 1996.

[4] Borden, M., and Garrett, M., "Interoperation of Controlled-Load
and Guaranteed-Service with ATM," Internet Draft, June 1996.
[5] Braden, R., Zhang, L., Berson, S., Herzog, S., and Jamin, S.,
"Resource ReSerVation Protocol (RSVP) -- Version 1 Functional
Specification," Internet Draft, August 1996.

[6] Heinanen, J., "Multiprotocol Encapsulation over ATM Adaptation
Layer 5," RFC 1483, July 1993.

[7] The ATM Forum, "LAN Emulation Over ATM Specification," Version
1.0.

[8] Herzog, S., "Accounting and Access Control Policies for Resource
Reservation Protocols," Internet Draft, June 1996.

[9] Luciani, J., Katz, D., Piscitello, D., and Cole, B., "NBMA Next
Hop Resolution Protocol (NHRP)," Internet Draft, June 1996.

[10] Onvural, R., and Srinivasan, V., "A Framework for Supporting
RSVP Flows Over ATM Networks," Internet Draft, March 1996.

[11] Perez, M., Liaw, F., Grossman, D., Mankin, A., Hoffman, E., and
Malis, A., "ATM Signalling Support for IP over ATM," RFC 1755,
February 1995.

[12] The ATM Forum, "ATM User-Network Interface (UNI) Specification
- Version 3.1," Prentice Hall.

[13] Braden, R., Clark, D., and Shenker, S., "Integrated Services in
the Internet Architecture: an Overview," RFC 1633, June 1994.

[14] Laubach, M., "Classical IP and ARP over ATM," RFC 1577, January
1994.

[15] Shenker, S., Partridge, C., and Guerin, R., "Specification of
Guaranteed Quality of Service," Internet Draft, August 1996.

[16] Wroclawski, J., "Specification of the Controlled-Load Network
Element Service," Internet Draft, August 1996.

[17] Zhang, L., Deering, S., Estrin, D., Shenker, S., and Zappala,
D., "RSVP: A New Resource ReSerVation Protocol," IEEE Network,
September 1993.