Internet Draft                                                 S. Berson
Expiration: September 1997                                           ISI
File: draft-ietf-issll-atm-support-03.txt                      L. Berger
                                                            FORE Systems

               IP Integrated Services with RSVP over ATM

                             March 26, 1997

Status of Memo

   This document is an Internet-Draft.  Internet-Drafts are working
   documents of the Internet Engineering Task Force (IETF), its areas,
   and its working groups.  Note that other groups may also distribute
   working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   To learn the current status of any Internet-Draft, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ds.internic.net (US East Coast), nic.nordu.net
   (Europe), ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific
   Rim).

Abstract

   This draft describes a method for providing IP Integrated Services
   with RSVP over ATM switched virtual circuits (SVCs).  It provides an
   overall approach to the problem as well as a specific method for
   running over today's ATM networks.  There are two parts of this
   problem.
   This draft provides guidelines for using ATM VCs with QoS as part of
   an Integrated Services Internet.  A related draft, "Interoperation
   of Controlled-Load and Guaranteed-Service with ATM", describes
   service mappings between IP Integrated Services and ATM services.

Authors' Note

   The postscript version of this document contains figures that are not
   included in the text version, so it is best to use the postscript
   version.  Figures will be converted to ASCII in a future version.

Table of Contents

   1. Introduction
      1.1 Terms
      1.2 Assumptions
   2. Policy
   3. Data VC Management
      3.1 Reservation to VC Mapping
      3.2 Heterogeneity
      3.3 Multicast End-Point Identification
      3.4 Multicast Data Distribution
      3.5 Receiver Transitions
      3.6 Dynamic QoS
      3.7 Short-Cuts
      3.8 VC Teardown
   4. RSVP Control VC Management
      4.1 Mixed data and control traffic
      4.2 Single RSVP VC per RSVP Reservation
      4.3 Multiplexed point-to-multipoint RSVP VCs
      4.4 Multiplexed point-to-point RSVP VCs
      4.5 QoS for RSVP VCs
   5. Encapsulation
   6. Security
   7. Future Work
   8. Authors' Addresses

1. Introduction

   The Internet currently has one class of service, normally referred
   to as "best effort."  This service is typified by first-come,
   first-served scheduling at each hop in the network.  Best effort
   service has worked well for electronic mail, World Wide Web (WWW)
   access, file transfer (e.g. ftp), etc.  For real-time traffic such
   as voice and video, the current Internet has performed well only
   across unloaded portions of the network.  In order to provide
   guaranteed quality for real-time traffic, new classes of service and
   a QoS signalling protocol are being introduced in the Internet
   [7,17,18], while retaining the existing best effort service.  The
   QoS signalling protocol is RSVP [8,19], the Resource ReSerVation
   Protocol.

   ATM is rapidly becoming an important link layer technology.  One of
   the important features of ATM technology is the ability to request a
   point-to-point Virtual Circuit (VC) with a specified Quality of
   Service (QoS).  An additional feature of ATM technology is the
   ability to request point-to-multipoint VCs with a specified QoS.
   Point-to-multipoint VCs allow leaf nodes to be added and removed
   from the VC dynamically, and so provide a mechanism for supporting
   IP multicast.  It is only natural that RSVP and the Internet
   Integrated Services (IIS) model would like to utilize the QoS
   properties of any underlying link layer, including ATM.

   Classical IP over ATM [11] has solved part of this problem,
   supporting IP unicast best effort traffic over ATM.  Classical IP
   over ATM is based on a Logical IP Subnetwork (LIS), which is a
   separately administered IP sub-network.
   Hosts within a LIS communicate using the ATM network, while hosts
   from different sub-nets communicate only by going through an IP
   router (even though it may be possible to open a direct VC between
   the two hosts over the ATM network).  Classical IP over ATM provides
   an Address Resolution Protocol (ATMARP) for ATM edge devices to
   resolve IP addresses to native ATM addresses.  For any pair of
   IP/ATM edge devices (i.e. hosts or routers), a single VC is created
   on demand and shared for all traffic between the two devices.  A
   second part of the RSVP and IIS over ATM problem, IP multicast, is
   close to being solved with MARS [1], the Multicast Address
   Resolution Server.  MARS complements ATMARP by allowing an IP
   address to resolve into a list of native ATM addresses, rather than
   just a single address.

   A key remaining issue for IP over ATM is the integration of RSVP
   signalling and ATM signalling in support of the Internet Integrated
   Services (IIS) model.  There are two main areas involved in
   supporting the IIS model, QoS translation and VC management.  QoS
   translation concerns mapping a QoS from the IIS model to a proper
   ATM QoS, while VC management concentrates on how many VCs are needed
   and which traffic flows are routed over which VCs.  Mapping of IP
   QoS to ATM QoS is the subject of a companion draft [6].

   This draft concentrates on VC management (we assume in this draft
   that the QoS for a single reserved flow can be acceptably translated
   to an ATM QoS).  Two types of VCs need to be managed: data VCs,
   which carry the actual data traffic, and control VCs, which carry
   the RSVP signalling traffic.  Several VC management schemes for both
   data and control VCs are described in this draft.  For each scheme,
   there are two major issues: (1) heterogeneity and (2) dynamic
   behavior.  Heterogeneity refers to how requests for different QoS's
   are handled, while dynamic behavior refers to how changes in QoS and
   changes in multicast group membership are handled.  These schemes
   will be evaluated in terms of the following metrics: (1) the number
   of VCs needed to implement the scheme, (2) the bandwidth wasted due
   to duplicate packets, and (3) flexibility in handling heterogeneity
   and dynamic behavior.  Examples where each scheme is most applicable
   are given.

1.1 Terms

   The terms "reservation" and "flow" are used in many contexts, often
   with different meanings.  These terms are used in this document with
   the following meanings (a purely illustrative sketch follows the
   list):

   o  Reservation is used in this document to refer to an RSVP
      initiated request for resources.  RSVP initiates requests for
      resources based on RESV message processing.  RESV messages that
      simply refresh state do not trigger resource requests.  Resource
      requests may be made based on RSVP sessions and RSVP reservation
      styles.  RSVP styles dictate whether the reserved resources are
      used by one sender or shared by multiple senders.  See [8] for
      details of each.  Each new request is referred to in this
      document as an RSVP reservation, or simply reservation.

   o  Flow is used to refer to the data traffic associated with a
      particular reservation.  The specific meaning of flow is RSVP
      style dependent.  For shared style reservations, there is one
      flow per session.  For distinct style reservations, there is one
      flow per sender (per session).
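   As a purely illustrative aid (not part of any protocol), the
   following Python sketch models these two terms as data structures.
   All field and function names here are assumptions made for this
   example only.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass(frozen=True)
      class Reservation:
          session: str           # RSVP session identifier
          style: str             # "shared" or "distinct"
          sender: Optional[str]  # meaningful for distinct styles only
          qos: float             # requested resources, e.g. in kb/s

      def flows_for(resv, senders):
          # One flow per session for shared styles; one flow per
          # sender (per session) for distinct styles.
          if resv.style == "shared":
              return [(resv.session, None)]
          return [(resv.session, s) for s in senders]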
1.2 Assumptions

   The following assumptions are made:

   o  Support for IPv4 and IPv6 best effort in addition to QoS

   o  Use of RSVP with policy control as the signalling protocol

   o  UNI 3.x and 4.0 ATM services

   o  VC initiation by sub-net senders

   1.2.1 IPv4 and IPv6

      Currently IPv4 is the standard protocol of the Internet, which
      now provides only best effort service.  We assume that best
      effort service will continue to be supported while new types of
      service are introduced according to the IP Integrated Services
      model.  We also assume that IPv6 will be supported as well as
      IPv4.

   1.2.2 RSVP and Policy

      We assume RSVP, the Internet signalling protocol described in
      [19], and the reader is assumed to be familiar with [19].

      IP Integrated Services discriminates between users by providing
      some users better service at the expense of others.  For ATM,
      preferential service is reflected in whether a new VC is created
      for a user rather than having the user join an existing VC.
      Policy determines how preferential services are allocated, while
      allowing network operators maximum flexibility to provide
      value-added services for the marketplace.  Mechanisms need to be
      provided to enforce access policies.  These mechanisms may
      include such things as permissions and/or billing.

      For scaling reasons, policies based on bilateral agreements
      between neighboring providers are considered.  The bilateral
      model has scaling properties similar to multicast, in that no
      global information needs to be maintained.  Policy control is
      currently being developed for RSVP (see [10] for details).

   1.2.3 ATM

      We assume ATM as defined by UNI 3.x and 4.0, plus TM 4.0.  ATM
      provides both point-to-point and point-to-multipoint Virtual
      Circuits (VCs) with a specified Quality of Service (QoS).  ATM
      provides both Permanent Virtual Circuits (PVCs) and Switched
      Virtual Circuits (SVCs).  In the Permanent Virtual Circuit (PVC)
      environment, PVCs are typically used as point-to-point link
      replacements, so the Integrated Services support issues are
      similar to those of point-to-point links.  This draft describes
      schemes for supporting Integrated Services using SVCs.

   1.2.4 VC Initiation

      There is an apparent mismatch between RSVP and ATM.
      Specifically, RSVP control is receiver oriented and ATM control
      is sender oriented.  This initially may seem like a major issue,
      but really is not.  While RSVP reservation (RESV) requests are
      generated at the receiver, actual allocation of resources takes
      place at the sub-net sender.

      For data flows, this means that sub-net senders will establish
      all QoS VCs and the sub-net receiver must be able to accept
      incoming QoS VCs.  These restrictions are consistent with RSVP
      version 1 processing rules and allow senders to use different
      flow to VC mappings, and even different QoS renegotiation
      techniques, without interoperability problems.  All RSVP over ATM
      approaches that have VCs initiated and controlled by the sub-net
      senders will interoperate.  Figure 1 shows this model of data
      flow VC initiation.

         [Figure goes here]

         Figure 1: Data Flow VC Initiation

      The use by receivers of the reverse path provided by
      point-to-point VCs is for further study.  There are two related
      issues.  The first is that use of the reverse path requires the
      VC initiator to set appropriate reverse path QoS parameters.
      The second issue is that reverse paths are not available with
      point-to-multipoint VCs, so reverse paths could only be used to
      support unicast RSVP reservations.  Receivers initiating VCs via
      the reverse path mechanism provided by point-to-point VCs is also
      for future study.

2. Policy

   RSVP allows for local policy control [10] as well as admission
   control.  Thus a user can request a reservation with a specific QoS
   and with a policy object that, for example, offers to pay the
   additional costs of setting up a new reservation.  The policy module
   at the entry to a provider can decide how to satisfy that request -
   either by merging the request with an existing reservation or by
   creating a new reservation for this (and perhaps other) users.  This
   policy can be on a per user-provider basis, where a user and a
   provider have an agreement on the type of service offered, or on a
   provider-provider basis, where two providers have such an agreement.
   With the ability to do local policy control, providers can offer
   services best suited to their own resources and their customers'
   needs.

   Policy is expected to be provided as a generic API which will return
   values indicating what action should be taken for a specific
   reservation request.  The API is expected to have access to the
   reservation tables with the QoS for each reservation.  The RSVP
   Policy and Integrity objects will be passed to the policy() call.
   Four possible return values are expected: the request can be
   rejected; the request can be accepted as is; the request can be
   accepted but at a different QoS; or the request can cause a change
   of QoS of an existing reservation.  The information returned from
   this call will be used to call the admission control interface.  As
   this area is critical to deployment, progress will need to be made
   here.
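   Since this API is not yet defined, the following Python sketch only
   illustrates its expected shape.  The PolicyAction names, the object
   fields, and the decision logic are all assumptions made for this
   example, not a specification.

      from enum import Enum

      class PolicyAction(Enum):
          REJECT = 0            # reject the reservation request
          ACCEPT = 1            # accept the request as is
          ACCEPT_OTHER_QOS = 2  # accept, but at a different QoS
          MODIFY_EXISTING = 3   # change QoS of an existing reservation

      def policy(policy_obj, integrity_obj, reservations):
          # 'reservations' stands in for the reservation tables (with
          # the QoS of each reservation) that the API can access.
          if not integrity_obj.get("valid", False):
              return PolicyAction.REJECT, None
          requested = policy_obj["requested_qos"]
          for resv in reservations:
              if resv["qos"] >= requested:
                  # An existing reservation already covers the
                  # request; merge into it rather than set up a new VC.
                  return PolicyAction.ACCEPT_OTHER_QOS, resv["qos"]
              if policy_obj.get("pays_for_upgrade"):
                  # The requester offers to pay for the larger QoS, so
                  # upgrade the existing reservation.
                  return PolicyAction.MODIFY_EXISTING, requested
          return PolicyAction.ACCEPT, requested

   The action and QoS returned by such a call would then be used to
   invoke the admission control interface.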
3. Data VC Management

   Any RSVP over ATM implementation must map RSVP and RSVP associated
   data flows to ATM Virtual Circuits (VCs).  LAN Emulation [2],
   Classical IP [11] and, more recently, NHRP [12] discuss mapping IP
   traffic onto ATM SVCs, but they only cover a single QoS class, i.e.,
   best effort traffic.  When QoS is introduced, VC mapping must be
   revisited.  For RSVP controlled QoS flows, one issue is which VCs to
   use for QoS data flows.

   In the Classical IP over ATM and current NHRP models, a single
   point-to-point VC is used for all traffic between two ATM attached
   hosts (routers and end-stations).  It is likely that such a single
   VC will not be adequate or optimal when supporting data flows with
   multiple QoS types.  RSVP's basic purpose is to install support for
   flows with multiple QoS types, so it is essential for any RSVP over
   ATM solution to address VC usage for QoS data flows.

   This section describes issues and methods for management of VCs
   associated with QoS data flows.  When establishing and maintaining
   VCs, the sub-net sender will need to deal with several complicating
   factors including multiple QoS reservations, requests for QoS
   changes, ATM short-cuts, and several multicast specific issues.  The
   multicast specific issues result from the nature of ATM connections.
   The key multicast related issues are heterogeneity, data
   distribution, receiver transitions, and end-point identification.

3.1 Reservation to VC Mapping

   There are various approaches available for mapping reservations onto
   VCs.  A distinguishing attribute of all approaches is how
   reservations are combined onto individual VCs.  When mapping
   reservations onto VCs, individual VCs can be used to support a
   single reservation, or reservations can be combined with others onto
   "aggregate" VCs.  In the first case, each reservation will be
   supported by one or more VCs.  Multicast reservation requests may
   translate into the setup of multiple VCs, as is described in more
   detail in section 3.2.  Unicast reservation requests will always
   translate into the setup of a single QoS VC.  In both cases, each VC
   will only carry data associated with a single reservation.  The
   greatest benefit of this approach is ease of implementation, but it
   comes at the cost of increased (VC) setup time and the consumption
   of a greater number of VCs and associated resources.

   We refer to the other case, where multiple reservations are combined
   onto a shared VC, as the "aggregation" model.  With this model,
   large VCs could be set up between IP routers and hosts in an ATM
   network.  These VCs could be managed much like IP Integrated Service
   (IIS) point-to-point links (e.g. T-1, DS-3) are managed now.
   Traffic from multiple sources over multiple RSVP sessions might be
   multiplexed on the same VC.  This approach has a number of
   advantages.  First, there is typically no signalling latency, as VCs
   would already be in existence when the traffic started flowing, so
   no time is wasted in setting up VCs.  Second, the full heterogeneity
   problem over ATM (section 3.2) is reduced to a solved problem.
   Finally, the dynamic QoS problem for ATM (section 3.6) is also
   reduced to a solved problem.  This approach can be used with
   point-to-point and point-to-multipoint VCs.  The problem with the
   aggregation approach is that the choice of what QoS to use for which
   of the VCs is difficult, but it is made easier since the VCs can be
   changed as needed.  The advantages of this scheme make this approach
   an item for high priority study.
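   As an illustration only, the following Python sketch contrasts the
   two mappings.  The reservation attributes and the open_vc() helper
   are hypothetical stand-ins for an ATM signalling interface; nothing
   here is prescribed by this draft.

      # Per-reservation model: each VC carries exactly one reservation.
      per_reservation_vcs = {}   # (session, sender) -> VC

      # Aggregation model: reservations toward the same next-hops
      # share one larger, pre-sized VC.
      aggregate_vcs = {}         # frozenset of next-hops -> VC

      def map_per_reservation(resv, open_vc):
          key = (resv.session, resv.sender)
          if key not in per_reservation_vcs:
              # Sized exactly to this reservation's QoS; simple, but
              # costs one setup (or more, for multicast) per request.
              per_reservation_vcs[key] = open_vc(resv.next_hops,
                                                 resv.qos)
          return per_reservation_vcs[key]

      def map_aggregate(resv, open_vc, pipe_qos):
          key = frozenset(resv.next_hops)
          if key not in aggregate_vcs:
              # 'pipe_qos' is chosen administratively, much as a T-1
              # or DS-3 link would be sized, and may be revised later.
              aggregate_vcs[key] = open_vc(resv.next_hops, pipe_qos)
          return aggregate_vcs[key]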
3.2 Heterogeneity

   Heterogeneity occurs when receivers request different QoS's within a
   single session.  This means that the amount of requested resources
   differs on a per next hop basis.  A related type of heterogeneity
   occurs due to best-effort receivers.  In any IP multicast group, it
   is possible that some receivers will request QoS (via RSVP) and some
   receivers will not.  Both types of heterogeneity are shown in Figure
   2.  In shared media, like Ethernet, receivers that have not
   requested resources can typically be given service identical to
   those that have, without complications.  This is not the case with
   ATM.  In ATM networks, any additional end-points of a VC must be
   explicitly added.  There may be costs associated with adding the
   best-effort receiver, and there might not be adequate resources.  An
   RSVP over ATM solution will need to support heterogeneous receivers
   even though ATM does not currently provide such support directly.

      [Figure goes here]

      Figure 2: Types of Multicast Receivers

   RSVP heterogeneity is supported over ATM in the way RSVP
   reservations are mapped into ATM VCs.  There are multiple models for
   supporting RSVP heterogeneity over ATM.  Section 3.2.1 examines the
   multiple VCs per RSVP reservation (or full heterogeneity) model,
   where a single reservation can be forwarded onto several VCs, each
   with a different QoS.  Section 3.2.2 presents a limited
   heterogeneity model, where exactly one QoS VC is used along with a
   best effort VC.  Section 3.2.3 examines the VC per RSVP reservation
   (or homogeneous) model, where each RSVP reservation is mapped to a
   single ATM VC.  Section 3.2.4 describes the aggregation model,
   allowing aggregation of multiple RSVP reservations into a single VC.
   Further study is being done on the aggregation model.

   3.2.1 Full Heterogeneity Model

      We define the "full heterogeneity" model as providing a separate
      VC for each distinct QoS for a multicast session, including best
      effort and one or more QoS's.  This is shown in Figure 3, where
      S1 is a sender, R1-R3 are receivers, r1-r4 are IP routers, and
      s1-s2 are ATM switches.  Receivers R1 and R3 make reservations
      with different QoS, while R2 is a best effort receiver.  Three
      point-to-multipoint VCs are created for this situation, each with
      the requested QoS.  Note that any leaf nodes requesting QoS 1 or
      QoS 2 would be added to the existing QoS VC.

         [Figure goes here]

         Figure 3: Full heterogeneity

      Note that while full heterogeneity gives users exactly what they
      request, it requires more resources of the network than other
      possible approaches.  In Figure 3, three copies of each packet
      are sent on the link from r1 to s1.  Two copies of each packet
      are then sent from s1 to s2.  The exact amount of bandwidth used
      for duplicate traffic depends on the network topology and group
      membership.

   3.2.2 Limited Heterogeneity Model

      An important special case of full heterogeneity is limited
      heterogeneity.  We define the "limited heterogeneity" model as
      the case where the receivers of a multicast session are limited
      to use either best effort service or a single alternate quality
      of service.  The alternate QoS can be chosen either by higher
      level protocols or by dynamic renegotiation of QoS as described
      below.

         [Figure goes here]

         Figure 4: Limited heterogeneity

      In order to support limited heterogeneity, each ATM edge device
      participating in a session would need at most two VCs.  One VC
      would be a point-to-multipoint best effort service VC and would
      serve all best effort service IP destinations for this RSVP
      session.  The other VC would be a point-to-multipoint VC with QoS
      and would serve all IP destinations for this RSVP session that
      have an RSVP reservation established.  This is shown in Figure 4,
      where there are three receivers: R2 requests best effort service,
      while R1 and R3 request distinct reservations.  Whereas in Figure
      3 R1 and R3 each have a separate VC, so each receives precisely
      the resources requested, in Figure 4 R1 and R3 share the same VC
      (using the maximum of the R1 and R3 QoS) across the ATM network.
      Note that though the VC, and hence the QoS, for R1 and R3 are the
      same within the ATM cloud, the reservation outside the ATM cloud
      (from router r4 to receiver R3) uses the QoS actually requested
      by R3.

      As with full heterogeneity, a disadvantage of the limited
      heterogeneity scheme is that each packet will need to be
      duplicated at the network layer and one copy sent into each of
      the two VCs.  Again, the exact amount of excess traffic will
      depend on the network topology and group membership.  Looking at
      Figure 4, there are two VCs going from router r1 to switch s1, so
      two copies of every packet will traverse the r1-s1 link.  Another
      disadvantage of limited heterogeneity is that a reservation
      request can be rejected even when the resources are available.
      This occurs when a new receiver requests a larger QoS.  If any of
      the existing QoS VC end-points cannot upgrade to the new QoS,
      then the new reservation fails, even though the resources exist
      for the new receiver.
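   The following Python sketch illustrates (and does not specify) the
   at-most-two-VCs rule of the limited heterogeneity model.  The
   receiver objects and their reserved_qos field are assumptions made
   for this example.

      def limited_heterogeneity_vcs(receivers):
          # Partition receivers into best-effort and QoS end-points.
          qos = [r for r in receivers if r.reserved_qos is not None]
          best_effort = [r for r in receivers
                         if r.reserved_qos is None]

          vcs = []
          if best_effort:
              # One point-to-multipoint best-effort VC per session.
              vcs.append(("best-effort", None, best_effort))
          if qos:
              # One point-to-multipoint QoS VC shared by all reserving
              # receivers, sized to the largest reservation; outside
              # the ATM cloud each receiver still gets the QoS it
              # actually requested.
              shared = max(r.reserved_qos for r in qos)
              vcs.append(("qos", shared, qos))
          return vcs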
   3.2.3 Homogeneous and Modified Homogeneous Models

      We define the "homogeneous" model as the case where all receivers
      of a multicast session use a single quality of service VC.
      Best-effort receivers also use the single RSVP triggered QoS VC.
      The single VC can be point-to-point or point-to-multipoint, as
      appropriate.  The QoS VC is sized to provide the maximum
      resources requested by all RSVP next-hops.

      This model matches the way the current RSVP specification
      addresses heterogeneous requests.  The current processing rules
      and traffic control interface describe a model where the largest
      requested reservation for a specific outgoing interface is used
      in resource allocation, and traffic is transmitted at the higher
      rate to all next-hops.  This approach would be the simplest
      method for RSVP over ATM implementations.

      While this approach is simple to implement, providing better than
      best-effort service may actually be the opposite of what the user
      desires, since providing ATM QoS may incur charges or wrongfully
      allocate resources.  There are two specific problems.  The first
      problem is that a user making a small or no reservation would
      share a QoS VC's resources without making (and perhaps paying
      for) an RSVP reservation.  The second problem is that a receiver
      may not receive any data.  This may occur when there are
      insufficient resources to add a receiver.  The rejected user
      would not be added to the single VC and would not even receive
      traffic on a best effort basis.

      Not sending data traffic to best-effort receivers because of
      another receiver's RSVP request is clearly unacceptable.  The
      previously described limited heterogeneous model ensures that
      data is always sent to both QoS and best-effort receivers, but it
      does so by requiring replication of data at the sender in all
      cases.  It is possible to extend the homogeneous model to both
      ensure that data is always sent to best-effort receivers and
      avoid replication in the normal case.  This extension is to add
      special handling for the case where a best-effort receiver cannot
      be added to the QoS VC.  In this case, a best-effort VC can be
      established to any receivers that could not be added to the QoS
      VC.  Only in this special error case would senders be required to
      replicate data.  We define this approach as the "modified
      homogeneous" model.
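   A minimal sketch of the modified homogeneous model follows, assuming
   a hypothetical VC interface whose add_party() returns False when a
   leaf cannot be added (e.g., for lack of resources).

      def modified_homogeneous(receivers, max_qos, open_vc):
          # All receivers share the single RSVP-triggered QoS VC,
          # sized to the largest request.
          qos_vc = open_vc(qos=max_qos)
          best_effort_vc = None
          for r in receivers:
              if not qos_vc.add_party(r):
                  # Special error case: fall back to a best-effort VC
                  # so the receiver still gets data; only now must the
                  # sender replicate packets onto two VCs.
                  if best_effort_vc is None:
                      best_effort_vc = open_vc(qos=None)
                  best_effort_vc.add_party(r)
          return qos_vc, best_effort_vc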
   3.2.4 Aggregation

      The last scheme is the multiple RSVP reservations per VC (or
      aggregation) model already introduced in section 3.1.  With this
      model, large VCs could be set up between IP routers and hosts in
      an ATM network and managed much like IP Integrated Service (IIS)
      point-to-point links (e.g. T-1, DS-3) are managed now, with
      traffic from multiple sources over multiple RSVP sessions
      multiplexed on the same VC.  As discussed in section 3.1, this
      approach avoids per-reservation VC setup latency and reduces both
      the full heterogeneity problem and the dynamic QoS problem to
      solved problems, at the cost of making the choice of QoS for each
      VC difficult.  The advantages of this scheme make it an item for
      high priority study.

      Multiple options for mapping reservations onto VCs have been
      discussed.  No matter which model or combination of models is
      used by an implementation, implementations must not normally send
      more than one copy of a particular data packet to a particular
      next-hop (ATM end-point).  Some transient over-transmission is
      acceptable, but only during VC setup and transition.
      Implementations must also ensure that data traffic is sent to
      best-effort receivers.  Data traffic may be sent to best-effort
      receivers via best-effort or QoS VCs, as is appropriate for the
      implemented model.  In all cases, implementations must not create
      VCs in such a way that data cannot be sent to best-effort
      receivers.  This includes the case of not being able to add a
      best-effort receiver to a QoS VC, but does not include the case
      where best-effort VCs cannot be set up.  The failure to establish
      best-effort VCs is considered to be a general IP over ATM failure
      and is therefore beyond the scope of this document.

3.3 Multicast End-Point Identification

   Implementations must be able to identify ATM end-points
   participating in an IP multicast group.  The ATM end-points will be
   IP multicast receivers and/or next-hops.  Both QoS and best-effort
   end-points must be identified.  RSVP next-hop information will
   provide QoS end-points, but not best-effort end-points.

   Another issue is identifying end-points of multicast traffic handled
   by non-RSVP capable next-hops.  In this case a PATH message travels
   through a non-RSVP egress router on the way to the next hop RSVP
   node.  When the next hop RSVP node sends a RESV message, it may
   arrive at the source over a different route than the one the data is
   using.  The source will get the RESV message, but will not know
   which egress router needs the QoS.  For unicast sessions, there is
   no problem, since the ATM end-point will be the IP next-hop router.
   Unfortunately, multicast routing may not be able to uniquely
   identify the IP next-hop router, so it is possible that a multicast
   end-point cannot be identified.

   In the most common case, MARS will be used to identify all
   end-points of a multicast group.  In the router to router case, a
   multicast routing protocol may provide all next-hops for a
   particular multicast group.  In either case, RSVP over ATM
   implementations must obtain a full list of end-points, both QoS and
   non-QoS, using the appropriate mechanisms.  The full list can be
   compared against the RSVP identified end-points to determine the
   list of best-effort receivers.

   There is no straightforward solution to uniquely identifying
   end-points of multicast traffic handled by non-RSVP next hops.  The
   preferred solution is to use multicast routing protocols that
   support unique end-point identification.  In cases where such
   routing protocols are unavailable, all IP routers that will be used
   to support RSVP over ATM should support RSVP.
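   The comparison described above is a simple set difference, as this
   Python sketch shows.  The mars_resolve() and rsvp_next_hops()
   lookups are hypothetical stand-ins for MARS resolution (or a
   multicast routing query) and for RSVP state, respectively.

      def classify_end_points(group, mars_resolve, rsvp_next_hops):
          # Full membership: both QoS and non-QoS end-points.
          all_end_points = set(mars_resolve(group))
          # End-points with established RSVP reservations.
          qos_end_points = set(rsvp_next_hops(group))
          # Whatever remains must be served on a best-effort basis.
          best_effort = all_end_points - qos_end_points
          return qos_end_points, best_effort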
3.4 Multicast Data Distribution

   Two models are planned for IP multicast data distribution over ATM.
   In one model, senders establish point-to-multipoint VCs to all ATM
   attached destinations, and data is then sent over these VCs.  This
   model is often called "multicast mesh" or "VC mesh" mode
   distribution.  In the second model, senders send data over
   point-to-point VCs to a central point, and the central point relays
   the data onto point-to-multipoint VCs that have been established to
   all receivers of the IP multicast group.  This model is often
   referred to as "multicast server" mode distribution.  Figure 5 shows
   data flow for both modes of IP multicast data distribution.  RSVP
   over ATM solutions must ensure that IP multicast data is distributed
   with appropriate QoS.

      [Figure goes here]

      Figure 5: IP Multicast Data Distribution Over ATM

   In the Classical IP context, multicast server support is provided
   via MARS [1].  MARS does not currently provide a way to communicate
   QoS requirements to a MARS multicast server (MCS).  When using
   multicast servers that do not support QoS requests, a sender must
   set the service, not global, break bit(s).

3.5 Receiver Transitions

   When setting up a point-to-multipoint VC, there will be a time when
   some receivers have been added to a QoS VC and some have not.
   During such transition times it is possible to start sending data on
   the newly established VC.  The issue is when to start sending data
   on the new VC.  If data is sent on both the new VC and the old VC,
   then data will be delivered with the proper QoS to some receivers
   and with the old QoS to all receivers, meaning the QoS receivers
   would get duplicate data.  If data is sent just on the new QoS VC,
   then the receivers that have not yet been added will lose
   information.  So, the issue comes down to whether to send to both
   the old and new VCs, or to send to just one of the VCs.  In one case
   duplicate information will be received; in the other some
   information may not be received.  This issue needs to be considered
   for three cases: when establishing the first QoS VC, when
   establishing a VC to support a QoS change, and when adding a new
   end-point to an already established QoS VC.

   The first two cases are very similar.  In both, it is possible to
   send data on the partially completed new VC, and the issue of
   duplicate versus lost information is the same.

   The last case is when an end-point must be added to an existing QoS
   VC.  In this case the end-point must be both added to the QoS VC and
   dropped from a best-effort VC.  The issue is which to do first.  If
   the add is requested first, then the end-point may get duplicate
   information.  If the drop is requested first, then the end-point may
   lose information.

   In order to ensure predictable behavior and delivery of data to all
   receivers, data can only be sent on a new VC once all parties have
   been added.  This will ensure that all data is delivered exactly
   once to all receivers.  This approach does not quite apply for the
   last case.  In the last case, the add should be completed first,
   then the drop.  This means that receivers must be prepared to
   receive some duplicate packets at times of QoS setup.
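   A minimal sketch of these two orderings follows, assuming a
   hypothetical VC object with add_party()/drop_party() calls and a
   sending flag; none of these names come from any ATM API.

      def switch_to_new_vc(old_vc, new_vc, parties):
          # Send on the new VC only after every party has been added,
          # so that no receiver misses data during the transition.
          for p in parties:
              new_vc.add_party(p)
          new_vc.sending = True
          old_vc.sending = False

      def add_receiver_to_qos_vc(end_point, qos_vc, best_effort_vc):
          # For an existing QoS VC: complete the add before the drop.
          # The receiver may briefly see duplicate packets, but loses
          # nothing.
          qos_vc.add_party(end_point)
          best_effort_vc.drop_party(end_point)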
3.6 Dynamic QoS

   RSVP provides dynamic quality of service (QoS) in that the resources
   that are requested may change at any time.  There are several common
   reasons for a change of reservation QoS.  First, an existing
   receiver can request a new larger (or smaller) QoS if the current
   received quality is unacceptable.  Second, a sender may change its
   traffic specification (TSpec), which can trigger a change in the
   reservation requests of the receivers.  Third, a new sender can
   start sending to a multicast group with a larger traffic
   specification than existing senders, triggering larger reservations.
   Finally, a new receiver can make a reservation that is larger than
   existing reservations.  If the limited heterogeneity model is being
   used and the merge node for the larger reservation is an ATM edge
   device, a new larger reservation must be set up across the ATM
   network.

   Since ATM service, as currently defined in UNI 3.x and UNI 4.0, does
   not allow renegotiating the QoS of a VC, dynamically changing the
   reservation means creating a new VC with the new QoS and tearing
   down the established VC.  Tearing down a VC and setting up a new VC
   in ATM are complex operations that involve a non-trivial amount of
   processor time, and may have a substantial latency.

   There are several options for dealing with this mismatch in service.
   A specific approach will need to be a part of any RSVP over ATM
   solution.

   One method for supporting changes in RSVP reservations is to attempt
   to replace an existing VC with a new, appropriately sized VC.
   During setup of the replacement VC, the old VC must be left in place
   unmodified, to minimize interruption of QoS data delivery.  Once the
   replacement VC is established, data transmission is shifted to the
   new VC, and the old VC is then closed.

   If setup of the replacement VC fails, then the old QoS VC should
   continue to be used.  When the new reservation is greater than the
   old reservation, the reservation request should be answered with an
   error.  When the new reservation is less than the old reservation,
   the request should be treated as if the modification was successful.
   While leaving the larger allocation in place is suboptimal, it
   maximizes delivery of service to the user.  Implementations should
   retry replacing the too large VC after some appropriate elapsed
   time.

   One additional issue is that only one QoS change can be processed at
   a time per reservation.  If the (RSVP) requested QoS is changed
   while the first replacement VC is still being set up, then the
   replacement VC is released and the whole VC replacement process is
   restarted.

   To limit the number of changes and to avoid excessive signalling
   load, implementations may limit the number of changes that will be
   processed in a given period.  One implementation approach would have
   each ATM edge device configured with a time parameter tau (which can
   change over time) that gives the minimum amount of time the edge
   device will wait between successive changes of the QoS of a
   particular VC.  Thus if the QoS of a VC is changed at time t, all
   messages that would change the QoS of that VC that arrive before
   time t+tau would be queued.  If several messages changing the QoS of
   a VC arrive during the interval, redundant messages can be
   discarded.  At time t+tau, the remaining change(s) of QoS, if any,
   can be executed.

   The sequence of events for a single VC, sketched in code below,
   would be:

      1. Wait if timer is active

      2. Establish VC with new QoS

      3. Remap data traffic to new VC

      4. Tear down old VC

      5. Activate timer
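   The following Python sketch is one illustrative rendering of this
   sequence; the establish/remap/teardown hooks into the ATM interface
   are assumptions made for this example, not a normative algorithm.

      import time

      class DynamicQoS:
          def __init__(self, tau, atm):
              self.tau = tau          # min seconds between QoS changes
              self.atm = atm          # establish/remap/teardown hooks
              self.vc = {}            # reservation -> current VC
              self.last_change = {}   # reservation -> time of change
              self.pending = {}       # reservation -> latest new QoS

          def request_change(self, resv, qos):
              # Later requests overwrite earlier queued ones, so
              # redundant messages are discarded automatically.
              self.pending[resv] = qos

          def process(self, now=None):
              now = time.monotonic() if now is None else now
              for resv, qos in list(self.pending.items()):
                  # 1. Wait if the per-VC timer is still active.
                  if now - self.last_change.get(resv, 0) < self.tau:
                      continue
                  # 2. Establish a replacement VC with the new QoS,
                  #    leaving the old VC in place meanwhile.
                  new_vc = self.atm.establish(qos)
                  if new_vc is not None:
                      old_vc = self.vc.get(resv)
                      self.atm.remap(resv, new_vc)   # 3. shift data
                      if old_vc is not None:
                          self.atm.teardown(old_vc)  # 4. drop old VC
                      self.vc[resv] = new_vc
                      self.last_change[resv] = now   # 5. start timer
                  # On failure the old VC simply remains in use (an
                  # error is reported if the new QoS was larger).
                  del self.pending[resv]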
   There is an interesting interaction between heterogeneous
   reservations and dynamic QoS.  In the case where a RESV message is
   received from a new next-hop and the requested resources are larger
   than any existing reservation, both dynamic QoS and heterogeneity
   need to be addressed.  A key issue is whether to first add the new
   next-hop or to change to the new QoS.  This is a fairly
   straightforward special case.  Since the older, smaller reservation
   does not support the new next-hop, the dynamic QoS process should be
   initiated first.  Since the new QoS is only needed by the new
   next-hop, it should be the first end-point of the new VC.  This way
   signalling is minimized when the setup to the new next-hop fails.

3.7 Short-Cuts

   Short-cuts [12] allow ATM attached routers and hosts to directly
   establish point-to-point VCs across LIS boundaries, i.e., the VC
   end-points are on different IP sub-nets.  The ability for short-cuts
   and RSVP to interoperate has been raised as a general question.  The
   area of concern is the ability to handle asymmetric short-cuts,
   specifically how RSVP can handle the case where a downstream
   short-cut may not have a matching upstream short-cut.  In this case,
   which is shown in Figure 6, PATH and RESV messages follow different
   paths.

      [Figure goes here]

      Figure 6: Asymmetric RSVP Message Forwarding With ATM Short-Cuts

   Examination of RSVP shows that the protocol already includes
   mechanisms that will support short-cuts.  The mechanism is the same
   one used to support RESV messages arriving at the wrong router and
   the wrong interface.  The key aspect of this mechanism is that RSVP
   only processes messages that arrive at the proper interface, and
   forwards messages that arrive on the wrong interface.  The proper
   interface is indicated in the NHOP object of the message.  So,
   existing RSVP mechanisms will support asymmetric short-cuts.

   The short-cut model of VC establishment still poses several issues
   when running with RSVP.  The major issues are dealing with
   established best-effort short-cuts, when to establish short-cuts,
   and QoS only short-cuts.  These issues will need to be addressed by
   RSVP implementations.

3.8 VC Teardown

   RSVP can identify, from either explicit messages or timeouts, when a
   data VC is no longer needed.  Therefore, data VCs set up to support
   RSVP controlled flows should only be released at the direction of
   RSVP.  VCs must not be timed out due to inactivity by either the VC
   initiator or the VC receiver.  This conflicts with VCs timing out as
   described in RFC 1755 [14], section 3.4 on VC Teardown.  RFC 1755
   recommends tearing down a VC that is inactive for a certain length
   of time (twenty minutes is recommended).  This timeout is typically
   implemented at both the VC initiator and the VC receiver, although
   section 3.1 of the update to RFC 1755 [15] states that inactivity
   timers must not be used at the VC receiver.

   When this timeout occurs for an RSVP initiated VC, a valid VC with
   QoS will be torn down unexpectedly.  While this behavior is
   acceptable for best-effort traffic, it is important that RSVP
   controlled VCs not be torn down.  If there is no choice about the VC
   being torn down, the RSVP daemon must be notified, so that a
   reservation failure message can be sent.
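   A minimal sketch of this rule follows, assuming hypothetical VC
   records and caller-supplied release and notification hooks; the
   field and function names are illustrative only.

      def on_inactivity_timeout(vc, release):
          # VCs supporting RSVP controlled flows are released only at
          # the direction of RSVP, never because of inactivity.
          if not vc["rsvp_controlled"]:
              release(vc)   # best-effort VCs may still be timed out

      def on_forced_teardown(vc, notify_rsvp_daemon):
          # If an RSVP controlled VC is lost anyway, the RSVP daemon
          # is told so a reservation failure message can be sent.
          if vc["rsvp_controlled"]:
              notify_rsvp_daemon(vc["session"])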
4. RSVP Control VC Management

   One last important issue is providing a data path for the RSVP
   messages themselves.  There are two main types of messages in RSVP,
   PATH and RESV.  PATH messages are sent to a multicast address, while
   RESV messages are sent to a unicast address.  Other RSVP messages
   are handled similarly to either PATH or RESV (this can be slightly
   more complicated for RERR messages).  So ATM VCs used for RSVP
   signalling messages need to provide both unicast and multicast
   functionality.

   There are several different approaches for how to assign VCs to use
   for RSVP signalling messages.  The main approaches are:

   o  use the same VC as the data

   o  single VC per session

   o  single point-to-multipoint VC multiplexed among sessions

   o  multiple point-to-point VCs multiplexed among sessions

   There are several different issues that affect the choice of how to
   assign VCs for RSVP signalling.  One issue is the number of
   additional VCs needed for RSVP signalling.  Related to this issue is
   the degree of multiplexing on the RSVP VCs.  In general, more
   multiplexing means fewer VCs.  An additional issue is the latency in
   dynamically setting up new RSVP signalling VCs.  A final issue is
   complexity of implementation.  The remainder of this section
   discusses the issues and tradeoffs among these different approaches
   and suggests guidelines for when to use which alternative.

4.1 Mixed data and control traffic

   In this scheme RSVP signalling messages are sent on the same VCs as
   the data traffic.  The main advantage of this scheme is that no
   additional VCs are needed beyond what is needed for the data
   traffic.  An additional advantage is that there is no ATM signalling
   latency for PATH messages (which follow the same routing as the data
   messages).  However, there can be a major problem when data traffic
   on a VC is nonconforming.  With nonconforming traffic, RSVP
   signalling messages may be dropped.  While RSVP is resilient to a
   moderate level of dropped messages, excessive drops would lead to
   repeated tearing down and re-establishing of QoS VCs, a very
   undesirable behavior for ATM.  Due to these problems, this is not a
   good choice for carrying RSVP signalling messages, even though the
   number of VCs needed for this scheme is minimized.

   One variation of this scheme is to use the best effort data path for
   signalling traffic.  In this scheme, there is no issue with
   nonconforming traffic, but there is an issue with congestion in the
   ATM network.  RSVP provides some resiliency to message loss due to
   congestion, but RSVP control messages should be offered a preferred
   class of service.  A related variation of this scheme, which is
   promising but requires further study, is to have a packet scheduling
   algorithm (before entering the ATM network) that gives priority to
   the RSVP signalling traffic.  This can be difficult to do at the IP
   layer.  One possible approach at the ATM layer would be to use the
   Cell Loss Priority (CLP) bit for RSVP signalling traffic to ensure
   better service.
4.2 Single RSVP VC per RSVP Reservation

   In this scheme, there is a parallel RSVP signalling VC for each RSVP
   reservation.  This scheme results in twice the minimum number of
   VCs, but means that RSVP signalling messages have the advantage of a
   separate VC.  This separate VC means that RSVP signalling messages
   have their own traffic contract, and compliant signalling messages
   are not subject to dropping due to other noncompliant traffic (such
   as can happen with the scheme in section 4.1).  The advantage of
   this scheme is its simplicity - whenever a data VC is created, a
   separate RSVP signalling VC is created.  The disadvantage of the
   extra VC is that extra ATM signalling needs to be done, which adds
   latency.  Given its simplicity, this approach would tend to work
   well on hosts.

4.3 Multiplexed point-to-multipoint RSVP VCs

   In this scheme, there is a single point-to-multipoint RSVP
   signalling VC for each unique ingress router and unique set of
   egress routers.  This scheme allows multiplexing of RSVP signalling
   traffic that shares the same ingress router and the same egress
   routers.  This multiplexing can save on the number of VCs, but there
   are problems when the destinations of the multiplexed
   point-to-multipoint VCs are changing.  Several alternatives exist in
   these cases, each applicable in different situations.  First, when
   the egress routers change, the ingress router can check whether it
   already has a point-to-multipoint RSVP signalling VC for the new
   list of egress routers.  If the RSVP signalling VC already exists,
   then the RSVP signalling traffic can be switched to this existing
   VC.  If no such VC exists, one approach would be to create a new VC
   with the new list of egress routers.  Other approaches include
   modifying the existing VC to add an egress router, or using a
   separate new VC for the new egress routers.  When a destination
   drops out of a group, an alternative would be to keep sending on the
   existing VC even though some traffic is wasted.

   The number of VCs used in this scheme is a function of traffic
   patterns across the ATM network, but is always less than the number
   used with the single RSVP VC per reservation scheme (section 4.2).
   In addition, existing best effort data VCs could be used for RSVP
   signalling.  Reusing best effort VCs saves on the number of VCs at
   the cost of a higher probability of RSVP signalling packet loss.
   One place where this scheme should work well is in the core of the
   network, where there is the most opportunity to take advantage of
   the savings due to multiplexing.  The exact savings depend on the
   patterns of traffic and the topology of the ATM network.
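   The first alternative above amounts to a lookup keyed by the set of
   egress routers, as in this illustrative Python sketch; the open_vc()
   helper is a hypothetical stand-in for ATM point-to-multipoint
   signalling.

      signalling_vcs = {}   # frozenset of egress routers -> VC

      def signalling_vc_for(egress_routers, open_vc):
          key = frozenset(egress_routers)
          vc = signalling_vcs.get(key)
          if vc is None:
              # No existing VC reaches exactly this egress set; one
              # option is to open a new VC (others: add a leaf to an
              # existing VC, or use a separate VC for just the new
              # egress routers).
              vc = open_vc(sorted(key))
              signalling_vcs[key] = vc
          return vc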
4.4 Multiplexed point-to-point RSVP VCs

   In this scheme, multiple point-to-point RSVP signalling VCs are used
   for a single point-to-multipoint data VC.  This scheme allows
   multiplexing of RSVP signalling traffic, but requires the same
   traffic to be sent on each of several VCs.  This scheme is quite
   flexible and allows a large amount of multiplexing.  Since
   point-to-point VCs can set up a reverse channel at the same time as
   the forward channel, this scheme could save substantially on
   signalling cost.  In addition, signalling traffic could share
   existing best effort VCs.  Sharing existing best effort VCs reduces
   the total number of VCs needed, but might cause signalling traffic
   drops if there is congestion in the ATM network.

   This point-to-point scheme would work well in the core of the
   network, where there is much opportunity for multiplexing.  Also, in
   the core of the network, RSVP VCs can stay permanently established,
   either as Permanent Virtual Circuits (PVCs) or as long lived
   Switched Virtual Circuits (SVCs).  The number of VCs in this scheme
   will depend on traffic patterns, but in the core of a network would
   be approximately n(n-1)/2, where n is the number of IP nodes in the
   network (e.g., 45 VCs for a 10 node core).  In the core of the
   network, this will typically be small compared to the total number
   of VCs.

4.5 QoS for RSVP VCs

   There is an issue of what QoS, if any, to assign to the RSVP VCs.
   Three solutions have been covered in section 4.1 and in the shared
   best effort VC variations in sections 4.3 and 4.4.  For other RSVP
   VC schemes, a QoS (possibly best effort) will be needed.  What QoS
   to use partially depends on the expected level of multiplexing that
   is being done on the VCs, and on the expected reliability of best
   effort VCs.  Since RSVP signalling is infrequent (typically every 30
   seconds), only a relatively small QoS should be needed.  This is
   important, since using a larger QoS risks the VC setup being
   rejected for lack of resources.  Falling back to best effort when a
   QoS call is rejected is possible, but if the ATM network is
   congested, there will likely be problems with RSVP packet loss on
   the best effort VC also.  Additional experimentation is needed in
   this area.

5. Encapsulation

   Since RSVP is a signalling protocol used to control flows of IP data
   packets, encapsulation for both RSVP packets and associated IP data
   packets must be defined.  There are currently two encapsulation
   options for running IP over ATM, RFC 1483 and LANE.  There is also
   the possibility of future encapsulation options, such as MPOA [3].
   The first option is described in RFC 1483 [9] and is currently used
   for "Classical" IP over ATM and NHRP.

   The second option is LAN Emulation, as described in [2].  LANE
   encapsulation does not currently include a QoS signalling interface.
   If LANE encapsulation is needed, LANE QoS signalling would first
   need to be defined by the ATM Forum.  It is possible that LANE 2.0
   will include the required QoS support.

6. Security

   The same considerations stated in [8] and [14] apply to this
   document.  There are no additional security issues raised in this
   document.

7. Future Work

   We have described a set of schemes for deploying RSVP over IP over
   ATM.  There are a number of other issues that are subjects of
   continuing research.  These issues (and others) are covered in [5],
   and are briefly repeated here.

   A major issue is providing policy control for ATM VC creation.
   There is work going on in the RSVP working group [8] on defining an
   architecture for policy support.  Further work is needed in defining
   an API and policy objects.  As this area is critical to deployment,
   progress will need to be made here.

   NHRP provides advantages in allowing short-cuts across two or more
   LIS's.  Short-cutting router hops can lead to more efficient data
   delivery.  Work on NHRP is on-going, but currently provides only a
   unicast delivery service.
   Further study is needed to determine how NHRP can be used with RSVP
   and ATM.  Future work depends on the development of NHRP for
   multicast.

   Furthermore, when using RSVP it may be desirable to establish
   multiple short-cut VCs, to use these VCs for specific QoS flows, and
   to use the hop-by-hop path for other QoS and non-QoS flows.  The
   current NHRP specification [12] does not preclude such an approach,
   but neither does it explicitly support it.  We believe that explicit
   support of flow based short-cuts would improve RSVP over ATM
   solutions.  We also believe that such support may require the
   ability to include flow information in the NHRP request.

   There is work in the ION working group on MultiCast Server (MCS)
   architectures for MARS.  An MCS provides savings in the number of
   VCs in certain situations.  When using a multicast server, the
   sub-network sender could establish a point-to-point VC with a
   specific QoS to the server, but there is no current mechanism to
   relay QoS requirements to the MCS.  Future work includes providing
   RSVP and ATM support over MARS MCS's.

   Unicast ATM VCs are inherently bi-directional and have the
   capability of supporting a "reverse channel".  By using the reverse
   channel for unicast VCs, the number of VCs used can potentially be
   reduced.  Future work includes examining how the reverse VCs can be
   used most effectively.

   Current work in the ATM Forum and ITU promises additional advantages
   for RSVP and ATM, including renegotiating QoS parameters and
   variegated VCs.  QoS renegotiation would be particularly beneficial,
   since the only option available today for changing VC QoS parameters
   is replacing the VC.  It is important to keep current with changes
   in ATM, and to keep this document up-to-date.

   Scaling of the number of sessions is an issue.  The key ATM related
   implication of a large number of sessions is the number of VCs and
   associated (buffer and queue) memory.  The approach to solving this
   problem is aggregation, either at the RSVP layer or at the ISSLL
   layer (or both).

   This document describes approaches that can be used with ATM UNI
   4.0, but does not make use of the available leaf-initiated join, or
   LIJ, capability.  The use of LIJ may be useful in addressing scaling
   issues.  The coordination of RSVP with LIJ remains a research issue.

   Lastly, it is likely that LANE 2.0 will provide some QoS support
   mechanisms, including proper QoS allocation for multicast traffic.
   It is important to track these developments, and to develop suitable
   RSVP over ATM LANE support at the appropriate time.

8. Authors' Addresses

   Steven Berson
   USC Information Sciences Institute
   4676 Admiralty Way
   Marina del Rey, CA 90292

   Phone: +1 310 822 1511
   EMail: berson@isi.edu

   Lou Berger
   FORE Systems
   6905 Rockledge Drive
   Suite 800
   Bethesda, MD 20817

   Phone: +1 301 571 2534
   EMail: lberger@fore.com

REFERENCES

   [1] Armitage, G., "Support for Multicast over UNI 3.0/3.1 based ATM
       Networks," Internet Draft, February 1996.

   [2] The ATM Forum, "LAN Emulation Over ATM Specification", Version
       1.0.

   [3] The ATM Forum, "MPOA Baseline Version 1", 95-0824r9, September
       1996.

   [4] Berson, S., "`Classical' RSVP and IP over ATM," INET '96, July
       1996.
   [5] Borden, M., Crawley, E., Krawczyk, J., Baker, F., and Berson,
       S., "Issues for RSVP and Integrated Services over ATM," Internet
       Draft, February 1996.

   [6] Borden, M., and Garrett, M., "Interoperation of Controlled-Load
       and Guaranteed-Service with ATM," Internet Draft, June 1996.

   [7] Braden, R., Clark, D., and Shenker, S., "Integrated Services in
       the Internet Architecture: an Overview," RFC 1633, June 1994.

   [8] Braden, R., Zhang, L., Berson, S., Herzog, S., and Jamin, S.,
       "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional
       Specification," Internet Draft, November 1996.

   [9] Heinanen, J., "Multiprotocol Encapsulation over ATM Adaptation
       Layer 5," RFC 1483, July 1993.

   [10] Herzog, S., "Accounting and Access Control Policies for
        Resource Reservation Protocols," Internet Draft, June 1996.

   [11] Laubach, M., "Classical IP and ARP over ATM," RFC 1577, January
        1994.

   [12] Luciani, J., Katz, D., Piscitello, D., and Cole, B., "NBMA Next
        Hop Resolution Protocol (NHRP)," Internet Draft, June 1996.

   [13] Onvural, R., and Srinivasan, V., "A Framework for Supporting
        RSVP Flows Over ATM Networks," Internet Draft, March 1996.

   [14] Perez, M., Liaw, F., Grossman, D., Mankin, A., Hoffman, E., and
        Malis, A., "ATM Signalling Support for IP over ATM," RFC 1755,
        February 1995.

   [15] Perez, M., and Mankin, A., "ATM Signalling Support for IP over
        ATM - UNI 4.0 Update," Internet Draft, November 1996.

   [16] The ATM Forum, "ATM User-Network Interface (UNI) Specification
        - Version 3.1," Prentice Hall.

   [17] Shenker, S., Partridge, C., and Guerin, R., "Specification of
        Guaranteed Quality of Service," Internet Draft, August 1996.

   [18] Wroclawski, J., "Specification of the Controlled-Load Network
        Element Service," Internet Draft, August 1996.

   [19] Zhang, L., Deering, S., Estrin, D., Shenker, S., and Zappala,
        D., "RSVP: A New Resource ReSerVation Protocol," IEEE Network,
        September 1993.