Internet Draft                                                 S. Berson
Expires: May 1997                                                    ISI
File: draft-ietf-issll-atm-support-02.txt                      L. Berger
                                                            FORE Systems

              IP Integrated Services with RSVP over ATM

                            November 26, 1996

Status of Memo

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups. Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ds.internic.net (US East Coast), nic.nordu.net (Europe),
ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific Rim).
Abstract

This draft describes a method for providing IP Integrated Services
with RSVP over ATM switched virtual circuits (SVCs). It provides an
overall approach to the problem as well as a specific method for
running over today's ATM networks. The problem has two parts: this
draft provides guidelines for using ATM VCs with QoS as part of an
Integrated Services Internet, while a related draft [16] describes
service mappings between IP Integrated Services and ATM services.

Authors' Note

The postscript version of this document contains figures that are not
included in the text version, so it is best to use the postscript
version. Figures will be converted to ASCII in a future version.

Table of Contents

1. Introduction
   1.1 Terms
   1.2 Assumptions
2. Policy
   2.1 Implementation Guidelines
3. Data VC Management
   3.1 Reservation to VC Mapping
   3.2 Heterogeneity
   3.3 Multicast End-Point Identification
   3.4 Multicast Data Distribution
   3.5 Receiver Transitions
   3.6 Dynamic QoS
   3.7 Short-Cuts
   3.8 VC Teardown
4. RSVP Control VC Management
   4.1 Mixed data and control traffic
   4.2 Single RSVP VC per RSVP Reservation
   4.3 Multiplexed point-to-multipoint RSVP VCs
   4.4 Multiplexed point-to-point RSVP VCs
   4.5 QoS for RSVP VCs
   4.6 Implementation Guidelines
5. Encapsulation
   5.1 Implementation Guidelines
6. Security
7. Implementation Summary
   7.1 Requirements
   7.2 Default Behavior
8. Future Work
9. Authors' Addresses

1. Introduction

The Internet currently has one class of service, normally referred to
as "best effort." This service is typified by first-come, first-served
scheduling at each hop in the network. Best effort service has worked
well for electronic mail, World Wide Web (WWW) access, file transfer
(e.g. ftp), etc. For real-time traffic such as voice and video, the
current Internet has performed well only across unloaded portions of
the network.
In order to provide guaranteed quality real-time traffic, new classes
of service and a QoS signalling protocol are being introduced in the
Internet [7,18,17], while retaining the existing best effort service.
The QoS signalling protocol is RSVP [8,19], the Resource ReSerVation
Protocol.

ATM is rapidly becoming an important link layer technology. One of the
important features of ATM technology is the ability to request a
point-to-point Virtual Circuit (VC) with a specified Quality of
Service (QoS). An additional feature of ATM technology is the ability
to request point-to-multipoint VCs with a specified QoS. Point-to-
multipoint VCs allow leaf nodes to be added and removed from the VC
dynamically, and so provide a mechanism for supporting IP multicast.
It is only natural that RSVP and the Internet Integrated Services
(IIS) model would want to utilize the QoS properties of any underlying
link layer, including ATM.

Classical IP over ATM [11] has solved part of this problem, supporting
IP unicast best effort traffic over ATM. Classical IP over ATM is
based on a Logical IP Subnetwork (LIS), which is a separately
administered IP sub-network. Hosts within a LIS communicate using the
ATM network, while hosts from different sub-nets communicate only by
going through an IP router (even though it may be possible to open a
direct VC between the two hosts over the ATM network). Classical IP
over ATM provides an Address Resolution Protocol (ATMARP) for ATM edge
devices to resolve IP addresses to native ATM addresses. For any pair
of IP/ATM edge devices (i.e. hosts or routers), a single VC is created
on demand and shared for all traffic between the two devices. A second
part of the RSVP and IIS over ATM problem, IP multicast, is close to
being solved with MARS [1], the Multicast Address Resolution Server.
MARS complements ATMARP by allowing an IP address to resolve into a
list of native ATM addresses, rather than just a single address.

A key remaining issue for IP over ATM is the integration of RSVP
signalling and ATM signalling in support of the Internet Integrated
Services (IIS) model. There are two main areas involved in supporting
the IIS model, QoS translation and VC management. QoS translation
concerns mapping a QoS from the IIS model to a proper ATM QoS, while
VC management concentrates on how many VCs are needed and which
traffic flows are routed over which VCs. Mapping of IP QoS to ATM QoS
is the subject of a companion draft [16].

This draft concentrates on VC management; we assume that the QoS for a
single reserved flow can be acceptably translated to an ATM QoS. Two
types of VCs need to be managed: data VCs, which carry the actual data
traffic, and control VCs, which carry the RSVP signalling traffic.
Several VC management schemes for both data and control VCs are
described in this draft. For each scheme, there are two major issues:
(1) heterogeneity and (2) dynamic behavior. Heterogeneity refers to
how requests for different QoS's are handled, while dynamic behavior
refers to how changes in QoS and changes in multicast group membership
are handled. These schemes will be evaluated in terms of the following
metrics: (1) the number of VCs needed to implement the scheme, (2) the
bandwidth wasted due to duplicate packets, and (3) flexibility in
handling heterogeneity and dynamic behavior.
The general issues related to running RSVP [8,19] over ATM have been
covered in several papers including [4,5,13]. This document will
review key issues that must be addressed by any RSVP over ATM UNI
solution. It will discuss advantages and disadvantages of different
methods for running RSVP over ATM. It will also define default
behavior for implementations using ATM UNI 3.x and 4.0. Default
behavior provides a baseline set of functionality, while allowing for
more sophisticated approaches. We expect some vendors to also provide
some of the more sophisticated approaches described below, and some
networks to only make use of such approaches.

1.1 Terms

The terms "reservation" and "flow" are used in many contexts, often
with different meanings. These terms are used in this document with
the following meaning:

o Reservation is used in this document to refer to an RSVP initiated
  request for resources. RSVP initiates requests for resources based
  on RESV message processing. RESV messages that simply refresh state
  do not trigger resource requests. Resource requests may be made
  based on RSVP sessions and RSVP reservation styles. RSVP styles
  dictate whether the reserved resources are used by one sender or
  shared by multiple senders. See [8] for details of each. Each new
  request is referred to in this document as an RSVP reservation, or
  simply reservation.

o Flow is used to refer to the data traffic associated with a
  particular reservation. The specific meaning of flow is RSVP style
  dependent. For shared style reservations, there is one flow per
  session. For distinct style reservations, there is one flow per
  sender (per session).

1.2 Assumptions

The following assumptions are made:

o Support for IPv4 and IPv6 best effort in addition to QoS

o Use of RSVP with policy control as the signalling protocol

o UNI 3.x and 4.0 ATM services

o VC initiation by sub-net senders

1.2.1 IPv4 and IPv6

IPv4 is currently the standard protocol of the Internet and provides
only best effort service. We assume that best effort service will
continue to be supported while new types of service are introduced
according to the IP Integrated Services model. We also assume that
IPv6 will be supported as well as IPv4.

1.2.2 RSVP and Policy

We assume RSVP, as described in [19], as the Internet signalling
protocol. The reader is assumed to be familiar with [19].

IP Integrated Services discriminates between users by providing some
users better service at the expense of others. Policy determines how
preferential services are allocated, while allowing network operators
maximum flexibility to provide value-added services for the
marketplace. Mechanisms need to be provided to enforce access
policies. These mechanisms may include such things as permissions
and/or billing.

For scaling reasons, policies based on bilateral agreements between
neighboring providers are considered. The bilateral model has scaling
properties similar to multicast, while maintaining no global
information. Policy control is currently being developed for RSVP
(see [10] for details).

1.2.3 ATM

We assume ATM as defined by UNI 3.x and 4.0. ATM provides both
point-to-point and point-to-multipoint Virtual Circuits (VCs) with a
specified Quality of Service (QoS).
ATM provides both Permanent Virtual Circuits (PVCs) and Switched
Virtual Circuits (SVCs). In the Permanent Virtual Circuit (PVC)
environment, PVCs are typically used as point-to-point link
replacements, so the Integrated Services support issues are similar to
those of point-to-point links. This draft describes schemes for
supporting Integrated Services using SVCs.

1.2.4 VC Initiation

There is an apparent mismatch between RSVP and ATM. Specifically,
RSVP control is receiver oriented and ATM control is sender oriented.
This initially may seem like a major issue, but really is not. While
RSVP reservation (RESV) requests are generated at the receiver, actual
allocation of resources takes place at the sub-net sender.

For data flows, this means that sub-net senders will establish all QoS
VCs and the sub-net receiver must be able to accept incoming QoS VCs.
These restrictions are consistent with RSVP version 1 processing rules
and allow senders to use different flow to VC mappings and even
different QoS renegotiation techniques without interoperability
problems. All RSVP over ATM approaches that have VCs initiated and
controlled by the sub-net senders will interoperate. Figure 1 shows
this model of data flow VC initiation.

[Figure goes here]
Figure 1: Data Flow VC Initiation

The use by receivers of the reverse path provided by point-to-point
VCs is for further study.

2. Policy

RSVP allows for local policy control [10] as well as admission
control. Thus a user can request a reservation with a specific QoS
and with a policy object that, for example, offers to pay the
additional costs of setting up a new reservation. The policy module
at the entry to a provider can decide how to satisfy that request,
either by merging the request with an existing reservation or by
creating a new reservation for this (and perhaps other) users. This
policy can be on a per user-provider basis, where a user and a
provider have an agreement on the type of service offered, or on a
provider-provider basis, where two providers have such an agreement.
With the ability to do local policy control, providers can offer
services best suited to their own resources and their customers'
needs.

Policy is expected to be provided as a generic API which will return
values indicating what action should be taken for a specific
reservation request. The API is expected to have access to the
reservation tables with the QoS for each reservation. The RSVP Policy
and Integrity objects will be passed to the policy() call. Four
possible return values are expected: the request can be rejected; the
request can be accepted as is; the request can be accepted but at a
different QoS; or the request can cause a change of QoS of an existing
reservation. The information returned from this call will be used to
call the admission control interface.

2.1 Implementation Guidelines

Currently, the contents of policy data objects are not specified, so
specifics of policy implementation are not defined at this time.
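To make the expected shape of this interface concrete, the following
sketch shows one plausible representation of the four return values
and the policy() call. It is illustrative only; since neither the
policy objects nor the API have been specified, all names and types
here (QoS, PolicyAction, PolicyDecision, and the placeholder logic)
are assumptions of this sketch.

   from dataclasses import dataclass
   from enum import Enum, auto
   from typing import Optional

   @dataclass(frozen=True)
   class QoS:                    # placeholder traffic description
       rate: float
       burst: float

   class PolicyAction(Enum):
       REJECT = auto()           # the request is rejected
       ACCEPT = auto()           # the request is accepted as is
       ACCEPT_AT_QOS = auto()    # accepted, but at a different QoS
       CHANGE_EXISTING = auto()  # changes the QoS of an existing reservation

   @dataclass
   class PolicyDecision:
       action: PolicyAction
       qos: Optional[QoS] = None  # carries the new QoS when one applies

   def policy(policy_obj, integrity_obj, requested, reservations):
       """Sketch of the generic policy() call: examine the RSVP Policy
       and Integrity objects plus the current reservation table, and
       return a decision to be passed to admission control."""
       # Placeholder logic: accept any request that carries an
       # Integrity object; real policies would consult agreements.
       if integrity_obj is None:
           return PolicyDecision(PolicyAction.REJECT)
       return PolicyDecision(PolicyAction.ACCEPT, requested)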
3. Data VC Management

Any RSVP over ATM implementation must map RSVP and RSVP associated
data flows to ATM Virtual Circuits (VCs). LAN Emulation [2],
Classical IP [11] and, more recently, NHRP [12] discuss mapping IP
traffic onto ATM SVCs, but they only cover a single QoS class, i.e.,
best effort traffic. When QoS is introduced, VC mapping must be
revisited. For RSVP controlled QoS flows, one issue is which VCs to
use for QoS data flows.

In the Classical IP over ATM and current NHRP models, a single point-
to-point VC is used for all traffic between two ATM attached hosts
(routers and end-stations). It is likely that such a single VC will
not be adequate or optimal when supporting data flows with multiple
QoS types. RSVP's basic purpose is to install support for flows with
multiple QoS types, so it is essential for any RSVP over ATM solution
to address VC usage for QoS data flows.

This section describes issues and methods for management of VCs
associated with QoS data flows. When establishing and maintaining
VCs, the sub-net sender will need to deal with several complicating
factors including multiple QoS reservations, requests for QoS changes,
ATM short-cuts, and several multicast specific issues. The multicast
specific issues result from the nature of ATM connections. The key
multicast related issues are heterogeneity, data distribution,
receiver transitions, and end-point identification.

3.1 Reservation to VC Mapping

There are various approaches available for mapping reservations onto
VCs. A distinguishing attribute of all approaches is how reservations
are combined onto individual VCs. When mapping reservations onto VCs,
individual VCs can be used to support a single reservation, or
reservations can be combined with others onto "aggregate" VCs. In the
first case, each reservation will be supported by one or more VCs.
Multicast reservation requests may translate into the setup of
multiple VCs, as is described in more detail in section 3.2. Unicast
reservation requests will always translate into the setup of a single
QoS VC. In both cases, each VC will only carry data associated with a
single reservation. The greatest benefit of this approach is ease of
implementation, but it comes at the cost of increased (VC) setup time
and the consumption of a greater number of VCs and associated
resources.

We refer to the other case, in which multiple reservations are
combined onto shared VCs, as the "aggregation" model. With this
model, large VCs could be set up between IP routers and hosts in an
ATM network. These VCs could be managed much like IP Integrated
Services (IIS) point-to-point links (e.g. T-1, DS-3) are managed now.
Traffic from multiple sources over multiple RSVP sessions might be
multiplexed on the same VC. This approach has a number of advantages.
First, there is typically no signalling latency, as VCs would already
be in existence when the traffic started flowing, so no time is wasted
in setting up VCs. Second, the heterogeneity problem (section 3.2) is
reduced to a solved problem. Finally, the dynamic QoS problem
(section 3.6) for ATM is also reduced to a solved problem. This
approach can be used with point-to-point and point-to-multipoint VCs.
The problem with the aggregation approach is that the choice of what
QoS to use for which of the VCs is difficult, but it is made easier
since the VCs can be changed as needed. The advantages of this scheme
make it an item for high priority study.
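As a concrete illustration of the first (single reservation per VC)
case, the sketch below maintains the kind of per-reservation VC table
such an implementation might keep: one entry may hold several VCs for
a multicast reservation, and exactly one for unicast. The structures
and names (FlowKey, VCTable) are illustrative assumptions, not part of
any specified interface.

   from dataclasses import dataclass, field
   from typing import Dict, List, Tuple

   # An RSVP session is identified by (DestAddress, ProtocolId, DstPort);
   # a flow key adds the sender for distinct (FF) styles, or "*" for
   # shared styles.  These encodings are illustrative.
   Session = Tuple[str, int, int]
   FlowKey = Tuple[Session, str]

   @dataclass
   class VC:
       leaves: List[str]   # ATM addresses of the VC end-points
       qos: "QoS"          # the placeholder traffic description above

   @dataclass
   class VCTable:
       """Single reservation per VC: no entry's VCs ever carry data
       belonging to another reservation."""
       by_reservation: Dict[FlowKey, List[VC]] = field(default_factory=dict)

       def vcs_for(self, key: FlowKey) -> List[VC]:
           return self.by_reservation.setdefault(key, [])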
3.1.1 Implementation Guidelines

While it is possible and even desirable to send multiple flows and
multiple distinct reservations (FF) over single VCs, implementation of
such approaches is a matter for further study. So, RSVP over ATM
implementations must, by default, use a single VC to support each RSVP
reservation. Implementations may also support an aggregation
approach.

3.2 Heterogeneity

Heterogeneity occurs when receivers request different QoS's within a
single session. This means that the amount of requested resources
differs on a per next hop basis. A related type of heterogeneity
occurs due to best-effort receivers. In any IP multicast group, it is
possible that some receivers will request QoS (via RSVP) and some
receivers will not. Both types of heterogeneity are shown in Figure
2. In shared media, like Ethernet, receivers that have not requested
resources can typically be given service identical to those that
have, without complications. This is not the case with ATM. In ATM
networks, any additional end-points of a VC must be explicitly added.
There may be costs associated with adding the best-effort receiver,
and there might not be adequate resources. An RSVP over ATM solution
will need to support heterogeneous receivers even though ATM does not
currently provide such support directly.

[Figure goes here]
Figure 2: Types of Multicast Receivers

There are multiple models for supporting RSVP heterogeneity over ATM.
Section 3.2.1 examines the multiple VCs per RSVP reservation (or full
heterogeneity) model, where a single reservation can be forwarded onto
several VCs, each with a different QoS. Section 3.2.2 presents a
limited heterogeneity model, where exactly one QoS VC is used along
with a best effort VC. Section 3.2.3 examines the VC per RSVP
reservation (or homogeneous) model, where each RSVP reservation is
mapped to a single ATM VC.

3.2.1 Full Heterogeneity Model

We define the "full heterogeneity" model as providing a separate VC
for each distinct QoS for a multicast session, including best effort
and one or more QoS's. This is shown in Figure 3, where S1 is a
sender, R1-R3 are receivers, r1-r4 are IP routers, and s1-s2 are ATM
switches. Receivers R1 and R3 make reservations with different QoS
while R2 is a best effort receiver. Three point-to-multipoint VCs are
created for this situation, each with the requested QoS. Note that
any leaf nodes requesting QoS 1 or QoS 2 would be added to the
corresponding existing QoS VC.

[Figure goes here]
Figure 3: Full heterogeneity

Note that while full heterogeneity gives users exactly what they
request, it requires more resources of the network than other possible
approaches. In Figure 3, three copies of each packet are sent on the
link from r1 to s1. Two copies of each packet are then sent from s1
to s2. The exact amount of bandwidth used for duplicate traffic
depends on the network topology and group membership.

3.2.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the
receivers of a multicast session are limited to use either best effort
service or a single alternate quality of service. The alternate QoS
can be chosen either by higher level protocols or by dynamic
renegotiation of QoS as described below.
[Figure goes here]
Figure 4: Limited heterogeneity

In order to support limited heterogeneity, each ATM edge device
participating in a session would need at most two VCs. One VC would
be a point-to-multipoint best effort service VC and would serve all
best effort service IP destinations for this RSVP session. The other
VC would be a point-to-multipoint VC with QoS and would serve all IP
destinations for this RSVP session that have an RSVP reservation
established. This is shown in Figure 4, where there are three
receivers: R2 requests best effort service, while R1 and R3 request
distinct reservations. Whereas in Figure 3, R1 and R3 each have a
separate VC, so each receives precisely the resources requested, in
Figure 4, R1 and R3 share the same VC (using the maximum of R1's and
R3's QoS) across the ATM network. Note that though the VC, and hence
the QoS, for R1 and R3 are the same within the ATM cloud, the
reservation outside the ATM cloud (from router r4 to receiver R3) uses
the QoS actually requested by R3.

As with full heterogeneity, a disadvantage of the limited
heterogeneity scheme is that each packet will need to be duplicated at
the network layer and one copy sent onto each of the 2 VCs. Again,
the exact amount of excess traffic will depend on the network topology
and group membership. Looking at Figure 4, there are two VCs going
from router r1 to switch s1, so two copies of every packet will
traverse the r1-s1 link. Another disadvantage of limited
heterogeneity is that a reservation request can be rejected even when
the resources are available. This occurs when a new receiver requests
a larger QoS. If any of the existing QoS VC end-points cannot upgrade
to the new QoS, then the new reservation fails, though the resources
exist for the new receiver.

3.2.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a
multicast session use a single quality of service VC. Best-effort
receivers also use the single RSVP triggered QoS VC. The single VC
can be point-to-point or point-to-multipoint as appropriate. The QoS
VC is sized to provide the maximum resources requested by all RSVP
next-hops.

This model matches the way the current RSVP specification addresses
heterogeneous requests. The current processing rules and traffic
control interface describe a model where the largest requested
reservation for a specific outgoing interface is used in resource
allocation, and traffic is transmitted at the higher rate to all
next-hops. This approach would be the simplest method for RSVP over
ATM implementations.

While this approach is simple to implement, providing better than
best-effort service may actually be the opposite of what the user
desires, since providing ATM QoS may incur charges or wrongfully
allocate resources. There are two specific problems. The first
problem is that a user making a small or no reservation would share
the resources of a QoS VC without making (and perhaps paying for) an
RSVP reservation. The second problem is that a receiver may not
receive any data. This may occur when there are insufficient
resources to add a receiver. The rejected user would not be added to
the single VC and would not even receive traffic on a best effort
basis.
Not sending data traffic to best-effort receivers because of another
receiver's RSVP request is clearly unacceptable. The previously
described limited heterogeneity model ensures that data is always sent
to both QoS and best-effort receivers, but it does so by requiring
replication of data at the sender in all cases. It is possible to
extend the homogeneous model to both ensure that data is always sent
to best-effort receivers and avoid replication in the normal case.
This extension is to add special handling for the case where a
best-effort receiver cannot be added to the QoS VC. In this case, a
best-effort VC can be established to any receivers that could not be
added to the QoS VC. Only in this special error case would senders be
required to replicate data. We define this approach as the "modified
homogeneous" model.

3.2.4 Implementation Guidelines

Multiple options for mapping reservations onto VCs have been
discussed. No matter which model or combination of models is used by
an implementation, implementations must not normally send more than
one copy of a particular data packet to a particular next-hop (ATM
end-point). Some transient over-transmission is acceptable, but only
during VC setup and transition. Implementations must also ensure that
data traffic is sent to best-effort receivers. Data traffic may be
sent to best-effort receivers via best-effort or QoS VCs as is
appropriate for the implemented model. In all cases, implementations
must not create VCs in such a way that data cannot be sent to
best-effort receivers. This includes the case of not being able to
add a best-effort receiver to a QoS VC, but does not include the case
where best-effort VCs cannot be set up. The failure to establish
best-effort VCs is considered to be a general IP over ATM failure and
is therefore beyond the scope of this document.

The key issue to be addressed by an implementation is providing the
requested QoS downstream. One of, or some combination of, the
previously discussed models may be used to provide the requested QoS.
Currently, the aggregation approach is still being studied, so RSVP
over ATM implementations are limited to the other models.
Unfortunately, none of the described models is the right answer for
all cases. For some networks, e.g. public WANs, it is likely that the
limited heterogeneous model or a hybrid limited-full heterogeneous
model will be desired. In other networks, e.g. LANs, it is likely
that the modified homogeneous model will be desired.

Since there is not one model that satisfies all cases, implementations
must, by default, implement either the limited heterogeneity model or
the modified homogeneous model. Implementations should support both
approaches and provide the ability to select which method is actually
used, but are not required to do so. Implementations may also support
heterogeneity through some other mechanism, e.g., using multiple
appropriately sized VCs.
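The following sketch illustrates the basic computation behind the
limited heterogeneity model: reserving receivers share one QoS VC
sized to cover every request, and the remaining group members are
served by a best-effort VC. Taking the component-wise maximum of the
requests is one plausible merge rule and, like all names used here, is
an assumption of this sketch; the precise merge is a service mapping
issue [16]. For brevity, (rate, burst) pairs stand in for a full ATM
QoS.

   from typing import Dict, Iterable, Optional, Tuple

   QoS = Tuple[float, float]  # (rate, burst) token-bucket pair

   def merge_qos(requests: Iterable[QoS]) -> Optional[QoS]:
       """Size the single alternate QoS VC to cover all requests."""
       merged = None
       for rate, burst in requests:
           if merged is None:
               merged = (rate, burst)
           else:
               merged = (max(merged[0], rate), max(merged[1], burst))
       return merged

   def plan_vcs(receivers: Dict[str, Optional[QoS]]):
       """Split a session's receivers across the (at most) two VCs."""
       qos_leaves = {r for r, q in receivers.items() if q is not None}
       best_effort_leaves = set(receivers) - qos_leaves
       vc_qos = merge_qos(q for q in receivers.values() if q is not None)
       return best_effort_leaves, qos_leaves, vc_qos

   # R1 and R3 reserve while R2 is best effort, as in Figure 4:
   print(plan_vcs({"R1": (1e6, 8e3), "R2": None, "R3": (2e6, 4e3)}))
   # -> ({'R2'}, {'R1', 'R3'}, (2000000.0, 8000.0))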
3.3 Multicast End-Point Identification

Implementations must be able to identify the ATM end-points
participating in an IP multicast group. The ATM end-points will be IP
multicast receivers and/or next-hops. Both QoS and best-effort
end-points must be identified. RSVP next-hop information will provide
QoS end-points, but not best-effort end-points.

Another issue is identifying end-points of multicast traffic handled
by non-RSVP capable next-hops. In this case a PATH message travels
through a non-RSVP egress router on the way to the next hop RSVP node.
When the next hop RSVP node sends a RESV message, it may arrive at the
source over a different route than the one the data is using. The
source will get the RESV message, but will not know which egress
router needs the QoS. For unicast sessions, there is no problem,
since the ATM end-point will be the IP next-hop router.
Unfortunately, multicast routing may not be able to uniquely identify
the IP next-hop router, so it is possible that a multicast end-point
cannot be identified.

3.3.1 Implementation Guidelines

In the most common case, MARS will be used to identify all end-points
of a multicast group. In the router to router case, a multicast
routing protocol may provide all next-hops for a particular multicast
group. In either case, RSVP over ATM implementations must obtain a
full list of end-points, both QoS and non-QoS, using the appropriate
mechanisms. The full list can be compared against the RSVP identified
end-points to determine the list of best-effort receivers.

There is no straightforward solution to uniquely identifying
end-points of multicast traffic handled by non-RSVP next hops. The
preferred solution is to use multicast routing protocols that support
unique end-point identification. In cases where such routing
protocols are unavailable, all IP routers that will be used to support
RSVP over ATM should support RSVP. To ensure proper behavior,
implementations should, by default, only establish RSVP-initiated VCs
to RSVP capable end-points.
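The comparison described above reduces to a set difference, as the
sketch below shows: the full membership list (from MARS or multicast
routing) minus the RSVP identified QoS end-points yields the
best-effort receivers. The addresses shown are, of course,
illustrative.

   from typing import Set

   def best_effort_endpoints(group_members: Set[str],
                             rsvp_next_hops: Set[str]) -> Set[str]:
       """Full end-point list (from MARS or multicast routing) minus
       the RSVP identified QoS end-points leaves the best-effort set."""
       return group_members - rsvp_next_hops

   # Three group members, one of which has an RSVP reservation:
   print(best_effort_endpoints({"atm-a", "atm-b", "atm-c"}, {"atm-b"}))
   # -> {'atm-a', 'atm-c'}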
3.4 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM.
In one model, senders establish point-to-multipoint VCs to all ATM
attached destinations, and data is then sent over these VCs. This
model is often called "multicast mesh" or "VC mesh" mode distribution.
In the second model, senders send data over point-to-point VCs to a
central point, and the central point relays the data onto
point-to-multipoint VCs that have been established to all receivers of
the IP multicast group. This model is often referred to as "multicast
server" mode distribution. Figure 5 shows data flow for both modes of
IP multicast data distribution. RSVP over ATM solutions must ensure
that IP multicast data is distributed with appropriate QoS.

[Figure goes here]
Figure 5: IP Multicast Data Distribution Over ATM

3.4.1 Implementation Guidelines

In the Classical IP context, multicast server support is provided via
MARS [1]. MARS does not currently provide a way to communicate QoS
requirements to a MARS multicast server. Therefore, RSVP over ATM
implementations must, by default, support "mesh-mode" distribution for
RSVP controlled multicast flows. When using multicast servers that do
not support QoS requests, a sender must set the service, not global,
break bit(s).

3.5 Receiver Transitions

When setting up a point-to-multipoint VC, there will be a time when
some receivers have been added to the QoS VC and some have not.
During such transition times it is possible to start sending data on
the newly established VC. The issue is when to start sending data on
the new VC. If data is sent on both the new VC and the old VC, then
data will be delivered with the proper QoS to some receivers and with
the old QoS to all receivers. This means the QoS receivers would get
duplicate data. If data is sent just on the new QoS VC, then the
receivers that have not yet been added will lose information. So, the
issue comes down to whether to send to both the old and new VCs, or to
send to just one of the VCs. In one case duplicate information will
be received; in the other, some information may not be received. This
issue needs to be considered for three cases: when establishing the
first QoS VC, when establishing a VC to support a QoS change, and when
adding a new end-point to an already established QoS VC.

The first two cases are very similar. In both, it is possible to send
data on the partially completed new VC, and the issue of duplicate
versus lost information is the same.

The last case is when an end-point must be added to an existing QoS
VC. In this case the end-point must be both added to the QoS VC and
dropped from a best-effort VC. The issue is which to do first. If
the add is requested first, then the end-point may get duplicate
information. If the drop is requested first, then the end-point may
lose information.

3.5.1 Implementation Guidelines

In order to ensure predictable behavior and delivery of data to all
receivers, data can only be sent on a new VC once all parties have
been added. This will ensure that all data is delivered only once to
all receivers. This approach does not quite apply to the last case.
In the last case, the add should be completed first, then the drop.
This means that receivers must be prepared to receive some duplicate
packets at times of QoS setup.

3.6 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources
that are requested may change at any time. There are several common
reasons for a change of reservation QoS. First, an existing receiver
can request a new larger (or smaller) QoS. Second, a sender may
change its traffic specification (TSpec), which can trigger a change
in the reservation requests of the receivers. Third, a new sender can
start sending to a multicast group with a larger traffic specification
than existing senders, triggering larger reservations. Finally, a new
receiver can make a reservation that is larger than existing
reservations. If the merge node for the larger reservation is an ATM
edge device, a new larger reservation must be set up across the ATM
network.

Since ATM service, as currently defined in UNI 3.x and UNI 4.0, does
not allow renegotiating the QoS of a VC, dynamically changing the
reservation means creating a new VC with the new QoS and tearing down
the established VC. Tearing down a VC and setting up a new VC in ATM
are complex operations that involve a non-trivial amount of processor
time, and may have a substantial latency.

There are several options for dealing with this mismatch in service.
A specific approach will need to be a part of any RSVP over ATM
solution.

3.6.1 Implementation Guidelines

The default method for supporting changes in RSVP reservations is to
attempt to replace an existing VC with a new appropriately sized VC.
During setup of the replacement VC, the old VC must be left in place
unmodified, to minimize interruption of QoS data delivery. Once the
replacement VC is established, data transmission is shifted to the new
VC, and the old VC is then closed.

If setup of the replacement VC fails, then the old QoS VC should
continue to be used. When the new reservation is greater than the old
reservation, the reservation request should be answered with an error.
When the new reservation is less than the old reservation, the request
should be treated as if the modification was successful. While
leaving the larger allocation in place is suboptimal, it maximizes
delivery of service to the user. Implementations should retry
replacing the too large VC after some appropriate elapsed time.

One additional issue is that only one QoS change can be processed at a
time per reservation. If the (RSVP) requested QoS is changed while
the first replacement VC is still being set up, then the replacement
VC is released and the whole VC replacement process is restarted.

To limit the number of changes and to avoid excessive signalling load,
implementations may limit the number of changes that will be processed
in a given period. One implementation approach would have each ATM
edge device configured with a time parameter tau (which can change
over time) that gives the minimum amount of time the edge device will
wait between successive changes of the QoS of a particular VC. Thus
if the QoS of a VC is changed at time t, all messages that would
change the QoS of that VC that arrive before time t+tau would be
queued. If several messages changing the QoS of a VC arrive during
the interval, redundant messages can be discarded. At time t+tau, the
remaining change(s) of QoS, if any, can be executed.

The sequence of events for a single VC would be:

1. Wait if timer is active

2. Establish VC with new QoS

3. Remap data traffic to new VC

4. Tear down old VC

5. Activate timer
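The sketch below implements this sequence for a single VC, including
the tau hold-down timer and the discarding of redundant queued
changes. The three callbacks stand in for real UNI signalling and
data path operations and, like the class itself, are assumptions of
this sketch rather than a specified interface.

   import threading

   class ManagedQoSVC:
       """One reservation's QoS VC, replaced (never modified) on change."""

       def __init__(self, establish_vc, remap_traffic, teardown_vc,
                    tau=30.0):
           self.establish_vc = establish_vc   # returns a VC handle or None
           self.remap_traffic = remap_traffic # shift data onto the new VC
           self.teardown_vc = teardown_vc     # release the old VC
           self.tau = tau                     # minimum time between changes
           self.current = None                # active VC handle
           self.pending_qos = None            # last change queued during tau
           self.timer_active = False
           self.lock = threading.Lock()

       def request_qos_change(self, qos):
           with self.lock:
               if self.timer_active:
                   # 1. Timer active: queue the change.  Overwriting any
                   #    previously queued QoS discards redundant messages.
                   self.pending_qos = qos
               else:
                   self._replace(qos)

       def _replace(self, qos):
           new_vc = self.establish_vc(qos)    # 2. establish VC with new QoS
           if new_vc is None:
               return                         # setup failed; keep old VC
           self.remap_traffic(new_vc)         # 3. remap data traffic
           if self.current is not None:
               self.teardown_vc(self.current) # 4. tear down old VC
           self.current = new_vc
           self.timer_active = True           # 5. activate timer
           threading.Timer(self.tau, self._timer_expired).start()

       def _timer_expired(self):
           with self.lock:
               self.timer_active = False
               if self.pending_qos is not None:
                   qos, self.pending_qos = self.pending_qos, None
                   self._replace(qos)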
There is an interesting interaction between heterogeneous reservations
and dynamic QoS. In the case where a RESV message is received from a
new next-hop and the requested resources are larger than any existing
reservation, both dynamic QoS and heterogeneity need to be addressed.
A key issue is whether to first add the new next-hop or to change to
the new QoS. This is a fairly straightforward special case. Since
the older, smaller reservation does not support the new next-hop, the
dynamic QoS process should be initiated first. Since the new QoS is
only needed by the new next-hop, it should be the first end-point of
the new VC. This way signalling is minimized when the setup to the
new next-hop fails.

3.7 Short-Cuts

Short-cuts [12] allow ATM attached routers and hosts to directly
establish point-to-point VCs across LIS boundaries, i.e., the VC
end-points are on different IP sub-nets. The ability for short-cuts
and RSVP to interoperate has been raised as a general question. The
area of concern is the ability to handle asymmetric short-cuts,
specifically how RSVP can handle the case where a downstream short-cut
does not have a matching upstream short-cut. In this case, which is
shown in Figure 6, PATH and RESV messages follow different paths.
[Figure goes here]
Figure 6: Asymmetric RSVP Message Forwarding With ATM Short-Cuts

Examination of RSVP shows that the protocol already includes
mechanisms that will support short-cuts. The mechanism is the same
one used to support RESV messages arriving at the wrong router and the
wrong interface. The key aspect of this mechanism is that RSVP only
processes messages that arrive at the proper interface and forwards
messages that arrive on the wrong interface. The proper interface is
indicated in the NHOP object of the message. So, existing RSVP
mechanisms will support asymmetric short-cuts.

The short-cut model of VC establishment still poses several issues
when running with RSVP. The major issues are dealing with established
best-effort short-cuts, when to establish short-cuts, and QoS only
short-cuts. These issues will need to be addressed by RSVP
implementations.

3.7.1 Implementation Guidelines

The key issue to be addressed by any RSVP over ATM solution is when to
establish a short-cut for a QoS data flow. The default behavior is to
simply follow best-effort traffic. When a short-cut has been
established for best-effort traffic to a destination or next-hop, that
same end-point should be used when setting up RSVP triggered VCs for
QoS traffic to the same destination or next-hop. This will happen
naturally when PATH messages are forwarded over the best-effort
short-cut. Note that in this approach, when best-effort short-cuts
are never established, RSVP triggered QoS short-cuts will also never
be established.

3.8 VC Teardown

RSVP can identify, from either explicit messages or timeouts, when a
data VC is no longer needed. Therefore, data VCs set up to support
RSVP controlled flows should only be released at the direction of
RSVP. VCs must not be timed out due to inactivity by either the VC
initiator or the VC receiver. This conflicts with VCs timing out as
described in RFC 1755 [14], section 3.4 on VC Teardown. RFC 1755
recommends tearing down a VC that is inactive for a certain length of
time (twenty minutes is recommended). This timeout is typically
implemented at both the VC initiator and the VC receiver, although
section 3.1 of the update to RFC 1755 [15] states that inactivity
timers must not be used at the VC receiver.

When this timeout occurs for an RSVP initiated VC, a valid VC with QoS
will be torn down unexpectedly. While this behavior is acceptable for
best-effort traffic, it is important that RSVP controlled VCs not be
torn down. If there is no choice about the VC being torn down, the
RSVP daemon must be notified, so a reservation failure message can be
sent.

3.8.1 Implementation Guidelines

For VCs initiated at the request of RSVP, the configurable inactivity
timer mentioned in [14] must be set to "infinite". Setting the
inactivity timer value at the VC initiator should not be problematic,
since the proper value can be relayed internally at the originator.

Setting the inactivity timer at the VC receiver is more difficult, and
would require some mechanism to signal that an incoming VC was RSVP
initiated. To avoid this complexity and to conform to [15],
implementations must not use an inactivity timer to clear received
connections.

4. RSVP Control VC Management
One last important issue is providing a data path for the RSVP
messages themselves. There are two main types of messages in RSVP,
PATH and RESV. PATH messages are sent to a multicast address, while
RESV messages are sent to a unicast address. Other RSVP messages are
handled similarly to either PATH or RESV [Note 1]. So ATM VCs used
for RSVP signalling messages need to provide both unicast and
multicast functionality.

[Note 1] This can be slightly more complicated for RERR messages.

There are several different approaches for how to assign VCs to use
for RSVP signalling messages. The main approaches are:

o use the same VC as the data

o a single VC per session

o a single point-to-multipoint VC multiplexed among sessions

o multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to
assign VCs for RSVP signalling. One issue is the number of additional
VCs needed for RSVP signalling. Related to this issue is the degree
of multiplexing on the RSVP VCs; in general, more multiplexing means
fewer VCs. An additional issue is the latency in dynamically setting
up new RSVP signalling VCs. A final issue is complexity of
implementation. The remainder of this section discusses the issues
and tradeoffs among these different approaches and suggests guidelines
for when to use which alternative.

4.1 Mixed data and control traffic

In this scheme RSVP signalling messages are sent on the same VCs as
the data traffic. The main advantage of this scheme is that no
additional VCs are needed beyond what is needed for the data traffic.
An additional advantage is that there is no ATM signalling latency for
PATH messages (which follow the same routing as the data messages).
However, there can be a major problem when data traffic on a VC is
nonconforming. With nonconforming traffic, RSVP signalling messages
may be dropped. While RSVP is resilient to a moderate level of
dropped messages, excessive drops would lead to repeated tearing down
and re-establishing of QoS VCs, a very undesirable behavior for ATM.
Due to these problems, this is not a good choice for carrying RSVP
signalling messages, even though the number of VCs needed for this
scheme is minimized.

One variation of this scheme is to use the best effort data path for
signalling traffic. In this scheme, there is no issue with
nonconforming traffic, but there is an issue with congestion in the
ATM network.

RSVP provides some resiliency to message loss due to congestion, but
RSVP control messages should be offered a preferred class of service.
A related variation of this scheme that is promising but requires
further study is to have a packet scheduling algorithm (before
entering the ATM network) that gives priority to the RSVP signalling
traffic. This can be difficult to do at the IP layer.

4.2 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each RSVP
reservation. This scheme results in twice the minimum number of VCs,
but means that RSVP signalling messages have the advantage of a
separate VC. This separate VC means that RSVP signalling messages
have their own traffic contract and compliant signalling messages are
not subject to dropping due to other noncompliant traffic (such as can
happen with the scheme in section 4.1).
The advantage of this scheme is its simplicity: whenever a data VC is
created, a separate RSVP signalling VC is created. Its disadvantages
are the extra ATM signalling needed for the additional VCs, twice the
minimum number of VCs, and the additional setup latency.

4.3 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP signalling
VC for each unique ingress router and unique set of egress routers.
This scheme allows multiplexing of RSVP signalling traffic that shares
the same ingress router and the same egress routers. This can save on
the number of VCs, by multiplexing, but there are problems when the
destinations of the multiplexed point-to-multipoint VCs are changing.
Several alternatives exist in these cases, each with applicability to
different situations. First, when the egress routers change, the
ingress router can check whether it already has a point-to-multipoint
RSVP signalling VC for the new list of egress routers. If the RSVP
signalling VC already exists, then the RSVP signalling traffic can be
switched to this existing VC. If no such VC exists, one approach
would be to create a new VC with the new list of egress routers.
Other approaches include modifying the existing VC to add an egress
router or using a separate new VC for the new egress routers. When a
destination drops out of a group, an alternative would be to keep
sending on the existing VC even though some traffic is wasted.

The number of VCs used in this scheme is a function of traffic
patterns across the ATM network, but is always less than the number
used with the single RSVP VC per reservation scheme (section 4.2). In
addition, existing best effort data VCs could be used for RSVP
signalling. Reusing best effort VCs saves on the number of VCs at the
cost of a higher probability of RSVP signalling packet loss. One
possible place where this scheme will work well is in the core of the
network, where there is the most opportunity to take advantage of the
savings due to multiplexing. The exact savings depend on the patterns
of traffic and the topology of the ATM network.
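The reuse check described above amounts to keeping a table of
signalling VCs keyed by the set of egress routers. The sketch below
shows this for the "create a new VC when none exists" alternative;
open_p2mp_vc stands in for real ATM signalling, and all names here are
illustrative assumptions of this sketch.

   from typing import Callable, Dict, FrozenSet, Iterable

   class SignallingVCTable:
       """One point-to-multipoint RSVP signalling VC per unique set of
       egress routers (for a given ingress router)."""

       def __init__(self, open_p2mp_vc: Callable[[FrozenSet[str]], object]):
           self.open_p2mp_vc = open_p2mp_vc
           self.vcs: Dict[FrozenSet[str], object] = {}

       def vc_for(self, egress_routers: Iterable[str]):
           # frozenset makes the lookup insensitive to the ordering of
           # the egress router list.
           key = frozenset(egress_routers)
           if key not in self.vcs:
               # No matching VC exists; create one for the new list.
               self.vcs[key] = self.open_p2mp_vc(key)
           return self.vcs[key]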
4.4 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are used
for a single point-to-multipoint data VC. This scheme allows
multiplexing of RSVP signalling traffic, but requires the same traffic
to be sent on each of several VCs. This scheme is quite flexible and
allows a large amount of multiplexing. Since point-to-point VCs can
set up a reverse channel at the same time as setting up the forward
channel, this scheme could save substantially on signalling cost. In
addition, signalling traffic could share existing best effort VCs.
Sharing existing best effort VCs reduces the total number of VCs
needed, but might cause signalling traffic drops if there is
congestion in the ATM network.

This point-to-point scheme would work well in the core of the network,
where there is much opportunity for multiplexing. Also in the core of
the network, RSVP VCs can stay permanently established either as
Permanent Virtual Circuits (PVCs) or as long lived Switched Virtual
Circuits (SVCs). The number of VCs in this scheme will depend on
traffic patterns, but in the core of a network would be approximately
n(n-1)/2, where n is the number of IP nodes in the network (e.g.,
about 45 such VCs for a core of 10 IP nodes). In the core of the
network, this will typically be small compared to the total number of
VCs.

4.5 QoS for RSVP VCs

There is the issue of what QoS, if any, to assign to the RSVP VCs.
Three solutions have been covered in section 4.1 and in the shared
best effort VC variations of sections 4.3 and 4.4. For other RSVP VC
schemes, a QoS (possibly best effort) will be needed. What QoS to use
partially depends on the expected level of multiplexing that is being
done on the VCs, and the expected reliability of best effort VCs.
Since RSVP signalling is infrequent (typically every 30 seconds), only
a relatively small QoS should be needed. This is important, since
using a larger QoS risks the VC setup being rejected for lack of
resources. Falling back to best effort when a QoS call is rejected is
possible, but if the ATM network is congested, there will likely be
problems with RSVP packet loss on the best effort VC as well.
Additional experimentation is needed in this area.

4.6 Implementation Guidelines

Implementations must, by default, send RSVP control messages over the
best effort data path; see Figure 7. This approach minimizes VC
requirements, since the best effort data path will need to exist in
order for RSVP sessions to be established and in order for RSVP
reservations to be initiated. The specific best effort paths that
will be used by RSVP are: for unicast, the same VC used to reach the
unicast destination; and for multicast, the same VC that is used for
best effort traffic destined to the IP multicast group. Note that
there may be another best effort VC that is used to carry session data
traffic.

[Figure goes here]
Figure 7: RSVP Control Message VC Usage

The disadvantage of this approach is that best effort VCs may not
provide the reliability that RSVP needs. However, the best-effort
path is expected to satisfy RSVP reliability requirements in most
networks, especially since RSVP allows for a certain amount of packet
loss without any loss of state synchronization. In all cases, RSVP
control traffic should be offered a preferred class of service.
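A sketch of this default selection rule follows. It assumes the
implementation keeps tables of its open best-effort VCs, one keyed by
unicast next-hop and one keyed by multicast group; those tables, and
the attributes used, are assumptions of this sketch.

   def control_vc_for(dest: str, is_multicast: bool,
                      unicast_vcs: dict, group_vcs: dict):
       """Select the best effort data path VC for an RSVP control
       message (default behavior of section 4.6)."""
       if is_multicast:
           # e.g. PATH: same VC used for best effort traffic destined
           # to the IP multicast group.
           return group_vcs[dest]
       # e.g. RESV: same VC used to reach the unicast destination.
       return unicast_vcs[dest]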
Accordingly, the encapsulation used on QoS data VCs and related
control VCs must, by default, be the same as that used by best-effort
VCs.

6. Security

The same considerations stated in [8] and [14] apply to this
document.  No additional security issues are raised in this document.

7. Implementation Summary

This section provides a summary of previously stated requirements and
default implementation behavior.

7.1 Requirements

All RSVP over ATM UNI 3.0 and 4.0 implementations must conform to the
following:

o VC Initiation
  All RSVP triggered QoS VCs must be established by the sub-net
  senders.
  VC receivers must be able to accept incoming QoS VCs.

o VC Teardown
  VC initiators must not tear down RSVP initiated VCs due to
  inactivity.
  VC receivers must not tear down any incoming VCs due to inactivity.

o Heterogeneity
  Implementations must not, in the normal case, send more than one
  copy of a particular data packet to a particular next-hop (ATM
  end-point).
  Implementations must ensure that data traffic is sent to
  best-effort receivers.

o Multicast Data Distribution
  When using multicast servers that do not support QoS requests, a
  sender must set the service, not global, break bit(s).

o Receiver Transitions
  While creating new VCs, senders must send either on only the old VC
  or on both the old and the new VCs.

7.2 Default Behavior

Default behavior defines a baseline set of functionality that must be
provided by implementations.  Implementations may also provide
additional functionality that can be configured to override the
default behavior.  Which behavior is selected is a policy issue for
network providers.  We expect some networks to make use of only the
default functionality, and others to make use of only the additional
functionality.

o Reservation to VC Mapping
  Implementations must, by default, use a single VC to support each
  RSVP reservation.
  Implementations may also support aggregation approaches.

o Heterogeneity
  Either the limited heterogeneity model or the modified homogeneous
  model must be the default model for handling heterogeneity.
  Implementations should support both approaches and provide the
  ability to select which method is actually used, but are not
  required to do so.
  Implementations may also support heterogeneity through other
  mechanisms.

o Multicast End-Point Identification
  Implementations should, by default, only establish RSVP-initiated
  VCs to RSVP capable end-points.

o Multicast Data Distribution
  Implementations must, by default, support "mesh-mode" distribution
  for RSVP controlled multicast flows.

o Dynamic QoS
  Implementations must, by default, support RSVP requested changes in
  reservations by attempting to replace an existing VC with a new,
  appropriately sized VC.  During setup of the replacement VC, the
  old VC must be left in place unmodified (a sketch of this
  make-before-break replacement follows this list).

o Short-Cuts
  Implementations should, by default, establish QoS short-cuts
  whenever a best-effort short-cut is in use to a particular
  destination or next-hop.  This means that, by default, when
  best-effort short-cuts are never established, RSVP triggered
  short-cuts must also not be established.

o RSVP Control VC Management
  Implementations must, by default, send RSVP control messages over
  the best effort data path.

o Encapsulation
  Implementations must, by default, encapsulate data sent on QoS VCs
  with the same encapsulation as is used on best-effort VCs.
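The dynamic QoS default above amounts to a make-before-break VC
replacement.  The following is a minimal sketch under assumed names:
replace_vc(), atm_setup_vc(), VCSetupFailure, and the flow/VC handles
are all hypothetical, standing in for an implementation's actual
signalling interface.

    # Hypothetical make-before-break replacement: the old VC stays in
    # place, unmodified, until the new VC has been accepted.

    def replace_vc(flow, old_vc, new_qos):
        try:
            new_vc = atm_setup_vc(leaves=old_vc.leaves, qos=new_qos)
        except VCSetupFailure:
            return old_vc        # setup rejected: keep using the old VC
        flow.attach(new_vc)      # switch the flow to the new VC ...
        old_vc.release()         # ... and only then release the old one
        return new_vc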
8. Future Work

We have described a set of schemes for deploying RSVP over IP over
ATM.  A number of other issues remain subjects of continuing
research.  These issues (and others) are covered in [5], and are
briefly repeated here.

A major issue is providing policy control for ATM VC creation.  Work
is going on in the RSVP working group [8] on defining an architecture
for policy support.  Further work is needed in defining an API and
policy objects.  As this area is critical to deployment, progress
will need to be made here.

NHRP provides advantages in allowing short-cuts across two or more
LISs.  Short-cutting router hops can lead to more efficient data
delivery.  Work on NHRP is ongoing, but NHRP currently provides only
a unicast delivery service.  Further study is needed to determine how
NHRP can be used with RSVP and ATM.  Future work depends on the
development of NHRP for multicast.

Furthermore, when using RSVP it may be desirable to establish
multiple short-cut VCs, to use these VCs for specific QoS flows, and
to use the hop-by-hop path for other QoS and non-QoS flows.  The
current NHRP specification [12] does not preclude such an approach,
but neither does it explicitly support it.  We believe that explicit
support of flow-based short-cuts would improve RSVP over ATM
solutions.  We also believe that such support may require the ability
to include flow information in the NHRP request.

There is work in the ION working group on MultiCast Server (MCS)
architectures for MARS.  An MCS provides savings in the number of VCs
in certain situations.  When using a multicast server, the
sub-network sender could establish a point-to-point VC with a
specific QoS to the server, but there is no current mechanism to
relay QoS requirements to the MCS.  Future work includes providing
RSVP and ATM support over MARS MCSs.

Unicast ATM VCs are inherently bi-directional and can support a
"reverse channel".  By using the reverse channel for unicast VCs, the
number of VCs used can potentially be reduced.  Future work includes
examining how reverse channels can be used most effectively.

Current work in the ATM Forum and ITU promises additional advantages
for RSVP and ATM, including renegotiation of QoS parameters and
variegated VCs.  QoS renegotiation would be particularly beneficial,
since the only option available today for changing the QoS parameters
of a VC is replacing the VC.  It is important to keep current with
changes in ATM, and to keep this document up to date.

Scaling of the number of sessions is an issue.  The key ATM-related
implication of a large number of sessions is the number of VCs and
the associated (buffer and queue) memory.  The approach to this
problem is aggregation, either at the RSVP layer or at the ISSLL
layer (or both).

This document describes approaches that can be used with ATM UNI 4.0,
but does not make use of the available leaf-initiated join (LIJ)
capability.  The use of LIJ may be useful in addressing scaling
issues.
The coordination of RSVP with LIJ remains a research issue.

Lastly, it is likely that LANE 2.0 will provide some QoS support
mechanisms, including proper QoS allocation for multicast traffic.
It is important to track these developments, and to develop suitable
RSVP over ATM LANE solutions at the appropriate time.

9. Authors' Addresses

Steven Berson
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292

Phone: +1 310 822 1511
EMail: berson@isi.edu

Lou Berger
FORE Systems
6905 Rockledge Drive
Suite 800
Bethesda, MD 20817

Phone: +1 301 571 2534
EMail: lberger@fore.com

REFERENCES

[1] Armitage, G., "Support for Multicast over UNI 3.0/3.1 based ATM
    Networks," Internet Draft, February 1996.

[2] The ATM Forum, "LAN Emulation Over ATM Specification," Version
    1.0.

[3] The ATM Forum, "MPOA Baseline Version 1," 95-0824r9, September
    1996.

[4] Berson, S., "`Classical' RSVP and IP over ATM," INET '96, July
    1996.

[5] Borden, M., Crawley, E., Krawczyk, J., Baker, F., and Berson, S.,
    "Issues for RSVP and Integrated Services over ATM," Internet
    Draft, February 1996.

[6] Borden, M., and Garrett, M., "Interoperation of Controlled-Load
    and Guaranteed-Service with ATM," Internet Draft, June 1996.

[7] Braden, R., Clark, D., and Shenker, S., "Integrated Services in
    the Internet Architecture: an Overview," RFC 1633, June 1994.

[8] Braden, R., Zhang, L., Berson, S., Herzog, S., and Jamin, S.,
    "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional
    Specification," Internet Draft, November 1996.

[9] Heinanen, J., "Multiprotocol Encapsulation over ATM Adaptation
    Layer 5," RFC 1483, July 1993.

[10] Herzog, S., "Accounting and Access Control Policies for Resource
     Reservation Protocols," Internet Draft, June 1996.

[11] Laubach, M., "Classical IP and ARP over ATM," RFC 1577, January
     1994.

[12] Luciani, J., Katz, D., Piscitello, D., and Cole, B., "NBMA Next
     Hop Resolution Protocol (NHRP)," Internet Draft, June 1996.

[13] Onvural, R., and Srinivasan, V., "A Framework for Supporting
     RSVP Flows Over ATM Networks," Internet Draft, March 1996.

[14] Perez, M., Liaw, F., Grossman, D., Mankin, A., Hoffman, E., and
     Malis, A., "ATM Signalling Support for IP over ATM," RFC 1755,
     February 1995.

[15] Perez, M., and Mankin, A., "ATM Signalling Support for IP over
     ATM - UNI 4.0 Update," Internet Draft, November 1996.

[16] The ATM Forum, "ATM User-Network Interface (UNI) Specification -
     Version 3.1," Prentice Hall.

[17] Shenker, S., Partridge, C., and Guerin, R., "Specification of
     Guaranteed Quality of Service," Internet Draft, August 1996.

[18] Wroclawski, J., "Specification of the Controlled-Load Network
     Element Service," Internet Draft, August 1996.

[19] Zhang, L., Deering, S., Estrin, D., Shenker, S., and Zappala,
     D., "RSVP: A New Resource ReSerVation Protocol," IEEE Network,
     September 1993.