Internet Engineering Task Force                       E. Crawley, Editor
Internet Draft                                       Gigapacket Networks
draft-ietf-issll-atm-framework-00.txt                          L. Berger
                                                            Fore Systems
                                                               S. Berson
                                                                     ISI
                                                                F. Baker
                                                           Cisco Systems
                                                               M. Borden
                                                  New Oak Communications
                                                             J. Krawczyk
                                               ArrowPoint Communications

                                                           July 24, 1997

          A Framework for Integrated Services and RSVP over ATM

Status of this Memo

This document is an Internet Draft. Internet Drafts are working
documents of the Internet Engineering Task Force (IETF), its Areas, and
its Working Groups. Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other
documents at any time.
It is not appropriate to use Internet Drafts as reference material or
to cite them other than as a "working draft" or "work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ds.internic.net (US East Coast), nic.nordu.net (Europe),
ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific Rim).

Abstract

This document outlines the framework and issues related to providing
IP Integrated Services with RSVP over ATM. It provides an overall
approach to the problem(s) and related issues. These issues and
problems are to be addressed in further documents from the ISATM
subgroup of the ISSLL working group.

Editor's Note

This document is the merger of two previous documents,
draft-ietf-issll-atm-support-02.txt by Berger and Berson and
draft-crawley-rsvp-over-atm-00.txt by Baker, Berson, Borden, Crawley,
and Krawczyk. The former document has been split into this document and
a set of documents on RSVP over ATM implementation requirements and
guidelines.

1. Introduction

The Internet currently has one class of service, normally referred to
as "best effort." This service is typified by first-come, first-served
scheduling at each hop in the network. Best effort service has worked
well for electronic mail, World Wide Web (WWW) access, file transfer
(e.g. ftp), etc. For real-time traffic such as voice and video, the
current Internet has performed well only across unloaded portions of
the network. In order to provide guaranteed quality of service for
real-time traffic, new classes of service and a QoS signalling protocol
are being introduced in the Internet [1,6,7], while retaining the
existing best effort service. The QoS signalling protocol is RSVP [1],
the Resource ReSerVation Protocol.

One of the important features of ATM technology is the ability to
request a point-to-point Virtual Circuit (VC) with a specified Quality
of Service (QoS). An additional feature of ATM technology is the
ability to request point-to-multipoint VCs with a specified QoS.
Point-to-multipoint VCs allow leaf nodes to be added and removed from
the VC dynamically, and so provide a mechanism for supporting IP
multicast. It is only natural that RSVP and the Internet Integrated
Services (IIS) model would like to utilize the QoS properties of any
underlying link layer, including ATM, and this draft concentrates on
ATM.

Classical IP over ATM [10] has solved part of this problem, supporting
IP unicast best effort traffic over ATM. Classical IP over ATM is based
on a Logical IP Subnetwork (LIS), which is a separately administered IP
subnetwork. Hosts within an LIS communicate using the ATM network,
while hosts from different subnets communicate only by going through an
IP router (even though it may be possible to open a direct VC between
the two hosts over the ATM network). Classical IP over ATM provides an
Address Resolution Protocol (ATMARP) for ATM edge devices to resolve IP
addresses to native ATM addresses. For any pair of IP/ATM edge devices
(i.e. hosts or routers), a single VC is created on demand and shared
for all traffic between the two devices. A second part of the RSVP and
IIS over ATM problem, IP multicast, is being solved with MARS [5], the
Multicast Address Resolution Server.
MARS complements ATMARP by allowing an IP address to resolve into a
list of native ATM addresses, rather than just a single address.

The ATM Forum's LAN Emulation (LANE) [17, 20] and Multiprotocol Over
ATM (MPOA) [18] also address the support of IP best effort traffic over
ATM through similar means.

A key remaining issue for IP in an ATM environment is the integration
of RSVP signalling and ATM signalling in support of the Internet
Integrated Services (IIS) model. There are two main areas involved in
supporting the IIS model, QoS translation and VC management. QoS
translation concerns mapping a QoS from the IIS model to a proper ATM
QoS, while VC management concentrates on how many VCs are needed and
which traffic flows are routed over which VCs.

1.1 Structure and Related Documents

This document provides a guide to the issues for IIS over ATM. It is
intended to frame the problems that are to be addressed in further
documents. In this document, the modes and models for RSVP operation
over ATM will be discussed, followed by a discussion of management of
ATM VCs for RSVP data and control. Lastly, the topic of encapsulations
will be discussed in relation to the models presented.

This document is part of a group of documents from the ISATM subgroup
of the ISSLL working group related to the operation of IntServ and RSVP
over ATM. [14] discusses the mapping of the IntServ models for
Controlled Load and Guaranteed Service to ATM. [15] and [16] discuss
implementation requirements and guidelines for RSVP over ATM,
respectively. While these documents may not address all the issues
raised in this document, they should provide enough information for
development of solutions for IntServ and RSVP over ATM.

1.2 Terms

The terms "reservation" and "flow" are used in many contexts, often
with different meanings. These terms are used in this document with the
following meanings:

- Reservation is used in this document to refer to an RSVP initiated
  request for resources. RSVP initiates requests for resources based
  on RESV message processing. RESV messages that simply refresh state
  do not trigger resource requests. Resource requests may be made
  based on RSVP sessions and RSVP reservation styles. RSVP styles
  dictate whether the reserved resources are used by one sender or
  shared by multiple senders. See [1] for details of each. Each new
  request is referred to in this document as an RSVP reservation, or
  simply reservation.

- Flow is used to refer to the data traffic associated with a
  particular reservation. The specific meaning of flow is RSVP style
  dependent. For shared style reservations, there is one flow per
  session. For distinct style reservations, there is one flow per
  sender (per session), as illustrated in the sketch below.
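As an illustration only, the following Python sketch shows the two flow
granularities. The types and function are invented for this example and
are not part of any RSVP specification; WF and SE are the shared styles
and FF is the distinct style defined in [1].

   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass(frozen=True)
   class Flow:
       session: str            # RSVP session identifier
       sender: Optional[str]   # None: traffic of all senders (shared)

   def flows_for(session: str, senders: List[str],
                 shared: bool) -> List[Flow]:
       if shared:
           # Shared styles (WF, SE): a single flow carries all senders.
           return [Flow(session, None)]
       # Distinct style (FF): one flow per sender, per session.
       return [Flow(session, s) for s in senders]

   print(flows_for("audio/4000", ["hostA", "hostB"], shared=True))   # 1
   print(flows_for("audio/4000", ["hostA", "hostB"], shared=False))  # 2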
2. Issues Regarding the Operation of RSVP and IntServ over ATM

The issues related to RSVP and IntServ over ATM fall into several
general classes:

- How to make RSVP run over ATM now and in the future
- When to set up a virtual circuit (VC) for a specific Quality of
  Service (QoS) related to RSVP
- How to map the IntServ models to ATM QoS models
- How to know that an ATM network is providing the QoS necessary for
  a flow
- How to handle the many-to-many connectionless features of IP
  multicast and RSVP in the one-to-many connection-oriented world of
  ATM

2.1 Modes/Models for RSVP and IntServ over ATM

[3] discusses several different models for running IP over ATM
networks. [17, 18, and 20] also provide models for IP in ATM
environments. Any one of these models would work as long as the RSVP
control packets (IP protocol 46) and data packets can follow the same
IP path through the network. It is important that the RSVP PATH
messages follow the same IP path as the data such that appropriate PATH
state may be installed in the routers along the path. For an ATM
subnetwork, this means the ingress and egress points must be the same
in both directions for the RSVP control and data messages. Note that
the RSVP protocol does not require symmetric routing. The PATH state
installed by RSVP allows the RESV messages to "retrace" the hops that
the PATH message crossed. Within each of the models for IP over ATM,
there are decisions about using different types of data distribution in
ATM as well as different connection initiation methods. The following
sections look at some of the different ways QoS connections can be set
up for RSVP.

2.1.1 UNI 3.x and 4.0

In the User Network Interface (UNI) 3.0 and 3.1 specifications [8,9]
and the 4.0 specification, both permanent and switched virtual circuits
(PVC and SVC) may be established with a specified service category
(CBR, VBR, and UBR for UNI 3.x, and VBR-rt and ABR for 4.0) and
specific traffic descriptors in point-to-point and point-to-multipoint
configurations. Additional QoS parameters are not available in UNI 3.x,
and those that are available are vendor-specific. Consequently, the
level of QoS control available in standard UNI 3.x networks is somewhat
limited. However, using these building blocks, it is possible to use
RSVP and the IntServ models. ATM 4.0 with the Traffic Management (TM)
4.0 specification [21] allows much greater control of QoS. [14]
provides the details of mapping the IntServ models to UNI 3.x and 4.0
service categories and traffic parameters.

2.1.1.1 Permanent Virtual Circuits (PVCs)

PVCs emulate dedicated point-to-point lines in a network, so the
operation of RSVP can be identical to the operation over any
point-to-point network. The QoS of the PVC must be consistent with the
type of traffic and the service model used. The devices on either end
of the PVC have to provide traffic control services in order to
multiplex multiple flows over the same PVC. With PVCs, there is no
issue of when or how long it takes to set up VCs, since they are made
in advance, but the resources of the PVC are limited to what has been
pre-allocated. PVCs that are not fully utilized can tie up ATM network
resources that could be used for SVCs.
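Because a PVC's resources are fixed in advance, the edge devices must
police and schedule the flows that share it. The sketch below is a
minimal illustration of such traffic control, assuming one token-bucket
policer per reserved flow; all names are invented for this example, and
a real device would add queueing and scheduling behind the conformance
check.

   import time

   class TokenBucket:
       """Per-flow policer: rate r (bytes/sec), bucket depth b (bytes)."""
       def __init__(self, r: float, b: float):
           self.r, self.b = r, b
           self.tokens = b
           self.last = time.monotonic()

       def conforms(self, size: int) -> bool:
           now = time.monotonic()
           self.tokens = min(self.b,
                             self.tokens + (now - self.last) * self.r)
           self.last = now
           if size <= self.tokens:
               self.tokens -= size
               return True
           return False

   # One policer per reserved flow sharing the PVC.
   policers = {"flowA": TokenBucket(r=1e5, b=1e4),
               "flowB": TokenBucket(r=5e4, b=5e3)}

   def classify(flow_id: str, pkt_size: int) -> str:
       tb = policers.get(flow_id)
       if tb and tb.conforms(pkt_size):
           return "qos-queue"         # scheduled ahead of best effort
       return "best-effort-queue"     # non-conforming traffic is demoted

   print(classify("flowA", 1500))        # qos-queue
   print(classify("unreserved", 1500))   # best-effort-queue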
2.1.1.2 Switched Virtual Circuits (SVCs)

SVCs allow paths in the ATM network to be set up "on demand". This
allows flexibility in the use of RSVP over ATM, along with some added
complexity. Parallel VCs can be set up to allow best-effort and better
service class paths through the network. The cost and time to set up
SVCs can impact their use. For example, it may be better to initially
route QoS traffic over existing VCs until an SVC with the desired QoS
has been set up. Scaling issues can come into play if a single VC is
used per RSVP flow. An RSVP flow is a data flow from one or more
sources to one or more receivers as defined by an RSVP filter
specification. The number of VCs in any ATM device is limited, so the
number of RSVP flows that can be handled by a device would be strictly
limited to the number of VCs available, if we assume one VC per flow.

2.1.1.3 Point to MultiPoint

In order to provide QoS for IP multicast, an important feature of RSVP,
data flows must be distributed to multiple destinations from a given
source. Point-to-multipoint VCs provide such a mechanism. It is
important to map the actions of IP multicasting and RSVP (e.g. IGMP
JOIN/LEAVE and RSVP RESV/RESV TEAR) to add party and drop party
functions for ATM. Point-to-multipoint VCs as defined in UNI 3.x have a
single service class for all destinations. This is contrary to the RSVP
"heterogeneous receiver" concept. It is possible to set up a different
VC to each receiver requesting a different QoS, but this again can run
into scaling and resource problems when managing multiple VCs on the
same interface to different destinations.

RSVP sends messages both up and down the multicast distribution tree.
In the case of a large ATM cloud, this could result in an RSVP message
implosion at an RSVP traffic source with many receivers.

ATM 4.0 expands on point-to-multipoint VCs by adding a Leaf Initiated
Join (LIJ) capability. LIJ allows an ATM end point to join an existing
point-to-multipoint VC without necessarily contacting the source of the
VC. This will reduce the burden on the ATM source point for setting up
new branches and more closely matches the receiver-based model of RSVP
and IP multicast. However, many of the same scaling issues exist, and
the new branches added to a point-to-multipoint VC would use the same
QoS as existing branches.

2.1.1.4 Multicast Servers

IP-over-ATM has the concept of a multicast server or reflector that can
accept cells from multiple senders and send them via a
point-to-multipoint VC to a set of receivers. This moves the VC scaling
issues noted previously for point-to-multipoint VCs to the multicast
server. Additionally, the multicast server will need to know how to
interpret RSVP packets so it will be able to provide VCs of the
appropriate QoS for the flows.

2.1.2 Hop-by-Hop vs. Short Cut

If the ATM "cloud" is made up of a number of logical IP subnets (LISs),
then it is possible to use "short cuts" from a node on one LIS directly
to a node on another LIS, avoiding router hops between the LISs. NHRP
[4] is one mechanism for determining the ATM address of the egress
point on the ATM network given a destination IP address. It is a topic
for further study to determine if significant benefit is achieved from
short cut routes vs. the extra state required.
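As a sketch only, the decision at an ingress device might look like the
following, assuming a simple NHRP-style cache (the cache contents and
helper are hypothetical; the actual NHRP exchanges are defined in [4]):

   from typing import Optional

   # Toy stand-in for an NHRP client cache: destination IP -> ATM
   # address of the egress point. Entries are invented placeholders.
   NHRP_CACHE = {"10.1.2.3": "atm-addr-of-egress"}

   def atm_destination(dest_ip: str, routed_next_hop_atm: str) -> str:
       shortcut: Optional[str] = NHRP_CACHE.get(dest_ip)
       if shortcut is not None:
           return shortcut           # short-cut VC across LIS boundaries
       return routed_next_hop_atm    # hop-by-hop VC within the LIS

   print(atm_destination("10.1.2.3", "atm-addr-of-router"))  # short cut
   print(atm_destination("10.9.9.9", "atm-addr-of-router"))  # hop-by-hop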
2.1.3 Future Models

ATM is constantly evolving. If we assume that RSVP and IntServ
applications are going to be widespread, it makes sense to consider
changes to ATM that would improve the operation of RSVP and IntServ
over ATM. Similarly, the RSVP protocol and IntServ models will continue
to evolve, and changes that affect them should also be considered. The
following are a few ideas that have been discussed that would make the
integration of the IntServ models and RSVP easier or more complete.
They are presented here to encourage continued development of ideas
that can aid in the integration of RSVP, IntServ, and ATM.

2.1.3.1 Heterogeneous Point-to-MultiPoint

The IntServ models and RSVP support the idea of "heterogeneous
receivers"; i.e., not all receivers of a particular multicast flow are
required to ask for the same QoS from the network.

The most important scenario that can utilize this feature occurs when
some receivers in an RSVP session ask for a specific QoS while others
receive the flow with best-effort service. In some cases where there
are multiple senders on a shared-reservation flow (e.g., an audio
conference), an individual receiver only needs to reserve enough
resources to receive one sender at a time. However, other receivers may
elect to reserve more resources, perhaps to allow for some amount of
"over-speaking" or in order to record the conference (post processing
during playback can separate the senders by their source addresses).

In order to prevent denial-of-service attacks via reservations, the
service models do not allow the service elements to simply drop
non-conforming packets. For example, the Controlled Load service model
[7] assigns non-conforming packets to best-effort status (which may
result in packet drops if there is congestion).

Emulating these behaviors over an ATM network is problematic and needs
to be studied. If a single maximum QoS is used over a
point-to-multipoint VC, resources could be wasted if cells are sent
over certain links where the reassembled packets will eventually be
dropped. In addition, the "maximum QoS" may actually cause a
degradation in service to the best-effort branches.

The term "variegated VC" has been coined to describe a
point-to-multipoint VC that allows a different QoS on each branch. This
approach seems to match the spirit of the Integrated Services and RSVP
models, but some thought has to be put into the cell drop strategy when
traversing from a "bigger" branch to a "smaller" one. The "best-effort
for non-conforming packets" behavior must also be retained. Early
Packet Discard (EPD) schemes must be used so that all the cells for a
given packet can be discarded at the same time, rather than discarding
only a few cells from several packets, which makes all the packets
useless to the receivers.

2.1.3.2 Lightweight Signalling

Q.2931 signalling is very complete and carries with it a significant
burden for signalling in all possible public and private connections.
It might be worth investigating a lighter-weight signalling mechanism
for faster connection setup in private networks.

2.1.3.3 QoS Renegotiation

Another change that would help RSVP over ATM is the ability to request
a different QoS for an active VC. This would eliminate the need to set
up and tear down VCs as the QoS changed.
RSVP allows receivers to change their reservations and senders to
change their traffic descriptors dynamically. This, along with the
merging of reservations, can create a situation where the QoS needs of
a VC can change. Allowing changes to the QoS of an existing VC would
allow these features to work without creating a new VC. In the ITU-T
ATM specifications [??REF??], some cell rates can be renegotiated. It
is unclear if this is sufficient for the QoS renegotiation needs of the
IntServ models.

2.1.3.4 Group Addressing

The model of one-to-many communications provided by point-to-multipoint
VCs does not really match the many-to-many communications provided by
IP multicasting. A scalable mapping from IP multicast addresses to an
ATM "group address" can address this problem.

2.1.3.5 Label Switching

The MultiProtocol Label Switching (MPLS) working group is discussing
methods for optimizing the use of ATM and other switched networks for
IP by encapsulating the data with a header that is used by the interior
switches to achieve faster forwarding lookups. [22] discusses a
framework for this work. It is unclear how this work will affect
IntServ and RSVP over label switched networks, but there may be some
interactions.

2.1.4 QoS Routing

RSVP is explicitly not a routing protocol. However, since it conveys
QoS information, it may prove to be a valuable input to a routing
protocol that could make path determinations based on QoS and network
load information. In other words, instead of asking for just the IP
next hop for a given destination address, it might be worthwhile for
RSVP to provide information on the QoS needs of the flow if routing has
the ability to use this information in order to determine a route.
Other forms of QoS routing have existed in the past, such as using the
IP TOS and Precedence bits to select a path through the network. Some
have discussed using these same bits to select one of a set of parallel
ATM VCs as a form of QoS routing. The work in this area is just
starting, and there are numerous issues to consider. [23], as part of
the work of the QoSR working group, frames the issues for QoS routing
in the Internet.

2.2 Reliance on Unicast and Multicast Routing

RSVP was designed to support both unicast and IP multicast
applications. This means that RSVP needs to work closely with multicast
and unicast routing. Unicast routing over ATM has been addressed in
[10] and [11]. MARS [5] provides multicast address resolution for IP
over ATM networks, an important part of the solution for multicast, but
it still relies on multicast routing protocols to connect multicast
senders and receivers on different subnets.

2.3 Aggregation of Flows

Some of the scaling issues noted in previous sections can be addressed
by aggregating several RSVP flows over a single VC, if the destinations
of the VC match for all the flows being aggregated. However, this
causes considerable complexity in the management of VCs and in the
scheduling of packets within each VC at the root point of the VC. Note
that the rescheduling of flows within a VC is not possible in the
switches in the core of the ATM network. Additionally, virtual paths
(VPs) could be used for aggregating multiple VCs.
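For illustration, one plausible way to size such an aggregate VC is to
sum the token-bucket parameters of the aggregated reservations. The
conservative summing policy below is an assumption of this sketch, not
a recommendation of this document:

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class TSpec:
       rate: float     # token rate r, bytes/sec
       bucket: float   # bucket depth b, bytes

   def size_aggregate_vc(tspecs: List[TSpec]) -> TSpec:
       # Rates add directly; bucket depths also add in the worst case
       # where all aggregated flows burst at the same time.
       return TSpec(rate=sum(t.rate for t in tspecs),
                    bucket=sum(t.bucket for t in tspecs))

   print(size_aggregate_vc([TSpec(1e5, 8e3), TSpec(2.5e4, 2e3)]))
   # TSpec(rate=125000.0, bucket=10000.0)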
2.4 Mapping QoS Parameters

The mapping of QoS parameters from the IntServ models to the ATM
service classes is an important issue in making RSVP and IntServ work
over ATM. [14] addresses these issues thoroughly for the Controlled
Load and Guaranteed Service models. An additional issue is that while
some guidelines can be developed for mapping the parameters of a given
service model to the traffic descriptors of an ATM traffic class,
implementation variables, policy, and cost factors can make strict
standards problematic.

2.5 Directly Connected ATM Hosts

It is obvious that the needs of hosts that are directly connected to
ATM networks must be considered for RSVP and IntServ over ATM.
Functionality for RSVP over ATM must not assume that an ATM host has
all the functionality of a router, but such things as MARS and NHRP
clients might be worthwhile features.

2.6 Accounting and Policy Issues

Since RSVP and IntServ create classes of preferential service, some
form of administrative control and/or cost allocation is needed to
control abuse. There are certain types of policies specific to ATM and
IP over ATM that need to be studied to determine how they interoperate
with the IP policies being developed. A typical IP policy would be that
only certain users are allowed to make reservations. This policy would
translate well to IP over ATM due to the similarity to the mechanisms
used for Call Admission Control (CAC). There may be a need for policies
specific to IP over ATM. For example, since signalling costs in ATM are
high relative to IP, an IP over ATM specific policy might restrict the
ability to change the prevailing QoS in a VC. If VCs are relatively
scarce, there also might be specific accounting costs in creating a new
VC. The work so far has been preliminary, and much work remains to be
done. The policy mechanisms outlined in [12] and [13] provide the basis
for implementing policies for RSVP and IntServ over any media, not just
ATM.

3. RSVP VC Management

This section goes into more detail on the issues related to the
management of SVCs for RSVP and IntServ.

3.1 VC Initiation

There is an apparent mismatch between RSVP and ATM. Specifically, RSVP
control is receiver-oriented and ATM control is sender-oriented. This
initially may seem like a major issue, but really is not. While RSVP
reservation (RESV) requests are generated at the receiver, actual
allocation of resources takes place at the subnet sender. For data
flows, this means that subnet senders will establish all QoS VCs and
the subnet receiver must be able to accept incoming QoS VCs. These
restrictions are consistent with RSVP version 1 processing rules and
allow senders to use different flow-to-VC mappings and even different
QoS renegotiation techniques without interoperability problems. All
RSVP over ATM approaches that have VCs initiated and controlled by the
subnet senders will interoperate.

Receivers' use of the reverse path provided by point-to-point VCs is
for further study. There are two related issues. The first is that use
of the reverse path requires the VC initiator to set appropriate
reverse path QoS parameters. The second issue is that reverse paths are
not available with point-to-multipoint VCs, so reverse paths could only
be used to support unicast RSVP reservations.
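The following sketch illustrates the sender-initiated behavior
described above: RESV processing at the subnet sender triggers VC
establishment, while pure refreshes do not. The bookkeeping and the
signalling helper are invented for this example.

   from typing import Dict, Tuple

   open_vcs: Dict[str, Tuple[object, str]] = {}   # flow id -> (VC, QoS)

   def signal_vc_setup(flow_id: str, qos: str) -> object:
       print(f"UNI SETUP: VC for {flow_id} at {qos}")   # stub signalling
       return object()

   def on_resv(flow_id: str, qos: str) -> None:
       existing = open_vcs.get(flow_id)
       if existing and existing[1] == qos:
           return   # refresh only: no new resource request (section 1.2)
       # New or changed reservation: the subnet sender opens the QoS VC.
       # (A QoS change would really follow the procedure of section
       # 3.3.6.)
       open_vcs[flow_id] = (signal_vc_setup(flow_id, qos), qos)

   on_resv("flow1", "CBR 1 Mb/s")   # triggers setup
   on_resv("flow1", "CBR 1 Mb/s")   # refresh: no signalling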
3.2 Policy

RSVP allows for local policy control [13,14] as well as admission
control. Thus a user can request a reservation with a specific QoS and
with a policy object that, for example, offers to pay for the
additional costs of setting up a new reservation. The policy module at
the entry to a provider can decide how to satisfy that request: either
by merging the request with an existing reservation or by creating a
new reservation for this (and perhaps other) users. This policy can be
on a per user-provider basis, where a user and a provider have an
agreement on the type of service offered, or on a provider-provider
basis, where two providers have such an agreement. With the ability to
do local policy control, providers can offer services best suited to
their own resources and their customers' needs. Policy is expected to
be provided as a generic API which will return values indicating what
action should be taken for a specific reservation request. The API is
expected to have access to the reservation tables with the QoS for each
reservation. The RSVP Policy and Integrity objects will be passed to
the policy() call. Four possible return values are expected: the
request can be rejected; it can be accepted as is; it can be accepted
but at a different QoS; or it can cause a change of QoS of an existing
reservation. The information returned from this call is then used to
call the admission control interface.
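A minimal sketch of such a policy API, directly mirroring the four
return values described above (the enum names and the placeholder
decision logic are assumptions of this example):

   from enum import Enum, auto

   class PolicyResult(Enum):
       REJECT = auto()
       ACCEPT_AS_IS = auto()
       ACCEPT_DIFFERENT_QOS = auto()
       CHANGE_EXISTING_QOS = auto()

   def policy(policy_obj, integrity_obj, requested_qos,
              reservation_table):
       # Placeholder: a real module would consult user-provider and
       # provider-provider agreements and the reservation table.
       if policy_obj is None:
           return PolicyResult.REJECT, None
       return PolicyResult.ACCEPT_AS_IS, requested_qos

   result, qos = policy("pays-for-setup", "verified", "CL 1 Mb/s", {})
   if result is not PolicyResult.REJECT:
       print("pass", qos, "to the admission control interface")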
3.3 Data VC Management

Any RSVP over ATM implementation must map RSVP and RSVP-associated data
flows to ATM Virtual Circuits (VCs). LAN Emulation [17], Classical IP
[10] and, more recently, NHRP [4] discuss mapping IP traffic onto ATM
SVCs, but they only cover a single QoS class, i.e., best effort
traffic. When QoS is introduced, VC mapping must be revisited. For RSVP
controlled QoS flows, one issue is which VCs to use for the QoS data
flows.

In the Classical IP over ATM and current NHRP models, a single
point-to-point VC is used for all traffic between two ATM attached
hosts (routers and end-stations). It is likely that such a single VC
will not be adequate or optimal when supporting data flows with
multiple QoS types. RSVP's basic purpose is to install support for
flows with multiple QoS types, so it is essential for any RSVP over ATM
solution to address VC usage for QoS data flows.

RSVP reservation styles will also need to be taken into account in any
VC usage strategy.

This section describes issues and methods for management of VCs
associated with QoS data flows. When establishing and maintaining VCs,
the subnet sender will need to deal with several complicating factors,
including multiple QoS reservations, requests for QoS changes, ATM
short-cuts, and several multicast specific issues. The multicast
specific issues result from the nature of ATM connections. The key
multicast related issues are heterogeneity, data distribution, receiver
transitions, and end-point identification.

3.3.1 Reservation to VC Mapping

There are various approaches available for mapping reservations onto
VCs. A distinguishing attribute of all approaches is how reservations
are combined onto individual VCs. When mapping reservations onto VCs,
individual VCs can be used to support a single reservation, or
reservations can be combined with others onto "aggregate" VCs.

In the first case, each reservation will be supported by one or more
VCs. Multicast reservation requests may translate into the setup of
multiple VCs, as is described in more detail in section 3.3.2. Unicast
reservation requests will always translate into the setup of a single
QoS VC. In both cases, each VC will only carry data associated with a
single reservation. The greatest benefit of this approach is ease of
implementation, but it comes at the cost of increased (VC) setup time
and the consumption of a greater number of VCs and associated
resources.

We refer to the other case, when reservations are combined, as the
"aggregation" model. With this model, large VCs could be set up between
IP routers and hosts in an ATM network, and traffic from multiple
sources over multiple RSVP sessions might be multiplexed on the same
VC. The aggregation model, its advantages, and its open issues are
described in section 3.3.2.4.

3.3.2 Heterogeneity

Heterogeneity occurs when receivers request different qualities of
service within a single session. This means that the amount of
requested resources differs on a per next hop basis. A related type of
heterogeneity occurs due to best-effort receivers. In any IP multicast
group, it is possible that some receivers will request QoS (via RSVP)
and some receivers will not. In shared media, like Ethernet, receivers
that have not requested resources can typically be given identical
service to those that have, without complications. This is not the case
with ATM. In ATM networks, any additional end-points of a VC must be
explicitly added. There may be costs associated with adding the
best-effort receiver, and there might not be adequate resources. An
RSVP over ATM solution will need to support heterogeneous receivers
even though ATM does not currently provide such support directly.

RSVP heterogeneity is supported over ATM in the way RSVP reservations
are mapped into ATM VCs. There are four alternative approaches to this
mapping. Section 3.3.2.1 examines the multiple VCs per RSVP reservation
(or full heterogeneity) model, where a single reservation can be
forwarded onto several VCs, each with a different QoS. Section 3.3.2.2
presents a limited heterogeneity model, where exactly one QoS VC is
used along with a best effort VC. Section 3.3.2.3 examines the VC per
RSVP reservation (or homogeneous) model, where each RSVP reservation is
mapped to a single ATM VC.
Section 3.3.2.4 describes the aggregation model, allowing aggregation
of multiple RSVP reservations into a single VC. Further study is being
done on the aggregation model.

3.3.2.1 Full Heterogeneity Model

RSVP supports heterogeneous QoS, meaning that different receivers of
the same multicast group can request a different QoS. But importantly,
some receivers might have no reservation at all and want to receive the
traffic on a best effort service basis. The IP model allows receivers
to join a multicast group at any time on a best effort basis, and it is
important that ATM as part of the Internet continue to provide this
service. We define the "full heterogeneity" model as providing a
separate VC for each distinct QoS for a multicast session, including
best effort and one or more qualities of service.

Note that while full heterogeneity gives users exactly what they
request, it requires more resources of the network than other possible
approaches. The exact amount of bandwidth used for duplicate traffic
depends on the network topology and group membership.

3.3.2.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the
receivers of a multicast session are limited to using either best
effort service or a single alternate quality of service. The alternate
QoS can be chosen either by higher level protocols or by dynamic
renegotiation of QoS as described below.

In order to support limited heterogeneity, each ATM edge device
participating in a session would need at most two VCs. One VC would be
a point-to-multipoint best effort service VC and would serve all best
effort service IP destinations for this RSVP session. The other VC
would be a point-to-multipoint VC with QoS and would serve all IP
destinations for this RSVP session that have an RSVP reservation
established.

As with full heterogeneity, a disadvantage of the limited heterogeneity
scheme is that each packet will need to be duplicated at the network
layer and one copy sent into each of the two VCs. Again, the exact
amount of excess traffic will depend on the network topology and group
membership. If any of the existing QoS VC end-points cannot upgrade to
the new QoS, then the new reservation fails, even though the resources
exist for the new receiver.
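A sketch of the data path under limited heterogeneity, using toy VC
handles invented for this example: each packet is duplicated at the
network layer, with one copy sent into each of the two
point-to-multipoint VCs.

   class VC:
       """Toy handle; a real one would wrap UNI signalling state."""
       def __init__(self, name: str):
           self.name = name
       def send(self, pkt: bytes) -> None:
           print(f"{self.name} <- {pkt!r}")

   best_effort_vc = VC("p2mp-best-effort")  # all best-effort receivers
   qos_vc = VC("p2mp-qos")                  # receivers with reservations

   def forward(pkt: bytes) -> None:
       # Network-layer duplication: one copy into each of the two VCs.
       best_effort_vc.send(pkt)
       qos_vc.send(pkt)

   forward(b"rtp-payload")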
3.3.2.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a
multicast session use a single quality of service VC. Best-effort
receivers also use the single RSVP triggered QoS VC. The single VC can
be a point-to-point or point-to-multipoint VC as appropriate. The QoS
VC is sized to provide the maximum resources requested by all RSVP
next-hops.

This model matches the way the current RSVP specification addresses
heterogeneous requests. The current processing rules and traffic
control interface describe a model where the largest requested
reservation for a specific outgoing interface is used in resource
allocation, and traffic is transmitted at the higher rate to all
next-hops. This approach would be the simplest method for RSVP over ATM
implementations.

While this approach is simple to implement, providing better than
best-effort service may actually be the opposite of what the user
desires. There may be charges incurred or resources that are wrongfully
allocated. There are two specific problems. The first problem is that a
user making a small or no reservation would share the resources of a
QoS VC without making (and perhaps paying for) a matching RSVP
reservation. The second problem is that a receiver may not receive any
data. This may occur when there are insufficient resources to add a
receiver. The rejected user would not be added to the single VC and
would not even receive traffic on a best effort basis.

Not sending data traffic to best-effort receivers because of another
receiver's RSVP request is clearly unacceptable. The previously
described limited heterogeneous model ensures that data is always sent
to both QoS and best-effort receivers, but it does so by requiring
replication of data at the sender in all cases. It is possible to
extend the homogeneous model to both ensure that data is always sent to
best-effort receivers and also to avoid replication in the normal case.
This extension is to add special handling for the case where a
best-effort receiver cannot be added to the QoS VC. In this case, a
best effort VC can be established to any receivers that could not be
added to the QoS VC. Only in this special error case would senders be
required to replicate data. We define this approach as the "modified
homogeneous" model.

3.3.2.4 Aggregation

The last scheme is the multiple RSVP reservations per VC (or
aggregation) model. With this model, large VCs could be set up between
IP routers and hosts in an ATM network. These VCs could be managed much
like IP Integrated Services (IIS) point-to-point links (e.g. T-1, DS-3)
are managed now. Traffic from multiple sources over multiple RSVP
sessions might be multiplexed on the same VC. This approach has a
number of advantages. First, there is typically no signalling latency,
as VCs would be in existence when the traffic started flowing, so no
time is wasted in setting up VCs. Second, the heterogeneity problem
(section 3.3.2) over ATM has been reduced to a solved problem. Finally,
the dynamic QoS problem (section 3.3.6) for ATM has also been reduced
to a solved problem. This approach can be used with point-to-point and
point-to-multipoint VCs. The problem with the aggregation approach is
that the choice of what QoS to use for which of the VCs is difficult,
but it is made easier since the VCs can be changed as needed. The
advantages of this scheme make this approach an item for high priority
study.

3.3.3 Multicast End-Point Identification

Implementations must be able to identify ATM end-points participating
in an IP multicast group. The ATM end-points will be IP multicast
receivers and/or next-hops. Both QoS and best-effort end-points must be
identified. RSVP next-hop information will provide QoS end-points, but
not best-effort end-points. Another issue is identifying end-points of
multicast traffic handled by non-RSVP capable next-hops. In this case,
a PATH message travels through a non-RSVP egress router on the way to
the next hop RSVP node. When the next hop RSVP node sends a RESV
message, it may arrive at the source over a different route than the
data is using. The source will get the RESV message, but will not know
which egress router needs the QoS. For unicast sessions, there is no
problem, since the ATM end-point will be the IP next-hop router.
Unfortunately, multicast routing may not be able to uniquely identify
the IP next-hop router, so it is possible that a multicast end-point
cannot be identified.

In the most common case, MARS will be used to identify all end-points
of a multicast group. In the router-to-router case, a multicast routing
protocol may provide all next-hops for a particular multicast group. In
either case, RSVP over ATM implementations must obtain a full list of
end-points, both QoS and non-QoS, using the appropriate mechanisms. The
full list can be compared against the RSVP identified end-points to
determine the list of best-effort receivers. There is no
straightforward solution to uniquely identifying end-points of
multicast traffic handled by non-RSVP next hops. The preferred solution
is to use multicast routing protocols that support unique end-point
identification. In cases where such routing protocols are unavailable,
all IP routers that will be used to support RSVP over ATM should
support RSVP. To ensure proper behavior, implementations should, by
default, only establish RSVP-initiated VCs to RSVP capable end-points.

3.3.4 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM. In
one model, senders establish point-to-multipoint VCs to all ATM
attached destinations, and data is then sent over these VCs. This model
is often called "multicast mesh" or "VC mesh" mode distribution. In the
second model, senders send data over point-to-point VCs to a central
point, and the central point relays the data onto point-to-multipoint
VCs that have been established to all receivers of the IP multicast
group. This model is often referred to as "multicast server" mode
distribution. RSVP over ATM solutions must ensure that IP multicast
data is distributed with appropriate QoS.

In the Classical IP context, multicast server support is provided via
MARS [5]. MARS does not currently provide a way to communicate QoS
requirements to a MARS multicast server. Therefore, RSVP over ATM
implementations must, by default, support "mesh-mode" distribution for
RSVP controlled multicast flows. When using multicast servers that do
not support QoS requests, a sender must set the service, not global,
break bit(s).

3.3.5 Receiver Transitions

When setting up a point-to-multipoint VC, there will be a time when
some receivers have been added to a QoS VC and some have not. During
such transition times, it is possible to start sending data on the
newly established VC. The issue is when to start sending data on the
new VC. If data is sent on both the new VC and the old VC, then data
will be delivered with proper QoS to some receivers and with the old
QoS to all receivers. This means the QoS receivers would get duplicate
data. If data is sent just on the new QoS VC, the receivers that have
not yet been added will lose information. So, the issue comes down to
whether to send to both the old and new VCs, or to send to just one of
the VCs. In one case, duplicate information will be received; in the
other, some information may not be received.

This issue needs to be considered for three cases: when establishing
the first QoS VC, when establishing a VC to support a QoS change, and
when adding a new end-point to an already established QoS VC.

The first two cases are very similar.
In both, it is possible to send data on the partially completed new VC,
and the issue of duplicate versus lost information is the same. The
last case is when an end-point must be added to an existing QoS VC. In
this case, the end-point must be both added to the QoS VC and dropped
from a best-effort VC. The issue is which to do first. If the add is
requested first, then the end-point may get duplicate information. If
the drop is requested first, then the end-point may lose information.

In order to ensure predictable behavior and delivery of data to all
receivers, data can only be sent on a new VC once all parties have been
added. This will ensure that all data is only delivered once to all
receivers. This approach does not quite apply to the last case. In the
last case, the add operation should be completed first, then the drop
operation. This means that receivers must be prepared to receive some
duplicate packets at times of QoS setup.

3.3.6 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources
that are requested may change at any time. There are several common
reasons for a change of reservation QoS. First, an existing receiver
can request a new larger (or smaller) QoS. Second, a sender may change
its traffic specification (TSpec), which can trigger a change in the
reservation requests of the receivers. Third, a new sender can start
sending to a multicast group with a larger traffic specification than
existing senders, triggering larger reservations. Finally, a new
receiver can make a reservation that is larger than existing
reservations. If the limited heterogeneity model is being used and the
merge node for the larger reservation is an ATM edge device, a new
larger reservation must be set up across the ATM network. Since ATM
service, as currently defined in UNI 3.x and UNI 4.0, does not allow
renegotiating the QoS of a VC, dynamically changing the reservation
means creating a new VC with the new QoS and tearing down an
established VC. Tearing down a VC and setting up a new VC in ATM are
complex operations that involve a non-trivial amount of processor time,
and may have a substantial latency. There are several options for
dealing with this mismatch in service. A specific approach will need to
be a part of any RSVP over ATM solution.

The default method for supporting changes in RSVP reservations is to
attempt to replace an existing VC with a new, appropriately sized VC.
During setup of the replacement VC, the old VC must be left in place
unmodified. The old VC is left unmodified to minimize interruption of
QoS data delivery. Once the replacement VC is established, data
transmission is shifted to the new VC, and the old VC is then closed.
If setup of the replacement VC fails, then the old QoS VC should
continue to be used. In this failure case, when the new reservation is
greater than the old reservation, the reservation request should be
answered with an error; when the new reservation is less than the old
reservation, the request should be treated as if the modification was
successful. While leaving the larger allocation in place is suboptimal,
it maximizes delivery of service to the user. Implementations should
retry replacing the too-large VC after some appropriate elapsed time.
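A sketch of this default replacement procedure (make the new VC before
breaking the old one); the signalling helpers are stubs invented for
this example:

   class SetupFailed(Exception):
       pass

   def setup_vc(flow: str, qos: str) -> str:
       return f"vc({flow},{qos})"           # stub UNI signalling
   def shift_traffic(flow: str, vc: str) -> None:
       print("remap", flow, "->", vc)
   def teardown(vc: str) -> None:
       print("release", vc)

   def replace_vc(flow: str, old_vc: str, new_qos: str) -> str:
       # Build the replacement first; the old VC keeps carrying data so
       # QoS delivery is not interrupted during setup.
       try:
           new_vc = setup_vc(flow, new_qos)
       except SetupFailed:
           # Keep the old QoS VC; signal an RSVP error only if the new
           # reservation was larger, and retry later.
           return old_vc
       shift_traffic(flow, new_vc)
       teardown(old_vc)
       return new_vc

   print(replace_vc("flow1", "vc(flow1,old)", "CBR 2 Mb/s"))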
One additional issue is that only one QoS change can be processed at a
time per reservation. If the (RSVP) requested QoS is changed while the
first replacement VC is still being set up, then the replacement VC is
released and the whole VC replacement process is restarted. To limit
the number of changes and to avoid excessive signalling load,
implementations may limit the number of changes that will be processed
in a given period. One implementation approach would have each ATM edge
device configured with a time parameter T (which can change over time)
that gives the minimum amount of time the edge device will wait between
successive changes of the QoS of a particular VC. Thus if the QoS of a
VC is changed at time t, all messages that would change the QoS of that
VC that arrive before time t+T would be queued. If several messages
changing the QoS of a VC arrive during the interval, redundant messages
can be discarded. At time t+T, the remaining change(s) of QoS, if any,
can be executed. This timer approach would apply more generally to any
network structure, and might be worthwhile to incorporate into RSVP.
The sequence of events for a single VC would be:

- Wait if timer is active
- Establish VC with new QoS
- Remap data traffic to new VC
- Tear down old VC
- Activate timer
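A sketch of this timer-based damping, with invented bookkeeping:
requests arriving within T of the last change are queued, redundant
earlier requests are overwritten, and a caller is assumed to invoke
timer_expired at time t+T.

   import time

   T = 30.0                  # minimum spacing between changes, seconds
   last_change: dict = {}    # VC id -> time of last executed change
   pending: dict = {}        # VC id -> most recent queued QoS request

   def apply_change(vc: str, qos: str, now: float) -> None:
       last_change[vc] = now
       print(f"change {vc} to {qos}")   # would drive the VC replacement

   def request_qos_change(vc: str, qos: str) -> None:
       now = time.monotonic()
       if now - last_change.get(vc, -T) < T:
           pending[vc] = qos   # queue; earlier queued requests discarded
           return
       apply_change(vc, qos, now)

   def timer_expired(vc: str) -> None:
       # At time t+T: execute the surviving queued change, if any.
       qos = pending.pop(vc, None)
       if qos is not None:
           apply_change(vc, qos, time.monotonic())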
There is an interesting interaction between heterogeneous reservations
and dynamic QoS. In the case where a RESV message is received from a
new next-hop and the requested resources are larger than any existing
reservation, both dynamic QoS and heterogeneity need to be addressed. A
key issue is whether to first add the new next-hop or to change to the
new QoS. This is a fairly straightforward special case. Since the
older, smaller reservation does not support the new next-hop, the
dynamic QoS process should be initiated first. Since the new QoS is
only needed by the new next-hop, it should be the first end-point of
the new VC. This way, signalling is minimized when the setup to the new
next-hop fails.

3.3.7 Short-Cuts

Short-cuts [4] allow ATM attached routers and hosts to directly
establish point-to-point VCs across LIS boundaries, i.e., the VC
end-points are on different IP subnets. The ability for short-cuts and
RSVP to interoperate has been raised as a general question. The area of
concern is the ability to handle asymmetric short-cuts, specifically
how RSVP can handle the case where a downstream short-cut does not have
a matching upstream short-cut. In this case, PATH and RESV messages
follow different paths.

Examination of RSVP shows that the protocol already includes mechanisms
that will support short-cuts. The mechanism is the same one used to
support RESV messages arriving at the wrong router and the wrong
interface. The key aspect of this mechanism is that RSVP only processes
messages that arrive on the proper interface, and forwards messages
that arrive on the wrong interface. The proper interface is indicated
in the NHOP object of the message. So, existing RSVP mechanisms will
support asymmetric short-cuts. The short-cut model of VC establishment
still poses several issues when running with RSVP. The major issues are
dealing with established best-effort short-cuts, when to establish
short-cuts, and QoS only short-cuts. These issues will need to be
addressed by RSVP implementations.

The key issue to be addressed by any RSVP over ATM solution is when to
establish a short-cut for a QoS data flow. The default behavior is to
simply follow best-effort traffic. When a short-cut has been
established for best-effort traffic to a destination or next-hop, that
same end-point should be used when setting up RSVP triggered VCs for
QoS traffic to the same destination or next-hop. This will happen
naturally when PATH messages are forwarded over the best-effort
short-cut. Note that with this approach, if best-effort short-cuts are
never established, RSVP triggered QoS short-cuts will also never be
established.

3.3.8 VC Teardown

RSVP can identify, from either explicit messages or timeouts, when a
data VC is no longer needed. Therefore, data VCs set up to support RSVP
controlled flows should only be released at the direction of RSVP. VCs
must not be timed out due to inactivity by either the VC initiator or
the VC receiver. This conflicts with VCs timing out as described in RFC
1755 [11], section 3.4 on VC Teardown. RFC 1755 recommends tearing down
a VC that is inactive for a certain length of time; twenty minutes is
recommended. This timeout is typically implemented at both the VC
initiator and the VC receiver. However, section 3.1 of the update to
RFC 1755 [11] states that inactivity timers must not be used at the VC
receiver.

When this timeout occurs for an RSVP initiated VC, a valid VC with QoS
will be torn down unexpectedly. While this behavior is acceptable for
best-effort traffic, it is important that RSVP controlled VCs not be
torn down. If there is no choice about the VC being torn down, the RSVP
daemon must be notified, so a reservation failure message can be sent.

For VCs initiated at the request of RSVP, the configurable inactivity
timer mentioned in [11] must be set to "infinite". Setting the
inactivity timer value at the VC initiator should not be problematic,
since the proper value can be relayed internally at the originator.
Setting the inactivity timer at the VC receiver is more difficult, and
would require some mechanism to signal that an incoming VC was RSVP
initiated. To avoid this complexity and to conform to the update to RFC
1755 [11], implementations must not use an inactivity timer to clear
received connections.

3.4 RSVP Control Management

One last important issue is providing a data path for the RSVP messages
themselves. There are two main types of messages in RSVP, PATH and
RESV. PATH messages are sent to a multicast address, while RESV
messages are sent to a unicast address. Other RSVP messages are handled
similarly to either PATH or RESV (handling can be slightly more
complicated for RERR messages). So, ATM VCs used for RSVP signalling
messages need to provide both unicast and multicast functionality.
There are several different approaches for how to assign VCs to use for
RSVP signalling messages.

The main approaches are:

- use same VC as data
- single VC per session
- single point-to-multipoint VC multiplexed among sessions
- multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to
assign VCs for RSVP signalling. One issue is the number of additional
VCs needed for RSVP signalling.
3.3.8 VC Teardown

RSVP can identify, from either explicit messages or timeouts,
when a data VC is no longer needed. Therefore, data VCs set up to
support RSVP controlled flows should only be released at the
direction of RSVP. VCs must not be timed out due to inactivity by
either the VC initiator or the VC receiver. This conflicts with
VCs timing out as described in RFC 1755 [11], section 3.4 on VC
Teardown. RFC 1755 recommends tearing down a VC that is inactive
for a certain length of time; twenty minutes is recommended. This
timeout is typically implemented at both the VC initiator and the
VC receiver, although section 3.1 of the update to RFC 1755
states that inactivity timers must not be used at the VC
receiver.

When this timeout occurs for an RSVP initiated VC, a valid VC
with QoS will be torn down unexpectedly. While this behavior is
acceptable for best-effort traffic, it is important that RSVP
controlled VCs not be torn down. If there is no choice about the
VC being torn down, the RSVP daemon must be notified, so a
reservation failure message can be sent.

For VCs initiated at the request of RSVP, the configurable
inactivity timer mentioned in [11] must be set to "infinite".
Setting the inactivity timer value at the VC initiator should not
be problematic, since the proper value can be relayed internally
at the originator. Setting the inactivity timer at the VC
receiver is more difficult, and would require some mechanism to
signal that an incoming VC was RSVP initiated. To avoid this
complexity and to conform to the update to RFC 1755,
implementations must not use an inactivity timer to clear
received connections.
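The initiator-side rule can be sketched as follows (Python; the
VC object, its inactivity_timeout field, and the rsvp_daemon
interface are hypothetical illustrations, since the actual
mechanism is implementation specific):

   INFINITE = float("inf")        # the "infinite" timer value of [11]
   RECOMMENDED_TIMEOUT = 20 * 60  # RFC 1755's twenty minutes

   def open_data_vc(signalling, qos, rsvp_initiated):
       vc = signalling.establish_vc(qos)
       # RSVP-controlled VCs must only be released at the direction
       # of RSVP, never timed out for inactivity.
       if rsvp_initiated:
           vc.inactivity_timeout = INFINITE
       else:
           vc.inactivity_timeout = RECOMMENDED_TIMEOUT
       return vc

   def on_unexpected_teardown(vc, rsvp_daemon):
       # If the network tears down an RSVP-initiated VC anyway, the
       # RSVP daemon must be notified so a reservation failure
       # message can be sent.
       if vc.inactivity_timeout == INFINITE:
           rsvp_daemon.notify_reservation_failure(vc)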
3.4 RSVP Control Management

One last important issue is providing a data path for the RSVP
messages themselves. There are two main types of messages in
RSVP, PATH and RESV. PATH messages are sent to a multicast
address, while RESV messages are sent to a unicast address. Other
RSVP messages are handled similarly to either PATH or RESV (this
can be slightly more complicated for RERR messages). So, ATM VCs
used for RSVP signalling messages need to provide both unicast
and multicast functionality. There are several different
approaches for how to assign VCs to use for RSVP signalling
messages.

The main approaches are:

- use same VC as data
- single VC per session
- single point-to-multipoint VC multiplexed among sessions
- multiple point-to-point VCs multiplexed among sessions

Several different issues affect the choice of how to assign VCs
for RSVP signalling. One issue is the number of additional VCs
needed for RSVP signalling. Related to this issue is the degree
of multiplexing on the RSVP VCs; in general, more multiplexing
means fewer VCs. An additional issue is the latency in
dynamically setting up new RSVP signalling VCs. A final issue is
complexity of implementation. The remainder of this section
discusses the issues and tradeoffs among these different
approaches and suggests guidelines for when to use which
alternative.

3.4.1 Mixed data and control traffic

In this scheme, RSVP signalling messages are sent on the same VCs
as the data traffic. The main advantage of this scheme is that no
additional VCs are needed beyond what is needed for the data
traffic. An additional advantage is that there is no ATM
signalling latency for PATH messages (which follow the same
routing as the data messages). However, there can be a major
problem when data traffic on a VC is nonconforming: with
nonconforming traffic, RSVP signalling messages may be dropped.
While RSVP is resilient to a moderate level of dropped messages,
excessive drops would lead to repeated tearing down and
re-establishing of QoS VCs, a very undesirable behavior for ATM.
Due to these problems, this is not a good choice for carrying
RSVP signalling messages, even though the number of VCs needed
for this scheme is minimized. One variation of this scheme is to
use the best effort data path for signalling traffic. In this
variation there is no issue with nonconforming traffic, but there
is an issue with congestion in the ATM network. RSVP provides
some resiliency to message loss due to congestion, but RSVP
control messages should be offered a preferred class of service.
A related variation that is promising but requires further study
is to have a packet scheduling algorithm (before entering the ATM
network) that gives priority to the RSVP signalling traffic. This
can be difficult to do at the IP layer.

3.4.2 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each
RSVP reservation. This scheme requires twice the minimum number
of VCs and additional ATM signalling (and hence additional
latency), but it is quite simple: whenever a data VC is created,
a separate RSVP signalling VC is created. The separate VC means
that RSVP signalling messages have their own traffic contract, so
compliant signalling messages are not subject to dropping due to
other noncompliant traffic (as can happen with the scheme in
section 3.4.1).

3.4.3 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP
signalling VC for each unique ingress router and unique set of
egress routers. This scheme allows multiplexing of RSVP
signalling traffic that shares the same ingress router and the
same egress routers. Such multiplexing can save on the number of
VCs, but there are problems when the destinations of the
multiplexed point-to-multipoint VCs are changing. Several
alternatives exist in these cases that have applicability in
different situations. First, when the egress routers change, the
ingress router can check whether it already has a
point-to-multipoint RSVP signalling VC for the new list of egress
routers. If the RSVP signalling VC already exists, the RSVP
signalling traffic can be switched to this existing VC. If no
such VC exists, one approach would be to create a new VC with the
new list of egress routers. Other approaches include modifying
the existing VC to add an egress router or using a separate new
VC for the new egress routers. When a destination drops out of a
group, an alternative would be to keep sending on the existing VC
even though some traffic is wasted. The number of VCs used in
this scheme is a function of traffic patterns across the ATM
network, but is always less than the number used with a single
RSVP VC per reservation (section 3.4.2). In addition, existing
best effort data VCs could be used for RSVP signalling. Reusing
best effort VCs saves on the number of VCs at the cost of a
higher probability of RSVP signalling packet loss. One place
where this scheme should work well is in the core of the network,
where there is the most opportunity to take advantage of the
savings due to multiplexing. The exact savings depend on the
patterns of traffic and the topology of the ATM network.
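The egress-set check described above lends itself to a simple
lookup keyed by the set of egress routers. The sketch below
(Python; establish_p2mp_vc is a hypothetical signalling call)
shows the first alternative, creating a new VC when no match
exists:

   class SignallingVcTable:
       # One point-to-multipoint RSVP signalling VC per unique
       # (ingress router, set of egress routers) pair.

       def __init__(self, signalling):
           self.signalling = signalling
           self.vcs = {}  # frozenset of egress routers -> VC

       def vc_for(self, egress_routers):
           key = frozenset(egress_routers)
           vc = self.vcs.get(key)
           if vc is None:
               # No matching VC exists: create a new one with the
               # new list of egress routers.  (Alternatives: modify
               # an existing VC to add an egress router, or use a
               # separate new VC for only the new egress routers.)
               vc = self.signalling.establish_p2mp_vc(key)
               self.vcs[key] = vc
           return vc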
3.4.4 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are
used for a single point-to-multipoint data VC. This scheme allows
multiplexing of RSVP signalling traffic but requires the same
traffic to be sent on each of several VCs. This scheme is quite
flexible and allows a large amount of multiplexing.

Since point-to-point VCs can set up a reverse channel at the same
time as the forward channel, this scheme could save substantially
on signalling cost. In addition, signalling traffic could share
existing best effort VCs. Sharing existing best effort VCs
reduces the total number of VCs needed, but might cause
signalling traffic drops if there is congestion in the ATM
network. This point-to-point scheme would work well in the core
of the network, where there is much opportunity for multiplexing.
Also in the core of the network, RSVP VCs can stay permanently
established, either as Permanent Virtual Circuits (PVCs) or as
long lived Switched Virtual Circuits (SVCs). The number of VCs in
this scheme will depend on traffic patterns, but in the core of a
network would be approximately n(n-1)/2, where n is the number of
IP nodes in the network (for example, a core of n = 20 IP nodes
would need at most 20*19/2 = 190 such VCs). In the core of the
network, this will typically be small compared to the total
number of VCs.

3.4.5 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP VCs.
For the schemes that use separate RSVP VCs, a QoS (possibly best
effort) will be needed. What QoS to use partially depends on the
expected level of multiplexing that is being done on the VCs, and
on the expected reliability of best effort VCs. Since RSVP
signalling is infrequent (typically every 30 seconds), only a
relatively modest QoS should be needed. This is important since
requesting a larger QoS risks the VC setup being rejected for
lack of resources. Falling back to best effort when a QoS call is
rejected is possible, but if the ATM network is congested, there
will likely be problems with RSVP packet loss on the best effort
VC as well. Additional experimentation is needed in this area.

Implementations must, by default, send RSVP control messages over
the best effort data path. This approach minimizes VC
requirements, since the best effort data path will need to exist
in order for RSVP sessions to be established and in order for
RSVP reservations to be initiated.

The specific best effort paths that will be used by RSVP are: for
unicast, the same VC used to reach the unicast destination; and
for multicast, the same VC that is used for best effort traffic
destined to the IP multicast group. A sketch of this selection
rule appears at the end of this section. Note that there may be
another best effort VC that is used to carry session data
traffic.

The disadvantage of this approach is that best effort VCs may not
provide the reliability that RSVP needs. However, the best-effort
path is expected to satisfy RSVP reliability requirements in most
networks, especially since RSVP allows for a certain amount of
packet loss without any loss of state synchronization. In all
cases, RSVP control traffic should be offered a preferred class
of service.
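The default selection rule can be summarized in a short sketch
(Python; the vc_table methods are hypothetical names for an
implementation's best effort VC lookup, not a defined API):

   def control_vc_for(message, vc_table):
       # RSVP control messages ride the existing best effort data
       # path by default.
       if message.destination.is_multicast():
           # Same VC used for best effort traffic destined to the
           # IP multicast group.
           return vc_table.best_effort_multicast_vc(message.destination)
       # Unicast: same VC used to reach the unicast destination.
       return vc_table.best_effort_unicast_vc(message.destination)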
4. Encapsulation

Since RSVP is a signalling protocol used to control flows of IP
data packets, encapsulation for both RSVP packets and associated
IP data packets must be defined. There are currently two
encapsulation options for running IP over ATM, RFC 1483 and LANE,
with the possibility of future encapsulation options such as MPOA
[18]. The first option, described in RFC 1483 [19], is currently
used for "Classical" IP over ATM and NHRP. The second option is
LAN Emulation, as described in [17]. LANE encapsulation does not
currently include a QoS signalling interface; if LANE
encapsulation is needed, LANE QoS signalling would first need to
be defined by the ATM Forum. It is possible that LANE 2.0 [20]
will include the required QoS support.

The default behavior for implementations must be to use a
consistent encapsulation scheme for all IP over ATM packets,
including RSVP packets and associated IP data packets. So, the
encapsulation used on QoS data VCs and related control VCs must,
by default, be the same as that used by best-effort VCs.

5. Security Considerations

The same considerations stated in [1] and [11] apply to this
document. There are no additional security issues raised in this
document.

6. References

[1]  R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin.
     Resource ReSerVation Protocol (RSVP) -- Version 1 Functional
     Specification. Internet Draft, draft-ietf-rsvp-spec-14,
     November 1996.
[2]  M. Borden, E. Crawley, B. Davie, S. Batsell. Integration of
     Real-time Services in an IP-ATM Network Architecture.
     Request for Comments (Informational) RFC 1821, August 1995.
[3]  R. Cole, D. Shur, C. Villamizar. IP over ATM: A Framework
     Document. Request for Comments (Informational) RFC 1932,
     April 1996.
[4]  D. Katz, D. Piscitello, B. Cole, J. Luciani. NBMA Next Hop
     Resolution Protocol (NHRP). Internet Draft,
     draft-ietf-rolc-nhrp-11.txt, March 1997.
[5]  G. Armitage. Support for Multicast over UNI 3.0/3.1 based
     ATM Networks. RFC 2022, November 1996.
[6]  S. Shenker, C. Partridge. Specification of Guaranteed
     Quality of Service. Internet Draft,
     draft-ietf-intserv-guaranteed-svc-07.txt, February 1997.
[7]  J. Wroclawski. Specification of the Controlled-Load Network
     Element Service. Internet Draft,
     draft-ietf-intserv-ctrl-load-svc-05.txt, May 1997.
[8]  ATM Forum. ATM User-Network Interface Specification Version
     3.0. Prentice Hall, September 1993.
[9]  ATM Forum. ATM User Network Interface (UNI) Specification
     Version 3.1. Prentice Hall, June 1995.
[10] M. Laubach. Classical IP and ARP over ATM. Request for
     Comments (Proposed Standard) RFC 1577, January 1994.
[11] M. Perez, A. Mankin, E. Hoffman, G. Grossman, A. Malis. ATM
     Signalling Support for IP over ATM. Request for Comments
     (Proposed Standard) RFC 1755, February 1995.
[12] S. Herzog. RSVP Extensions for Policy Control. Internet
     Draft, draft-ietf-rsvp-policy-ext-02.txt, April 1997.
[13] S. Herzog. Local Policy Modules (LPM): Policy Control for
     RSVP. Internet Draft, draft-ietf-rsvp-policy-lpm-01.txt,
     November 1996.
[14] M. Borden, M. Garrett. Interoperation of Controlled-Load and
     Guaranteed Service with ATM. Internet Draft,
     draft-ietf-issll-atm-mapping-02.txt, March 1997.
[15] L. Berger. RSVP over ATM Implementation Requirements.
     Internet Draft, draft-ietf-issll-atm-imp-req-00.txt, July
     1997.
[16] L. Berger. RSVP over ATM Implementation Guidelines. Internet
     Draft, draft-ietf-issll-atm-imp-guide-01.txt, July 1997.
[17] ATM Forum Technical Committee. LAN Emulation over ATM,
     Version 1.0 Specification, af-lane-0021.000, January 1995.
[18] ATM Forum Technical Committee. Baseline Text for MPOA,
     af-95-0824r9, September 1996.
[19] J. Heinanen. Multiprotocol Encapsulation over ATM Adaptation
     Layer 5. RFC 1483, July 1993.
[20] ATM Forum Technical Committee. LAN Emulation over ATM
     Version 2 - LUNI Specification, December 1996. [zzz Need to
     update this ref.]
[21] ATM Forum Technical Committee. Traffic Management
     Specification v4.0, af-tm-0056.000, April 1996.
[22] R. Callon, et al. A Framework for Multiprotocol Label
     Switching. Internet Draft, draft-ietf-mpls-framework-00.txt,
     May 1997.
[23] B. Rajagopalan, R. Nair, H. Sandick, E. Crawley. A Framework
     for QoS-based Routing in the Internet. Internet Draft,
     draft-ietf-qosr-framework-00.txt, March 1997.

7. Authors' Addresses

Eric S. Crawley
Gigapacket Networks
25 Porter Road
Littleton, MA 01460
+1 508 486-0665
esc@gigapacket.com

Lou Berger
FORE Systems
6905 Rockledge Drive
Suite 800
Bethesda, MD 20817
+1 301 571-2534
lberger@fore.com

Steven Berson
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
+1 310 822-1511
berson@isi.edu

Fred Baker
Cisco Systems
519 Lado Drive
Santa Barbara, CA 93111
+1 805 681-0115
fred@cisco.com

Marty Borden
New Oak Communications
42 Nanog Park
Acton, MA 01720
+1 508 266-1011
mborden@newoak.com

John J. Krawczyk
ArrowPoint Communications
235 Littleton Road
Westford, MA 01886
+1 508 692-5875
jj@arrowpoint.com