Internet Engineering Task Force                       E. Crawley, Editor
Internet Draft                                          (Argon Networks)
draft-ietf-issll-atm-framework-01.txt                          L. Berger
                                                          (Fore Systems)
                                                               S. Berson
                                                                   (ISI)
                                                                F. Baker
                                                         (Cisco Systems)
                                                               M. Borden
                                                (New Oak Communications)
                                                             J. Krawczyk
                                              (ArrowPoint Communications)

                                                       November 18, 1997

         A Framework for Integrated Services and RSVP over ATM

Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups.  Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months.  Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress."
   To learn the current status of any Internet-Draft, please check the
   ``1id-abstracts.txt'' listing contained in the Internet-Drafts
   Shadow Directories on ds.internic.net (US East Coast), nic.nordu.net
   (Europe), ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific
   Rim).

Abstract

   This document outlines the issues and framework related to providing
   IP Integrated Services with RSVP over ATM.  It provides an overall
   approach to the problem(s) and related issues.  These issues and
   problems are to be addressed in further documents from the ISATM
   subgroup of the ISSLL working group.

Editor's Note

   This document is the merger of two previous documents, draft-ietf-
   issll-atm-support-02.txt by Berger and Berson and draft-crawley-
   rsvp-over-atm-00.txt by Baker, Berson, Borden, Crawley, and
   Krawczyk.  The former document has been split into this document and
   a set of documents on RSVP over ATM implementation requirements and
   guidelines.

1. Introduction

   The Internet currently has one class of service, normally referred
   to as "best effort."  This service is typified by first-come,
   first-serve scheduling at each hop in the network.  Best effort
   service has worked well for electronic mail, World Wide Web (WWW)
   access, file transfer (e.g. ftp), etc.  For real-time traffic such
   as voice and video, the current Internet has performed well only
   across unloaded portions of the network.  In order to provide
   quality real-time traffic, new classes of service and a QoS
   signalling protocol are being introduced in the Internet [1,6,7],
   while retaining the existing best effort service.  The QoS
   signalling protocol is RSVP [1], the Resource ReSerVation Protocol,
   and the service models are Controlled Load [7] and Guaranteed
   Service [6].

   One of the important features of ATM technology is the ability to
   request a point-to-point Virtual Circuit (VC) with a specified
   Quality of Service (QoS).
   An additional feature of ATM technology is the ability to request
   point-to-multipoint VCs with a specified QoS.  Point-to-multipoint
   VCs allow leaf nodes to be added to and removed from the VC
   dynamically and so provide a mechanism for supporting IP multicast.
   It is only natural that RSVP and the Internet Integrated Services
   (IIS) model would like to utilize the QoS properties of any
   underlying link layer including ATM, and this draft concentrates on
   ATM.

   Classical IP over ATM [10] has solved part of this problem,
   supporting IP unicast best effort traffic over ATM.  Classical IP
   over ATM is based on a Logical IP Subnetwork (LIS), which is a
   separately administered IP subnetwork.  Hosts within an LIS
   communicate using the ATM network, while hosts from different
   subnets communicate only by going through an IP router (even though
   it may be possible to open a direct VC between the two hosts over
   the ATM network).  Classical IP over ATM provides an Address
   Resolution Protocol (ATMARP) for ATM edge devices to resolve IP
   addresses to native ATM addresses.  For any pair of IP/ATM edge
   devices (i.e. hosts or routers), a single VC is created on demand
   and shared for all traffic between the two devices.  A second part
   of the RSVP and IIS over ATM problem, IP multicast, is being solved
   with MARS [5], the Multicast Address Resolution Server.

   MARS complements ATMARP by allowing an IP address to resolve into a
   list of native ATM addresses, rather than just a single address.

   The ATM Forum's LAN Emulation (LANE) [17, 20] and Multiprotocol
   Over ATM (MPOA) [18] also address the support of IP best effort
   traffic over ATM through similar means.

   A key remaining issue for IP in an ATM environment is the
   integration of RSVP signalling and ATM signalling in support of the
   Internet Integrated Services (IIS) model.
   There are two main areas involved in supporting the IIS model, QoS
   translation and VC management.  QoS translation concerns mapping a
   QoS from the IIS model to a proper ATM QoS, while VC management
   concentrates on how many VCs are needed and which traffic flows are
   routed over which VCs.

1.1 Structure and Related Documents

   This document provides a guide to the issues for IIS over ATM.  It
   is intended to frame the problems that are to be addressed in
   further documents.  In this document, the modes and models for RSVP
   operation over ATM will be discussed, followed by a discussion of
   management of ATM VCs for RSVP data and control.  Lastly, the topic
   of encapsulations will be discussed in relation to the models
   presented.

   This document is part of a group of documents from the ISATM
   subgroup of the ISSLL working group related to the operation of
   IntServ and RSVP over ATM.  [14] discusses the mapping of the
   IntServ models for Controlled Load and Guaranteed Service to ATM.
   [15] and [16] discuss detailed implementation requirements and
   guidelines for RSVP over ATM, respectively.  While these documents
   may not address all the issues raised in this document, they should
   provide enough information for development of solutions for IntServ
   and RSVP over ATM.

1.2 Terms

   Several terms used in this document are used in many contexts,
   often with different meanings.  These terms are used in this
   document with the following meanings:

   - Sender is used in this document to mean the ingress point to the
     ATM network or "cloud".
   - Receiver is used in this document to refer to the egress point
     from the ATM network or "cloud".
   - Reservation is used in this document to refer to an RSVP-
     initiated request for resources.  RSVP initiates requests for
     resources based on RESV message processing.  RESV messages that
     simply refresh state do not trigger resource requests.
     Resource requests may be made based on RSVP sessions and RSVP
     reservation styles.  RSVP styles dictate whether the reserved
     resources are used by one sender or shared by multiple senders.
     See [1] for details of each.  Each new request is referred to in
     this document as an RSVP reservation, or simply a reservation.
   - Flow is used to refer to the data traffic associated with a
     particular reservation.  The specific meaning of flow is RSVP
     style dependent.  For shared style reservations, there is one
     flow per session.  For distinct style reservations, there is one
     flow per sender (per session).

2. Issues Regarding the Operation of RSVP and IntServ over ATM

   The issues related to RSVP and IntServ over ATM fall into several
   general classes:

   - How to make RSVP run over ATM now and in the future
   - When to set up a virtual circuit (VC) for a specific Quality of
     Service (QoS) related to RSVP
   - How to map the IntServ models to ATM QoS models
   - How to know that an ATM network is providing the QoS necessary
     for a flow
   - How to handle the many-to-many connectionless features of IP
     multicast and RSVP in the one-to-many connection-oriented world
     of ATM

2.1 Modes/Models for RSVP and IntServ over ATM

   [3] discusses several different models for running IP over ATM
   networks.  [17], [18], and [20] also provide models for IP in ATM
   environments.  Any one of these models would work as long as the
   RSVP control packets (IP protocol 46) and data packets can follow
   the same IP path through the network.  It is important that the
   RSVP PATH messages follow the same IP path as the data so that
   appropriate PATH state may be installed in the routers along the
   path.  For an ATM subnetwork, this means the ingress and egress
   points must be the same in both directions for the RSVP control and
   data messages.  Note that the RSVP protocol does not require
   symmetric routing.
   The PATH state installed by RSVP allows the RESV messages to
   "retrace" the hops that the PATH message crossed.  Within each of
   the models for IP over ATM, there are decisions about using
   different types of data distribution in ATM as well as different
   connection initiation.  The following sections look at some of the
   different ways QoS connections can be set up for RSVP.

2.1.1 UNI 3.x and 4.0

   In the User Network Interface (UNI) 3.0 and 3.1 specifications
   [8,9] and 4.0 specification, both permanent and switched virtual
   circuits (PVCs and SVCs) may be established with a specified
   service category (CBR, VBR, and UBR for UNI 3.x, with VBR-rt and
   ABR added in 4.0) and specific traffic descriptors in
   point-to-point and point-to-multipoint configurations.  Additional
   QoS parameters are not available in UNI 3.x, and those that are
   available are vendor-specific.  Consequently, the level of QoS
   control available in standard UNI 3.x networks is somewhat limited.
   However, using these building blocks, it is possible to use RSVP
   and the IntServ models.  ATM 4.0 with the Traffic Management (TM)
   4.0 specification [21] allows much greater control of QoS.  [14]
   provides the details of mapping the IntServ models to UNI 3.x and
   4.0 service categories and traffic parameters.

2.1.1.1 Permanent Virtual Circuits (PVCs)

   PVCs emulate dedicated point-to-point lines in a network, so the
   operation of RSVP can be identical to the operation over any
   point-to-point network.  The QoS of the PVC must be consistent with
   and equivalent to the type of traffic and service model used.  The
   devices on either end of the PVC have to provide traffic control
   services in order to multiplex multiple flows over the same PVC.
   With PVCs, there is no issue of when or how long it takes to set up
   VCs, since they are made in advance, but the resources of the PVC
   are limited to what has been pre-allocated.
   PVCs that are not fully utilized can tie up ATM network resources
   that could be used for SVCs.

   An additional issue for using PVCs is one of network engineering.
   Frequently, multiple PVCs are set up such that if all the PVCs were
   running at full capacity, the link would be over-subscribed.  This
   frequently used "statistical multiplexing gain" makes providing IIS
   over PVCs very difficult and unreliable.  Any application of IIS
   over PVCs has to be assured that the PVCs are able to receive all
   the requested QoS.

2.1.1.2 Switched Virtual Circuits (SVCs)

   SVCs allow paths in the ATM network to be set up "on demand".  This
   allows flexibility in the use of RSVP over ATM, along with some
   complexity.  Parallel VCs can be set up to allow best-effort and
   better service class paths through the network, as shown in Figure
   1.  The cost and time to set up SVCs can impact their use.  For
   example, it may be better to initially route QoS traffic over
   existing VCs until an SVC with the desired QoS can be set up for
   the flow.  Scaling issues can come into play if a single RSVP flow
   is used per VC, as will be discussed in Section 4.3.1.1.  The
   number of VCs in any ATM device may also be limited, so the number
   of RSVP flows that can be supported by a device can be strictly
   limited to the number of VCs available, if we assume one flow per
   VC.  Section 4 discusses the topic of VC management for RSVP in
   greater detail.

      Data Flow ==========>

         +-----+
         |     |  --------------->  +----+
         | Src |  --------------->  | R1 |
         |    *|  --------------->  +----+
         +-----+      QoS VCs
           /\
           ||
           VC
        Initiator

             Figure 1: Data Flow VC Initiation

   While RSVP is receiver oriented, ATM is sender oriented.  This
   might seem like a problem, but the sender or ingress point receives
   RSVP RESV messages and can determine whether a new VC has to be set
   up to the destination or egress point.
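   The ingress-point behavior described above can be sketched as
   follows.  This is an illustrative sketch only; all of the names
   (QoSVC, IngressPoint, handle_resv) are hypothetical and do not come
   from any RSVP or ATM API.

```python
# Illustrative sketch only: how an ATM ingress point might react to an
# RSVP RESV message, reusing an existing VC of matching QoS or setting
# up a new SVC.  Until the new SVC completes, traffic could be routed
# over an existing (e.g. best-effort) VC to the same egress point.
from dataclasses import dataclass, field


@dataclass
class QoSVC:
    egress: str          # ATM address of the egress point
    qos: str             # e.g. "best-effort", "CBR", "VBR"
    established: bool = True


@dataclass
class IngressPoint:
    vcs: list = field(default_factory=list)

    def find_vc(self, egress, qos):
        for vc in self.vcs:
            if vc.egress == egress and vc.qos == qos:
                return vc
        return None

    def handle_resv(self, egress, qos):
        """On RESV arrival, decide whether a new SVC is needed."""
        vc = self.find_vc(egress, qos)
        if vc is not None:
            return ("reuse", vc)
        # No VC with the requested QoS exists: request a new SVC.
        new_vc = QoSVC(egress, qos, established=False)
        self.vcs.append(new_vc)
        return ("setup", new_vc)


ingress = IngressPoint(vcs=[QoSVC("egress-1", "best-effort")])
action, vc = ingress.handle_resv("egress-1", "CBR")
print(action)   # prints "setup": the best-effort VC does not match
```

   Note that a refresh RESV for the same flow would find the VC already
   in place and reuse it, matching the rule in Section 1.2 that refresh
   messages do not trigger new resource requests.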
2.1.1.3 Point to MultiPoint

   In order to provide QoS for IP multicast, an important feature of
   RSVP, data flows must be distributed to multiple destinations from
   a given source.  Point-to-multipoint VCs provide such a mechanism.
   It is important to map the actions of IP multicasting and RSVP
   (e.g. IGMP JOIN/LEAVE and RSVP RESV/RESV TEAR) to add party and
   drop party functions for ATM.  Point-to-multipoint VCs as defined
   in UNI 3.x have a single service class for all destinations.  This
   is contrary to the RSVP "heterogeneous receiver" concept.  It is
   possible to set up a different VC to each receiver requesting a
   different QoS, as shown in Figure 2.  This again can run into
   scaling and resource problems when managing multiple VCs on the
   same interface to different destinations.

                                 +----+
                       +------>  | R1 |
                       |         +----+
                       |
                       |         +----+
      +-----+     -----+    +--> | R2 |
      |     | --------------+    +----+    Receiver Request Types:
      | Src |                                ----> QoS 1 and QoS 2
      |     | ..............+    +----+      ....> Best-Effort
      +-----+     .....+    +..> | R3 |
        /\             :         +----+
        ||             :
        ||             :         +----+
        ||             +......>  | R4 |
                                 +----+
       Single
     IP Multicast
        Group

             Figure 2: Types of Multicast Receivers

   RSVP sends messages both up and down the multicast distribution
   tree.  In the case of a large ATM cloud, this could result in an
   RSVP message implosion at an ATM ingress point with many receivers.

   ATM 4.0 expands on the point-to-multipoint VCs by adding a Leaf
   Initiated Join (LIJ) capability.  LIJ allows an ATM end point to
   join an existing point-to-multipoint VC without necessarily
   contacting the source of the VC.  This can reduce the burden on the
   ATM source point for setting up new branches and more closely
   matches the receiver-based model of RSVP and IP multicast.
   However, many of the same scaling issues exist, and the new
   branches added to a point-to-multipoint VC must use the same QoS as
   existing branches.

2.1.1.4 Multicast Servers

   IP-over-ATM has the concept of a multicast server or reflector that
   can accept cells from multiple senders and send them via a
   point-to-multipoint VC to a set of receivers.  This moves the VC
   scaling issues noted previously for point-to-multipoint VCs to the
   multicast server.  Additionally, the multicast server will need to
   know how to interpret RSVP packets, or receive instruction from
   another node, so that it will be able to provide VCs of the
   appropriate QoS for the RSVP flows.

2.1.2 Hop-by-Hop vs. Short Cut

   If the ATM "cloud" is made up of a number of Logical IP Subnets
   (LISs), then it is possible to use "short cuts" from a node on one
   LIS directly to a node on another LIS, avoiding router hops between
   the LISs.  NHRP [4] is one mechanism for determining the ATM
   address of the egress point on the ATM network given a destination
   IP address.  It is a topic for further study to determine if
   significant benefit is achieved from short cut routes vs. the extra
   state required.

2.1.3 Future Models

   ATM is constantly evolving.  If we assume that RSVP and IntServ
   applications are going to be widespread, it makes sense to consider
   changes to ATM that would improve the operation of RSVP and IntServ
   over ATM.  Similarly, the RSVP protocol and IntServ models will
   continue to evolve, and changes that affect them should also be
   considered.  The following are a few ideas that have been discussed
   that would make the integration of the IntServ models and RSVP
   easier or more complete.  They are presented here to encourage
   continued development and discussion of ideas that can aid in the
   integration of RSVP, IntServ, and ATM.
2.1.3.1 Heterogeneous Point-to-MultiPoint

   The IntServ models and RSVP support the idea of "heterogeneous
   receivers"; i.e., not all receivers of a particular multicast flow
   are required to ask for the same QoS from the network, as shown in
   Figure 2.

   The most important scenario that can utilize this feature occurs
   when some receivers in an RSVP session ask for a specific QoS while
   others receive the flow with best-effort service.  In some cases
   where there are multiple senders on a shared-reservation flow
   (e.g., an audio conference), an individual receiver only needs to
   reserve enough resources to receive one sender at a time.  However,
   other receivers may elect to reserve more resources, perhaps to
   allow for some amount of "over-speaking" or in order to record the
   conference (post processing during playback can separate the
   senders by their source addresses).

   In order to prevent denial-of-service attacks via reservations, the
   service models do not allow the service elements to simply drop
   non-conforming packets.  For example, the Controlled Load service
   model [7] assigns non-conformant packets to best-effort status
   (which may result in packet drops if there is congestion).

   Emulating these behaviors over an ATM network is problematic and
   needs to be studied.  If a single maximum QoS is used over a
   point-to-multipoint VC, resources could be wasted if cells are sent
   over certain links where the reassembled packets will eventually be
   dropped.  In addition, the "maximum QoS" may actually cause a
   degradation in service to the best-effort branches.

   The term "variegated VC" has been coined to describe a
   point-to-multipoint VC that allows a different QoS on each branch.
   This approach seems to match the spirit of the Integrated Services
   and RSVP models, but some thought has to be put into the cell drop
   strategy when traversing from a "bigger" branch to a "smaller" one.
   The "best-effort for non-conforming packets" behavior must also be
   retained.  Early Packet Discard (EPD) schemes must be used so that
   all the cells for a given packet can be discarded at the same time,
   rather than discarding only a few cells from several packets and
   making all the packets useless to the receivers.

2.1.3.2 Lightweight Signalling

   Q.2931 signalling is very complete and carries with it a
   significant burden for signalling in all possible public and
   private connections.  It might be worth investigating a lighter
   weight signalling mechanism for faster connection setup in private
   networks.

2.1.3.3 QoS Renegotiation

   Another change that would help RSVP over ATM is the ability to
   request a different QoS for an active VC.  This would eliminate the
   need to set up and tear down VCs as the QoS changed.  RSVP allows
   receivers to change their reservations and senders to change their
   traffic descriptors dynamically.  This, along with the merging of
   reservations, can create a situation where the QoS needs of a VC
   can change.  Allowing changes to the QoS of an existing VC would
   allow these features to work without creating a new VC.  In the
   ITU-T ATM specifications [24,25], some cell rates can be
   renegotiated or changed.  Specifically, the Peak Cell Rate (PCR) of
   an existing VC can be changed and, in some cases, QoS parameters
   may be renegotiated during the call setup phase.  It is unclear if
   this is sufficient for the QoS renegotiation needs of the IntServ
   models.

2.1.3.4 Group Addressing

   The model of one-to-many communications provided by
   point-to-multipoint VCs does not really match the many-to-many
   communications provided by IP multicasting.
   A scalable mapping from IP multicast addresses to an ATM "group
   address" could address this problem.

2.1.3.5 Label Switching

   The MultiProtocol Label Switching (MPLS) working group is
   discussing methods for optimizing the use of ATM and other switched
   networks for IP by encapsulating the data with a header that is
   used by the interior switches to achieve faster forwarding lookups.
   [22] discusses a framework for this work.  It is unclear how this
   work will affect IntServ and RSVP over label switched networks, but
   there may be some interactions.

2.1.4 QoS Routing

   RSVP is explicitly not a routing protocol.  However, since it
   conveys QoS information, it may prove to be a valuable input to a
   routing protocol that can make path determinations based on QoS and
   network load information.  In other words, instead of asking for
   just the IP next hop for a given destination address, it might be
   worthwhile for RSVP to provide information on the QoS needs of the
   flow, if routing has the ability to use this information in order
   to determine a route.  Other forms of QoS routing have existed in
   the past, such as using the IP TOS and Precedence bits to select a
   path through the network.  Some have discussed using these same
   bits to select one of a set of parallel ATM VCs as a form of QoS
   routing.  ATM routing has also considered the problem of QoS
   routing through the Private Network-to-Network Interface (PNNI)
   [26] routing protocol for routing ATM VCs on a path that can
   support their needs.  The work in this area is just starting, and
   there are numerous issues to consider.  [23], as part of the work
   of the QoSR working group, frames the issues for QoS routing in the
   Internet.

2.2 Reliance on Unicast and Multicast Routing

   RSVP was designed to support both unicast and IP multicast
   applications.  This means that RSVP needs to work closely with
   multicast and unicast routing.
   Unicast routing over ATM has been addressed in [10] and [11].  MARS
   [5] provides multicast address resolution for IP over ATM networks,
   an important part of the solution for multicast, but it still
   relies on multicast routing protocols to connect multicast senders
   and receivers on different subnets.

2.3 Aggregation of Flows

   Some of the scaling issues noted in previous sections can be
   addressed by aggregating several RSVP flows over a single VC, if
   the destinations of the VC match for all the flows being
   aggregated.  However, this causes considerable complexity in the
   management of VCs and in the scheduling of packets within each VC
   at the root point of the VC.  Note that the rescheduling of flows
   within a VC is not possible in the switches in the core of the ATM
   network.  Virtual Paths (VPs) can be used for aggregating multiple
   VCs.  This topic is discussed in greater detail as it applies to
   multicast data distribution in Section 4.2.3.4.

2.4 Mapping QoS Parameters

   The mapping of QoS parameters from the IntServ models to the ATM
   service classes is an important issue in making RSVP and IntServ
   work over ATM.  [14] addresses these issues very completely for the
   Controlled Load and Guaranteed Service models.  An additional issue
   is that while some guidelines can be developed for mapping the
   parameters of a given service model to the traffic descriptors of
   an ATM traffic class, implementation variables, policy, and cost
   factors can make strict mapping problematic.  So, a set of workable
   mappings that can be applied to different network requirements and
   scenarios is needed, as long as the mappings can satisfy the needs
   of the service model(s).

2.5 Directly Connected ATM Hosts

   It is obvious that the needs of hosts that are directly connected
   to ATM networks must be considered for RSVP and IntServ over ATM.
   Functionality for RSVP over ATM must not assume that an ATM host
   has all the functionality of a router, but such things as MARS and
   NHRP clients would be worthwhile features.  A host must manage VCs
   just like any other ATM sender or receiver, as described later in
   Section 4.

2.6 Accounting and Policy Issues

   Since RSVP and IntServ create classes of preferential service, some
   form of administrative control and/or cost allocation is needed to
   control access.  There are certain types of policies specific to
   ATM and IP over ATM that need to be studied to determine how they
   interoperate with the IP and IntServ policies being developed.  A
   typical IP policy would be that only certain users are allowed to
   make reservations.  This policy would translate well to IP over
   ATM, due to the similarity to the mechanisms used for Call
   Admission Control (CAC).  There may be a need for policies specific
   to IP over ATM.  For example, since signalling costs in ATM are
   high relative to IP, an IP over ATM specific policy might restrict
   the ability to change the prevailing QoS in a VC.  If VCs are
   relatively scarce, there also might be specific accounting costs in
   creating a new VC.  The work so far has been preliminary, and much
   work remains to be done.  The policy mechanisms outlined in [12]
   and [13] provide the basic mechanisms for implementing policies for
   RSVP and IntServ over any media, not just ATM.

3. Framework for IntServ and RSVP over ATM

   Now that we have defined some of the issues for IntServ and RSVP
   over ATM, we can formulate a framework for solutions.  The problem
   breaks down into two very distinct areas: the mapping of IntServ
   models to ATM service categories and QoS parameters, and the
   operation of RSVP over ATM.
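   The first of these two areas can be pictured as a lookup from an
   IntServ service model to a set of candidate ATM service categories.
   The sketch below is illustrative only: the candidate lists shown
   are an assumption about the general shape of the table, and the
   authoritative mappings are developed in [14], not here.

```python
# Illustrative sketch of an IntServ-to-ATM service-category table.
# The candidate lists below are assumptions for illustration; the
# authoritative Controlled Load and Guaranteed Service mappings are
# specified in [14].  More than one ATM category can support each
# IntServ model, which is the flexibility noted in the text.
CANDIDATE_CATEGORIES = {
    "Guaranteed Service": ["CBR", "rtVBR"],
    "Controlled Load":    ["nrtVBR", "ABR", "CBR"],
}


def candidates(intserv_model):
    """Return ATM service categories that could carry the given
    IntServ model (illustrative ordering); default to best-effort
    UBR for anything unrecognized."""
    return CANDIDATE_CATEGORIES.get(intserv_model, ["UBR"])


print(candidates("Controlled Load"))
```

   A configuration or policy module could then pick one category from
   the candidate list based on cost, VC availability, or local policy,
   which is why the text calls for a set of workable mappings rather
   than a single strict one.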
Mapping IntServ models to ATM service categories and QoS parameters is
a matter of determining which categories can support the goals of the
service models and matching up the parameters and variables between
the IntServ description and the ATM description(s). Since ATM has such
a wide variety of service categories and parameters, more than one ATM
service category should be able to support each of the two IntServ
models. This will provide a good bit of flexibility in configuration
and deployment. [14] examines this topic completely.

The operation of RSVP over ATM requires careful management of VCs in
order to match the dynamics of the RSVP protocol. VCs need to be
managed for both the RSVP QoS data and the RSVP signalling messages.
The remainder of this document discusses several approaches to
managing VCs for RSVP, and [15] and [16] discuss their application to
implementations in terms of interoperability requirements and
implementation guidelines.

4. RSVP VC Management

This section provides more detail on the issues related to the
management of SVCs for RSVP and IntServ.

4.1 VC Initiation

As discussed in section 2.1.1.2, there is an apparent mismatch between
RSVP and ATM. Specifically, RSVP control is receiver oriented and ATM
control is sender oriented. This initially may seem like a major
issue, but really is not. While RSVP reservation (RESV) requests are
generated at the receiver, actual allocation of resources takes place
at the subnet sender. For data flows, this means that subnet senders
will establish all QoS VCs and the subnet receiver must be able to
accept incoming QoS VCs, as illustrated in Figure 1. These
restrictions are consistent with RSVP version 1 processing rules and
allow senders to use different flow to VC mappings and even different
QoS renegotiation techniques without interoperability problems.
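The sender-oriented rule above can be sketched minimally: the subnet
sender reacts to a receiver's RESV by opening (or reusing) the QoS VC
itself, while the receiver merely accepts incoming calls. All class
and method names here are hypothetical, not from any real signalling
API.

```python
class SubnetSender:
    """Sketch of section 4.1: the subnet sender, not the receiver,
    initiates QoS VCs in response to RESV messages."""

    def __init__(self, atm_signalling):
        self.atm = atm_signalling   # assumed to provide open_vc(dest, qos)
        self.qos_vcs = {}           # (next_hop, session) -> VC handle

    def on_resv(self, session, next_hop, qos):
        """Handle a RESV from a downstream receiver: establish the QoS
        VC toward that next hop if one does not already exist."""
        key = (next_hop, session)
        if key not in self.qos_vcs:
            self.qos_vcs[key] = self.atm.open_vc(next_hop, qos)
        return self.qos_vcs[key]
```

A repeated RESV refresh for the same session and next hop reuses the
existing VC rather than signalling a new one.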
The use by receivers of the reverse path provided by point-to-point
VCs is for further study. There are two related issues. The first is
that use of the reverse path requires the VC initiator to set
appropriate reverse path QoS parameters. The second is that reverse
paths are not available with point-to-multipoint VCs, so reverse paths
could only be used to support unicast RSVP reservations.

4.2 Data VC Management

Any RSVP over ATM implementation must map RSVP and RSVP associated
data flows to ATM Virtual Circuits (VCs). LAN Emulation [17],
Classical IP [10] and, more recently, NHRP [4] discuss mapping IP
traffic onto ATM SVCs, but they only cover a single QoS class, i.e.,
best effort traffic. When QoS is introduced, VC mapping must be
revisited. For RSVP controlled QoS flows, one issue is which VCs to
use for QoS data flows.

In the Classical IP over ATM and current NHRP models, a single
point-to-point VC is used for all traffic between two ATM attached
hosts (routers and end-stations). It is likely that such a single VC
will not be adequate or optimal when supporting data flows with
multiple QoS types. RSVP's basic purpose is to install support for
flows with multiple QoS types, so it is essential for any RSVP over
ATM solution to address VC usage for QoS data flows, as shown in
Figure 1.

RSVP reservation styles must also be taken into account in any VC
usage strategy.

This section describes issues and methods for management of VCs
associated with QoS data flows. When establishing and maintaining VCs,
the subnet sender will need to deal with several complicating factors
including multiple QoS reservations, requests for QoS changes, ATM
short-cuts, and several multicast specific issues. The multicast
specific issues result from the nature of ATM connections.
The key multicast related issues are heterogeneity, data distribution,
receiver transitions, and end-point identification.

4.2.1 Reservation to VC Mapping

There are various approaches available for mapping reservations onto
VCs. A distinguishing attribute of all approaches is how reservations
are combined onto individual VCs. When mapping reservations onto VCs,
individual VCs can be used to support a single reservation, or
reservations can be combined with others onto "aggregate" VCs. In the
first case, each reservation will be supported by one or more VCs.
Multicast reservation requests may translate into the setup of
multiple VCs, as is described in more detail in section 4.2.2. Unicast
reservation requests will always translate into the setup of a single
QoS VC. In both cases, each VC will only carry data associated with a
single reservation. The greatest benefit of this approach is ease of
implementation, but it comes at the cost of increased (VC) setup time
and the consumption of a greater number of VCs and associated
resources.

When multiple reservations are combined onto a single VC, it is
referred to as the "aggregation" model. With this model, large VCs
could be set up between IP routers and hosts in an ATM network. These
VCs could be managed much like IP Integrated Service (IIS)
point-to-point links (e.g. T-1, DS-3) are managed now. Traffic from
multiple sources over multiple RSVP sessions might be multiplexed on
the same VC. This approach has a number of advantages. First, there is
typically no signalling latency, as VCs would be in existence when the
traffic started flowing, so no time is wasted in setting up VCs.
Second, the full heterogeneity problem (section 4.2.2) over ATM is
reduced to a solved problem. Finally, the dynamic QoS problem (section
4.2.7) for ATM is also reduced to a solved problem.
The aggregation model can be used with point-to-point and
point-to-multipoint VCs. The problem with the aggregation model is
that the choice of what QoS to use for the VCs may be difficult
without knowledge of the likely reservation types and sizes, but it is
made easier since the VCs can be changed as needed.

4.2.2 Unicast Data VC Management

Unicast data VC management is much simpler than multicast data VC
management, but there are still some similar issues. If one considers
unicast to be a degenerate case of multicast, then implementing the
multicast solutions will cover unicast. However, some may want to
consider unicast-only implementations. In these situations, the choice
of using a single flow per VC or aggregation of flows onto a single VC
remains, but the problem of heterogeneity discussed in the following
section is removed.

4.2.3 Multicast Heterogeneity

As mentioned in section 2.1.3.1 and shown in figure 2, multicast
heterogeneity occurs when receivers request different qualities of
service within a single session. This means that the amount of
requested resources differs on a per next hop basis. A related type of
heterogeneity occurs due to best-effort receivers. In any IP multicast
group, it is possible that some receivers will request QoS (via RSVP)
and some receivers will not. In shared media networks, like Ethernet,
receivers that have not requested resources can typically be given
service identical to those that have, without complications. This is
not the case with ATM. In ATM networks, any additional end-points of a
VC must be explicitly added. There may be costs associated with adding
the best-effort receiver, and there might not be adequate resources.
An RSVP over ATM solution will need to support heterogeneous receivers
even though ATM does not currently provide such support directly.
RSVP heterogeneity is supported over ATM in the way RSVP reservations
are mapped into ATM VCs. There are four alternative approaches to this
mapping. Section 4.2.3.1 examines the multiple VCs per RSVP
reservation (or full heterogeneity) model, where a single reservation
can be forwarded onto several VCs, each with a different QoS. Section
4.2.3.2 presents a limited heterogeneity model, where exactly one QoS
VC is used along with a best effort VC. Section 4.2.3.3 examines the
VC per RSVP reservation (or homogeneous) model, where each RSVP
reservation is mapped to a single ATM VC. Section 4.2.3.4 describes
the aggregation model, allowing aggregation of multiple RSVP
reservations into a single VC.

4.2.3.1 Full Heterogeneity Model

RSVP supports heterogeneous QoS, meaning that different receivers of
the same multicast group can request a different QoS. But importantly,
some receivers might have no reservation at all and want to receive
the traffic on a best effort service basis. The IP model allows
receivers to join a multicast group at any time on a best effort
basis, and it is important that ATM as part of the Internet continue
to provide this service. We define the "full heterogeneity" model as
providing a separate VC for each distinct QoS for a multicast session,
including best effort and one or more qualities of service.

Note that while full heterogeneity gives users exactly what they
request, it requires more resources of the network than other possible
approaches. The exact amount of bandwidth used for duplicate traffic
depends on the network topology and group membership.
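The grouping step of the full heterogeneity model can be sketched
simply: partition the session's receivers by requested QoS, yielding
one point-to-multipoint VC per distinct QoS (with best effort
represented here as a QoS of None). The data representation is an
illustrative assumption, not part of any specification.

```python
def full_heterogeneity_vcs(receivers):
    """Group a session's receivers into one point-to-multipoint VC per
    distinct QoS, including a best-effort VC (qos=None).  Sketch of the
    model only; real implementations must also signal the VCs."""
    vcs = {}  # qos -> list of receiver end-points sharing that VC
    for endpoint, qos in receivers:
        vcs.setdefault(qos, []).append(endpoint)
    return vcs
```

Each key in the result corresponds to one VC, which is where the
duplicate-traffic cost of the model appears: every packet is replicated
once per distinct QoS.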
4.2.3.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the
receivers of a multicast session are limited to use of either best
effort service or a single alternate quality of service. The alternate
QoS can be chosen either by higher level protocols or by dynamic
renegotiation of QoS as described below.

In order to support limited heterogeneity, each ATM edge device
participating in a session would need at most two VCs. One VC would be
a point-to-multipoint best effort service VC and would serve all best
effort service IP destinations for this RSVP session.

The other VC would be a point-to-multipoint VC with QoS and would
serve all IP destinations for this RSVP session that have an RSVP
reservation established.

As with full heterogeneity, a disadvantage of the limited
heterogeneity scheme is that each packet will need to be duplicated at
the network layer and one copy sent into each of the two VCs. Again,
the exact amount of excess traffic will depend on the network topology
and group membership. If any of the existing QoS VC end-points cannot
upgrade to the new QoS, then the new reservation fails, even though
the resources exist for the new receiver.

4.2.3.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a
multicast session use a single quality of service VC. Best-effort
receivers also use the single RSVP triggered QoS VC. The single VC can
be point-to-point or point-to-multipoint as appropriate. The QoS VC is
sized to provide the maximum resources requested by all RSVP
next-hops.

This model matches the way the current RSVP specification addresses
heterogeneous requests.
The current processing rules and traffic control interface describe a
model where the largest requested reservation for a specific outgoing
interface is used in resource allocation, and traffic is transmitted
at the higher rate to all next-hops. This approach would be the
simplest method for RSVP over ATM implementations.

While this approach is simple to implement, providing better than
best-effort service may actually be the opposite of what the user
desires. There may be charges incurred or resources that are wrongly
allocated. There are two specific problems. The first problem is that
a user making a small or no reservation would share a QoS VC's
resources without making (and perhaps paying for) an RSVP reservation.
The second problem is that a receiver may not receive any data. This
may occur when there are insufficient resources to add a receiver. The
rejected user would not be added to the single VC and would not even
receive traffic on a best effort basis.

Not sending data traffic to best-effort receivers because of another
receiver's RSVP request is clearly unacceptable. The previously
described limited heterogeneity model ensures that data is always sent
to both QoS and best-effort receivers, but it does so by requiring
replication of data at the sender in all cases. It is possible to
extend the homogeneous model to both ensure that data is always sent
to best-effort receivers and avoid replication in the normal case.
This extension is to add special handling for the case where a
best-effort receiver cannot be added to the QoS VC. In this case, a
best effort VC can be established to any receivers that could not be
added to the QoS VC. Only in this special error case would senders be
required to replicate data. We define this approach as the "modified
homogeneous" model.
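The special-case handling of the modified homogeneous model can be
sketched as follows: attempt to add the best-effort receiver to the
single QoS VC, and only on failure fall back to a separate best-effort
VC. The VC objects, their methods, and the use of ConnectionError to
stand in for an ATM add-party failure are all illustrative
assumptions.

```python
def add_best_effort_receiver(qos_vc, best_effort_vc_factory, endpoint):
    """Modified homogeneous model, sketched: normally the best-effort
    receiver joins the QoS VC; only in the error case is a separate
    best-effort VC created (the one case requiring sender replication)."""
    try:
        qos_vc.add_party(endpoint)      # normal case: no replication needed
        return qos_vc
    except ConnectionError:             # stand-in for an ATM ADD PARTY failure
        be_vc = best_effort_vc_factory()
        be_vc.add_party(endpoint)       # error case: sender must replicate
        return be_vc
```

In the normal case no best-effort VC exists at all, which is exactly
how this model avoids the constant replication cost of limited
heterogeneity.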
4.2.3.4 Aggregation

The last scheme is the multiple RSVP reservations per VC (or
aggregation) model. With this model, large VCs could be set up between
IP routers and hosts in an ATM network. These VCs could be managed
much like IP Integrated Service (IIS) point-to-point links (e.g. T-1,
DS-3) are managed now. Traffic from multiple sources over multiple
RSVP sessions might be multiplexed on the same VC. This approach has a
number of advantages. First, there is typically no signalling latency,
as VCs would be in existence when the traffic started flowing, so no
time is wasted in setting up VCs. Second, the full heterogeneity
problem over ATM is reduced to a solved problem. Finally, the dynamic
QoS problem for ATM is also reduced to a solved problem. This approach
can be used with point-to-point and point-to-multipoint VCs. The
problem with the aggregation approach is that the choice of what QoS
to use for which of the VCs is difficult, but it is made easier if the
VCs can be changed as needed.

4.2.4 Multicast End-Point Identification

Implementations must be able to identify ATM end-points participating
in an IP multicast group. The ATM end-points will be IP multicast
receivers and/or next-hops. Both QoS and best-effort end-points must
be identified. RSVP next-hop information will provide QoS end-points,
but not best-effort end-points. Another issue is identifying
end-points of multicast traffic handled by non-RSVP capable next-hops.
In this case a PATH message travels through a non-RSVP egress router
on the way to the next hop RSVP node. When the next hop RSVP node
sends a RESV message, it may arrive at the source over a different
route than the one the data is using. The source will get the RESV
message, but will not know which egress router needs the QoS.
For unicast sessions, there is no problem, since the ATM end-point
will be the IP next-hop router. Unfortunately, multicast routing may
not be able to uniquely identify the IP next-hop router. So it is
possible that a multicast end-point cannot be identified.

In the most common case, MARS will be used to identify all end-points
of a multicast group. In the router to router case, a multicast
routing protocol may provide all next-hops for a particular multicast
group. In either case, RSVP over ATM implementations must obtain a
full list of end-points, both QoS and non-QoS, using the appropriate
mechanisms. The full list can be compared against the RSVP identified
end-points to determine the list of best-effort receivers. There is no
straightforward solution to uniquely identifying end-points of
multicast traffic handled by non-RSVP next hops. The preferred
solution is to use multicast routing protocols that support unique
end-point identification. In cases where such routing protocols are
unavailable, all IP routers that will be used to support RSVP over ATM
should support RSVP. To ensure proper behavior, implementations
should, by default, only establish RSVP-initiated VCs to RSVP capable
end-points.

4.2.5 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM. In
one model, senders establish point-to-multipoint VCs to all ATM
attached destinations, and data is then sent over these VCs. This
model is often called "multicast mesh" or "VC mesh" mode distribution.
In the second model, senders send data over point-to-point VCs to a
central point, and the central point relays the data onto
point-to-multipoint VCs that have been established to all receivers of
the IP multicast group. This model is often referred to as "multicast
server" mode distribution.
RSVP over ATM solutions must ensure that IP multicast data is
distributed with appropriate QoS.

In the Classical IP context, multicast server support is provided via
MARS [5]. MARS does not currently provide a way to communicate QoS
requirements to a MARS multicast server. Therefore, RSVP over ATM
implementations must, by default, support "mesh-mode" distribution for
RSVP controlled multicast flows. When using multicast servers that do
not support QoS requests, a sender must set the service, not global,
break bit(s).

4.2.6 Receiver Transitions

When setting up a point-to-multipoint VC for multicast RSVP sessions,
there will be a time when some receivers have been added to a QoS VC
and some have not. During such transition times it is possible to
start sending data on the newly established VC. The issue is when to
start sending data on the new VC. If data is sent on both the new VC
and the old VC, then data will be delivered with the proper QoS to
some receivers and with the old QoS to all receivers. This means the
QoS receivers can get duplicate data. If data is sent just on the new
QoS VC, the receivers that have not yet been added will lose
information. So, the issue comes down to whether to send to both the
old and new VCs, or to send to just one of the VCs. In one case
duplicate information will be received, in the other some information
may not be received.

This issue needs to be considered for three cases:
- When establishing the first QoS VC
- When establishing a VC to support a QoS change
- When adding a new end-point to an already established QoS VC

The first two cases are very similar. In both, it is possible to send
data on the partially completed new VC, and the issue of duplicate
versus lost information is the same. The last case is when an
end-point must be added to an existing QoS VC.
In this case the end-point must be both added to the QoS VC and
dropped from a best-effort VC. The issue is which to do first. If the
add is requested first, then the end-point may get duplicate
information. If the drop is requested first, then the end-point may
lose information.

In order to ensure predictable behavior and delivery of data to all
receivers, data can only be sent on a new VC once all parties have
been added. This will ensure that all data is delivered only once to
all receivers. This approach does not quite apply for the last case.
In the last case, the add operation should be completed first, then
the drop operation. This means that receivers must be prepared to
receive some duplicate packets at times of QoS setup.

4.2.7 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources
that are requested may change at any time. There are several common
reasons for a change of reservation QoS.

1. An existing receiver can request a new larger (or smaller) QoS.
2. A sender may change its traffic specification (TSpec), which can
   trigger a change in the reservation requests of the receivers.
3. A new sender can start sending to a multicast group with a larger
   traffic specification than existing senders, triggering larger
   reservations.
4. A new receiver can make a reservation that is larger than existing
   reservations.

If the limited heterogeneity model is being used and the merge node
for the larger reservation is an ATM edge device, a new larger
reservation must be set up across the ATM network. Since ATM service,
as currently defined in UNI 3.x and UNI 4.0, does not allow
renegotiating the QoS of a VC, dynamically changing the reservation
means creating a new VC with the new QoS and tearing down an
established VC.
Tearing down a VC and setting up a new VC in ATM are complex
operations that involve a non-trivial amount of processing time, and
may have a substantial latency. There are several options for dealing
with this mismatch in service. A specific approach will need to be a
part of any RSVP over ATM solution.

The default method for supporting changes in RSVP reservations is to
attempt to replace an existing VC with a new appropriately sized VC.
During setup of the replacement VC, the old VC must be left in place
unmodified. The old VC is left unmodified to minimize interruption of
QoS data delivery. Once the replacement VC is established, data
transmission is shifted to the new VC, and the old VC is then closed.
If setup of the replacement VC fails, then the old QoS VC should
continue to be used. When the new reservation is greater than the old
reservation, the reservation request should be answered with an error.
When the new reservation is less than the old reservation, the request
should be treated as if the modification was successful. While leaving
the larger allocation in place is suboptimal, it maximizes delivery of
service to the user. Implementations should retry replacing the
too-large VC after some appropriate elapsed time.

One additional issue is that only one QoS change can be processed at a
time per reservation. If the (RSVP) requested QoS is changed while the
first replacement VC is still being set up, then the replacement VC is
released and the whole VC replacement process is restarted. To limit
the number of changes and to avoid excessive signalling load,
implementations may limit the number of changes that will be processed
in a given period.
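The default replacement procedure above can be sketched as follows.
The setup function, the VC methods, the use of ConnectionError for a
failed ATM setup, and the numeric comparison of QoS values are all
illustrative assumptions.

```python
def replace_vc(old_vc, setup_new_vc, new_qos, old_qos):
    """Default VC replacement for a dynamic QoS change: build the
    replacement first, shift traffic, then close the old VC.  On setup
    failure keep the old VC; report an error only if the change was an
    increase, and treat a failed decrease as success."""
    try:
        new_vc = setup_new_vc(new_qos)    # old VC stays in place meanwhile
    except ConnectionError:
        if new_qos > old_qos:
            return old_vc, "error"        # larger request could not be honored
        return old_vc, "ok"               # too-large VC kept; retry later
    new_vc.carry_traffic()                # shift data to the replacement
    old_vc.close()                        # only now release the old VC
    return new_vc, "ok"
```

Keeping the old VC untouched until the replacement carries traffic is
what minimizes the interruption of QoS data delivery.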
One implementation approach would have each ATM edge device configured
with a time parameter T (which can change over time) that gives the
minimum amount of time the edge device will wait between successive
changes of the QoS of a particular VC. Thus if the QoS of a VC is
changed at time t, all messages that would change the QoS of that VC
that arrive before time t+T would be queued. If several messages
changing the QoS of a VC arrive during the interval, redundant
messages can be discarded. At time t+T, the remaining change(s) of
QoS, if any, can be executed. This timer approach would apply more
generally to any network structure, and might be worthwhile to
incorporate into RSVP. The sequence of events for a single VC would
be:

- Wait if timer is active
- Establish VC with new QoS
- Remap data traffic to new VC
- Tear down old VC
- Activate timer

There is an interesting interaction between heterogeneous reservations
and dynamic QoS. In the case where a RESV message is received from a
new next-hop and the requested resources are larger than any existing
reservation, both dynamic QoS and heterogeneity need to be addressed.
A key issue is whether to first add the new next-hop or to change to
the new QoS. This is a fairly straightforward special case. Since the
older, smaller reservation does not support the new next-hop, the
dynamic QoS process should be initiated first. Since the new QoS is
only needed by the new next-hop, it should be the first end-point of
the new VC. This way signalling is minimized when the setup to the new
next-hop fails.

4.2.8 Short-Cuts

Short-cuts [4] allow ATM attached routers and hosts to directly
establish point-to-point VCs across LIS boundaries, i.e., when the VC
end-points are on different IP subnets. The ability for short-cuts and
RSVP to interoperate has been raised as a general question.
An area of concern is the ability to handle asymmetric short-cuts,
specifically how RSVP can handle the case where a downstream short-cut
may not have a matching upstream short-cut. In this case, PATH and
RESV messages follow different paths.

Examination of RSVP shows that the protocol already includes
mechanisms that will support short-cuts. The mechanism is the same one
used to support RESV messages arriving at the wrong router and on the
wrong interface. The key aspect of this mechanism is that RSVP only
processes messages that arrive on the proper interface and forwards
messages that arrive on the wrong interface. The proper interface is
indicated in the NHOP object of the message. So, existing RSVP
mechanisms will support asymmetric short-cuts. The short-cut model of
VC establishment still poses several issues when running with RSVP.
The major issues are dealing with established best-effort short-cuts,
when to establish short-cuts, and QoS-only short-cuts. These issues
will need to be addressed by RSVP implementations.

The key issue to be addressed by any RSVP over ATM solution is when to
establish a short-cut for a QoS data flow. The default behavior is to
simply follow best-effort traffic. When a short-cut has been
established for best-effort traffic to a destination or next-hop, that
same end-point should be used when setting up RSVP triggered VCs for
QoS traffic to the same destination or next-hop. This will happen
naturally when PATH messages are forwarded over the best-effort
short-cut. Note that in this approach, when best-effort short-cuts are
never established, RSVP triggered QoS short-cuts will also never be
established. More study is expected in this area.

4.2.9 VC Teardown

RSVP can identify from either explicit messages or timeouts when a
data VC is no longer needed.
Therefore, data VCs set up to support RSVP controlled flows should
only be released at the direction of RSVP. VCs must not be timed out
due to inactivity by either the VC initiator or the VC receiver. This
conflicts with VCs timing out as described in RFC 1755 [11], section
3.4 on VC Teardown. RFC 1755 recommends tearing down a VC that is
inactive for a certain length of time; twenty minutes is recommended.
This timeout is typically implemented at both the VC initiator and the
VC receiver. However, section 3.1 of the update to RFC 1755 [11]
states that inactivity timers must not be used at the VC receiver.

When this timeout occurs for an RSVP initiated VC, a valid VC with QoS
will be torn down unexpectedly. While this behavior is acceptable for
best-effort traffic, it is important that RSVP controlled VCs not be
torn down. If there is no choice about the VC being torn down, the
RSVP daemon must be notified, so a reservation failure message can be
sent.

For VCs initiated at the request of RSVP, the configurable inactivity
timer mentioned in [11] must be set to "infinite". Setting the
inactivity timer value at the VC initiator should not be problematic,
since the proper value can be relayed internally at the originator.
Setting the inactivity timer at the VC receiver is more difficult, and
would require some mechanism to signal that an incoming VC was RSVP
initiated. To avoid this complexity and to conform to [11],
implementations must not use an inactivity timer to clear received
connections.

4.3 RSVP Control Management

One last important issue is providing a data path for the RSVP
messages themselves. There are two main types of messages in RSVP,
PATH and RESV. PATH messages are sent to unicast or multicast
addresses, while RESV messages are sent only to unicast addresses.
Other RSVP messages are handled similarly to either PATH or RESV (this
can be slightly more complicated for RERR messages). So ATM VCs used
for RSVP signalling messages need to provide both unicast and
multicast functionality. There are several different approaches for
how to assign VCs to use for RSVP signalling messages.

The main approaches are:
- use the same VC as the data
- single VC per session
- single point-to-multipoint VC multiplexed among sessions
- multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to
assign VCs for RSVP signalling. One issue is the number of additional
VCs needed for RSVP signalling. Related to this issue is the degree of
multiplexing on the RSVP VCs. In general, more multiplexing means
fewer VCs. An additional issue is the latency in dynamically setting
up new RSVP signalling VCs. A final issue is the complexity of
implementation. The remainder of this section discusses the issues and
tradeoffs among these different approaches and suggests guidelines for
when to use which alternative.

4.3.1 Mixed data and control traffic

In this scheme RSVP signalling messages are sent on the same VCs as
the data traffic. The main advantage of this scheme is that no
additional VCs are needed beyond what is needed for the data traffic.
An additional advantage is that there is no ATM signalling latency for
PATH messages (which follow the same routing as the data messages).
However, there can be a major problem when data traffic on a VC is
nonconforming. With nonconforming traffic, RSVP signalling messages
may be dropped. While RSVP is resilient to a moderate level of dropped
messages, excessive drops would lead to repeated tearing down and
re-establishing of QoS VCs, a very undesirable behavior for ATM.
Due to these problems, this may not be a good choice for carrying RSVP
signalling messages, even though the number of VCs needed for this
scheme is minimized. One variation of this scheme is to use the best
effort data path for signalling traffic. In this scheme, there is no
issue with nonconforming traffic, but there is an issue with
congestion in the ATM network. RSVP provides some resiliency to
message loss due to congestion, but RSVP control messages should be
offered a preferred class of service. A related variation of this
scheme that is promising but requires further study is to have a
packet scheduling algorithm (before entering the ATM network) that
gives priority to the RSVP signalling traffic. This can be difficult
to do at the IP layer.

4.3.1.1 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each RSVP
reservation. This scheme results in twice the number of VCs, but means
that RSVP signalling messages have the advantage of a separate VC.
This separate VC means that RSVP signalling messages have their own
traffic contract, and compliant signalling messages are not subject to
dropping due to other noncompliant traffic (such as can happen with
the scheme in section 4.3.1). The advantage of this scheme is its
simplicity - whenever a data VC is created, a separate RSVP signalling
VC is created. The disadvantage of the extra VC is that extra ATM
signalling needs to be done. Additionally, this scheme requires twice
the minimum number of VCs and adds latency, but is quite simple.

4.3.1.2 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP signalling
VC for each unique ingress router and unique set of egress routers.
This scheme allows multiplexing of RSVP signalling traffic that
shares the same ingress router and the same egress routers. The
multiplexing can save on the number of VCs, but there are
problems when the destinations of the multiplexed point-to-
multipoint VCs change. Several alternatives exist in these cases,
with applicability in different situations. First, when the
egress routers change, the ingress router can check whether it
already has a point-to-multipoint RSVP signalling VC for the new
list of egress routers. If such a VC already exists, the RSVP
signalling traffic can be switched to it. If no such VC exists,
one approach is to create a new VC with the new list of egress
routers. Other approaches include modifying the existing VC to
add an egress router, or using a separate new VC for the new
egress routers. When a destination drops out of a group, an
alternative is to keep sending on the existing VC even though
some traffic is wasted. The number of VCs used in this scheme is
a function of the traffic patterns across the ATM network, but it
is always less than the number used with a single RSVP VC per
data VC. In addition, existing best effort data VCs could be used
for RSVP signalling. Reusing best effort VCs saves on the number
of VCs at the cost of a higher probability of RSVP signalling
packet loss. One place where this scheme should work well is in
the core of the network, where there is the most opportunity to
take advantage of the savings due to multiplexing. The exact
savings depend on the traffic patterns and the topology of the
ATM network.

4.3.1.3 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are
used for a single point-to-multipoint data VC.
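A minimal, hypothetical sketch of this scheme: one shared
point-to-point signalling VC per (ingress, egress) router pair,
with each message for a point-to-multipoint data VC replicated
onto the VC of every egress. The class and method names are
illustrative only.

```python
# Sketch of the multiplexed point-to-point scheme.  'send' only
# records messages here; a real implementation would transmit on
# an ATM VC.

class P2PVC:
    def __init__(self):
        self.sent = []

    def send(self, msg):
        self.sent.append(msg)

class P2PSignalling:
    def __init__(self):
        self._vcs = {}  # (ingress, egress) -> P2PVC, shared by sessions

    def send_to_group(self, ingress, egresses, msg):
        # Cost of the scheme: the same traffic goes out on each of
        # several VCs.  Savings: all sessions between the same
        # router pair multiplex onto one VC.
        for egress in egresses:
            vc = self._vcs.setdefault((ingress, egress), P2PVC())
            vc.send(msg)

sig = P2PSignalling()
sig.send_to_group("R1", ["R2", "R3"], "PATH session-1")
sig.send_to_group("R1", ["R2", "R3"], "PATH session-2")  # VCs reused
```

The per-pair VC table is where the multiplexing savings come
from: adding a session between routers that already exchange
signalling traffic creates no new VC.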
This scheme allows multiplexing of RSVP signalling traffic, but
it requires the same traffic to be sent on each of several VCs.
The scheme is quite flexible and allows a large amount of
multiplexing.

Since point-to-point VCs can set up a reverse channel at the same
time as the forward channel, this scheme could save substantially
on signalling cost. In addition, signalling traffic could share
existing best effort VCs. Sharing existing best effort VCs
reduces the total number of VCs needed, but might cause
signalling traffic drops if there is congestion in the ATM
network. This point-to-point scheme would work well in the core
of the network, where there is much opportunity for multiplexing.
Also, in the core of the network, RSVP VCs can stay permanently
established, either as Permanent Virtual Circuits (PVCs) or as
long lived Switched Virtual Circuits (SVCs). The number of VCs in
this scheme depends on traffic patterns, but in the core of a
network it would be approximately n(n-1)/2, where n is the number
of IP nodes in the network. In the core of the network, this will
typically be small compared to the total number of VCs.

4.3.2 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP
signalling VCs. For the RSVP VC schemes other than mixed data and
control traffic, a QoS (possibly best effort) will be needed.
What QoS to use depends partly on the expected level of
multiplexing on the VCs and on the expected reliability of best
effort VCs. Since RSVP signalling is infrequent (typically every
30 seconds), only a relatively small QoS should be needed. This
is important, since requesting a larger QoS risks the VC setup
being rejected for lack of resources.
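A back-of-the-envelope calculation shows why a small QoS should
suffice: refresh-driven RSVP messages every 30 seconds amount to
very little sustained bandwidth, even on a heavily multiplexed
VC. The message size and session count below are assumptions for
illustration, not values from this framework.

```python
# Illustrative sustained-rate estimate for an RSVP signalling VC.

REFRESH_PERIOD_S = 30      # typical RSVP refresh interval (see text)
MSG_SIZE_BYTES = 1500      # assumed worst-case PATH/RESV message
SESSIONS_PER_VC = 100      # assumed multiplexing level on the VC

bps = SESSIONS_PER_VC * MSG_SIZE_BYTES * 8 / REFRESH_PERIOD_S
print(f"sustained signalling rate: {bps / 1000:.0f} kbit/s")
```

Under these assumptions, even 100 multiplexed sessions need only
about 40 kbit/s sustained, so a modest traffic contract should
rarely be the cause of a setup rejection.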
Falling back to best effort when a QoS call is rejected is
possible, but if the ATM network is congested, there will likely
be problems with RSVP packet loss on the best effort VC as well.
Additional experimentation is needed in this area.

5. Encapsulation

Since RSVP is a signalling protocol used to control flows of IP
data packets, encapsulation for both RSVP packets and associated
IP data packets must be defined. There are currently two
encapsulation options for running IP over ATM: RFC 1483 and LANE.
There is also the possibility of future encapsulation options,
such as MPOA [18]. The first option is described in RFC 1483 [19]
and is currently used for "Classical" IP over ATM and NHRP.

The second option is LAN Emulation, as described in [17]. LANE
encapsulation does not currently include a QoS signalling
interface. If LANE encapsulation is needed, LANE QoS signalling
would first need to be defined by the ATM Forum. It is possible
that LANE 2.0 will include the required QoS support.

6. Security Considerations

The same considerations stated in [1] and [11] apply to this
document. There are no additional security issues raised in this
document.

7. References

[1] R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin. Resource
    ReSerVation Protocol (RSVP) -- Version 1 Functional
    Specification. RFC 2205, September 1997.
[2] M. Borden, E. Crawley, B. Davie, S. Batsell. Integration of
    Real-time Services in an IP-ATM Network Architecture. RFC
    1821 (Informational), August 1995.
[3] R. Cole, D. Shur, C. Villamizar. IP over ATM: A Framework
    Document. RFC 1932 (Informational), April 1996.
[4] D. Katz, D. Piscitello, B. Cole, J. Luciani. NBMA Next Hop
    Resolution Protocol (NHRP). Internet Draft, draft-ietf-rolc-
    nhrp-12.txt, October 1997.
[5] G. Armitage. Support for Multicast over UNI 3.0/3.1 based ATM
    Networks. RFC 2022, November 1996.
[6] S. Shenker, C. Partridge. Specification of Guaranteed Quality
    of Service. RFC 2212, September 1997.
[7] J. Wroclawski. Specification of the Controlled-Load Network
    Element Service. RFC 2211, September 1997.
[8] ATM Forum. ATM User-Network Interface Specification Version
    3.0. Prentice Hall, September 1993.
[9] ATM Forum. ATM User-Network Interface (UNI) Specification
    Version 3.1. Prentice Hall, June 1995.
[10] M. Laubach. Classical IP and ARP over ATM. RFC 1577
    (Proposed Standard), January 1994.
[11] M. Perez, A. Mankin, E. Hoffman, G. Grossman, A. Malis. ATM
    Signalling Support for IP over ATM. RFC 1755 (Proposed
    Standard), February 1995.
[12] S. Herzog. RSVP Extensions for Policy Control. Internet
    Draft, draft-ietf-rsvp-policy-ext-02.txt, April 1997.
[13] S. Herzog. Local Policy Modules (LPM): Policy Control for
    RSVP. Internet Draft, draft-ietf-rsvp-policy-lpm-01.txt,
    November 1996.
[14] M. Borden, M. Garrett. Interoperation of Controlled-Load and
    Guaranteed Service with ATM. Internet Draft, draft-ietf-
    issll-atm-mapping-03.txt, August 1997.
[15] L. Berger. RSVP over ATM Implementation Requirements.
    Internet Draft, draft-ietf-issll-atm-imp-req-00.txt, July
    1997.
[16] L. Berger. RSVP over ATM Implementation Guidelines. Internet
    Draft, draft-ietf-issll-atm-imp-guide-01.txt, July 1997.
[17] ATM Forum Technical Committee. LAN Emulation over ATM,
    Version 1.0 Specification. af-lane-0021.000, January 1995.
[18] ATM Forum Technical Committee. Baseline Text for MPOA.
    af-95-0824r9, September 1996.
[19] J. Heinanen. Multiprotocol Encapsulation over ATM Adaptation
    Layer 5. RFC 1483, July 1993.
[20] ATM Forum Technical Committee. LAN Emulation over ATM
    Version 2 - LUNI Specification. December 1996.
[21] ATM Forum Technical Committee. Traffic Management
    Specification v4.0. af-tm-0056.000, April 1996.
[22] R. Callon, et al. A Framework for Multiprotocol Label
    Switching. Internet Draft, draft-ietf-mpls-framework-01.txt,
    July 1997.
[23] B. Rajagopalan, R. Nair, H. Sandick, E. Crawley. A Framework
    for QoS-based Routing in the Internet. Internet Draft,
    draft-ietf-qosr-framework-01.txt, July 1997.
[24] ITU-T. Digital Subscriber Signalling System No. 2-Connection
    modification: Peak cell rate modification by the connection
    owner. ITU-T Recommendation Q.2963.1, July 1996.
[25] ITU-T. Digital Subscriber Signalling System No. 2-Connection
    characteristics negotiation during call/connection
    establishment phase. ITU-T Recommendation Q.2962, July 1996.
[26] ATM Forum Technical Committee. Private Network-Network
    Interface Specification v1.0 (PNNI). March 1996.

8. Authors' Addresses

Eric S. Crawley
Argon Networks
25 Porter Road
Littleton, MA 01460
+1 508 486-0665
esc@argon-net.com

Lou Berger
FORE Systems
6905 Rockledge Drive
Suite 800
Bethesda, MD 20817
+1 301 571-2534
lberger@fore.com

Steven Berson
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
+1 310 822-1511
berson@isi.edu

Fred Baker
Cisco Systems
519 Lado Drive
Santa Barbara, CA 93111
+1 805 681-0115
fred@cisco.com

Marty Borden
New Oak Communications
42 Nanog Park
Acton, MA 01720
+1 508 266-1011
mborden@newoak.com

John J. Krawczyk
ArrowPoint Communications
235 Littleton Road
Westford, MA 01886
+1 508 692-5875
jj@arrowpoint.com