Internet Engineering Task Force                      E. Crawley, Editor
Internet Draft                                         (Argon Networks)
draft-ietf-issll-atm-framework-03.txt                         L. Berger
                                                         (Fore Systems)
                                                              S. Berson
                                                                  (ISI)
                                                               F. Baker
                                                        (Cisco Systems)
                                                              M. Borden
                                                         (Bay Networks)
                                                            J. Krawczyk
                                            (ArrowPoint Communications)

                                                          April 2, 1998

         A Framework for Integrated Services and RSVP over ATM

Status of this Memo

   This document is an Internet Draft.  Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups.  Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months.  Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress."

   To view the entire list of current Internet-Drafts, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern
   Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific
   Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast).

Abstract

   This document outlines the issues and framework related to providing
   IP Integrated Services with RSVP over ATM.  It provides an overall
   approach to the problem(s) and related issues.  These issues and
   problems are to be addressed in further documents from the ISATM
   subgroup of the ISSLL working group.
Editor's Note

   This document is the merger of two previous documents,
   draft-ietf-issll-atm-support-02.txt by Berger and Berson and
   draft-crawley-rsvp-over-atm-00.txt by Baker, Berson, Borden,
   Crawley, and Krawczyk.  The former document has been split into this
   document and a set of documents on RSVP over ATM implementation
   requirements and guidelines.

1. Introduction

   The Internet currently has one class of service normally referred to
   as "best effort."  This service is typified by first-come,
   first-serve scheduling at each hop in the network.  Best effort
   service has worked well for electronic mail, World Wide Web (WWW)
   access, file transfer (e.g. ftp), etc.  For real-time traffic such
   as voice and video, the current Internet has performed well only
   across unloaded portions of the network.  In order to provide
   quality real-time traffic, new classes of service and a QoS
   signalling protocol are being introduced in the Internet [1,6,7],
   while retaining the existing best effort service.  The QoS
   signalling protocol is RSVP [1], the Resource ReSerVation Protocol,
   and the service models are defined in [6,7].

   One of the important features of ATM technology is the ability to
   request a point-to-point Virtual Circuit (VC) with a specified
   Quality of Service (QoS).  An additional feature of ATM technology
   is the ability to request point-to-multipoint VCs with a specified
   QoS.  Point-to-multipoint VCs allow leaf nodes to be added and
   removed from the VC dynamically, and so provide a mechanism for
   supporting IP multicast.  It is only natural that RSVP and the
   Internet Integrated Services (IIS) model would like to utilize the
   QoS properties of any underlying link layer including ATM, and this
   draft concentrates on ATM.

   Classical IP over ATM [10] has solved part of this problem,
   supporting IP unicast best effort traffic over ATM.
   Classical IP over ATM is based on a Logical IP Subnetwork (LIS),
   which is a separately administered IP subnetwork.  Hosts within an
   LIS communicate using the ATM network, while hosts from different
   subnets communicate only by going through an IP router (even though
   it may be possible to open a direct VC between the two hosts over
   the ATM network).  Classical IP over ATM provides an Address
   Resolution Protocol (ATMARP) for ATM edge devices to resolve IP
   addresses to native ATM addresses.  For any pair of IP/ATM edge
   devices (i.e. hosts or routers), a single VC is created on demand
   and shared for all traffic between the two devices.  A second part
   of the RSVP and IIS over ATM problem, IP multicast, is being solved
   with MARS [5], the Multicast Address Resolution Server.

   MARS complements ATMARP by allowing an IP address to resolve into a
   list of native ATM addresses, rather than just a single address.

   The ATM Forum's LAN Emulation (LANE) [17, 20] and Multiprotocol
   Over ATM (MPOA) [18] also address the support of IP best effort
   traffic over ATM through similar means.

   A key remaining issue for IP in an ATM environment is the
   integration of RSVP signalling and ATM signalling in support of the
   Internet Integrated Services (IIS) model.  There are two main areas
   involved in supporting the IIS model, QoS translation and VC
   management.  QoS translation concerns mapping a QoS from the IIS
   model to a proper ATM QoS, while VC management concentrates on how
   many VCs are needed and which traffic flows are routed over which
   VCs.

1.1 Structure and Related Documents

   This document provides a guide to the issues for IIS over ATM.  It
   is intended to frame the problems that are to be addressed in
   further documents.  In this document, the modes and models for RSVP
   operation over ATM will be discussed, followed by a discussion of
   management of ATM VCs for RSVP data and control.
   Lastly, the topic of encapsulations will be discussed in relation to
   the models presented.

   This document is part of a group of documents from the ISATM
   subgroup of the ISSLL working group related to the operation of
   IntServ and RSVP over ATM.  [14] discusses the mapping of the
   IntServ models for Controlled Load and Guaranteed Service to ATM.
   [15] and [16] discuss detailed implementation requirements and
   guidelines for RSVP over ATM, respectively.  While these documents
   may not address all the issues raised in this document, they should
   provide enough information for development of solutions for IntServ
   and RSVP over ATM.

1.2 Terms

   Several terms used in this document are used in many contexts, often
   with different meanings.  These terms are used in this document with
   the following meanings:

   - Sender is used in this document to mean the ingress point to the
     ATM network or "cloud".

   - Receiver is used in this document to refer to the egress point
     from the ATM network or "cloud".

   - Reservation is used in this document to refer to an RSVP initiated
     request for resources.  RSVP initiates requests for resources
     based on RESV message processing.  RESV messages that simply
     refresh state do not trigger resource requests.  Resource requests
     may be made based on RSVP sessions and RSVP reservation styles.
     RSVP styles dictate whether the reserved resources are used by one
     sender or shared by multiple senders.  See [1] for details of
     each.  Each new request is referred to in this document as an RSVP
     reservation, or simply reservation.

   - Flow is used to refer to the data traffic associated with a
     particular reservation.  The specific meaning of flow is RSVP
     style dependent.  For shared style reservations, there is one flow
     per session.  For distinct style reservations, there is one flow
     per sender (per session).

2. Issues Regarding the Operation of RSVP and IntServ over ATM

   The issues related to RSVP and IntServ over ATM fall into several
   general classes:

   - How to make RSVP run over ATM now and in the future
   - When to set up a virtual circuit (VC) for a specific Quality of
     Service (QoS) related to RSVP
   - How to map the IntServ models to ATM QoS models
   - How to know that an ATM network is providing the QoS necessary
     for a flow
   - How to handle the many-to-many connectionless features of IP
     multicast and RSVP in the one-to-many connection-oriented world
     of ATM

2.1 Modes/Models for RSVP and IntServ over ATM

   [3] discusses several different models for running IP over ATM
   networks.  [17, 18, and 20] also provide models for IP in ATM
   environments.  Any one of these models would work as long as the
   RSVP control packets (IP protocol 46) and data packets can follow
   the same IP path through the network.  It is important that the RSVP
   PATH messages follow the same IP path as the data such that
   appropriate PATH state may be installed in the routers along the
   path.  For an ATM subnetwork, this means the ingress and egress
   points must be the same in both directions for the RSVP control and
   data messages.  Note that the RSVP protocol does not require
   symmetric routing.  The PATH state installed by RSVP allows the RESV
   messages to "retrace" the hops that the PATH message crossed.
   Within each of the models for IP over ATM, there are decisions about
   using different types of data distribution in ATM as well as
   different connection initiation.  The following sections look at
   some of the different ways QoS connections can be set up for RSVP.
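   The PATH-state "retrace" behavior described above can be sketched in
   a few lines of Python.  This is purely an illustrative model, not an
   implementation; the class and method names (Router, receive_path,
   receive_resv) are invented for the example.

```python
# Illustrative sketch of RSVP PATH-state "retrace": each router records
# the previous hop a PATH message arrived from, and a RESV then walks
# those recorded hops back toward the sender.  Invented names throughout.

class Router:
    def __init__(self, name):
        self.name = name
        self.path_state = {}  # session -> previous hop (router or sender)

    def receive_path(self, session, previous_hop):
        # Install PATH state so a later RESV can retrace this hop.
        self.path_state[session] = previous_hop

    def receive_resv(self, session):
        # Forward the RESV toward the hop the PATH message came from.
        return self.path_state.get(session)

# PATH flows Src -> A -> B -> receiver; RESV retraces B -> A -> Src.
a, b = Router("A"), Router("B")
a.receive_path("s1", "Src")
b.receive_path("s1", a)

hop = b.receive_resv("s1")      # RESV arrives at B from the receiver
hops = [b.name]
while isinstance(hop, Router):
    hops.append(hop.name)
    hop = hop.receive_resv("s1")
hops.append(hop)                # finally the subnet sender / ingress
print(hops)  # ['B', 'A', 'Src']
```

   The point of the sketch is the symmetry requirement stated above:
   the RESV can only retrace hops for which PATH state was installed,
   which is why the ATM ingress and egress points must be the same in
   both directions.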
2.1.1 UNI 3.x and 4.0

   In the User Network Interface (UNI) 3.0 and 3.1 specifications [8,9]
   and 4.0 specification, both permanent and switched virtual circuits
   (PVCs and SVCs) may be established with a specified service category
   (CBR, VBR, and UBR for UNI 3.x, with VBR-rt and ABR added in 4.0)
   and specific traffic descriptors in point-to-point and
   point-to-multipoint configurations.  Additional QoS parameters are
   not available in UNI 3.x, and those that are available are
   vendor-specific.  Consequently, the level of QoS control available
   in standard UNI 3.x networks is somewhat limited.  However, using
   these building blocks, it is possible to use RSVP and the IntServ
   models.  ATM 4.0 with the Traffic Management (TM) 4.0 specification
   [21] allows much greater control of QoS.  [14] provides the details
   of mapping the IntServ models to UNI 3.x and 4.0 service categories
   and traffic parameters.

2.1.1.1 Permanent Virtual Circuits (PVCs)

   PVCs emulate dedicated point-to-point lines in a network, so the
   operation of RSVP can be identical to the operation over any
   point-to-point network.  The QoS of the PVC must be consistent with
   and equivalent to the type of traffic and service model used.  The
   devices on either end of the PVC have to provide traffic control
   services in order to multiplex multiple flows over the same PVC.
   With PVCs, there is no issue of when or how long it takes to set up
   VCs, since they are made in advance, but the resources of the PVC
   are limited to what has been pre-allocated.  PVCs that are not fully
   utilized can tie up ATM network resources that could be used for
   SVCs.

   An additional issue for using PVCs is one of network engineering.
   Frequently, multiple PVCs are set up such that if all the PVCs were
   running at full capacity, the link would be over-subscribed.
   This frequently used "statistical multiplexing gain" makes providing
   IIS over PVCs very difficult and unreliable.  Any application of IIS
   over PVCs has to be assured that the PVCs are able to receive all of
   the requested QoS.

2.1.1.2 Switched Virtual Circuits (SVCs)

   SVCs allow paths in the ATM network to be set up "on demand".  This
   allows flexibility in the use of RSVP over ATM, along with some
   complexity.  Parallel VCs can be set up to allow best-effort and
   better service class paths through the network, as shown in Figure
   1.  The cost and time to set up SVCs can impact their use.  For
   example, it may be better to initially route QoS traffic over
   existing VCs until an SVC with the desired QoS can be set up for the
   flow.  Scaling issues can come into play if a single RSVP flow is
   used per VC, as will be discussed in Section 4.3.1.1.  The number of
   VCs in any ATM device may also be limited, so the number of RSVP
   flows that can be supported by a device can be strictly limited to
   the number of VCs available, if we assume one flow per VC.  Section
   4 discusses the topic of VC management for RSVP in greater detail.

      Data Flow ==========>

          +-----+
          |     | -------------->  +----+
          | Src | -------------->  | R1 |
          |    *| -------------->  +----+
          +-----+     QoS VCs
             /\
             ||
             VC
         Initiator

          Figure 1: Data Flow VC Initiation

   While RSVP is receiver oriented, ATM is sender oriented.  This might
   seem like a problem, but the sender or ingress point receives RSVP
   RESV messages and can determine whether a new VC has to be set up to
   the destination or egress point.

2.1.1.3 Point to MultiPoint

   In order to provide QoS for IP multicast, an important feature of
   RSVP, data flows must be distributed to multiple destinations from a
   given source.  Point-to-multipoint VCs provide such a mechanism.  It
   is important to map the actions of IP multicasting and RSVP (e.g.
   IGMP JOIN/LEAVE and RSVP RESV/RESV TEAR) to the add party and drop
   party functions of ATM.  Point-to-multipoint VCs as defined in UNI
   3.x and UNI 4.0 have a single service class for all destinations.
   This is contrary to the RSVP "heterogeneous receiver" concept.  It
   is possible to set up a different VC to each receiver requesting a
   different QoS, as shown in Figure 2.  This again can run into
   scaling and resource problems when managing multiple VCs on the same
   interface to different destinations.

                             +----+
                  +------>   | R1 |
                  |          +----+
                  |
                  |          +----+
      +-----+ ----+  +-->    | R2 |
      |     | -------+       +----+     Receiver Request Types:
      | Src |                           ----> QoS 1 and QoS 2
      |     | .........+     +----+     ....> Best-Effort
      +-----+ .....+   +..>  | R3 |
         /\        :         +----+
         ||        :
         ||        :         +----+
         ||        +......>  | R4 |
         ||                  +----+
       Single
     IP Multicast
        Group

          Figure 2: Types of Multicast Receivers

   RSVP sends messages both up and down the multicast distribution
   tree.  In the case of a large ATM cloud, this could result in an
   RSVP message implosion at an ATM ingress point with many receivers.

   ATM 4.0 expands on the point-to-multipoint VCs by adding a Leaf
   Initiated Join (LIJ) capability.  LIJ allows an ATM end point to
   join an existing point-to-multipoint VC without necessarily
   contacting the source of the VC.  This can reduce the burden on the
   ATM source point for setting up new branches and more closely
   matches the receiver-based model of RSVP and IP multicast.  However,
   many of the same scaling issues exist, and the new branches added to
   a point-to-multipoint VC must use the same QoS as the existing
   branches.

2.1.1.4 Multicast Servers

   IP-over-ATM has the concept of a multicast server or reflector that
   can accept cells from multiple senders and send them via a
   point-to-multipoint VC to a set of receivers.
   This moves the VC scaling issues noted previously for
   point-to-multipoint VCs to the multicast server.  Additionally, the
   multicast server will need to know how to interpret RSVP packets, or
   receive instruction from another node, so it will be able to provide
   VCs of the appropriate QoS for the RSVP flows.

2.1.2 Hop-by-Hop vs. Short Cut

   If the ATM "cloud" is made up of a number of Logical IP Subnets
   (LISs), then it is possible to use "short cuts" from a node on one
   LIS directly to a node on another LIS, avoiding router hops between
   the LISs.  NHRP [4] is one mechanism for determining the ATM address
   of the egress point on the ATM network given a destination IP
   address.  It is a topic for further study to determine whether
   significant benefit is achieved from short cut routes vs. the extra
   state required.

2.1.3 Future Models

   ATM is constantly evolving.  If we assume that RSVP and IntServ
   applications are going to be widespread, it makes sense to consider
   changes to ATM that would improve the operation of RSVP and IntServ
   over ATM.  Similarly, the RSVP protocol and IntServ models will
   continue to evolve, and changes that affect them should also be
   considered.  The following are a few ideas that have been discussed
   that would make the integration of the IntServ models and RSVP
   easier or more complete.  They are presented here to encourage
   continued development and discussion of ideas that can help aid in
   the integration of RSVP, IntServ, and ATM.

2.1.3.1 Heterogeneous Point-to-MultiPoint

   The IntServ models and RSVP support the idea of "heterogeneous
   receivers"; i.e., not all receivers of a particular multicast flow
   are required to ask for the same QoS from the network, as shown in
   Figure 2.
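   The per-QoS VC arrangement of Figure 2 amounts to partitioning the
   receivers of one session by their requested service level.  The
   following Python fragment is a minimal sketch of that bookkeeping
   (the function name plan_vcs and the QoS labels are invented for the
   example; a real implementation would drive ATM signalling, not build
   a dictionary):

```python
# Hypothetical sketch: partition the receivers of one multicast session
# into per-QoS point-to-multipoint VCs, as in Figure 2.  Best-effort
# receivers are simply grouped onto their own (unreserved) distribution.

from collections import defaultdict

def plan_vcs(receiver_requests):
    """receiver_requests: dict receiver -> requested QoS label.
    Returns dict QoS label -> list of VC leaf nodes."""
    vcs = defaultdict(list)
    for receiver, qos in sorted(receiver_requests.items()):
        vcs[qos].append(receiver)
    return dict(vcs)

requests = {"R1": "qos1", "R2": "qos1",
            "R3": "best-effort", "R4": "best-effort"}
print(plan_vcs(requests))
# {'qos1': ['R1', 'R2'], 'best-effort': ['R3', 'R4']}
```

   The scaling concern noted above falls out directly: the number of
   VCs grows with the number of distinct QoS levels requested, each
   rooted at the same sender interface.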
   The most important scenario that can utilize this feature occurs
   when some receivers in an RSVP session ask for a specific QoS while
   others receive the flow with best-effort service.  In some cases
   where there are multiple senders on a shared-reservation flow (e.g.,
   an audio conference), an individual receiver only needs to reserve
   enough resources to receive one sender at a time.  However, other
   receivers may elect to reserve more resources, perhaps to allow for
   some amount of "over-speaking" or in order to record the conference
   (post processing during playback can separate the senders by their
   source addresses).

   In order to prevent denial-of-service attacks via reservations, the
   service models do not allow the service elements to simply drop
   non-conforming packets.  For example, the Controlled Load service
   model [7] assigns non-conformant packets to best-effort status
   (which may result in packet drops if there is congestion).

   Emulating these behaviors over an ATM network is problematic and
   needs to be studied.  If a single maximum QoS is used over a
   point-to-multipoint VC, resources could be wasted if cells are sent
   over certain links where the reassembled packets will eventually be
   dropped.  In addition, the "maximum QoS" may actually cause a
   degradation in service to the best-effort branches.

   The term "variegated VC" has been coined to describe a
   point-to-multipoint VC that allows a different QoS on each branch.
   This approach seems to match the spirit of the Integrated Services
   and RSVP models, but some thought has to be put into the cell drop
   strategy when traversing from a "bigger" branch to a "smaller" one.
   The "best-effort for non-conforming packets" behavior must also be
   retained.  Early
Early 374 Packet Discard (EPD) schemes must be used so that all the cells for a 375 given packet can be discarded at the same time rather than discarding 376 only a few cells from several packets making all the packets useless to 377 the receivers. 379 2.1.3.2 Lightweight Signalling 381 Q.2931 signalling is very complete and carries with it a significant 382 burden for signalling in all possible public and private connections. 383 It might be worth investigating a lighter weight signalling mechanism 384 for faster connection setup in private networks. 386 2.1.3.3 QoS Renegotiation 388 Another change that would help RSVP over ATM is the ability to request 389 a different QoS for an active VC. This would eliminate the need to 390 setup and tear down VCs as the QoS changed. RSVP allows receivers to 391 change their reservations and senders to change their traffic 392 descriptors dynamically. This, along with the merging of reservations, 393 can create a situation where the QoS needs of a VC can change. 394 Allowing changes to the QoS of an existing VC would allow these 395 features to work without creating a new VC. In the ITU-T ATM 396 specifications [24,25], some cell rates can be renegotiated or changed. 397 Specifically, the Peak Cell Rate (PCR) of an existing VC can be changed 398 and, in some cases, QoS parameters may be renegotiated during the call 399 setup phase. It is unclear if this is sufficient for the QoS 400 renegotiation needs of the IntServ models. 402 2.1.3.4 Group Addressing 404 The model of one-to-many communications provided by point-to-multipoint 405 VCs does not really match the many-to-many communications provided by 406 IP multicasting. A scaleable mapping from IP multicast addresses to an 407 ATM "group address" can address this problem. 
2.1.3.5 Label Switching

   The Multiprotocol Label Switching (MPLS) working group is discussing
   methods for optimizing the use of ATM and other switched networks
   for IP by encapsulating the data with a header that is used by the
   interior switches to achieve faster forwarding lookups.  [22]
   discusses a framework for this work.  It is unclear how this work
   will affect IntServ and RSVP over label switched networks, but there
   may be some interactions.

2.1.4 QoS Routing

   RSVP is explicitly not a routing protocol.  However, since it
   conveys QoS information, it may prove to be a valuable input to a
   routing protocol that can make path determinations based on QoS and
   network load information.  In other words, instead of asking for
   just the IP next hop for a given destination address, it might be
   worthwhile for RSVP to provide information on the QoS needs of the
   flow if routing has the ability to use this information in order to
   determine a route.  Other forms of QoS routing have existed in the
   past, such as using the IP TOS and Precedence bits to select a path
   through the network.  Some have discussed using these same bits to
   select one of a set of parallel ATM VCs as a form of QoS routing.
   ATM routing has also considered the problem of QoS routing through
   the Private Network-to-Network Interface (PNNI) [26] routing
   protocol for routing ATM VCs on a path that can support their needs.
   The work in this area is just starting, and there are numerous
   issues to consider.  [23], as part of the work of the QoSR working
   group, frames the issues for QoS Routing in the Internet.

2.2 Reliance on Unicast and Multicast Routing

   RSVP was designed to support both unicast and IP multicast
   applications.  This means that RSVP needs to work closely with
   multicast and unicast routing.  Unicast routing over ATM has been
   addressed in [10] and [11].
   MARS [5] provides multicast address resolution for IP over ATM
   networks, an important part of the solution for multicast, but it
   still relies on multicast routing protocols to connect multicast
   senders and receivers on different subnets.

2.3 Aggregation of Flows

   Some of the scaling issues noted in previous sections can be
   addressed by aggregating several RSVP flows over a single VC if the
   destinations of the VC match for all of the flows being aggregated.
   However, this causes considerable complexity in the management of
   VCs and in the scheduling of packets within each VC at the root
   point of the VC.  Note that the rescheduling of flows within a VC is
   not possible in the switches in the core of the ATM network.
   Virtual Paths (VPs) can be used for aggregating multiple VCs.  This
   topic is discussed in greater detail as it applies to multicast data
   distribution in Section 4.2.3.4.

2.4 Mapping QoS Parameters

   The mapping of QoS parameters from the IntServ models to the ATM
   service classes is an important issue in making RSVP and IntServ
   work over ATM.  [14] addresses these issues very completely for the
   Controlled Load and Guaranteed Service models.  An additional issue
   is that while some guidelines can be developed for mapping the
   parameters of a given service model to the traffic descriptors of an
   ATM traffic class, implementation variables, policy, and cost
   factors can make strict mapping problematic.  So, a set of workable
   mappings that can be applied to different network requirements and
   scenarios is needed, as long as the mappings can satisfy the needs
   of the service model(s).

2.5 Directly Connected ATM Hosts

   It is obvious that the needs of hosts that are directly connected to
   ATM networks must be considered for RSVP and IntServ over ATM.
   Functionality for RSVP over ATM must not assume that an ATM host has
   all the functionality of a router, but such things as MARS and NHRP
   clients would be worthwhile features.  A host must manage VCs just
   like any other ATM sender or receiver, as described later in
   Section 4.

2.6 Accounting and Policy Issues

   Since RSVP and IntServ create classes of preferential service, some
   form of administrative control and/or cost allocation is needed to
   control access.  There are certain types of policies specific to ATM
   and IP over ATM that need to be studied to determine how they
   interoperate with the IP and IntServ policies being developed.  A
   typical IP policy would be that only certain users are allowed to
   make reservations.  This policy would translate well to IP over ATM
   due to the similarity to the mechanisms used for Call Admission
   Control (CAC).  There may be a need for policies specific to IP over
   ATM.  For example, since signalling costs in ATM are high relative
   to IP, an IP over ATM specific policy might restrict the ability to
   change the prevailing QoS in a VC.  If VCs are relatively scarce,
   there also might be specific accounting costs in creating a new VC.
   The work so far has been preliminary, and much work remains to be
   done.  The policy mechanisms outlined in [12] and [13] provide the
   basic mechanisms for implementing policies for RSVP and IntServ over
   any media, not just ATM.

3. Framework for IntServ and RSVP over ATM

   Now that we have defined some of the issues for IntServ and RSVP
   over ATM, we can formulate a framework for solutions.  The problem
   breaks down into two very distinct areas: the mapping of IntServ
   models to ATM service categories and QoS parameters, and the
   operation of RSVP over ATM.
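   As an illustration of the first area, one plausible (non-normative)
   candidate table is sketched below: each IntServ model lists more
   than one ATM service category that could carry it, and a local
   policy picks among those the network actually supports.  The
   specific candidates shown are an assumption for the example; the
   actual mappings are the subject of [14].

```python
# A plausible example mapping from the two IntServ models to candidate
# ATM service categories, showing that each model has more than one
# workable category.  This table is for discussion only, not the
# normative mapping defined in [14]; selection is a local policy matter.

CANDIDATE_CATEGORIES = {
    "guaranteed":      ["CBR", "rtVBR"],
    "controlled-load": ["nrtVBR", "ABR"],
}

def pick_category(intserv_model, supported):
    """Pick the first candidate ATM category the network supports."""
    for category in CANDIDATE_CATEGORIES[intserv_model]:
        if category in supported:
            return category
    return "UBR"  # fall back to best-effort delivery

print(pick_category("guaranteed", {"rtVBR", "nrtVBR", "UBR"}))  # rtVBR
```

   The point is the flexibility claimed in the text: the same service
   model can be deployed over different categories depending on what a
   given ATM network offers.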
Mapping IntServ models to ATM service categories and QoS parameters is a matter of determining which categories can support the goals of the service models and matching up the parameters and variables between the IntServ description and the ATM description(s). Since ATM has such a wide variety of service categories and parameters, more than one ATM service category should be able to support each of the two IntServ models. This will provide a good deal of flexibility in configuration and deployment. [14] examines this topic completely.

The operation of RSVP over ATM requires careful management of VCs in order to match the dynamics of the RSVP protocol. VCs need to be managed for both the RSVP QoS data and the RSVP signalling messages. The remainder of this document discusses several approaches to managing VCs for RSVP, and [15] and [16] discuss their application to implementations in terms of interoperability requirements and implementation guidelines.

4. RSVP VC Management

This section provides more detail on the issues related to the management of SVCs for RSVP and IntServ.

4.1 VC Initiation

As discussed in section 2.1.1.2, there is an apparent mismatch between RSVP and ATM. Specifically, RSVP control is receiver oriented while ATM control is sender oriented. This initially may seem like a major issue, but really is not. While RSVP reservation (RESV) requests are generated at the receiver, actual allocation of resources takes place at the subnet sender. For data flows, this means that subnet senders will establish all QoS VCs and the subnet receiver must be able to accept incoming QoS VCs, as illustrated in Figure 1. These restrictions are consistent with RSVP version 1 processing rules and allow senders to use different flow-to-VC mappings and even different QoS renegotiation techniques without interoperability problems.
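The sender-initiated pattern above can be sketched as follows. This is a minimal sketch under stated assumptions: the class and method names are hypothetical, and `setup_vc` is a stub standing in for real ATM UNI signalling:

```python
# Minimal sketch of sender-oriented VC initiation for RSVP over ATM.
# Names are hypothetical; setup_vc() stands in for ATM UNI signalling.

class SubnetSender:
    def __init__(self, setup_vc):
        self.setup_vc = setup_vc   # callback performing ATM signalling
        self.qos_vcs = {}          # (session, next_hop) -> VC handle

    def on_resv(self, session, next_hop, flowspec):
        """A RESV arrives from a receiver; the *sender* opens the VC."""
        key = (session, next_hop)
        if key not in self.qos_vcs:
            self.qos_vcs[key] = self.setup_vc(next_hop, flowspec)
        return self.qos_vcs[key]

class SubnetReceiver:
    def on_incoming_vc(self, vc):
        """Receivers must simply be able to accept incoming QoS VCs."""
        return True
```

The asymmetry is the whole point: the reservation request travels upstream, but the ATM signalling it triggers is initiated downstream-facing, by the subnet sender.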
The use by receivers of the reverse path provided by point-to-point VCs is for further study. There are two related issues. The first is that use of the reverse path requires the VC initiator to set appropriate reverse path QoS parameters. The second is that reverse paths are not available with point-to-multipoint VCs, so reverse paths could only be used to support unicast RSVP reservations.

4.2 Data VC Management

Any RSVP over ATM implementation must map RSVP and RSVP associated data flows to ATM Virtual Circuits (VCs). LAN Emulation [17], Classical IP [10] and, more recently, NHRP [4] discuss mapping IP traffic onto ATM SVCs, but they only cover a single QoS class, i.e., best effort traffic. When QoS is introduced, VC mapping must be revisited. For RSVP controlled QoS flows, one issue is which VCs to use for QoS data flows.

In the Classical IP over ATM and current NHRP models, a single point-to-point VC is used for all traffic between two ATM attached hosts (routers and end-stations). It is likely that such a single VC will not be adequate or optimal when supporting data flows with multiple QoS types. RSVP's basic purpose is to install support for flows with multiple QoS types, so it is essential for any RSVP over ATM solution to address VC usage for QoS data flows, as shown in Figure 1.

RSVP reservation styles must also be taken into account in any VC usage strategy.

This section describes issues and methods for management of VCs associated with QoS data flows. When establishing and maintaining VCs, the subnet sender will need to deal with several complicating factors, including multiple QoS reservations, requests for QoS changes, ATM short-cuts, and several multicast specific issues. The multicast specific issues result from the nature of ATM connections.
The key multicast related issues are heterogeneity, data distribution, receiver transitions, and end-point identification.

4.2.1 Reservation to VC Mapping

There are various approaches available for mapping reservations onto VCs. A distinguishing attribute of all approaches is how reservations are combined onto individual VCs. When mapping reservations onto VCs, individual VCs can be used to support a single reservation, or a reservation can be combined with others onto "aggregate" VCs. In the first case, each reservation will be supported by one or more VCs. Multicast reservation requests may translate into the setup of multiple VCs, as described in more detail in section 4.2.2. Unicast reservation requests will always translate into the setup of a single QoS VC. In both cases, each VC will only carry data associated with a single reservation. The greatest benefit of this approach is ease of implementation, but it comes at the cost of increased (VC) setup time and the consumption of a greater number of VCs and associated resources.

When multiple reservations are combined onto a single VC, it is referred to as the "aggregation" model. With this model, large VCs could be set up between IP routers and hosts in an ATM network. These VCs could be managed much like IP Integrated Service (IIS) point-to-point links (e.g. T-1, DS-3) are managed now. Traffic from multiple sources over multiple RSVP sessions might be multiplexed on the same VC. This approach has a number of advantages. First, there is typically no signalling latency, as VCs would be in existence when the traffic started flowing, so no time is wasted in setting up VCs. Second, the heterogeneity problem (section 4.2.2) over ATM is reduced to a solved problem. Finally, the dynamic QoS problem (section 4.2.7) for ATM is also reduced to a solved problem.
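The two mapping strategies can be contrasted with a small sketch. All names are hypothetical, `new_vc` is a stub for ATM signalling, and sizing the aggregate VC as the sum of its reservations is an assumption of this example (the text below notes that choosing the aggregate QoS is the hard part):

```python
# Sketch contrasting per-reservation VCs with the aggregation model.
# Names are hypothetical; new_vc() stands in for ATM signalling, and
# summing reservations to size an aggregate VC is an assumption.

def map_per_reservation(reservations, new_vc):
    """One dedicated VC per reservation: simple, but one VC and one
    signalling exchange for every reservation."""
    return {r["id"]: new_vc(r["qos"]) for r in reservations}

def map_aggregated(reservations, new_vc):
    """Aggregation model: one large VC per (ingress, egress) pair,
    here sized for the sum of the reservations multiplexed onto it."""
    sizes = {}
    for r in reservations:
        pair = (r["ingress"], r["egress"])
        sizes[pair] = sizes.get(pair, 0) + r["qos"]
    return {pair: new_vc(qos) for pair, qos in sizes.items()}
```

With two reservations between the same pair of routers, the first strategy opens two VCs while the second opens one larger VC, which is exactly the trade-off between setup cost and sizing difficulty discussed here.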
The aggregation model can be used with point-to-point and point-to-multipoint VCs. The problem with the aggregation model is that the choice of what QoS to use for the VCs may be difficult without knowledge of the likely reservation types and sizes, but it is made easier since the VCs can be changed as needed.

4.2.2 Unicast Data VC Management

Unicast data VC management is much simpler than multicast data VC management, but there are still some similar issues. If one considers unicast to be a degenerate case of multicast, then implementing the multicast solutions will cover unicast as well. However, some may want to consider unicast-only implementations. In these situations, the choice between using a single flow per VC and aggregating flows onto a single VC remains, but the problem of heterogeneity discussed in the following section is removed.

4.2.3 Multicast Heterogeneity

As mentioned in section 2.1.3.1 and shown in Figure 2, multicast heterogeneity occurs when receivers request different qualities of service within a single session. This means that the amount of requested resources differs on a per next hop basis. A related type of heterogeneity occurs due to best-effort receivers. In any IP multicast group, it is possible that some receivers will request QoS (via RSVP) and some receivers will not. In shared media networks, like Ethernet, receivers that have not requested resources can typically be given service identical to those that have without complications. This is not the case with ATM. In ATM networks, any additional end-points of a VC must be explicitly added. There may be costs associated with adding a best-effort receiver, and there might not be adequate resources. An RSVP over ATM solution will need to support heterogeneous receivers even though ATM does not currently provide such support directly.
RSVP heterogeneity is supported over ATM through the way RSVP reservations are mapped into ATM VCs. There are four alternative approaches to this mapping. Section 4.2.3.1 examines the multiple VCs per RSVP reservation (or "full heterogeneity") model, where a single reservation can be forwarded onto several VCs, each with a different QoS. Section 4.2.3.2 presents a limited heterogeneity model, where exactly one QoS VC is used along with a best effort VC. Section 4.2.3.3 examines the VC per RSVP reservation (or "homogeneous") model, where each RSVP reservation is mapped to a single ATM VC. Section 4.2.3.4 describes the aggregation model, which allows aggregation of multiple RSVP reservations into a single VC.

4.2.3.1 Full Heterogeneity Model

RSVP supports heterogeneous QoS, meaning that different receivers of the same multicast group can request a different QoS. Importantly, some receivers might have no reservation at all and want to receive the traffic on a best effort service basis. The IP model allows receivers to join a multicast group at any time on a best effort basis, and it is important that ATM, as part of the Internet, continue to provide this service. We define the "full heterogeneity" model as providing a separate VC for each distinct QoS for a multicast session, including best effort and one or more qualities of service.

Note that while full heterogeneity gives users exactly what they request, it requires more resources of the network than other possible approaches. The exact amount of bandwidth used for duplicate traffic depends on the network topology and group membership.
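Under full heterogeneity, a sender effectively maintains one point-to-multipoint VC per distinct QoS level in the session. A minimal sketch, with hypothetical names and VCs modeled simply as receiver sets:

```python
# Sketch of the full heterogeneity model: one point-to-multipoint VC
# per distinct QoS level, including best effort (modeled as qos = 0).
# Names are hypothetical; each VC is modeled as a set of receivers.

class FullHeterogeneitySession:
    def __init__(self):
        self.vcs = {}   # qos level -> set of receivers on that VC

    def add_receiver(self, receiver, qos=0):
        """Add a receiver to the VC matching its requested QoS,
        creating that VC on first use. qos=0 means best effort."""
        self.vcs.setdefault(qos, set()).add(receiver)

    def copies_per_packet(self):
        """Each packet is replicated once per active QoS level --
        the extra resource cost this model pays."""
        return len(self.vcs)
```

A session with one best-effort receiver and two receivers sharing a single reserved QoS carries two copies of every packet; each further distinct QoS adds another copy, which is the resource cost noted above.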
4.2.3.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the receivers of a multicast session are limited to using either best effort service or a single alternate quality of service. The alternate QoS can be chosen either by higher level protocols or by dynamic renegotiation of QoS as described below.

In order to support limited heterogeneity, each ATM edge device participating in a session would need at most two VCs. One VC would be a point-to-multipoint best effort service VC serving all best effort service IP destinations for the RSVP session. The other VC would be a point-to-multipoint QoS VC serving all IP destinations for the RSVP session that have an established RSVP reservation.

As with full heterogeneity, a disadvantage of the limited heterogeneity scheme is that each packet will need to be duplicated at the network layer, with one copy sent into each of the two VCs. Again, the exact amount of excess traffic will depend on the network topology and group membership. In addition, if any of the existing QoS VC end-points cannot upgrade to a new QoS, then the new reservation fails even though the resources exist for the new receiver.

4.2.3.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a multicast session use a single quality of service VC. Best-effort receivers also use the single RSVP triggered QoS VC. The single VC can be point-to-point or point-to-multipoint as appropriate. The QoS VC is sized to provide the maximum resources requested by all RSVP next-hops.

This model matches the way the current RSVP specification addresses heterogeneous requests.
The current processing rules and traffic control interface describe a model where the largest requested reservation for a specific outgoing interface is used in resource allocation, and traffic is transmitted at the higher rate to all next-hops. This approach would be the simplest method for RSVP over ATM implementations.

While this approach is simple to implement, providing better than best-effort service may actually be the opposite of what the user desires. There may be charges incurred or resources that are wrongfully allocated. There are two specific problems. The first is that a user making a small or no reservation would share the QoS VC's resources without making (and perhaps paying for) an RSVP reservation. The second is that a receiver may not receive any data. This may occur when there are insufficient resources to add a receiver. The rejected user would not be added to the single VC and would not even receive traffic on a best effort basis.

Not sending data traffic to best-effort receivers because of another receiver's RSVP request is clearly unacceptable. The previously described limited heterogeneity model ensures that data is always sent to both QoS and best-effort receivers, but it does so by requiring replication of data at the sender in all cases. It is possible to extend the homogeneous model both to ensure that data is always sent to best-effort receivers and to avoid replication in the normal case. The extension is to add special handling for the case where a best-effort receiver cannot be added to the QoS VC. In this case, a best effort VC can be established to any receivers that could not be added to the QoS VC. Only in this special error case would senders be required to replicate data. We define this approach as the "modified homogeneous" model.
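The modified homogeneous fallback can be sketched as follows. Names are hypothetical; `add_party` stands in for the ATM ADD PARTY operation on the single QoS VC and returns False when a receiver cannot be added:

```python
# Sketch of the "modified homogeneous" model: every receiver is first
# offered the single QoS VC; only receivers that cannot be added fall
# back to a best effort VC. Names are hypothetical; add_party() stands
# in for the ATM ADD PARTY operation and returns False on failure.

def attach_receivers(receivers, add_party):
    qos_vc, best_effort_vc = set(), set()
    for r in receivers:
        if add_party(r):       # try the single QoS VC first
            qos_vc.add(r)
        else:                  # special error case: fall back
            best_effort_vc.add(r)
    # Sender-side replication is only needed when the fallback
    # best-effort VC is non-empty.
    return qos_vc, best_effort_vc
```

In the normal case the best-effort set stays empty and no replication occurs; only when an add fails does the sender pay the duplication cost that limited heterogeneity pays unconditionally.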
4.2.3.4 Aggregation

The last scheme is the multiple RSVP reservations per VC (or aggregation) model. With this model, large VCs could be set up between IP routers and hosts in an ATM network. These VCs could be managed much like IP Integrated Service (IIS) point-to-point links (e.g. T-1, DS-3) are managed now. Traffic from multiple sources over multiple RSVP sessions might be multiplexed on the same VC. This approach has a number of advantages. First, there is typically no signalling latency, as VCs would be in existence when the traffic started flowing, so no time is wasted in setting up VCs. Second, the heterogeneity problem over ATM is reduced to a solved problem. Finally, the dynamic QoS problem for ATM is also reduced to a solved problem. This approach can be used with point-to-point and point-to-multipoint VCs. The problem with the aggregation approach is that the choice of what QoS to use for which of the VCs is difficult, but it is made easier if the VCs can be changed as needed.

4.2.4 Multicast End-Point Identification

Implementations must be able to identify ATM end-points participating in an IP multicast group. The ATM end-points will be IP multicast receivers and/or next-hops. Both QoS and best-effort end-points must be identified. RSVP next-hop information will provide QoS end-points, but not best-effort end-points. Another issue is identifying end-points of multicast traffic handled by non-RSVP capable next-hops. In this case, a PATH message travels through a non-RSVP egress router on the way to the next hop RSVP node. When the next hop RSVP node sends a RESV message, it may arrive at the source over a different route than the one the data is using. The source will get the RESV message, but will not know which egress router needs the QoS.
For unicast sessions, there is no problem, since the ATM end-point will be the IP next-hop router. Unfortunately, multicast routing may not be able to uniquely identify the IP next-hop router, so it is possible that a multicast end-point cannot be identified.

In the most common case, MARS will be used to identify all end-points of a multicast group. In the router-to-router case, a multicast routing protocol may provide all next-hops for a particular multicast group. In either case, RSVP over ATM implementations must obtain a full list of end-points, both QoS and non-QoS, using the appropriate mechanisms. The full list can be compared against the RSVP identified end-points to determine the list of best-effort receivers. There is no straightforward solution to uniquely identifying end-points of multicast traffic handled by non-RSVP next-hops. The preferred solution is to use multicast routing protocols that support unique end-point identification. In cases where such routing protocols are unavailable, all IP routers that will be used to support RSVP over ATM should support RSVP. To ensure proper behavior, implementations should, by default, only establish RSVP-initiated VCs to RSVP capable end-points.

4.2.5 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM. In one model, senders establish point-to-multipoint VCs to all ATM attached destinations, and data is then sent over these VCs. This model is often called "multicast mesh" or "VC mesh" mode distribution. In the second model, senders send data over point-to-point VCs to a central point, and the central point relays the data onto point-to-multipoint VCs that have been established to all receivers of the IP multicast group. This model is often referred to as "multicast server" mode distribution.
RSVP over ATM solutions must ensure that IP multicast data is distributed with appropriate QoS.

In the Classical IP context, multicast server support is provided via MARS [5]. MARS does not currently provide a way to communicate QoS requirements to a MARS multicast server. Therefore, RSVP over ATM implementations must, by default, support "mesh-mode" distribution for RSVP controlled multicast flows. When using multicast servers that do not support QoS requests, a sender must set the service, not global, break bit(s).

4.2.6 Receiver Transitions

When setting up a point-to-multipoint VC for a multicast RSVP session, there will be a time when some receivers have been added to a QoS VC and some have not. During such transition times it is possible to start sending data on the newly established VC. The issue is when to start sending data on the new VC. If data is sent on both the new VC and the old VC, then data will be delivered with the proper QoS to some receivers and with the old QoS to all receivers. This means the QoS receivers can get duplicate data. If data is sent just on the new QoS VC, then the receivers that have not yet been added will lose information. So, the issue comes down to whether to send to both the old and new VCs, or to send to just one of the VCs. In one case duplicate information will be received; in the other, some information may not be received.

This issue needs to be considered for three cases:
- When establishing the first QoS VC
- When establishing a VC to support a QoS change
- When adding a new end-point to an already established QoS VC

The first two cases are very similar. In both, it is possible to send data on the partially completed new VC, and the issue of duplicate versus lost information is the same. The last case is when an end-point must be added to an existing QoS VC.
In this case, the end-point must be both added to the QoS VC and dropped from a best-effort VC. The issue is which to do first. If the add is requested first, then the end-point may get duplicate information. If the drop is requested first, then the end-point may lose information.

In order to ensure predictable behavior and delivery of data to all receivers, data can only be sent on a new VC once all parties have been added. This will ensure that all data is delivered once to all receivers. This approach does not quite apply to the last case. In the last case, the add operation should be completed first, then the drop operation. This means that receivers must be prepared to receive some duplicate packets at times of QoS setup.

4.2.7 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources that are requested may change at any time. There are several common reasons for a change of reservation QoS:

1. An existing receiver can request a new larger (or smaller) QoS.
2. A sender may change its traffic specification (TSpec), which can trigger a change in the reservation requests of the receivers.
3. A new sender can start sending to a multicast group with a larger traffic specification than existing senders, triggering larger reservations.
4. A new receiver can make a reservation that is larger than existing reservations.

If the limited heterogeneity model is being used and the merge node for the larger reservation is an ATM edge device, a new larger reservation must be set up across the ATM network. Since ATM service, as currently defined in UNI 3.x and UNI 4.0, does not allow renegotiating the QoS of a VC, dynamically changing the reservation means creating a new VC with the new QoS and tearing down an established VC.
Tearing down a VC and setting up a new VC in ATM are complex operations that involve a non-trivial amount of processing time and may have a substantial latency. There are several options for dealing with this mismatch in service. A specific approach will need to be part of any RSVP over ATM solution.

The default method for supporting changes in RSVP reservations is to attempt to replace an existing VC with a new, appropriately sized VC. During setup of the replacement VC, the old VC must be left in place unmodified, to minimize interruption of QoS data delivery. Once the replacement VC is established, data transmission is shifted to the new VC, and the old VC is then closed. If setup of the replacement VC fails, then the old QoS VC should continue to be used. When the new reservation is greater than the old reservation, the reservation request should be answered with an error. When the new reservation is less than the old reservation, the request should be treated as if the modification were successful. While leaving the larger allocation in place is suboptimal, it maximizes delivery of service to the user. Implementations should retry replacing the too-large VC after some appropriate elapsed time.

One additional issue is that only one QoS change can be processed at a time per reservation. If the (RSVP) requested QoS is changed while the first replacement VC is still being set up, then the replacement VC is released and the whole VC replacement process is restarted. To limit the number of changes and to avoid excessive signalling load, implementations may limit the number of changes that will be processed in a given period.
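The default replacement procedure above can be sketched as follows. Names are hypothetical; `setup_vc` stands in for ATM signalling and returns None when the replacement VC cannot be established:

```python
# Sketch of the default VC replacement procedure for a QoS change.
# Names are hypothetical; setup_vc() stands in for ATM signalling
# and returns None when the new VC cannot be established.

def change_qos(old_vc, new_qos, setup_vc, teardown_vc, remap):
    """Replace old_vc with a VC of new_qos. The old VC stays up,
    unmodified, until the replacement is carrying traffic.
    Returns (vc in use, success flag)."""
    new_vc = setup_vc(new_qos)
    if new_vc is None:
        # Keep using the old VC. A failed *increase* is an error to
        # report upstream; a failed *decrease* is treated as success
        # (the too-large VC is retried later).
        return old_vc, new_qos <= old_vc["qos"]
    remap(new_vc)           # shift data onto the new VC first...
    teardown_vc(old_vc)     # ...then close the old one
    return new_vc, True
```

Note the ordering: the data remap happens before the teardown, so QoS delivery is never interrupted by the change, matching the text's requirement to leave the old VC unmodified during setup.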
One implementation approach would have each ATM edge device configured with a time parameter T (which can change over time) that gives the minimum amount of time the edge device will wait between successive changes of the QoS of a particular VC. Thus if the QoS of a VC is changed at time t, all messages that would change the QoS of that VC that arrive before time t+T would be queued. If several messages changing the QoS of a VC arrive during the interval, redundant messages can be discarded. At time t+T, the remaining change(s) of QoS, if any, can be executed. This timer approach would apply more generally to any network structure, and might be worthwhile to incorporate into RSVP.

The sequence of events for a single VC would be:

- Wait if timer is active
- Establish VC with new QoS
- Remap data traffic to new VC
- Tear down old VC
- Activate timer

There is an interesting interaction between heterogeneous reservations and dynamic QoS. In the case where a RESV message is received from a new next-hop and the requested resources are larger than any existing reservation, both dynamic QoS and heterogeneity need to be addressed. A key issue is whether to first add the new next-hop or to change to the new QoS. This is a fairly straightforward special case. Since the older, smaller reservation does not support the new next-hop, the dynamic QoS process should be initiated first. Since the new QoS is only needed by the new next-hop, it should be the first end-point of the new VC. This way signalling is minimized when the setup to the new next-hop fails.

4.2.8 Short-Cuts

Short-cuts [4] allow ATM attached routers and hosts to directly establish point-to-point VCs across LIS boundaries, i.e., the VC end-points are on different IP subnets. The ability for short-cuts and RSVP to interoperate has been raised as a general question.
An area of concern is the ability to handle asymmetric short-cuts, specifically how RSVP can handle the case where a downstream short-cut does not have a matching upstream short-cut. In this case, PATH and RESV messages follow different paths.

Examination of RSVP shows that the protocol already includes mechanisms that will support short-cuts. The mechanism is the same one used to support RESV messages arriving at the wrong router or the wrong interface. The key aspect of this mechanism is that RSVP only processes messages that arrive at the proper interface and forwards messages that arrive on the wrong interface. The proper interface is indicated in the NHOP object of the message. So, existing RSVP mechanisms will support asymmetric short-cuts. The short-cut model of VC establishment still poses several issues when running with RSVP. The major issues are dealing with established best-effort short-cuts, when to establish short-cuts, and QoS-only short-cuts. These issues will need to be addressed by RSVP implementations.

The key issue to be addressed by any RSVP over ATM solution is when to establish a short-cut for a QoS data flow. The default behavior is to simply follow best-effort traffic. When a short-cut has been established for best-effort traffic to a destination or next-hop, that same end-point should be used when setting up RSVP triggered VCs for QoS traffic to the same destination or next-hop. This will happen naturally when PATH messages are forwarded over the best-effort short-cut. Note that with this approach, if best-effort short-cuts are never established, RSVP triggered QoS short-cuts will also never be established. More study is expected in this area.

4.2.9 VC Teardown

RSVP can identify, from either explicit messages or timeouts, when a data VC is no longer needed.
Therefore, data VCs set up to support RSVP controlled flows should only be released at the direction of RSVP. VCs must not be timed out due to inactivity by either the VC initiator or the VC receiver. This conflicts with VCs timing out as described in RFC 1755 [11], section 3.4, on VC Teardown. RFC 1755 recommends tearing down a VC that is inactive for a certain length of time; twenty minutes is recommended. This timeout is typically implemented at both the VC initiator and the VC receiver. However, section 3.1 of the update to RFC 1755 [11] states that inactivity timers must not be used at the VC receiver.

When this timeout occurs for an RSVP initiated VC, a valid VC with QoS will be torn down unexpectedly. While this behavior is acceptable for best-effort traffic, it is important that RSVP controlled VCs not be torn down. If there is no choice about the VC being torn down, the RSVP daemon must be notified, so that a reservation failure message can be sent.

For VCs initiated at the request of RSVP, the configurable inactivity timer mentioned in [11] must be set to "infinite". Setting the inactivity timer value at the VC initiator should not be problematic, since the proper value can be relayed internally at the originator. Setting the inactivity timer at the VC receiver is more difficult, and would require some mechanism to signal that an incoming VC was RSVP initiated. To avoid this complexity and to conform to [11], implementations must not use an inactivity timer to clear received connections.

4.3 RSVP Control Management

One last important issue is providing a data path for the RSVP messages themselves. There are two main types of messages in RSVP, PATH and RESV. PATH messages are sent to unicast or multicast addresses, while RESV messages are sent only to unicast addresses.
Other RSVP messages are handled similarly to either PATH or RESV, although this might be more complicated for RERR messages. So, ATM VCs used for RSVP signalling messages need to provide both unicast and multicast functionality. There are several different approaches for how to assign VCs for RSVP signalling messages.

The main approaches are:

- use the same VC as the data
- a single VC per session
- a single point-to-multipoint VC multiplexed among sessions
- multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to assign VCs for RSVP signalling. One issue is the number of additional VCs needed for RSVP signalling. Related to this issue is the degree of multiplexing on the RSVP VCs; in general, more multiplexing means fewer VCs. An additional issue is the latency in dynamically setting up new RSVP signalling VCs. A final issue is complexity of implementation. The remainder of this section discusses the issues and tradeoffs among these different approaches and suggests guidelines for when to use which alternative.

4.3.1 Mixed data and control traffic

In this scheme, RSVP signalling messages are sent on the same VCs as the data traffic. The main advantage of this scheme is that no additional VCs are needed beyond what is needed for the data traffic. An additional advantage is that there is no ATM signalling latency for PATH messages (which follow the same routing as the data messages).

However, there can be a major problem when data traffic on a VC is nonconforming. With nonconforming traffic, RSVP signalling messages may be dropped. While RSVP is resilient to a moderate level of dropped messages, excessive drops would lead to repeated tearing down and re-establishing of QoS VCs, a very undesirable behavior for ATM.
Due to these problems, this may not be a good choice for carrying RSVP signalling messages, even though the number of VCs needed for this scheme is minimized. One variation of this scheme is to use the best effort data path for signalling traffic. In this scheme, there is no issue with nonconforming traffic, but there is an issue with congestion in the ATM network. RSVP provides some resiliency to message loss due to congestion, but RSVP control messages should be offered a preferred class of service. A related variation of this scheme that is promising but requires further study is to have a packet scheduling algorithm (before entering the ATM network) that gives priority to the RSVP signalling traffic. This can be difficult to do at the IP layer.

4.3.1.1 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each RSVP reservation. This scheme results in twice the number of VCs, but means that RSVP signalling messages have the advantage of a separate VC. This separate VC means that RSVP signalling messages have their own traffic contract, and compliant signalling messages are not subject to dropping due to other noncompliant traffic (such as can happen with the scheme in section 4.3.1). The advantage of this scheme is its simplicity: whenever a data VC is created, a separate RSVP signalling VC is created. The disadvantage of the extra VC is that extra ATM signalling needs to be done. Additionally, this scheme requires twice the minimum number of VCs and adds latency, but is quite simple.

4.3.1.2 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP signalling VC for each unique ingress router and unique set of egress routers.
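A minimal sketch of this multiplexing, with hypothetical names: signalling VCs are keyed by the (order-independent) set of egress routers, and `new_vc` stands in for ATM point-to-multipoint signalling:

```python
# Sketch of multiplexed point-to-multipoint signalling VCs, keyed by
# the immutable set of egress routers sharing the same ingress.
# Names are hypothetical; new_vc() stands in for ATM signalling.

class SignallingVCTable:
    def __init__(self, new_vc):
        self.new_vc = new_vc
        self.vcs = {}   # frozenset(egress routers) -> VC handle

    def vc_for(self, egress_routers):
        """Reuse an existing signalling VC whose leaf set matches the
        egress routers exactly; otherwise create a new one."""
        key = frozenset(egress_routers)
        if key not in self.vcs:
            self.vcs[key] = self.new_vc(key)
        return self.vcs[key]
```

When the egress set changes, this lookup naturally switches traffic to a matching existing VC or creates a new one; the other alternatives discussed for this scheme (modifying the existing VC, or continuing to send on a stale leaf) would need additional bookkeeping beyond this sketch.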
This scheme allows multiplexing of RSVP signalling traffic that shares
the same ingress router and the same egress routers. Multiplexing can
save on the number of VCs, but there are problems when the destinations
of the multiplexed point-to-multipoint VCs change. Several alternatives
exist in these cases, with applicability in different situations.
First, when the egress routers change, the ingress router can check
whether it already has a point-to-multipoint RSVP signalling VC for the
new list of egress routers. If such a VC already exists, the RSVP
signalling traffic can be switched to it. If no such VC exists, one
approach would be to create a new VC with the new list of egress
routers. Other approaches include modifying the existing VC to add an
egress router, or using a separate new VC for the new egress routers.
When a destination drops out of a group, an alternative would be to
keep sending on the existing VC even though some traffic is wasted. The
number of VCs used in this scheme is a function of traffic patterns
across the ATM network, but is always less than the number used with a
single RSVP VC per data VC. In addition, existing best effort data VCs
could be used for RSVP signalling. Reusing best effort VCs saves on the
number of VCs at the cost of a higher probability of RSVP signalling
packet loss. One place where this scheme should work well is in the
core of the network, where there is the most opportunity to take
advantage of the savings due to multiplexing. The exact savings depend
on the patterns of traffic and the topology of the ATM network.

4.3.1.3 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are used
for a single point-to-multipoint data VC.
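Because several point-to-point VCs shadow one point-to-multipoint data
VC, each signalling message has to be copied onto all of them. A
minimal sketch, with fan_out and send_on_vc as hypothetical names
standing in for a real ATM transmission interface:

```python
# Sketch of the multiplexed point-to-point variant: a point-to-
# multipoint data VC reaching egress routers R3..R5 is shadowed by one
# point-to-point signalling VC per egress, and every signalling message
# is copied onto each of them.
def fan_out(message, pt_to_pt_vcs, send_on_vc):
    """Send the same signalling message on each point-to-point VC."""
    for vc in pt_to_pt_vcs:
        send_on_vc(vc, message)

sent = []
fan_out("PATH", ["vc-R3", "vc-R4", "vc-R5"],
        lambda vc, msg: sent.append((vc, msg)))
assert sent == [("vc-R3", "PATH"), ("vc-R4", "PATH"), ("vc-R5", "PATH")]
```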
This scheme allows multiplexing of RSVP signalling traffic but requires
the same traffic to be sent on each of several VCs. This scheme is
quite flexible and allows a large amount of multiplexing.

Since point-to-point VCs can set up a reverse channel at the same time
as the forward channel, this scheme could save substantially on
signalling cost. In addition, signalling traffic could share existing
best effort VCs. Sharing existing best effort VCs reduces the total
number of VCs needed, but might cause signalling traffic drops if there
is congestion in the ATM network. This point-to-point scheme would work
well in the core of the network, where there is much opportunity for
multiplexing. Also in the core of the network, RSVP VCs can stay
permanently established, either as Permanent Virtual Circuits (PVCs) or
as long-lived Switched Virtual Circuits (SVCs). The number of VCs in
this scheme will depend on traffic patterns, but in the core of a
network would be approximately n(n-1)/2, where n is the number of IP
nodes in the network. In the core of the network, this will typically
be small compared to the total number of VCs.

4.3.2 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP signalling
VCs. For the RSVP VC schemes other than mixed data and control traffic,
some QoS (possibly best effort) will be needed. What QoS to use depends
partly on the expected level of multiplexing on the VCs and on the
expected reliability of best effort VCs. Since RSVP signalling is
infrequent (typically every 30 seconds), only a relatively small QoS
reservation should be needed. This is important, since requesting a
larger QoS risks the VC setup being rejected for lack of resources.
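Since even a small QoS request can be rejected, an implementation needs
a policy for what to do next. One possible policy, sketched below with
a hypothetical setup_vc() callback and CallRejected exception (not an
API defined by this framework), is to retry the call as best effort:

```python
# Sketch: try to open a small-QoS signalling VC, falling back to best
# effort if the QoS call is rejected for lack of resources.
class CallRejected(Exception):
    pass

def open_signalling_vc(setup_vc, qos="small-cbr"):
    try:
        return setup_vc(qos)
    except CallRejected:
        # Note: a congested ATM network may still drop RSVP packets
        # on the resulting best effort VC.
        return setup_vc("best-effort")

def fake_setup(qos):
    """Stand-in for ATM signalling that refuses all QoS calls."""
    if qos != "best-effort":
        raise CallRejected(qos)
    return ("vc-42", qos)

assert open_signalling_vc(fake_setup) == ("vc-42", "best-effort")
```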
Falling back to best effort when a QoS call is rejected is possible,
but if the ATM network is congested, there will likely be problems with
RSVP packet loss on the best effort VC as well. Additional
experimentation is needed in this area.

5. Encapsulation

Since RSVP is a signalling protocol used to control flows of IP data
packets, encapsulation for both RSVP packets and associated IP data
packets must be defined. The methods for transmitting IP packets over
ATM (Classical IP over ATM [10], LANE [17], and MPOA [18]) are all
based on the encapsulations defined in RFC 1483 [19]. RFC 1483
specifies two encapsulations, LLC Encapsulation and VC-based
multiplexing. The former allows multiple protocols to be carried over
the same VC, while the latter requires different VCs for different
protocols.

For the purposes of RSVP over ATM, any encapsulation can be used as
long as the VCs are managed in accordance with the methods outlined in
Section 4. Obviously, running multiple protocol data streams over the
same VC with LLC encapsulation can cause the same problems as running
multiple flows over the same VC.

While none of the transmission methods directly addresses the issue of
QoS, RFC 1755 [11] does suggest some common values for VC setup for
best-effort traffic. [14] discusses in greater detail the relationship
between the RFC 1755 setup parameters and those needed to support
IntServ flows.

6. Security Considerations

The same considerations stated in [1] and [11] apply to this document.
There are no additional security issues raised in this document.

7. References

[1]  R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin. Resource
     ReSerVation Protocol (RSVP) -- Version 1 Functional Specification.
     RFC 2205, September 1997.
[2]  M. Borden, E. Crawley, B. Davie, S. Batsell. Integration of
     Real-time Services in an IP-ATM Network Architecture. Request for
     Comments (Informational) RFC 1821, August 1995.
[3]  R. Cole, D. Shur, C. Villamizar. IP over ATM: A Framework
     Document. Request for Comments (Informational) RFC 1932, April
     1996.
[4]  D. Katz, D. Piscitello, B. Cole, J. Luciani. NBMA Next Hop
     Resolution Protocol (NHRP). Internet Draft,
     draft-ietf-rolc-nhrp-12.txt, October 1997.
[5]  G. Armitage. Support for Multicast over UNI 3.0/3.1 based ATM
     Networks. RFC 2022, November 1996.
[6]  S. Shenker, C. Partridge. Specification of Guaranteed Quality of
     Service. RFC 2212, September 1997.
[7]  J. Wroclawski. Specification of the Controlled-Load Network
     Element Service. RFC 2211, September 1997.
[8]  ATM Forum. ATM User-Network Interface Specification Version 3.0.
     Prentice Hall, September 1993.
[9]  ATM Forum. ATM User Network Interface (UNI) Specification Version
     3.1. Prentice Hall, June 1995.
[10] M. Laubach. Classical IP and ARP over ATM. Request for Comments
     (Proposed Standard) RFC 1577, January 1994.
[11] M. Perez, A. Mankin, E. Hoffman, G. Grossman, A. Malis. ATM
     Signalling Support for IP over ATM. Request for Comments (Proposed
     Standard) RFC 1755, February 1995.
[12] S. Herzog. RSVP Extensions for Policy Control. Internet Draft,
     draft-ietf-rsvp-policy-ext-02.txt, April 1997.
[13] S. Herzog. Local Policy Modules (LPM): Policy Control for RSVP.
     Internet Draft, draft-ietf-rsvp-policy-lpm-01.txt, November 1996.
[14] M. Borden, M. Garrett. Interoperation of Controlled-Load and
     Guaranteed Service with ATM. Internet Draft,
     draft-ietf-issll-atm-mapping-03.txt, August 1997.
[15] L. Berger. RSVP over ATM Implementation Requirements. Internet
     Draft, draft-ietf-issll-atm-imp-req-00.txt, July 1997.
[16] L. Berger. RSVP over ATM Implementation Guidelines. Internet
     Draft, draft-ietf-issll-atm-imp-guide-01.txt, July 1997.
[17] ATM Forum Technical Committee.
     LAN Emulation over ATM, Version 1.0 Specification,
     af-lane-0021.000, January 1995.
[18] ATM Forum Technical Committee. Baseline Text for MPOA,
     af-95-0824r9, September 1996.
[19] J. Heinanen. Multiprotocol Encapsulation over ATM Adaptation
     Layer 5. RFC 1483, July 1993.
[20] ATM Forum Technical Committee. LAN Emulation over ATM Version 2 -
     LUNI Specification, December 1996.
[21] ATM Forum Technical Committee. Traffic Management Specification
     v4.0, af-tm-0056.000, April 1996.
[22] R. Callon, et al. A Framework for Multiprotocol Label Switching.
     Internet Draft, draft-ietf-mpls-framework-01.txt, July 1997.
[23] B. Rajagopalan, R. Nair, H. Sandick, E. Crawley. A Framework for
     QoS-based Routing in the Internet. Internet Draft,
     draft-ietf-qosr-framework-01.txt, July 1997.
[24] ITU-T. Digital Subscriber Signaling System No. 2 - Connection
     modification: Peak cell rate modification by the connection owner.
     ITU-T Recommendation Q.2963.1, July 1996.
[25] ITU-T. Digital Subscriber Signaling System No. 2 - Connection
     characteristics negotiation during call/connection establishment
     phase. ITU-T Recommendation Q.2962, July 1996.
[26] ATM Forum Technical Committee. Private Network-Network Interface
     Specification v1.0 (PNNI), March 1996.

8. Authors' Addresses

Eric S. Crawley
Argon Networks
25 Porter Road
Littleton, MA 01460
+1 978 486-0665
esc@argon.com

Lou Berger
FORE Systems
6905 Rockledge Drive
Suite 800
Bethesda, MD 20817
+1 301 571-2534
lberger@fore.com

Steven Berson
USC Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292
+1 310 822-1511
berson@isi.edu

Fred Baker
Cisco Systems
519 Lado Drive
Santa Barbara, California 93111
+1 805 681-0115
fred@cisco.com

Marty Borden
Bay Networks
125 Nagog Park
Acton, MA 01720
+1 978 266-1011
mborden@baynetworks.com

John J. Krawczyk
ArrowPoint Communications
235 Littleton Road
Westford, Massachusetts 01886
+1 978 692-5875
jj@arrowpoint.com