Internet Engineering Task Force                       E. Crawley, Editor
Internet Draft                                          (Argon Networks)
draft-ietf-issll-atm-framework-02.txt                          L. Berger
                                                          (Fore Systems)
                                                               S. Berson
                                                                   (ISI)
                                                                F. Baker
                                                         (Cisco Systems)
                                                               M. Borden
                                                          (Bay Networks)
                                                             J. Krawczyk
                                             (ArrowPoint Communications)

                                                        February 9, 1998

         A Framework for Integrated Services and RSVP over ATM

Status of this Memo

This document is an Internet Draft.  Internet Drafts are working
documents of the Internet Engineering Task Force (IETF), its Areas, and
its Working Groups.  Note that other groups may also distribute working
documents as Internet Drafts.

Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other
documents at any time.  It is not appropriate to use Internet Drafts as
reference material or to cite them other than as a "working draft" or
"work in progress."

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ds.internic.net (US East Coast), nic.nordu.net (Europe),
ftp.isi.edu (US West Coast), or munnari.oz.au (Pacific Rim).

Abstract

This document outlines the issues and framework related to providing IP
Integrated Services with RSVP over ATM.  It provides an overall approach
to the problem(s) and related issues.  These issues and problems are to
be addressed in further documents from the ISATM subgroup of the ISSLL
working group.
Editor's Note

This document is the merger of two previous documents, draft-ietf-
issll-atm-support-02.txt by Berger and Berson and draft-crawley-rsvp-
over-atm-00.txt by Baker, Berson, Borden, Crawley, and Krawczyk.  The
former document has been split into this document and a set of
documents on RSVP over ATM implementation requirements and guidelines.

1. Introduction

The Internet currently has one class of service, normally referred to as
"best effort."  This service is typified by first-come, first-served
scheduling at each hop in the network.  Best effort service has worked
well for electronic mail, World Wide Web (WWW) access, file transfer
(e.g. ftp), etc.  For real-time traffic such as voice and video, the
current Internet has performed well only across unloaded portions of
the network.  In order to provide quality real-time traffic, new
classes of service and a QoS signalling protocol are being introduced
in the Internet [1,6,7], while retaining the existing best effort
service.  The QoS signalling protocol is RSVP [1], the Resource
ReSerVation Protocol, and the service models are defined in [6,7].

One of the important features of ATM technology is the ability to
request a point-to-point Virtual Circuit (VC) with a specified Quality
of Service (QoS).  An additional feature of ATM technology is the
ability to request point-to-multipoint VCs with a specified QoS.
Point-to-multipoint VCs allow leaf nodes to be added and removed from
the VC dynamically, and so provide a mechanism for supporting IP
multicast.  It is only natural that RSVP and the Internet Integrated
Services (IIS) model would like to utilize the QoS properties of any
underlying link layer including ATM, and this document concentrates on
ATM.

Classical IP over ATM [10] has solved part of this problem, supporting
IP unicast best effort traffic over ATM.
Classical IP over ATM is
based on a Logical IP Subnetwork (LIS), which is a separately
administered IP subnetwork.  Hosts within an LIS communicate using the
ATM network, while hosts from different subnets communicate only by
going through an IP router (even though it may be possible to open a
direct VC between the two hosts over the ATM network).  Classical IP
over ATM provides an Address Resolution Protocol (ATMARP) for ATM edge
devices to resolve IP addresses to native ATM addresses.  For any pair
of IP/ATM edge devices (i.e. hosts or routers), a single VC is created
on demand and shared for all traffic between the two devices.  A second
part of the RSVP and IIS over ATM problem, IP multicast, is being
solved with MARS [5], the Multicast Address Resolution Server.

MARS complements ATMARP by allowing an IP address to resolve into a
list of native ATM addresses, rather than just a single address.

The ATM Forum's LAN Emulation (LANE) [17, 20] and Multiprotocol Over
ATM (MPOA) [18] also address the support of IP best effort traffic over
ATM through similar means.

A key remaining issue for IP in an ATM environment is the integration
of RSVP signalling and ATM signalling in support of the Internet
Integrated Services (IIS) model.  There are two main areas involved in
supporting the IIS model: QoS translation and VC management.  QoS
translation concerns mapping a QoS from the IIS model to a proper ATM
QoS, while VC management concentrates on how many VCs are needed and
which traffic flows are routed over which VCs.

1.1 Structure and Related Documents

This document provides a guide to the issues for IIS over ATM.  It is
intended to frame the problems that are to be addressed in further
documents.  In this document, the modes and models for RSVP operation
over ATM will be discussed, followed by a discussion of management of
ATM VCs for RSVP data and control.
Lastly, the topic of encapsulations
will be discussed in relation to the models presented.

This document is part of a group of documents from the ISATM subgroup
of the ISSLL working group related to the operation of IntServ and RSVP
over ATM.  [14] discusses the mapping of the IntServ models for
Controlled Load and Guaranteed Service to ATM.  [15] and [16] discuss
detailed implementation requirements and guidelines for RSVP over ATM,
respectively.  While these documents may not address all the issues
raised in this document, they should provide enough information for
development of solutions for IntServ and RSVP over ATM.

1.2 Terms

Several terms used in this document are used in many contexts, often
with different meanings.  These terms are used in this document with the
following meanings:

- Sender is used in this document to mean the ingress point to the ATM
  network or "cloud".
- Receiver is used in this document to refer to the egress point from
  the ATM network or "cloud".
- Reservation is used in this document to refer to an RSVP-initiated
  request for resources.  RSVP initiates requests for resources based
  on RESV message processing.  RESV messages that simply refresh state
  do not trigger resource requests.  Resource requests may be made
  based on RSVP sessions and RSVP reservation styles.  RSVP styles
  dictate whether the reserved resources are used by one sender or
  shared by multiple senders.  See [1] for details of each.  Each new
  request is referred to in this document as an RSVP reservation, or
  simply reservation.
- Flow is used to refer to the data traffic associated with a
  particular reservation.  The specific meaning of flow is RSVP style
  dependent.  For shared style reservations, there is one flow per
  session.  For distinct style reservations, there is one flow per
  sender (per session).
2. Issues Regarding the Operation of RSVP and IntServ over ATM

The issues related to RSVP and IntServ over ATM fall into several
general classes:

- How to make RSVP run over ATM now and in the future
- When to set up a virtual circuit (VC) for a specific Quality of
  Service (QoS) related to RSVP
- How to map the IntServ models to ATM QoS models
- How to know that an ATM network is providing the QoS necessary for a
  flow
- How to handle the many-to-many connectionless features of IP
  multicast and RSVP in the one-to-many connection-oriented world of
  ATM

2.1 Modes/Models for RSVP and IntServ over ATM

[3] discusses several different models for running IP over ATM
networks.  [17, 18, and 20] also provide models for IP in ATM
environments.  Any one of these models would work as long as the RSVP
control packets (IP protocol 46) and data packets can follow the same
IP path through the network.  It is important that the RSVP PATH
messages follow the same IP path as the data so that appropriate PATH
state may be installed in the routers along the path.  For an ATM
subnetwork, this means the ingress and egress points must be the same
in both directions for the RSVP control and data messages.  Note that
the RSVP protocol does not require symmetric routing.  The PATH state
installed by RSVP allows the RESV messages to "retrace" the hops that
the PATH message crossed.  Within each of the models for IP over ATM,
there are decisions about using different types of data distribution in
ATM as well as different connection initiation.  The following sections
look at some of the different ways QoS connections can be set up for
RSVP.
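
The PATH-state "retrace" behavior described above can be illustrated
with a minimal sketch.  The names (Router, send_path, retrace_resv) are
illustrative only and are not taken from any RSVP implementation;
Python is used purely as pseudocode here.

```python
# Each hop that forwards a PATH message records the previous hop for
# that session; a RESV can then follow the stored hops in reverse,
# even if IP routing in the reverse direction would pick another path.

class Router:
    def __init__(self, name):
        self.name = name
        self.path_state = {}  # session -> previous hop (toward the sender)

def send_path(routers, session):
    """Install PATH state hop by hop along the downstream data path."""
    prev = "sender"
    for r in routers:
        r.path_state[session] = prev
        prev = r

def retrace_resv(routers, session):
    """A RESV arriving at the last hop retraces the stored previous hops."""
    visited = []
    node = routers[-1]
    while isinstance(node, Router):
        visited.append(node.name)
        node = node.path_state[session]
    return visited  # hops in the reverse of the PATH message's order
```

A RESV for a session whose PATH crossed R1, R2, R3 thus visits R3, R2,
R1, which is why PATH and data must share the same ingress and egress
points on an ATM subnetwork.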
2.1.1 UNI 3.x and 4.0

In the User Network Interface (UNI) 3.0 and 3.1 specifications [8,9]
and the 4.0 specification, both permanent and switched virtual circuits
(PVCs and SVCs) may be established with a specified service category
(CBR, VBR, and UBR for UNI 3.x, and VBR-rt and ABR for 4.0) and specific
traffic descriptors in point-to-point and point-to-multipoint
configurations.  Additional QoS parameters are not available in UNI 3.x,
and those that are available are vendor-specific.  Consequently, the
level of QoS control available in standard UNI 3.x networks is somewhat
limited.  However, using these building blocks, it is possible to use
RSVP and the IntServ models.  ATM 4.0 with the Traffic Management (TM)
4.0 specification [21] allows much greater control of QoS.  [14]
provides the details of mapping the IntServ models to UNI 3.x and 4.0
service categories and traffic parameters.

2.1.1.1 Permanent Virtual Circuits (PVCs)

PVCs emulate dedicated point-to-point lines in a network, so the
operation of RSVP can be identical to the operation over any point-to-
point network.  The QoS of the PVC must be consistent and equivalent to
the type of traffic and service model used.  The devices on either end
of the PVC have to provide traffic control services in order to
multiplex multiple flows over the same PVC.  With PVCs, there is no
issue of when or how long it takes to set up VCs, since they are made
in advance, but the resources of the PVC are limited to what has been
pre-allocated.  PVCs that are not fully utilized can tie up ATM network
resources that could be used for SVCs.

An additional issue for using PVCs is one of network engineering.
Frequently, multiple PVCs are set up such that if all the PVCs were
running at full capacity, the link would be over-subscribed.
This
frequently used "statistical multiplexing gain" makes providing IIS
over PVCs very difficult and unreliable.  Any application of IIS over
PVCs has to be assured that the PVCs are able to receive all the
requested QoS.

2.1.1.2 Switched Virtual Circuits (SVCs)

SVCs allow paths in the ATM network to be set up "on demand".  This
allows flexibility in the use of RSVP over ATM, along with some
complexity.  Parallel VCs can be set up to allow best-effort and better
service class paths through the network, as shown in Figure 1.  The
cost and time to set up SVCs can impact their use.  For example, it may
be better to initially route QoS traffic over existing VCs until an SVC
with the desired QoS can be set up for the flow.  Scaling issues can
come into play if a single RSVP flow is used per VC, as will be
discussed in Section 4.3.1.1.  The number of VCs in any ATM device may
also be limited, so the number of RSVP flows that can be supported by a
device can be strictly limited to the number of VCs available, if we
assume one flow per VC.  Section 4 discusses the topic of VC management
for RSVP in greater detail.

       Data Flow ==========>

         +-----+
         |     | --------------> +----+
         | Src | --------------> | R1 |
         |    *| --------------> +----+
         +-----+     QoS VCs
           /\
           ||
        VC ||
     Initiator

       Figure 1: Data Flow VC Initiation

While RSVP is receiver oriented, ATM is sender oriented.  This might
seem like a problem, but the sender or ingress point receives RSVP RESV
messages and can determine whether a new VC has to be set up to the
destination or egress point.

2.1.1.3 Point to MultiPoint

In order to provide QoS for IP multicast, an important feature of RSVP,
data flows must be distributed to multiple destinations from a given
source.  Point-to-multipoint VCs provide such a mechanism.  It is
important to map the actions of IP multicasting and RSVP (e.g.
IGMP
JOIN/LEAVE and RSVP RESV/RESV TEAR) to the add party and drop party
functions for ATM.  Point-to-multipoint VCs as defined in UNI 3.x and
UNI 4.0 have a single service class for all destinations.  This is
contrary to the RSVP "heterogeneous receiver" concept.  It is possible
to set up a different VC to each receiver requesting a different QoS,
as shown in Figure 2.  This again can run into scaling and resource
problems when managing multiple VCs on the same interface to different
destinations.

                              +----+
                    +------>  | R1 |
                    |         +----+
                    |
                    |         +----+
   +-----+ -----+   +-->      | R2 |
   |     | ---------+         +----+     Receiver Request Types:
   | Src |                               ----> QoS 1 and QoS 2
   |     | .........+         +----+     ....> Best-Effort
   +-----+ .....+   +..>      | R3 |
       /\       :             +----+
       ||       :
       ||       :             +----+
       ||       +......>      | R4 |
       ||                     +----+
     Single
  IP Multicast
     Group

       Figure 2: Types of Multicast Receivers

RSVP sends messages both up and down the multicast distribution tree.
In the case of a large ATM cloud, this could result in an RSVP message
implosion at an ATM ingress point with many receivers.

ATM 4.0 expands on the point-to-multipoint VCs by adding a Leaf
Initiated Join (LIJ) capability.  LIJ allows an ATM end point to join
an existing point-to-multipoint VC without necessarily contacting the
source of the VC.  This can reduce the burden on the ATM source point
for setting up new branches and more closely matches the
receiver-based model of RSVP and IP multicast.  However, many of the
same scaling issues exist, and the new branches added to a point-to-
multipoint VC must use the same QoS as the existing branches.

2.1.1.4 Multicast Servers

IP-over-ATM has the concept of a multicast server or reflector that can
accept cells from multiple senders and send them via a point-to-
multipoint VC to a set of receivers.
This moves the VC scaling issues
noted previously for point-to-multipoint VCs to the multicast server.
Additionally, the multicast server will need to know how to interpret
RSVP packets or receive instruction from another node so it will be
able to provide VCs of the appropriate QoS for the RSVP flows.

2.1.2 Hop-by-Hop vs. Short Cut

If the ATM "cloud" is made up of a number of logical IP subnets (LISs),
then it is possible to use "short cuts" from a node on one LIS directly
to a node on another LIS, avoiding router hops between the LISs.  NHRP
[4] is one mechanism for determining the ATM address of the egress
point on the ATM network given a destination IP address.  It is a topic
for further study to determine whether significant benefit is achieved
from short cut routes vs. the extra state required.

2.1.3 Future Models

ATM is constantly evolving.  If we assume that RSVP and IntServ
applications are going to be widespread, it makes sense to consider
changes to ATM that would improve the operation of RSVP and IntServ
over ATM.  Similarly, the RSVP protocol and IntServ models will
continue to evolve, and changes that affect them should also be
considered.  The following are a few ideas that have been discussed
that would make the integration of the IntServ models and RSVP easier
or more complete.  They are presented here to encourage continued
development and discussion of ideas that can aid in the integration of
RSVP, IntServ, and ATM.

2.1.3.1 Heterogeneous Point-to-MultiPoint

The IntServ models and RSVP support the idea of "heterogeneous
receivers"; i.e., not all receivers of a particular multicast flow are
required to ask for the same QoS from the network, as shown in Figure
2.
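
Because a UNI 3.x/4.0 point-to-multipoint VC carries a single service
class, one workaround for heterogeneous receivers is to group receivers
by their requested service level and open one VC per level.  The
following sketch is only an illustration of that grouping step; the
names and QoS labels are hypothetical, not drawn from any specification.

```python
# Group per-receiver QoS requests into one point-to-multipoint VC per
# requested service level (one VC for each distinct QoS, as in Figure 2).

from collections import defaultdict

def plan_vcs(requests):
    """requests: dict mapping receiver name -> requested QoS label
    (e.g. "qos1", "qos2", or "best-effort").
    Returns a dict mapping QoS label -> sorted leaf list for one
    point-to-multipoint VC at that service level."""
    vcs = defaultdict(list)
    for receiver, qos in requests.items():
        vcs[qos].append(receiver)
    return {qos: sorted(leaves) for qos, leaves in vcs.items()}
```

For the receivers of Figure 2, this yields one QoS VC for R1/R2 and one
best-effort VC for R3/R4, at the cost of one VC per distinct QoS on the
same interface.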
The most important scenario that can utilize this feature occurs when
some receivers in an RSVP session ask for a specific QoS while others
receive the flow with best-effort service.  In some cases where there
are multiple senders on a shared-reservation flow (e.g., an audio
conference), an individual receiver only needs to reserve enough
resources to receive one sender at a time.  However, other receivers
may elect to reserve more resources, perhaps to allow for some amount
of "over-speaking" or in order to record the conference (post-
processing during playback can separate the senders by their source
addresses).

In order to prevent denial-of-service attacks via reservations, the
service models do not allow the service elements to simply drop non-
conforming packets.  For example, the Controlled Load service model [7]
assigns non-conformant packets best-effort status (which may result
in packet drops if there is congestion).

Emulating these behaviors over an ATM network is problematic and needs
to be studied.  If a single maximum QoS is used over a point-to-
multipoint VC, resources could be wasted if cells are sent over certain
links where the reassembled packets will eventually be dropped.  In
addition, the "maximum QoS" may actually cause a degradation in service
to the best-effort branches.

The term "variegated VC" has been coined to describe a point-to-
multipoint VC that allows a different QoS on each branch.  This approach
seems to match the spirit of the Integrated Services and RSVP models,
but some thought has to be put into the cell drop strategy when
traversing from a "bigger" branch to a "smaller" one.  The "best-effort
for non-conforming packets" behavior must also be retained.
Early
Packet Discard (EPD) schemes must be used so that all the cells for a
given packet can be discarded at the same time, rather than discarding
only a few cells from several packets, making all the packets useless
to the receivers.

2.1.3.2 Lightweight Signalling

Q.2931 signalling is very complete and carries with it a significant
burden for signalling in all possible public and private connections.
It might be worth investigating a lighter weight signalling mechanism
for faster connection setup in private networks.

2.1.3.3 QoS Renegotiation

Another change that would help RSVP over ATM is the ability to request
a different QoS for an active VC.  This would eliminate the need to
set up and tear down VCs as the QoS changed.  RSVP allows receivers to
change their reservations and senders to change their traffic
descriptors dynamically.  This, along with the merging of reservations,
can create a situation where the QoS needs of a VC can change.
Allowing changes to the QoS of an existing VC would allow these
features to work without creating a new VC.  In the ITU-T ATM
specifications [24,25], some cell rates can be renegotiated or changed.
Specifically, the Peak Cell Rate (PCR) of an existing VC can be changed
and, in some cases, QoS parameters may be renegotiated during the call
setup phase.  It is unclear if this is sufficient for the QoS
renegotiation needs of the IntServ models.

2.1.3.4 Group Addressing

The model of one-to-many communications provided by point-to-multipoint
VCs does not really match the many-to-many communications provided by
IP multicasting.  A scalable mapping from IP multicast addresses to an
ATM "group address" can address this problem.
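
The limitation described in section 2.1.3.3 (only the PCR of an active
VC can be changed in place) suggests a simple decision rule for a VC
manager reacting to a reservation change.  The sketch below is a
hypothetical illustration of that rule; the parameter names and return
values are invented for this example, not part of any ATM API.

```python
# Decide how to react to a QoS change on an active VC: do nothing,
# renegotiate the Peak Cell Rate in place, or replace the VC entirely.

def plan_qos_change(current, desired):
    """current/desired: dicts of traffic parameters, e.g.
    {"pcr": 4000, "scr": 1000}.  Returns one of
    "no-op", "renegotiate-pcr", or "replace-vc"."""
    if current == desired:
        return "no-op"
    changed = {k for k in desired if desired.get(k) != current.get(k)}
    if changed == {"pcr"}:
        return "renegotiate-pcr"   # in-place change, where supported
    return "replace-vc"            # tear down and signal a new VC
```

Under this rule, only a pure PCR change avoids the cost of tearing down
and re-establishing the VC, which is exactly why it is unclear whether
PCR renegotiation alone meets the IntServ models' needs.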
2.1.3.5 Label Switching

The MultiProtocol Label Switching (MPLS) working group is discussing
methods for optimizing the use of ATM and other switched networks for
IP by encapsulating the data with a header that is used by the interior
switches to achieve faster forwarding lookups.  [22] discusses a
framework for this work.  It is unclear how this work will affect
IntServ and RSVP over label switched networks, but there may be some
interactions.

2.1.4 QoS Routing

RSVP is explicitly not a routing protocol.  However, since it conveys
QoS information, it may prove to be a valuable input to a routing
protocol that can make path determinations based on QoS and network
load information.  In other words, instead of asking for just the IP
next hop for a given destination address, it might be worthwhile for
RSVP to provide information on the QoS needs of the flow if routing has
the ability to use this information in order to determine a route.
Other forms of QoS routing have existed in the past, such as using the
IP TOS and Precedence bits to select a path through the network.  Some
have discussed using these same bits to select one of a set of parallel
ATM VCs as a form of QoS routing.  ATM routing has also considered the
problem of QoS routing through the Private Network-to-Network Interface
(PNNI) [26] routing protocol for routing ATM VCs on a path that can
support their needs.  The work in this area is just starting, and there
are numerous issues to consider.  [23], as part of the work of the QoSR
working group, frames the issues for QoS Routing in the Internet.

2.2 Reliance on Unicast and Multicast Routing

RSVP was designed to support both unicast and IP multicast
applications.  This means that RSVP needs to work closely with
multicast and unicast routing.  Unicast routing over ATM has been
addressed in [10] and [11].
MARS [5] provides multicast address
resolution for IP over ATM networks, an important part of the solution
for multicast, but it still relies on multicast routing protocols to
connect multicast senders and receivers on different subnets.

2.3 Aggregation of Flows

Some of the scaling issues noted in previous sections can be addressed
by aggregating several RSVP flows over a single VC if the destinations
of the VC match for all the flows being aggregated.  However, this
causes considerable complexity in the management of VCs and in the
scheduling of packets within each VC at the root point of the VC.  Note
that the rescheduling of flows within a VC is not possible in the
switches in the core of the ATM network.  Virtual Paths (VPs) can be
used for aggregating multiple VCs.  This topic is discussed in greater
detail as it applies to multicast data distribution in section 4.2.3.4.

2.4 Mapping QoS Parameters

The mapping of QoS parameters from the IntServ models to the ATM
service classes is an important issue in making RSVP and IntServ work
over ATM.  [14] addresses these issues very completely for the
Controlled Load and Guaranteed Service models.  An additional issue is
that while some guidelines can be developed for mapping the parameters
of a given service model to the traffic descriptors of an ATM traffic
class, implementation variables, policy, and cost factors can make
strict mapping problematic.  So, a set of workable mappings that can be
applied to different network requirements and scenarios is needed, as
long as the mappings can satisfy the needs of the service model(s).

2.5 Directly Connected ATM Hosts

It is obvious that the needs of hosts that are directly connected to
ATM networks must be considered for RSVP and IntServ over ATM.
Functionality for RSVP over ATM must not assume that an ATM host has
all the functionality of a router, but such things as MARS and NHRP
clients would be worthwhile features.  A host must manage VCs just
like any other ATM sender or receiver, as described later in section 4.

2.6 Accounting and Policy Issues

Since RSVP and IntServ create classes of preferential service, some
form of administrative control and/or cost allocation is needed to
control access.  There are certain types of policies specific to ATM
and IP over ATM that need to be studied to determine how they
interoperate with the IP and IntServ policies being developed.  A
typical IP policy would be that only certain users are allowed to make
reservations.  This policy would translate well to IP over ATM, due to
the similarity to the mechanisms used for Call Admission Control (CAC).
There may be a need for policies specific to IP over ATM.  For example,
since signalling costs in ATM are high relative to IP, an IP over ATM
specific policy might restrict the ability to change the prevailing QoS
in a VC.  If VCs are relatively scarce, there also might be specific
accounting costs in creating a new VC.  The work so far has been
preliminary, and much work remains to be done.  The policy mechanisms
outlined in [12] and [13] provide the basic mechanisms for implementing
policies for RSVP and IntServ over any media, not just ATM.

3. Framework for IntServ and RSVP over ATM

Now that we have defined some of the issues for IntServ and RSVP over
ATM, we can formulate a framework for solutions.  The problem breaks
down into two very distinct areas: the mapping of IntServ models to ATM
service categories and QoS parameters, and the operation of RSVP over
ATM.
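
The first of these two areas can be pictured as a lookup from IntServ
model to candidate ATM service categories.  The table below is only an
illustrative sketch in that spirit; the actual, detailed mappings are
the subject of [14], and the category choices shown here are plausible
candidates, not normative assignments.

```python
# Illustrative candidate ATM service categories per IntServ model.
# These pairings are a sketch for discussion, not the mapping in [14].

CANDIDATE_CATEGORIES = {
    "guaranteed":      ["CBR", "rt-VBR"],   # services with delay bounds
    "controlled-load": ["nrt-VBR", "ABR"],  # better-than-best-effort load
    "best-effort":     ["UBR", "ABR"],      # no QoS commitment
}

def candidates(intserv_model):
    """Return candidate ATM service categories for an IntServ model."""
    return CANDIDATE_CATEGORIES[intserv_model]
```

The point of the multi-valued table is the flexibility noted below:
more than one ATM service category should be able to support each of
the two IntServ models.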
Mapping the IntServ models to ATM service categories and QoS parameters
is a matter of determining which categories can support the goals of
the service models and matching up the parameters and variables between
the IntServ description and the ATM description(s).  Since ATM has such
a wide variety of service categories and parameters, more than one ATM
service category should be able to support each of the two IntServ
models.  This will provide a good bit of flexibility in configuration
and deployment.  [14] examines this topic completely.

The operation of RSVP over ATM requires careful management of VCs in
order to match the dynamics of the RSVP protocol.  VCs need to be
managed for both the RSVP QoS data and the RSVP signalling messages.
The remainder of this document will discuss several approaches to
managing VCs for RSVP, and [15] and [16] discuss their application for
implementations in terms of interoperability requirements and
implementation guidelines.

4. RSVP VC Management

This section provides more detail on the issues related to the
management of SVCs for RSVP and IntServ.

4.1 VC Initiation

As discussed in section 2.1.1.2, there is an apparent mismatch between
RSVP and ATM.  Specifically, RSVP control is receiver oriented and ATM
control is sender oriented.  This initially may seem like a major
issue, but really is not.  While RSVP reservation (RESV) requests are
generated at the receiver, actual allocation of resources takes place
at the subnet sender.  For data flows, this means that subnet senders
will establish all QoS VCs and the subnet receiver must be able to
accept incoming QoS VCs, as illustrated in Figure 1.  These
restrictions are consistent with RSVP version 1 processing rules and
allow senders to use different flow-to-VC mappings and even different
QoS renegotiation techniques without interoperability problems.
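
The sender-oriented VC setup just described can be sketched as follows.
This is a minimal illustration of the control flow only; the class and
method names are hypothetical, and the signalling step is a placeholder
for actual UNI procedures.

```python
# The subnet sender reacts to RESV-triggered resource requests by
# establishing QoS VCs toward the egress; receivers merely accept
# incoming VCs. One VC is reused for repeated identical requests.

class SubnetSender:
    def __init__(self):
        self.vcs = {}  # (egress, qos) -> VC identifier

    def on_resv(self, egress, qos):
        """Handle a RESV that triggers a new resource request.
        Reuses a matching VC or signals a new one toward the egress."""
        key = (egress, qos)
        if key not in self.vcs:
            self.vcs[key] = self._setup_svc(egress, qos)
        return self.vcs[key]

    def _setup_svc(self, egress, qos):
        # Placeholder for UNI signalling toward the subnet receiver.
        return f"vc-{egress}-{qos}"
```

Note that refresh RESVs (which do not trigger new resource requests, per
the definition of "reservation" in section 1.2) would simply hit the
existing VC entry.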
The use by receivers of the reverse path provided by point-to-point VCs is for further study. There are two related issues. The first is that use of the reverse path requires the VC initiator to set appropriate reverse path QoS parameters. The second is that reverse paths are not available with point-to-multipoint VCs, so reverse paths could only be used to support unicast RSVP reservations.

4.2 Data VC Management

Any RSVP over ATM implementation must map RSVP and RSVP associated data flows to ATM Virtual Circuits (VCs). LAN Emulation [17], Classical IP [10] and, more recently, NHRP [4] discuss mapping IP traffic onto ATM SVCs, but they only cover a single QoS class, i.e., best effort traffic. When QoS is introduced, VC mapping must be revisited. For RSVP controlled QoS flows, one issue is which VCs to use for QoS data flows.

In the Classical IP over ATM and current NHRP models, a single point-to-point VC is used for all traffic between two ATM attached hosts (routers and end-stations). It is likely that such a single VC will not be adequate or optimal when supporting data flows with multiple QoS types. RSVP's basic purpose is to install support for flows with multiple QoS types, so it is essential for any RSVP over ATM solution to address VC usage for QoS data flows, as shown in Figure 1.

RSVP reservation styles must also be taken into account in any VC usage strategy.

This section describes issues and methods for management of VCs associated with QoS data flows. When establishing and maintaining VCs, the subnet sender will need to deal with several complicating factors including multiple QoS reservations, requests for QoS changes, ATM short-cuts, and several multicast specific issues. The multicast specific issues result from the nature of ATM connections.
The key multicast related issues are heterogeneity, data distribution, receiver transitions, and end-point identification.

4.2.1 Reservation to VC Mapping

There are various approaches available for mapping reservations onto VCs. A distinguishing attribute of all approaches is how reservations are combined onto individual VCs. When mapping reservations onto VCs, individual VCs can be used to support a single reservation, or a reservation can be combined with others onto "aggregate" VCs. In the first case, each reservation will be supported by one or more VCs. Multicast reservation requests may translate into the setup of multiple VCs, as described in more detail in section 4.2.3. Unicast reservation requests will always translate into the setup of a single QoS VC. In both cases, each VC will only carry data associated with a single reservation. The greatest benefit of this approach is ease of implementation, but it comes at the cost of increased (VC) setup time and the consumption of a greater number of VCs and associated resources.

When multiple reservations are combined onto a single VC, it is referred to as the "aggregation" model. With this model, large VCs could be set up between IP routers and hosts in an ATM network. These VCs could be managed much like IP Integrated Service (IIS) point-to-point links (e.g. T-1, DS-3) are managed now. Traffic from multiple sources over multiple RSVP sessions might be multiplexed on the same VC. This approach has a number of advantages. First, there is typically no signalling latency, as VCs would be in existence when the traffic started flowing, so no time is wasted in setting up VCs. Second, the heterogeneity problem (section 4.2.3) over ATM is reduced to a problem that is already solved. Finally, the dynamic QoS problem (section 4.2.7) for ATM is likewise reduced to a solved problem.
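The two mapping strategies just described can be contrasted in a short sketch. The VC class and its fields are illustrative assumptions only, not an ATM signalling API.

```python
# Hypothetical sketch contrasting per-reservation and aggregate
# reservation-to-VC mapping. The VC class and its fields are
# illustrative assumptions, not a real ATM API.

class VC:
    def __init__(self, qos):
        self.qos = qos            # QoS assigned at VC setup time
        self.reservations = []    # reservations carried on this VC

def map_per_reservation(reservations):
    """One dedicated VC per reservation: simple to implement, but each
    new reservation pays a VC setup and consumes VC resources."""
    vcs = []
    for resv in reservations:
        vc = VC(qos=resv["qos"])
        vc.reservations.append(resv)
        vcs.append(vc)
    return vcs

def map_aggregate(reservations, pipe_qos):
    """Aggregation model: multiplex all reservations onto one large,
    pre-established VC, managed like an IIS point-to-point link."""
    vc = VC(qos=pipe_qos)
    vc.reservations.extend(reservations)
    return [vc]

resvs = [{"session": 1, "qos": 1_000_000},
         {"session": 2, "qos": 2_000_000}]
assert len(map_per_reservation(resvs)) == 2   # one VC per reservation
assert len(map_aggregate(resvs, 10_000_000)) == 1
```

The aggregate case makes the pipe_qos choice the hard part: it must be guessed ahead of the reservations it will carry, which is exactly the difficulty noted for the aggregation model.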
The aggregation model can be used with point-to-point and point-to-multipoint VCs. The problem with the aggregation model is that the choice of what QoS to use for the VCs may be difficult without knowledge of the likely reservation types and sizes, but the choice is made easier since the VCs can be changed as needed.

4.2.2 Unicast Data VC Management

Unicast data VC management is much simpler than multicast data VC management, but there are still some similar issues. If one considers unicast to be a degenerate case of multicast, then implementing the multicast solutions will cover unicast. However, some may want to consider unicast-only implementations. In these situations, the choice of using a single flow per VC or aggregation of flows onto a single VC remains, but the problem of heterogeneity discussed in the following section is removed.

4.2.3 Multicast Heterogeneity

As mentioned in section 2.1.3.1 and shown in figure 2, multicast heterogeneity occurs when receivers request different qualities of service within a single session. This means that the amount of requested resources differs on a per next hop basis. A related type of heterogeneity occurs due to best-effort receivers. In any IP multicast group, it is possible that some receivers will request QoS (via RSVP) and some receivers will not. In shared media networks, like Ethernet, receivers that have not requested resources can typically be given service identical to those that have, without complications. This is not the case with ATM. In ATM networks, any additional end-points of a VC must be explicitly added. There may be costs associated with adding a best-effort receiver, and there might not be adequate resources. An RSVP over ATM solution will need to support heterogeneous receivers even though ATM does not currently provide such support directly.
RSVP heterogeneity is supported over ATM in the way RSVP reservations are mapped into ATM VCs. There are four alternative approaches to this mapping. Section 4.2.3.1 examines the multiple VCs per RSVP reservation (or full heterogeneity) model, where a single reservation can be forwarded onto several VCs, each with a different QoS. Section 4.2.3.2 presents a limited heterogeneity model, where exactly one QoS VC is used along with a best effort VC. Section 4.2.3.3 examines the VC per RSVP reservation (or homogeneous) model, where each RSVP reservation is mapped to a single ATM VC. Section 4.2.3.4 describes the aggregation model, allowing aggregation of multiple RSVP reservations into a single VC.

4.2.3.1 Full Heterogeneity Model

RSVP supports heterogeneous QoS, meaning that different receivers of the same multicast group can request a different QoS. But importantly, some receivers might have no reservation at all and want to receive the traffic on a best effort service basis. The IP model allows receivers to join a multicast group at any time on a best effort basis, and it is important that ATM as part of the Internet continue to provide this service. We define the "full heterogeneity" model as providing a separate VC for each distinct QoS for a multicast session, including best effort and one or more qualities of service.

Note that while full heterogeneity gives users exactly what they request, it requires more resources of the network than other possible approaches. The exact amount of bandwidth used for duplicate traffic depends on the network topology and group membership.
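The full heterogeneity model can be sketched as a grouping of receivers by requested QoS, one VC per distinct value. The data structures below are illustrative assumptions; 0 stands in for best effort.

```python
# Sketch of the full heterogeneity model: one point-to-multipoint VC
# per distinct QoS requested within the session (best effort
# included), with each receiver attached to the VC matching its
# request. The request map and QoS encoding are illustrative.

def build_full_heterogeneity(requests):
    """requests maps receiver -> requested QoS (0 = best effort).
    Returns a map of QoS -> list of VC end-points."""
    vcs = {}
    for receiver, qos in requests.items():
        vcs.setdefault(qos, []).append(receiver)   # one VC per QoS
    return vcs

vcs = build_full_heterogeneity({"R1": 0, "R2": 1_000_000,
                                "R3": 1_000_000, "R4": 2_000_000})
assert len(vcs) == 3                  # best effort + two QoS levels
assert sorted(vcs[1_000_000]) == ["R2", "R3"]
```

Every sender transmits a copy into each of these VCs, which is where the duplicate-traffic cost noted above comes from: the copies grow with the number of distinct QoS levels, not with the number of receivers.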
4.2.3.2 Limited Heterogeneity Model

We define the "limited heterogeneity" model as the case where the receivers of a multicast session are limited to use either best effort service or a single alternate quality of service. The alternate QoS can be chosen either by higher level protocols or by dynamic renegotiation of QoS as described below.

In order to support limited heterogeneity, each ATM edge device participating in a session would need at most two VCs. One VC would be a point-to-multipoint best effort service VC and would serve all best effort service IP destinations for this RSVP session.

The other VC would be a point-to-multipoint VC with QoS and would serve all IP destinations for this RSVP session that have an RSVP reservation established.

As with full heterogeneity, a disadvantage of the limited heterogeneity scheme is that each packet will need to be duplicated at the network layer and one copy sent into each of the two VCs. Again, the exact amount of excess traffic will depend on the network topology and group membership. If any of the existing QoS VC end-points cannot upgrade to the new QoS, then the new reservation fails even though the resources exist for the new receiver.

4.2.3.3 Homogeneous and Modified Homogeneous Models

We define the "homogeneous" model as the case where all receivers of a multicast session use a single quality of service VC. Best-effort receivers also use the single RSVP triggered QoS VC. The single VC can be point-to-point or point-to-multipoint as appropriate. The QoS VC is sized to provide the maximum resources requested by all RSVP next-hops.

This model matches the way the current RSVP specification addresses heterogeneous requests.
The current processing rules and traffic control interface describe a model where the largest requested reservation for a specific outgoing interface is used in resource allocation, and traffic is transmitted at the higher rate to all next-hops. This approach would be the simplest method for RSVP over ATM implementations.

While this approach is simple to implement, providing better than best-effort service may actually be the opposite of what the user desires. There may be charges incurred or resources that are wrongfully allocated. There are two specific problems. The first problem is that a user making a small or no reservation would share the QoS VC's resources without making (and perhaps paying for) an RSVP reservation. The second problem is that a receiver may not receive any data. This may occur when there are insufficient resources to add a receiver. The rejected user would not be added to the single VC and would not even receive traffic on a best effort basis.

Not sending data traffic to best-effort receivers because of another receiver's RSVP request is clearly unacceptable. The previously described limited heterogeneity model ensures that data is always sent to both QoS and best-effort receivers, but it does so by requiring replication of data at the sender in all cases. It is possible to extend the homogeneous model to both ensure that data is always sent to best-effort receivers and avoid replication in the normal case. This extension is to add special handling for the case where a best-effort receiver cannot be added to the QoS VC. In this case, a best effort VC can be established to any receivers that could not be added to the QoS VC. Only in this special error case would senders be required to replicate data. We define this approach as the "modified homogeneous" model.
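The modified homogeneous model reduces to a simple fallback rule, sketched below. The add_leaf callback is a hypothetical stand-in for ATM point-to-multipoint leaf addition; it is an assumption for illustration, not a real signalling call.

```python
# Sketch of the "modified homogeneous" model: try to add every
# receiver to the single QoS VC; only receivers that cannot be added
# (e.g. for lack of resources) fall back to a best-effort VC, which
# exists solely for that error case. add_leaf() is a hypothetical
# stand-in for ATM leaf-addition signalling.

def build_vcs(receivers, add_leaf):
    qos_vc, best_effort_vc = [], []
    for r in receivers:
        if add_leaf(r):            # leaf addition succeeded
            qos_vc.append(r)
        else:                      # error case: replicate to a BE VC
            best_effort_vc.append(r)
    return qos_vc, best_effort_vc

# In this example the network refuses to add R2 to the QoS VC.
qos, be = build_vcs(["R1", "R2", "R3"], lambda r: r != "R2")
assert qos == ["R1", "R3"] and be == ["R2"]
```

In the normal case the best-effort list stays empty and no sender-side replication occurs; replication is only required while the fallback VC has end-points.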
4.2.3.4 Aggregation

The last scheme is the multiple RSVP reservations per VC (or aggregation) model. With this model, large VCs could be set up between IP routers and hosts in an ATM network. These VCs could be managed much like IP Integrated Service (IIS) point-to-point links (e.g. T-1, DS-3) are managed now. Traffic from multiple sources over multiple RSVP sessions might be multiplexed on the same VC. This approach has a number of advantages. First, there is typically no signalling latency, as VCs would be in existence when the traffic started flowing, so no time is wasted in setting up VCs. Second, the heterogeneity problem over ATM is reduced to a problem that is already solved. Finally, the dynamic QoS problem for ATM is likewise reduced to a solved problem. This approach can be used with point-to-point and point-to-multipoint VCs. The problem with the aggregation approach is that the choice of what QoS to use for which of the VCs is difficult, but it is made easier if the VCs can be changed as needed.

4.2.4 Multicast End-Point Identification

Implementations must be able to identify ATM end-points participating in an IP multicast group. The ATM end-points will be IP multicast receivers and/or next-hops. Both QoS and best-effort end-points must be identified. RSVP next-hop information will provide QoS end-points, but not best-effort end-points. Another issue is identifying end-points of multicast traffic handled by non-RSVP capable next-hops. In this case a PATH message travels through a non-RSVP egress router on the way to the next hop RSVP node. When the next hop RSVP node sends a RESV message, it may arrive at the source over a different route than the data is using. The source will get the RESV message, but will not know which egress router needs the QoS.
For unicast sessions, there is no problem since the ATM end-point will be the IP next-hop router. Unfortunately, multicast routing may not be able to uniquely identify the IP next-hop router. So it is possible that a multicast end-point cannot be identified.

In the most common case, MARS will be used to identify all end-points of a multicast group. In the router to router case, a multicast routing protocol may provide all next-hops for a particular multicast group. In either case, RSVP over ATM implementations must obtain a full list of end-points, both QoS and non-QoS, using the appropriate mechanisms. The full list can be compared against the RSVP identified end-points to determine the list of best-effort receivers. There is no straightforward solution to uniquely identifying end-points of multicast traffic handled by non-RSVP next hops. The preferred solution is to use multicast routing protocols that support unique end-point identification. In cases where such routing protocols are unavailable, all IP routers that will be used to support RSVP over ATM should support RSVP. To ensure proper behavior, implementations should, by default, only establish RSVP-initiated VCs to RSVP capable end-points.

4.2.5 Multicast Data Distribution

Two models are planned for IP multicast data distribution over ATM. In one model, senders establish point-to-multipoint VCs to all ATM attached destinations, and data is then sent over these VCs. This model is often called "multicast mesh" or "VC mesh" mode distribution. In the second model, senders send data over point-to-point VCs to a central point, and the central point relays the data onto point-to-multipoint VCs that have been established to all receivers of the IP multicast group. This model is often referred to as "multicast server" mode distribution.
RSVP over ATM solutions must ensure that IP multicast data is distributed with appropriate QoS.

In the Classical IP context, multicast server support is provided via MARS [5]. MARS does not currently provide a way to communicate QoS requirements to a MARS multicast server. Therefore, RSVP over ATM implementations must, by default, support "mesh-mode" distribution for RSVP controlled multicast flows. When using multicast servers that do not support QoS requests, a sender must set the service, not global, break bit(s).

4.2.6 Receiver Transitions

When setting up a point-to-multipoint VC for a multicast RSVP session, there will be a time when some receivers have been added to a QoS VC and some have not. During such transition times it is possible to start sending data on the newly established VC. The issue is when to start sending data on the new VC. If data is sent on both the new VC and the old VC, then data will be delivered with the proper QoS to some receivers and with the old QoS to all receivers. This means the QoS receivers can get duplicate data. If data is sent just on the new QoS VC, the receivers that have not yet been added will lose information. So, the issue comes down to whether to send to both the old and new VCs, or to send to just one of the VCs. In one case duplicate information will be received, in the other some information may not be received.

This issue needs to be considered for three cases:
- When establishing the first QoS VC
- When establishing a VC to support a QoS change
- When adding a new end-point to an already established QoS VC

The first two cases are very similar. In both, it is possible to send data on the partially completed new VC, and the issue of duplicate versus lost information is the same. The last case is when an end-point must be added to an existing QoS VC.
In this case the end-point must be both added to the QoS VC and dropped from a best-effort VC. The issue is which to do first. If the add is requested first, then the end-point may get duplicate information. If the drop is requested first, then the end-point may lose information.

In order to ensure predictable behavior and delivery of data to all receivers, data can only be sent on a new VC once all parties have been added. This will ensure that all data is delivered only once to all receivers. This approach does not quite apply for the last case. In the last case, the add operation should be completed first, then the drop operation. This means that receivers must be prepared to receive some duplicate packets at times of QoS setup.

4.2.7 Dynamic QoS

RSVP provides dynamic quality of service (QoS) in that the resources that are requested may change at any time. There are several common reasons for a change of reservation QoS.

1. An existing receiver can request a new larger (or smaller) QoS.
2. A sender may change its traffic specification (TSpec), which can trigger a change in the reservation requests of the receivers.
3. A new sender can start sending to a multicast group with a larger traffic specification than existing senders, triggering larger reservations.
4. A new receiver can make a reservation that is larger than existing reservations.

If the limited heterogeneity model is being used and the merge node for the larger reservation is an ATM edge device, a new larger reservation must be set up across the ATM network. Since ATM service, as currently defined in UNI 3.x and UNI 4.0, does not allow renegotiating the QoS of a VC, dynamically changing the reservation means creating a new VC with the new QoS, and tearing down an established VC.
Tearing down a VC and setting up a new VC in ATM are complex operations that involve a non-trivial amount of processing time, and may have a substantial latency. There are several options for dealing with this mismatch in service. A specific approach will need to be a part of any RSVP over ATM solution.

The default method for supporting changes in RSVP reservations is to attempt to replace an existing VC with a new appropriately sized VC. During setup of the replacement VC, the old VC must be left in place unmodified. The old VC is left unmodified to minimize interruption of QoS data delivery. Once the replacement VC is established, data transmission is shifted to the new VC, and the old VC is then closed. If setup of the replacement VC fails, then the old QoS VC should continue to be used. When the new reservation is greater than the old reservation, the reservation request should be answered with an error. When the new reservation is less than the old reservation, the request should be treated as if the modification was successful. While leaving the larger allocation in place is suboptimal, it maximizes delivery of service to the user. Implementations should retry replacing the too large VC after some appropriate elapsed time.

One additional issue is that only one QoS change can be processed at a time per reservation. If the (RSVP) requested QoS is changed while the first replacement VC is still being set up, then the replacement VC is released and the whole VC replacement process is restarted. To limit the number of changes and to avoid excessive signalling load, implementations may limit the number of changes that will be processed in a given period.
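The default replacement procedure and its failure rules can be sketched as follows. The setup() callback is a hypothetical stand-in for ATM VC establishment (returning a new VC or None on failure); it is an illustrative assumption, not a real signalling API.

```python
# Sketch of the default VC-replacement procedure for a QoS change,
# including the failure rules above. setup() is a hypothetical
# signalling call returning a new VC dict or None. Note that only one
# replacement may be in progress per reservation at a time; a further
# QoS change aborts and restarts the whole procedure.

def replace_vc(old_vc, new_qos, setup):
    new_vc = setup(new_qos)           # old VC left in place meanwhile
    if new_vc is not None:
        old_vc["active"] = False      # shift data to new VC, close old
        return new_vc, "ok"
    # Replacement setup failed: keep using the old QoS VC.
    if new_qos > old_vc["qos"]:
        return old_vc, "resv-error"   # larger request: report an error
    return old_vc, "ok"               # smaller: suboptimal but served

old = {"qos": 5, "active": True}
vc, status = replace_vc(old, 9, lambda q: None)   # setup fails
assert vc is old and status == "resv-error"
vc, status = replace_vc(old, 3, lambda q: None)   # smaller request
assert vc is old and status == "ok"
vc, status = replace_vc(old, 7, lambda q: {"qos": q, "active": True})
assert status == "ok" and vc["qos"] == 7 and old["active"] is False
```

Retrying the failed downsizing after some elapsed time, as recommended above, would simply re-invoke this procedure later.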
One implementation approach would have each ATM edge device configured with a time parameter T (which can change over time) that gives the minimum amount of time the edge device will wait between successive changes of the QoS of a particular VC. Thus if the QoS of a VC is changed at time t, all messages that would change the QoS of that VC that arrive before time t+T would be queued. If several messages changing the QoS of a VC arrive during the interval, redundant messages can be discarded. At time t+T, the remaining change(s) of QoS, if any, can be executed. This timer approach would apply more generally to any network structure, and might be worthwhile to incorporate into RSVP.

The sequence of events for a single VC would be:

- Wait if timer is active
- Establish VC with new QoS
- Remap data traffic to new VC
- Tear down old VC
- Activate timer

There is an interesting interaction between heterogeneous reservations and dynamic QoS. In the case where a RESV message is received from a new next-hop and the requested resources are larger than any existing reservation, both dynamic QoS and heterogeneity need to be addressed. A key issue is whether to first add the new next-hop or to change to the new QoS. This is a fairly straightforward special case. Since the older, smaller reservation does not support the new next-hop, the dynamic QoS process should be initiated first. Since the new QoS is only needed by the new next-hop, it should be the first end-point of the new VC. This way signalling is minimized when the setup to the new next-hop fails.

4.2.8 Short-Cuts

Short-cuts [4] allow ATM attached routers and hosts to directly establish point-to-point VCs across LIS boundaries, i.e., the VC end-points are on different IP subnets. The ability for short-cuts and RSVP to interoperate has been raised as a general question.
An area of concern is the ability to handle asymmetric short-cuts, specifically how RSVP can handle the case where a downstream short-cut does not have a matching upstream short-cut. In this case, PATH and RESV messages follow different paths.

Examination of RSVP shows that the protocol already includes mechanisms that will support short-cuts. The mechanism is the same one used to support RESV messages arriving at the wrong router and the wrong interface. The key aspect of this mechanism is that RSVP processes only messages that arrive at the proper interface, and forwards messages that arrive on the wrong interface. The proper interface is indicated in the NHOP object of the message. So, existing RSVP mechanisms will support asymmetric short-cuts. The short-cut model of VC establishment still poses several issues when running with RSVP. The major issues are dealing with established best-effort short-cuts, when to establish short-cuts, and QoS only short-cuts. These issues will need to be addressed by RSVP implementations.

The key issue to be addressed by any RSVP over ATM solution is when to establish a short-cut for a QoS data flow. The default behavior is to simply follow best-effort traffic. When a short-cut has been established for best-effort traffic to a destination or next-hop, that same end-point should be used when setting up RSVP triggered VCs for QoS traffic to the same destination or next-hop. This will happen naturally when PATH messages are forwarded over the best-effort short-cut. Note that in this approach, if best-effort short-cuts are never established, RSVP triggered QoS short-cuts will also never be established. More study is expected in this area.

4.2.9 VC Teardown

RSVP can identify from either explicit messages or timeouts when a data VC is no longer needed.
Therefore, data VCs set up to support RSVP controlled flows should only be released at the direction of RSVP. VCs must not be timed out due to inactivity by either the VC initiator or the VC receiver. This conflicts with VCs timing out as described in RFC 1755 [11], section 3.4 on VC Teardown. RFC 1755 recommends tearing down a VC that is inactive for a certain length of time; twenty minutes is recommended. This timeout is typically implemented at both the VC initiator and the VC receiver. However, section 3.1 of the update to RFC 1755 [11] states that inactivity timers must not be used at the VC receiver.

When this timeout occurs for an RSVP initiated VC, a valid VC with QoS will be torn down unexpectedly. While this behavior is acceptable for best-effort traffic, it is important that RSVP controlled VCs not be torn down. If there is no choice about the VC being torn down, the RSVP daemon must be notified, so a reservation failure message can be sent.

For VCs initiated at the request of RSVP, the configurable inactivity timer mentioned in [11] must be set to "infinite". Setting the inactivity timer value at the VC initiator should not be problematic, since the proper value can be relayed internally at the originator. Setting the inactivity timer at the VC receiver is more difficult, and would require some mechanism to signal that an incoming VC was RSVP initiated. To avoid this complexity and to conform to [11], implementations must not use an inactivity timer to clear received connections.

4.3 RSVP Control Management

One last important issue is providing a data path for the RSVP messages themselves. There are two main types of messages in RSVP, PATH and RESV. PATH messages are sent to unicast or multicast addresses, while RESV messages are sent only to unicast addresses. Other RSVP messages are handled similarly to either PATH or RESV (this can be slightly more complicated for RERR messages).
So ATM VCs used for RSVP signalling messages need to provide both unicast and multicast functionality. There are several different approaches for how to assign VCs for RSVP signalling messages.

The main approaches are:

- use the same VC as the data
- single VC per session
- single point-to-multipoint VC multiplexed among sessions
- multiple point-to-point VCs multiplexed among sessions

There are several different issues that affect the choice of how to assign VCs for RSVP signalling. One issue is the number of additional VCs needed for RSVP signalling. Related to this issue is the degree of multiplexing on the RSVP VCs; in general, more multiplexing means fewer VCs. An additional issue is the latency in dynamically setting up new RSVP signalling VCs. A final issue is complexity of implementation. The remainder of this section discusses the issues and tradeoffs among these different approaches and suggests guidelines for when to use which alternative.

4.3.1 Mixed data and control traffic

In this scheme RSVP signalling messages are sent on the same VCs as the data traffic. The main advantage of this scheme is that no additional VCs are needed beyond what is needed for the data traffic. An additional advantage is that there is no ATM signalling latency for PATH messages (which follow the same routing as the data messages). However, there can be a major problem when data traffic on a VC is nonconforming. With nonconforming traffic, RSVP signalling messages may be dropped. While RSVP is resilient to a moderate level of dropped messages, excessive drops would lead to repeated tearing down and re-establishing of QoS VCs, a very undesirable behavior for ATM. Due to these problems, this may not be a good choice for carrying RSVP signalling messages, even though the number of VCs needed for this scheme is minimized.
One variation of this scheme is to use the best effort data path for signalling traffic. In this scheme, there is no issue with nonconforming traffic, but there is an issue with congestion in the ATM network. RSVP provides some resiliency to message loss due to congestion, but RSVP control messages should be offered a preferred class of service. A related variation of this scheme that is promising but requires further study is to have a packet scheduling algorithm (before entering the ATM network) that gives priority to the RSVP signalling traffic. This can be difficult to do at the IP layer.

4.3.1.1 Single RSVP VC per RSVP Reservation

In this scheme, there is a parallel RSVP signalling VC for each RSVP reservation. This scheme results in twice the number of VCs, but means that RSVP signalling messages have the advantage of a separate VC. This separate VC means that RSVP signalling messages have their own traffic contract, and compliant signalling messages are not subject to dropping due to other noncompliant traffic (such as can happen with the scheme in section 4.3.1). The advantage of this scheme is its simplicity - whenever a data VC is created, a separate RSVP signalling VC is created. The disadvantage of the extra VC is that extra ATM signalling needs to be done. Additionally, this scheme requires twice the minimum number of VCs and adds latency, but is quite simple.

4.3.1.2 Multiplexed point-to-multipoint RSVP VCs

In this scheme, there is a single point-to-multipoint RSVP signalling VC for each unique ingress router and unique set of egress routers. This scheme allows multiplexing of RSVP signalling traffic that shares the same ingress router and the same egress routers.
This can save on the number of VCs, by multiplexing, but there are problems when the destinations of the multiplexed point-to-multipoint VCs are changing. Several alternatives exist in these cases, each with applicability in different situations. First, when the egress routers change, the ingress router can check whether it already has a point-to-multipoint RSVP signalling VC for the new list of egress routers. If the RSVP signalling VC already exists, then the RSVP signalling traffic can be switched to this existing VC. If no such VC exists, one approach would be to create a new VC with the new list of egress routers. Other approaches include modifying the existing VC to add an egress router, or using a separate new VC for the new egress routers. When a destination drops out of a group, an alternative would be to keep sending on the existing VC even though some traffic is wasted. The number of VCs used in this scheme is a function of traffic patterns across the ATM network, but is always less than the number used with the single RSVP VC per data VC scheme. In addition, existing best effort data VCs could be used for RSVP signalling. Reusing best effort VCs saves on the number of VCs at the cost of a higher probability of RSVP signalling packet loss. One possible place where this scheme will work well is in the core of the network, where there is the most opportunity to take advantage of the savings due to multiplexing. The exact savings depend on the patterns of traffic and the topology of the ATM network.

4.3.1.3 Multiplexed point-to-point RSVP VCs

In this scheme, multiple point-to-point RSVP signalling VCs are used for a single point-to-multipoint data VC. This scheme allows multiplexing of RSVP signalling traffic, but requires the same traffic to be sent on each of several VCs.
This scheme is quite flexible and allows a large amount of
multiplexing.

Since point-to-point VCs can set up a reverse channel at the same time
as setting up the forward channel, this scheme could save
substantially on signalling cost. In addition, signalling traffic
could share existing best effort VCs. Sharing existing best effort VCs
reduces the total number of VCs needed, but might cause signalling
traffic drops if there is congestion in the ATM network. This
point-to-point scheme would work well in the core of the network,
where there is much opportunity for multiplexing. Also in the core of
the network, RSVP VCs can stay permanently established, either as
Permanent Virtual Circuits (PVCs) or as long-lived Switched Virtual
Circuits (SVCs). The number of VCs in this scheme will depend on
traffic patterns, but in the core of a network would be approximately
n(n-1)/2, where n is the number of IP nodes in the network. In the
core of the network, this will typically be small compared to the
total number of VCs.

4.3.2 QoS for RSVP VCs

There is an issue of what QoS, if any, to assign to the RSVP
signalling VCs. For other RSVP VC schemes, a QoS (possibly best
effort) will be needed. What QoS to use partially depends on the
expected level of multiplexing being done on the VCs and the expected
reliability of best effort VCs. Since RSVP signalling is infrequent
(typically every 30 seconds), only a relatively small amount of
resources should be needed. This is important since requesting a
larger QoS risks the VC setup being rejected for lack of resources.
Falling back to best effort when a QoS call is rejected is possible,
but if the ATM network is congested, there will likely be problems
with RSVP packet loss on the best effort VC as well. Additional
experimentation is needed in this area.

5. Encapsulation

Since RSVP is a signalling protocol used to control flows of IP data
packets, encapsulation for both RSVP packets and associated IP data
packets must be defined. The methods for transmitting IP packets over
ATM (Classical IP over ATM [10], LANE [17], and MPOA [18]) are all
based on the encapsulations defined in RFC 1483 [19]. RFC 1483
specifies two encapsulations, LLC Encapsulation and VC-based
multiplexing. The former allows multiple protocols to be encapsulated
over the same VC and the latter requires different VCs for different
protocols.

For the purposes of RSVP over ATM, any encapsulation can be used as
long as the VCs are managed in accordance with the methods outlined in
Section 4. Obviously, running multiple protocol data streams over the
same VC with LLC encapsulation can cause the same problems as running
multiple flows over the same VC.

While none of the transmission methods directly address the issue of
QoS, RFC 1755 [11] does suggest some common values for VC setup for
best-effort traffic. [14] discusses the relationship of the RFC 1755
setup parameters and those needed to support IntServ flows in greater
detail.

6. Security Considerations

The same considerations stated in [1] and [11] apply to this document.
There are no additional security issues raised in this document.

7. References

[1]  R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin. Resource
     ReSerVation Protocol (RSVP) -- Version 1 Functional
     Specification. RFC 2205, September 1997.
[2]  M. Borden, E. Crawley, B. Davie, S. Batsell. Integration of Real-
     time Services in an IP-ATM Network Architecture. Request for
     Comments (Informational), RFC 1821, August 1995.
[3]  R. Cole, D. Shur, C. Villamizar. IP over ATM: A Framework
     Document. Request for Comments (Informational), RFC 1932, April
     1996.
[4]  D. Katz, D. Piscitello, B. Cole, J. Luciani.
NBMA Next Hop
     Resolution Protocol (NHRP). Internet Draft,
     draft-ietf-rolc-nhrp-12.txt, October 1997.
[5]  G. Armitage. Support for Multicast over UNI 3.0/3.1 based ATM
     Networks. RFC 2022, November 1996.
[6]  S. Shenker, C. Partridge. Specification of Guaranteed Quality of
     Service. RFC 2212, September 1997.
[7]  J. Wroclawski. Specification of the Controlled-Load Network
     Element Service. RFC 2211, September 1997.
[8]  ATM Forum. ATM User-Network Interface Specification Version 3.0.
     Prentice Hall, September 1993.
[9]  ATM Forum. ATM User Network Interface (UNI) Specification Version
     3.1. Prentice Hall, June 1995.
[10] M. Laubach. Classical IP and ARP over ATM. Request for Comments
     (Proposed Standard), RFC 1577, January 1994.
[11] M. Perez, A. Mankin, E. Hoffman, G. Grossman, A. Malis. ATM
     Signalling Support for IP over ATM. Request for Comments
     (Proposed Standard), RFC 1755, February 1995.
[12] S. Herzog. RSVP Extensions for Policy Control. Internet Draft,
     draft-ietf-rsvp-policy-ext-02.txt, April 1997.
[13] S. Herzog. Local Policy Modules (LPM): Policy Control for RSVP.
     Internet Draft, draft-ietf-rsvp-policy-lpm-01.txt, November 1996.
[14] M. Borden, M. Garrett. Interoperation of Controlled-Load and
     Guaranteed Service with ATM. Internet Draft,
     draft-ietf-issll-atm-mapping-03.txt, August 1997.
[15] L. Berger. RSVP over ATM Implementation Requirements. Internet
     Draft, draft-ietf-issll-atm-imp-req-00.txt, July 1997.
[16] L. Berger. RSVP over ATM Implementation Guidelines. Internet
     Draft, draft-ietf-issll-atm-imp-guide-01.txt, July 1997.
[17] ATM Forum Technical Committee. LAN Emulation over ATM, Version
     1.0 Specification, af-lane-0021.000, January 1995.
[18] ATM Forum Technical Committee. Baseline Text for MPOA,
     af-95-0824r9, September 1996.
[19] J. Heinanen.
Multiprotocol Encapsulation over ATM Adaptation
     Layer 5. RFC 1483, July 1993.
[20] ATM Forum Technical Committee. LAN Emulation over ATM Version 2 -
     LUNI Specification, December 1996.
[21] ATM Forum Technical Committee. Traffic Management Specification
     v4.0, af-tm-0056.000, April 1996.
[22] R. Callon, et al. A Framework for Multiprotocol Label Switching.
     Internet Draft, draft-ietf-mpls-framework-01.txt, July 1997.
[23] B. Rajagopalan, R. Nair, H. Sandick, E. Crawley. A Framework for
     QoS-based Routing in the Internet. Internet Draft,
     draft-ietf-qosr-framework-01.txt, July 1997.
[24] ITU-T. Digital Subscriber Signalling System No. 2 - Connection
     modification: Peak cell rate modification by the connection
     owner. ITU-T Recommendation Q.2963.1, July 1996.
[25] ITU-T. Digital Subscriber Signalling System No. 2 - Connection
     characteristics negotiation during call/connection establishment
     phase. ITU-T Recommendation Q.2962, July 1996.
[26] ATM Forum Technical Committee. Private Network-Network Interface
     Specification v1.0 (PNNI), March 1996.

8. Authors' Addresses

   Eric S. Crawley
   Argon Networks
   25 Porter Road
   Littleton, MA 01460
   +1 978 486-0665
   esc@argon.com

   Lou Berger
   FORE Systems
   6905 Rockledge Drive
   Suite 800
   Bethesda, MD 20817
   +1 301 571-2534
   lberger@fore.com

   Steven Berson
   USC Information Sciences Institute
   4676 Admiralty Way
   Marina del Rey, CA 90292
   +1 310 822-1511
   berson@isi.edu

   Fred Baker
   Cisco Systems
   519 Lado Drive
   Santa Barbara, CA 93111
   +1 805 681-0115
   fred@cisco.com

   Marty Borden
   Bay Networks
   125 Nagog Park
   Acton, MA 01720
   +1 978 266-1011
   mborden@baynetworks.com

   John J. Krawczyk
   ArrowPoint Communications
   235 Littleton Road
   Westford, MA 01886
   +1 978 692-5875
   jj@arrowpoint.com