INTERNET DRAFT
draft-ietf-qosr-framework-05.txt                            May 27, 1998

          A Framework for QoS-based Routing in the Internet

   Eric Crawley      Raj Nair      Bala Rajagopalan   Hal Sandick
   Argon Networks    Arrowpoint    NEC USA            Bay Networks

Status of this Memo

   This document is an Internet Draft. Internet Drafts are working
   documents of the Internet Engineering Task Force (IETF), its Areas,
   and its Working Groups. Note that other groups may also distribute
   working documents as Internet Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time. It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress."

   To view the entire list of current Internet-Drafts, please check the
   "1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
   Directories on ftp.is.co.za (Africa), ftp.nordu.net (Northern
   Europe), ftp.nis.garr.it (Southern Europe), munnari.oz.au (Pacific
   Rim), ftp.ietf.org (US East Coast), or ftp.isi.edu (US West Coast).

   Distribution of this memo is unlimited.

   This Internet Draft expires on October 27, 1998.

ABSTRACT

   QoS-based routing has been recognized as a missing piece in the
   evolution of QoS-based service offerings in the Internet. This
   document describes some of the QoS-based routing issues and
   requirements, and proposes a framework for QoS-based routing in the
   Internet. This framework is based on extending the current Internet
   routing model of intra- and interdomain routing to support QoS.

1. SCOPE OF DOCUMENT & PHILOSOPHY

   This document proposes a framework for QoS-based routing, with the
   objective of fostering the development of an Internet-wide solution
   while encouraging innovation in solving the many problems that
   arise. QoS-based routing has many complex facets, and it is
   recommended that the following two-pronged approach be employed
   towards its development:

   1. Encourage the growth and evolution of novel intradomain QoS-
      based routing architectures. This is to allow the development of
      independent, innovative solutions that address the many QoS-
      based routing issues. Such solutions may be deployed in
      autonomous systems (ASs), large and small, based on their
      specific needs.

   2. Encourage simple, consistent and stable interactions between ASs
      implementing routing solutions developed as above.

   This approach follows the traditional separation between intra- and
   interdomain routing. It allows solutions such as QOSPF [GKOP98,
   ZSSC97], Integrated PNNI [IPNNI] or other schemes to be deployed
   for intradomain routing without any restriction other than their
   ability to interact with a common, and perhaps simple, interdomain
   routing protocol. The need to develop a single, all-encompassing
   solution to the complex problem of QoS-based routing is therefore
   obviated. As a practical matter, there are many different views on
   how QoS-based routing should be done. Much overall progress can be
   made if an opportunity exists for various ideas to be developed and
   deployed concurrently, while some consensus on the interdomain
   routing architecture is being developed. Finally, this routing
   model is perhaps the most practical from an evolution point of
   view. It goes without saying that the eventual success of a QoS-
   based Internet routing architecture will depend on the ease of
   evolution.
   The aim of this document is to describe the QoS-based routing
   issues, identify basic requirements on intra- and interdomain
   routing, and describe an extension of the current interdomain
   routing model to support QoS. It is not an objective of this
   document to specify the details of intradomain QoS-based routing
   architectures. This is left up to the various intradomain routing
   efforts that might follow. Nor is it an objective to specify the
   details of the interface between reservation protocols such as RSVP
   and QoS-based routing. The specific interface functionality needed,
   however, would be clear from the intra- and interdomain routing
   solutions devised. In the intradomain area, the goal is to develop
   the basic routing requirements while allowing maximum freedom for
   the development of solutions. In the interdomain area, the
   objectives are to identify the QoS-based routing functions and
   facilitate the development or enhancement of a routing protocol
   that allows relatively simple interaction between domains.

   In the next section, a glossary of relevant terminology is given.
   In Section 3, the objectives of QoS-based routing are described and
   the issues that must be dealt with by QoS-based Internet routing
   efforts are outlined. In Section 4, some requirements on
   intradomain routing are defined. These requirements are purposely
   broad, putting few constraints on solution approaches. The
   interdomain routing model and issues are described in Section 5,
   and QoS-based multicast routing is discussed in Section 6. The
   interaction between QoS-based routing and resource reservation
   protocols is briefly considered in Section 7. Related work is
   described in Section 8. Finally, a summary and conclusions are
   presented in Section 9.

2. GLOSSARY

   The following glossary lists the terminology used in this document
   and an explanation of what is meant.
   Some of these terms may have different connotations elsewhere, but
   when used in this document, their meaning is as given.

   Alternate Path Routing: A routing technique where multiple paths,
   rather than just the shortest path, between a source and a
   destination are utilized to route traffic. One of the objectives of
   alternate path routing is to distribute load among multiple paths
   in the network.

   Autonomous System (AS): A routing domain which has a common
   administrative authority and consistent internal routing policy. An
   AS may employ multiple intradomain routing protocols internally,
   and interfaces to other ASs via a common interdomain routing
   protocol.

   Source: A host or router that can be identified by a unique unicast
   IP address.

   Unicast destination: A host or router that can be identified by a
   unique unicast IP address.

   Multicast destination: A multicast IP address indicating all hosts
   and routers that are members of the corresponding group.

   IP flow (or simply "flow"): An IP packet stream from a source to a
   destination (unicast or multicast) with an associated Quality of
   Service (QoS) (see below) and higher-level demultiplexing
   information. The associated QoS could be "best-effort".

   Quality of Service (QoS): A set of service requirements to be met
   by the network while transporting a flow.

   Service class: The definition of the semantics and parameters of a
   specific type of QoS.

   Integrated services: The Integrated Services model for the
   Internet, defined in RFC 1633, allows for integration of QoS
   services with the best-effort services of the Internet. The
   Integrated Services (IntServ) working group in the IETF has defined
   two service classes, Controlled Load Service [W97] and Guaranteed
   Service [SPG97].

   RSVP: The ReSerVation Protocol [BZBH97], a QoS signaling protocol
   for the Internet.

   Path: A unicast or multicast path.
   Unicast path: A sequence of links from an IP source to a unicast IP
   destination, determined by the routing scheme for forwarding
   packets.

   Multicast path (or Multicast Tree): A subtree of the network
   topology in which all the leaves and zero or more interior nodes
   are members of the same multicast group. A multicast path may be
   per-source, in which case the subtree is rooted at the source.

   Flow set-up: The act of establishing state in routers along a path
   to satisfy the QoS requirement of a flow.

   Crankback: A technique where a flow set-up is recursively
   backtracked along the partial flow path up to the first node that
   can determine an alternative path to the destination.

   QoS-based routing: A routing mechanism under which paths for flows
   are determined based on some knowledge of resource availability in
   the network as well as the QoS requirement of flows.

   Route pinning: A mechanism to keep a flow path fixed for a duration
   of time.

   Flow Admission Control (FAC): A process by which it is determined
   whether a link or a node has sufficient resources to satisfy the
   QoS required for a flow. FAC is typically applied by each node in
   the path of a flow during flow set-up to check local resource
   availability.

   Higher-level admission control: A process by which it is determined
   whether or not a flow set-up should proceed, based on estimates and
   policy requirements of the overall resource usage by the flow.
   Higher-level admission control may result in the failure of a flow
   set-up even when FAC at each node along the flow path indicates
   resource availability.

3. QOS-BASED ROUTING: BACKGROUND AND ISSUES

3.1 Best-Effort and QoS-Based Routing
    ---------------------------------

   Routing deployed in today's Internet is focused on connectivity and
   typically supports only one type of datagram service called "best
   effort" [WC96]. Current Internet routing protocols, e.g.
   OSPF and RIP, use "shortest path routing", i.e., routing that is
   optimized for a single arbitrary metric such as administrative
   weight or hop count. These routing protocols are also
   "opportunistic", always using the current shortest path or route to
   a destination. Alternate paths with acceptable but non-optimal cost
   cannot be used to route traffic (although shortest path routing
   protocols do allow a router to alternate among several equal-cost
   paths to a destination).

   QoS-based routing must extend the current routing paradigm in three
   basic ways. First, to support traffic using the integrated services
   classes of service, multiple paths between node pairs will have to
   be calculated. Some of these new classes of service will require
   the distribution of additional routing metrics, e.g., delay and
   available bandwidth. If any of these metrics change frequently,
   routing updates can become more frequent, thereby consuming network
   bandwidth and router CPU cycles.

   Second, today's opportunistic routing will shift traffic from one
   path to another as soon as a "better" path is found. The traffic
   will be shifted even if the existing path can meet the service
   requirements of the existing traffic. If the routing calculation is
   tied to frequently changing consumable resources (e.g., available
   bandwidth), this shifting will happen more often and can introduce
   routing oscillations as traffic moves back and forth between
   alternate paths. Furthermore, frequently changing routes can
   increase the variation in the delay and jitter experienced by the
   end users.

   Third, as mentioned earlier, today's optimal path routing
   algorithms do not support alternate routing. If the best existing
   path cannot admit a new flow, the associated traffic cannot be
   forwarded even if an adequate alternate path exists.
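   To make the contrast concrete, the single-metric shortest-path
   computation and a QoS-aware variant can be sketched in a few lines.
   The topology, bandwidth figures and the prune-then-route heuristic
   below are illustrative assumptions, not something prescribed by
   this framework:

```python
import heapq

def shortest_path(graph, src, dst):
    """Plain hop-count Dijkstra: the single "optimal" path used by
    best-effort routing, chosen regardless of residual resources."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, _bw in graph.get(u, []):
            if d + 1 < dist.get(v, float("inf")):
                dist[v], prev[v] = d + 1, u
                heapq.heappush(pq, (d + 1, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def feasible_path(graph, src, dst, bw_needed):
    """QoS-aware variant: first prune links whose residual bandwidth
    cannot carry the flow, then run the same shortest-path search."""
    pruned = {u: [(v, bw) for v, bw in nbrs if bw >= bw_needed]
              for u, nbrs in graph.items()}
    return shortest_path(pruned, src, dst)

# Triangle topology; the direct A-C link has little residual bandwidth.
graph = {"A": [("B", 80), ("C", 5)], "B": [("C", 80)], "C": []}
print(shortest_path(graph, "A", "C"))      # ['A', 'C']
print(feasible_path(graph, "A", "C", 30))  # ['A', 'B', 'C']
```

   The opportunistic shortest path cannot admit a 30-unit flow, while
   the QoS-aware search falls back to the feasible two-hop alternate.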
3.2 QoS-Based Routing and Resource Reservation
    ------------------------------------------

   It is important to understand the difference between QoS-based
   routing and resource reservation. While resource reservation
   protocols such as RSVP [BZBH97] provide a method for requesting and
   reserving network resources, they do not provide a mechanism for
   determining a network path that has adequate resources to
   accommodate the requested QoS. Conversely, QoS-based routing allows
   the determination of a path that has a good chance of accommodating
   the requested QoS, but it does not include a mechanism to reserve
   the required resources.

   Consequently, QoS-based routing is usually used in conjunction with
   some form of resource reservation or resource allocation mechanism.
   Simple forms of QoS-based routing have been used in the past for
   Type of Service (TOS) routing [M91]. In the case of OSPF, a
   different shortest-path tree can be computed for each of the 8 TOS
   values in the IP header [ISI81]. Such mechanisms can be used to
   select specially provisioned paths but do not completely assure
   that resources are not overbooked along the path. As long as strict
   resource management and control are not needed, mechanisms such as
   TOS-based routing are useful for separating whole classes of
   traffic over multiple routes. Such mechanisms might work well with
   the emerging Differentiated Services efforts [BBCD98].

   Combining a resource reservation protocol with QoS-based routing
   allows fine control over the route and resources at the cost of
   additional state and set-up time. For example, a protocol such as
   RSVP may be used to trigger QoS-based routing calculations to meet
   the needs of a specific flow.
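   This division of labor can be illustrated with a toy model in which
   QoS-based routing supplies a path and a separate reservation step
   (as RSVP would) performs per-hop flow admission control (FAC). The
   class and its data model are assumptions made for illustration
   only:

```python
class Reservations:
    """Toy per-link reservation state; residual[(u, v)] is the free
    bandwidth on the link from u to v."""

    def __init__(self, residual):
        self.residual = dict(residual)

    def admit(self, link, bw):
        """Per-hop flow admission control (FAC)."""
        if self.residual.get(link, 0) >= bw:
            self.residual[link] -= bw
            return True
        return False

    def setup(self, path, bw):
        """Reserve along a routed path; roll back on any FAC failure
        so that no partial state is left behind."""
        done = []
        for link in zip(path, path[1:]):
            if not self.admit(link, bw):
                for lk in done:          # release partially reserved state
                    self.residual[lk] += bw
                return False
            done.append(link)
        return True

net = Reservations({("A", "B"): 50, ("B", "C"): 20})
print(net.setup(["A", "B", "C"], 30))  # False: FAC fails at B-C
print(net.setup(["A", "B", "C"], 15))  # True: resources reserved
```

   Note that the routing step only suggests a path likely to fit; it
   is the reservation step that discovers the B-C link cannot carry
   the first flow.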
3.3 QoS-Based Routing: Objectives
    -----------------------------

   Under QoS-based routing, paths for flows would be determined based
   on some knowledge of resource availability in the network, as well
   as the QoS requirement of flows. The main objectives of QoS-based
   routing are:

   1. Dynamic determination of feasible paths: QoS-based routing can
      determine a path, from among possibly many choices, that has a
      good chance of accommodating the QoS of the given flow. Feasible
      path selection may be subject to policy constraints, such as
      path cost, provider selection, etc.

   2. Optimization of resource usage: A network state-dependent QoS-
      based routing scheme can aid in the efficient utilization of
      network resources by improving the total network throughput.
      Such a routing scheme can be the basis for efficient network
      engineering.

   3. Graceful performance degradation: State-dependent routing can
      compensate for transient inadequacies in network engineering
      (e.g., during focused overload conditions), giving better
      throughput and a more graceful performance degradation as
      compared to a state-insensitive routing scheme [A84].

   QoS-based routing in the Internet, however, raises many issues:

   - How do routers determine the QoS capability of each outgoing link
     and reserve link resources? Note that some of these links may be
     virtual (e.g., over ATM networks) and others may be broadcast
     multi-access links.

   - What is the granularity of the routing decision (i.e.,
     destination-based, source- and destination-based, or flow-based)?

   - What routing metrics are used, and how are QoS-accommodating
     paths computed for unicast flows?

   - How are QoS-accommodating paths computed for multicast flows with
     different reservation styles and receiver heterogeneity?

   - What are the performance objectives while computing QoS-based
     paths?

   - What are the administrative control issues?
   - What factors affect the routing overheads? and

   - How is scalability achieved?

   Some of these issues are discussed briefly next. Interdomain
   routing is discussed in Section 5.

3.4 QoS Determination and Resource Reservation
    ------------------------------------------

   To determine whether the QoS requirements of a flow can be
   accommodated on a link, a router must be able to determine the QoS
   available on the link. It is still an open issue as to how QoS
   availability is determined for broadcast multiple-access links
   (e.g., Ethernet). A related problem is the reservation of resources
   over such links. Solutions to these problems are just emerging
   [GPSS98].

   Similar problems arise when a router is connected to a large non-
   broadcast multiple-access network, such as ATM. In this case, if
   the destination of a flow is outside the ATM network, the router
   may have multiple egress choices. Furthermore, the QoS availability
   on the ATM paths to each egress point may be different. The issues
   then are:

   o  how does a router determine all the egress choices across the
      ATM network?

   o  how does it determine what QoS is available over the path to
      each egress point? and

   o  what QoS value does the router advertise for the ATM link?

   Typically, IP routing over ATM (e.g., NHRP) allows the selection of
   a single egress point in the ATM network, and the procedure does
   not incorporate any knowledge of the QoS required over the path. An
   approach like I-PNNI [IPNNI] would be helpful here, although it
   introduces some complexity.

   An additional problem with resource reservation is how to determine
   what resources have already been allocated to a multicast flow. The
   availability of this information during path computation improves
   the chances of finding a path to add a new receiver to a multicast
   flow.
   QOSPF [ZSSC97] handles this problem by letting routers broadcast
   reserved resource information to other routers in their area.
   Alternate path routing [ZES97] deals with this issue by using probe
   messages to find a path with sufficient resources. The Path QoS
   Computation (PQC) method, proposed in [GOA97], propagates bandwidth
   allocation information in RSVP PATH messages. A router receiving
   the PATH message gets an indication of the resource allocation only
   on those links in the path from the source to itself. Allocation
   for the same flow on other remote branches of the multicast tree is
   not available. Thus, the PQC method may not be sufficient to find
   feasible QoS-accommodating paths to all receivers.

3.5 Granularity of Routing Decision
    -------------------------------

   Routing in the Internet is currently based only on the destination
   address of a packet. Many multicast routing protocols require
   routing based on the source AND destination of a packet. The
   Integrated Services architecture and RSVP allow QoS determination
   for an individual flow between a source and a destination. This set
   of routing granularities presents a problem for QoS routing
   solutions.

   If routing based only on the destination address is considered,
   then an intermediate router will route all flows between different
   sources and a given destination along the same path. This is
   acceptable if the path has adequate capacity, but a problem arises
   if there are multiple flows to a destination that together exceed
   the capacity of the link.

   One version of QOSPF [ZSSC97] determines QoS routes based on source
   and destination address. This implies that all traffic between a
   given source and destination, regardless of the flow, will travel
   down the same route. Again, the route must have capacity for all
   the QoS traffic for the source/destination pair.
   The amount of routing state also increases, since the routing
   tables must include source/destination pairs instead of just the
   destination.

   The best granularity is found when routing is based on individual
   flows, but this incurs a tremendous cost in terms of routing state.
   Each QoS flow can be routed separately between any source and
   destination. PQC [GOA97] and alternate path routing [ZES97] are
   examples of solutions which operate at the flow level.

   Both source/destination-based and flow-based routing may be
   susceptible to packet looping under hop-by-hop forwarding. Suppose
   a node along a flow-based or source/destination-based path loses
   the state information for the flow. Also suppose that the flow-
   based route is different from the regular destination-based route.
   The potential then exists for a routing loop to form when the node
   forwards a packet belonging to the flow, using its destination-
   based routing table, to a node that occurs earlier on the flow-
   based path. This is because the latter node may use its flow-based
   routing table to forward the packet again to the former, and this
   can go on indefinitely.

3.6 Metrics and Path Computation
    ----------------------------

3.6.1 Metric Selection and Representation

   There are some considerations in defining suitable link and node
   metrics [WC96]. First, the metrics must represent the basic network
   properties of interest. Such metrics include residual bandwidth,
   delay and jitter. Since the flow QoS requirements have to be mapped
   onto path metrics, the metrics define the types of QoS guarantees
   the network can support. Conversely, QoS-based routing cannot
   support QoS requirements that cannot be meaningfully mapped onto a
   reasonable combination of path metrics. Second, path computation
   based on a metric or a combination of metrics must not be so
   complex as to render it impractical.
   In this regard, it is worthwhile to note that path computation
   based on certain combinations of metrics (e.g., delay and jitter)
   is theoretically hard. Thus, the allowable combinations of metrics
   must be determined while taking into account the complexity of
   computing paths based on these metrics and the QoS needs of flows.
   A common strategy to allow flexible combinations of metrics while
   at the same time reducing the path computation complexity is to
   utilize "sequential filtering". Under this approach, a combination
   of metrics is ordered in some fashion, reflecting the importance of
   the different metrics (e.g., cost followed by delay, etc.). Paths
   based on the primary metric are computed first (using a simple
   algorithm, e.g., shortest path), and a subset of them is eliminated
   based on the secondary metric, and so forth, until a single path is
   found. This is an approximation technique, and it trades off global
   optimality for path computation simplicity. (The filtering
   technique may be simpler, depending on the set of metrics used. For
   example, with bandwidth and cost as metrics, it is possible to
   first eliminate the set of links that do not have the requested
   bandwidth and then compute the least-cost path using the remaining
   links.)

   Once suitable link and node metrics are defined, a uniform
   representation of them is required across independent domains -
   employing possibly different routing schemes - in order to derive
   path metrics consistently (path metrics are obtained by the
   composition of link and node metrics). Encodings of the maximum,
   minimum, range, and granularity of the metrics are needed. Also,
   the definitions of comparison and accumulation operators are
   required. In addition, suitable triggers must be defined for
   distinguishing a significant change from a minor one. The former
   will cause a routing update to be generated.
   The stability of the QoS routes would depend on the ability to
   control the generation of updates. With interdomain routing, it is
   essential to obtain a fairly stable view of the interconnection
   among the ASs.

3.6.2 Metric Hierarchy

   A hierarchy can be defined among the various classes of service,
   based on the degree to which traffic from one class can potentially
   degrade the service of traffic from lower classes that traverse the
   same link. In this hierarchy, guaranteed constant bit rate traffic
   is at the top and "best-effort" datagram traffic is at the bottom.
   Classes higher in the hierarchy can impact the service of classes
   at lower levels, but the converse is not true. For example, a
   datagram flow cannot affect a real-time service. Thus, in the worst
   case, it may be necessary to distribute and update different
   metrics for each type of service. But several advantages result
   from identifying a single default metric. For example, one could
   derive a single metric combining the availability of datagram and
   real-time service over a common substrate.

3.6.3 Datagram Flows

   A delay-sensitive metric is probably the most obvious type of
   metric suitable for datagram flows. However, it requires careful
   analysis to avoid instabilities and to reduce storage and bandwidth
   requirements. For example, a recursive filtering technique based on
   a simple and efficient weighted averaging algorithm [NC94] could be
   used. This filter is used to stabilize the metric. While it is
   adequate for smoothing most loading patterns, it will not
   distinguish between patterns consisting of regular bursts of
   traffic and random loading. Among other stabilizing tools is a
   minimum time between updates, which can help filter out high-
   frequency oscillations.
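   The weighted-averaging filter and the minimum time between updates
   mentioned above can be sketched as follows; the gain, trigger
   threshold and interval values are illustrative assumptions:

```python
class SmoothedDelayMetric:
    """Recursive weighted averaging (an exponentially weighted moving
    average) of measured delays, with two stabilizing tools: a trigger
    that separates significant changes from minor ones, and a minimum
    time between advertised updates."""

    def __init__(self, weight=0.2, min_interval=30.0, trigger=0.25):
        self.weight = weight              # gain applied to new samples
        self.min_interval = min_interval  # seconds between updates
        self.trigger = trigger            # fractional change counted
        self.smoothed = None              #   as "significant"
        self.advertised = None
        self.last_update = None

    def sample(self, delay, now):
        """Fold a measured delay into the smoothed metric; return the
        newly advertised value if an update should be generated."""
        if self.smoothed is None:
            self.smoothed = delay
        else:
            self.smoothed += self.weight * (delay - self.smoothed)
        significant = (self.advertised is None or
                       abs(self.smoothed - self.advertised) >
                       self.trigger * self.advertised)
        rate_ok = (self.last_update is None or
                   now - self.last_update >= self.min_interval)
        if significant and rate_ok:
            self.advertised, self.last_update = self.smoothed, now
            return self.advertised
        return None

m = SmoothedDelayMetric()
print(m.sample(10.0, now=0.0))  # 10.0: first sample is advertised
print(m.sample(30.0, now=1.0))  # None: significant, but rate-limited
```

   The minimum interval suppresses high-frequency oscillations even
   when the smoothed value has moved by more than the trigger
   threshold.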
3.6.4 Real-time Flows

For real-time quality of service, delay variation is generally more
critical than delay, as long as the delay is not too high. Clearly,
voice-based applications cannot tolerate more than a certain level of
delay. Varying delays may be expected to a greater degree in a shared
medium environment with datagrams than in a network implemented over a
switched substrate. Routing a real-time flow therefore reduces to an
exercise in allocating the required network resources while minimizing
fragmentation of bandwidth. The result is a bandwidth-limited minimum
hop path from a source to the destination. In other words, the router
performs an ordered search through paths of increasing hop count until
it finds one that meets all the bandwidth needs of the flow. To reduce
contention and the probability of false probes (due to inaccuracy in
route tables), the router could select a path randomly from a "window"
of paths that meet the needs of the flow and satisfy one of three
additional criteria: best-fit, first-fit or worst-fit. Note that there
is a similarity between the allocation of bandwidth and the allocation
of memory in a multiprocessing system. First-fit seems to be appropriate
for a system with a high real-time flow arrival rate, and worst-fit is
ideal for real-time flows with long holding times. This rather
nonintuitive result was shown in [NC94].

3.6.5 Path Properties

Path computation by itself is merely a search technique; e.g., Shortest
Path First (SPF) is a search technique based on dynamic programming. The
usefulness of the paths computed depends to a large extent on the
metrics used in evaluating the cost of a path with respect to a flow.
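The ordered search of Section 3.6.4 is one such search technique. Below is a sketch of a hop-count-ordered search with a worst-fit selection over the window of feasible minimum-hop paths; the topology and bandwidth values are invented for illustration:

```python
# Enumerate loop-free minimum-hop paths by breadth-first search, then
# apply a worst-fit rule: among feasible paths, pick the one whose
# bottleneck link leaves the most residual bandwidth.
from collections import deque

def min_hop_paths(adj, src, dst):
    best, found = None, []
    q = deque([[src]])
    while q:
        path = q.popleft()
        if best is not None and len(path) > best:
            break                      # longer paths: stop the search
        if path[-1] == dst:
            best = len(path)
            found.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:        # keep paths loop-free
                q.append(path + [nxt])
    return found

def worst_fit(paths, bw, req):
    # keep only paths whose bottleneck link accommodates the request...
    feasible = [p for p in paths
                if min(bw[(u, v)] for u, v in zip(p, p[1:])) >= req]
    if not feasible:
        return None
    # ...and pick the one with the largest bottleneck (worst-fit)
    return max(feasible,
               key=lambda p: min(bw[(u, v)] for u, v in zip(p, p[1:])))

adj = {"S": ["A", "B"], "A": ["S", "D"], "B": ["S", "D"], "D": ["A", "B"]}
bw = {("S", "A"): 8, ("A", "D"): 6, ("S", "B"): 9, ("B", "D"): 9}
paths = min_hop_paths(adj, "S", "D")
print(worst_fit(paths, bw, req=5))  # ['S', 'B', 'D'] (bottleneck 9 vs 6)
```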
Each link considered by the path computation engine must be evaluated
against the requirements of the flow; i.e., the cost of providing the
services required by the flow must be estimated with respect to the
capabilities of the link. This requires a uniform method of combining
features such as delay, bandwidth, priority and other service features.
Furthermore, the costs must reflect the lost opportunity of using each
link after routing the flow.

3.6.6 Performance Objectives

One common objective during path computation is to improve the total
network throughput. In this regard, merely routing a flow on any path
that accommodates its QoS requirements is not a good strategy. In fact,
this corresponds to uncontrolled alternate routing [SD95] and may
adversely impact performance at higher traffic loads. It is therefore
necessary to consider the total resource allocation for a flow along a
path, in relation to available resources, to determine whether or not
the flow should be routed on the path [RSR95]. Such a mechanism is
referred to in this document as "higher level admission control". Its
goal is to ensure that the "cost" incurred by the network in routing a
flow with a given QoS is never more than the revenue gained. The routing
cost in this regard may be the lost revenue from potentially blocking
other flows that contend for the same resources. The formulation of the
higher level admission control strategy, with suitable administrative
hooks and with fairness to all flows desiring entry to the network, is
an open issue. The fairness problem arises because flows with smaller
reservations tend to be routed more successfully than flows with large
reservations, for a given engineered capacity. Guaranteeing a certain
acceptance rate for "larger" flows, without over-engineering the
network, requires a fair higher level admission control mechanism.
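One possible form of such a higher level admission control check is sketched below. The opportunity-cost model (cost growing as residual link capacity shrinks) is an illustrative assumption, not a formulation taken from [RSR95]:

```python
# A flow is routed only if the revenue it brings covers the opportunity
# cost of the bandwidth it would tie up along the chosen path.
def admit(path_links, req_bw, revenue):
    # path_links: list of (capacity, reserved) tuples along the path
    cost = 0.0
    for capacity, reserved in path_links:
        residual = capacity - reserved
        if residual < req_bw:
            return False              # no room at all on this link
        # opportunity cost grows as the link fills up: scarce residual
        # bandwidth is more likely to block future flows
        cost += req_bw / residual
    return revenue >= cost

lightly_loaded = [(100, 10), (100, 20)]
heavily_loaded = [(100, 90), (100, 95)]
print(admit(lightly_loaded, req_bw=5, revenue=1.0))  # True
print(admit(heavily_loaded, req_bw=5, revenue=1.0))  # False
```

Under this kind of rule, a path that could technically accommodate the flow is still refused when the network is nearly full, which is exactly the behavior distinguishing higher level admission control from uncontrolled alternate routing.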
The application of higher level admission control to multicast routing
is discussed later.

3.7 Administrative Control
    ----------------------

There are several administrative control issues. First, within an AS
employing state-dependent routing, administrative control of routing
behavior may be necessary. One example discussed earlier was higher
level admission control; some others are described in this section.
Second, the control of interdomain routing based on policy is an issue.
The discussion of interdomain routing is deferred to Section 5.

Two areas that need administrative control, in addition to appropriate
routing mechanisms, are the handling of flow priority with preemption,
and resource allocation for multiple service classes.

3.7.1 Flow Priorities and Preemption

If there are critical flows that must be accorded higher priority than
other types of flows, a mechanism must be implemented in the network to
recognize flow priorities. There are two aspects to prioritizing flows.
First, there must be a policy to decide how different users are allowed
to set priorities for the flows they originate. The network must be able
to verify that a given flow is allowed to claim the priority level
signaled for it. Second, the routing scheme must ensure that a path with
the requested QoS will be found for a flow with a probability that
increases with the priority of the flow. In other words, for a given
network load, a high-priority flow should be more likely to get a
certain QoS from the network than a lower-priority flow requesting the
same QoS. Routing procedures for flow prioritization can be complex.
Identification and evaluation of different procedures are areas that
require investigation.

3.7.2 Resource Control

If there are multiple service classes, it is necessary to engineer a
network to carry the forecasted traffic demands of each class.
To do this, router and link resources may be logically partitioned among
the various service classes. It is desirable to have dynamic
partitioning, whereby unused resources in various partitions are
dynamically shifted to other partitions on demand [ACFH92]. Dynamic
sharing, however, must be done in a controlled fashion in order to
prevent traffic under some service class from taking up more resources
than was engineered for it for prolonged periods of time. The design of
such a resource sharing scheme, and its incorporation into the QoS-based
routing scheme, are significant issues.

3.8 QoS-Based Routing for Multicast Flows
    -------------------------------------

QoS-based multicast routing is an important problem, especially if the
notion of higher level admission control is included. The dynamism in
the receiver set allowed by IP multicast, and receiver heterogeneity,
add to the problem. With a straightforward implementation of distributed
heuristic algorithms for multicast path computation [W88, C91], the
difficulty is essentially one of scalability. To accommodate QoS,
multicast path computation at a router must have knowledge not only of
the ids of subnets where group members are present, but also of the
identity of branches in the existing tree. In other words, routers must
keep flow-specific state information. Also, computing optimal shared
trees based on the shared reservation style [BZBH97] may require new
algorithms. Multicast routing is discussed in some detail in Section 6.

3.9 Routing Overheads
    -----------------

The overheads incurred by a routing scheme depend on the type of the
routing scheme, as well as on the implementation. There are three types
of overheads to consider: computation, storage and communication. It is
necessary to understand the implications of choosing a routing mechanism
in terms of these overheads.
For example, considering link state routing, the choice of the update
propagation mechanism is important, since network state is dynamic and
changes relatively frequently. Specifically, a flooding mechanism would
result in many unnecessary message transmissions and much unnecessary
processing. Alternative techniques, such as tree-based forwarding [R96],
have to be considered. A related issue is the quantization of state
information to prevent frequent updating of dynamic state. While coarse
quantization reduces updating overheads, it may affect the performance
of the routing scheme. The tradeoff has to be carefully evaluated.
QoS-based routing incurs certain overheads during flow establishment,
for example, computing a source route. Whether this overhead is
disproportionate compared to the length of the sessions is an issue. In
general, techniques for minimizing routing-related overheads during flow
establishment must be investigated. Useful approaches include
pre-computation of routes, caching recently used routes, and TOS routing
based on hints in packets (e.g., the TOS field).

3.10 Scaling by Hierarchical Aggregation
     -----------------------------------

QoS-based routing should be scalable, and hierarchical aggregation is a
common technique for scaling (e.g., [PNNI96]). But this introduces
problems with regard to the accuracy of the aggregated state information
[L95]. Also, the aggregation of paths under multiple constraints is
difficult. One of the difficulties is the risk of accepting a flow based
on inaccurate information, but not being able to support the QoS
requirements of the flow, because the capabilities of the actual paths
that are aggregated are not known during route computation. The
performance impacts of aggregating path metric information must
therefore be understood.
A way to compensate for inaccuracies is to use crankback, i.e., a
dynamic search for alternate paths as a flow is being routed. But
crankback increases the time to set up a flow, and may adversely affect
the performance of the routing scheme under some circumstances. Thus,
crankback must be used judiciously, if at all, along with a higher level
admission control mechanism.

4. INTRADOMAIN ROUTING REQUIREMENTS

At the intradomain level, the objective is to allow as much latitude as
possible in addressing the QoS-based routing issues. Indeed, there are
many ideas about how QoS-based routing services can be provisioned
within ASs. These range from on-demand path computation based on current
state information, to statically provisioned paths supporting a few
service classes.

Another aspect that might invite differing solutions is performance
optimization. Depending on the technique used for this, intradomain
routing could be very sophisticated or rather simple. Finally, the
service classes supported, as well as the specific QoS engineered for a
service class, could differ from AS to AS. For instance, some ASs may
not support guaranteed service, while others may. Also, some ASs
supporting the service may be engineered for a better delay bound than
others. Thus, it requires considerable thought to determine high-level
requirements for intradomain routing that both support the overall view
of QoS-based routing in the Internet and allow maximum autonomy in
developing solutions.

Our view is that certain minimum requirements must be satisfied by
intradomain routing for it to qualify as "QoS-based" routing. These are:

- The routing scheme must route a flow along a path that can accommodate
  its QoS requirements, or indicate that the flow cannot be admitted
  with the QoS currently being requested.
- The routing scheme must indicate disruptions to the current route of a
  flow due to topological changes.

- The routing scheme must accommodate best-effort flows without any
  resource reservation requirements. That is, present best-effort
  applications and protocol stacks should not have to change to run in a
  domain employing QoS-based routing.

- The routing scheme may optionally support QoS-based multicasting with
  receiver heterogeneity and shared reservation styles.

In addition, the following capabilities are recommended:

- Capabilities to optimize resource usage.

- Implementation of higher level admission control procedures to limit
  the overall resource utilization by individual flows.

Further requirements along these lines may be specified. The
requirements should capture the consensus view of QoS-based routing, but
should not preclude particular approaches (e.g., TOS-based routing) from
being implemented. Thus, the intradomain requirements are expected to be
rather broad.

5. INTERDOMAIN ROUTING

The fundamental requirement on interdomain QoS-based routing is
scalability. This implies that interdomain routing cannot be based on
highly dynamic network state information. Rather, such routing must be
aided by sound network engineering and relatively sparse information
exchange between independent routing domains. This approach has the
advantage that it can be realized by straightforward extensions of the
present Internet interdomain routing model. A number of issues, however,
need to be addressed to achieve this, as discussed below.
5.1 Interdomain QoS-Based Routing Model
    -----------------------------------

The interdomain QoS-based routing model is depicted below:

       AS1              AS2              AS3
   ___________    _____________    ____________
  |           |  |             |  |            |
  |           B--B             B--B            |
  |           |  |             |  |            |
   -----B-----    B------------    --B--------
         \       /                   /
          \     /                   /
       ____B___B____       ________B_______
      |             |     |               |
      |             B-----B               |
      |             |     |               |
      |             B-----B               |
       -------------       ---------------
            AS4                  AS5

Here, ASs exchange standardized routing information via border nodes B.
Under this model, each AS can itself consist of a set of interconnected
ASs, with standardized routing interaction. Thus, the interdomain
routing model is hierarchical. Also, each lowest-level AS employs an
intradomain QoS-based routing scheme (proprietary, or standardized by
intradomain routing efforts such as QOSPF). Given this structure, some
questions that arise are:

- What information is exchanged between ASs?

- What routing capabilities does the information exchange lead to (e.g.,
  source routing, on-demand path computation, etc.)?

- How is the external routing information represented within an AS?

- How are interdomain paths computed?

- What sort of policy controls may be exerted on interdomain path
  computation and flow routing?

- How is interdomain QoS-based multicast routing accomplished?

At a high level, the answers to these questions depend on the routing
paradigm. Specifically, considering link state routing, the information
exchanged between domains would consist of an abstract representation of
the domains in the form of logical nodes and links, along with metrics
that quantify their properties and resource availability. The
hierarchical structure of the ASs may be handled by a hierarchical link
state representation, with appropriate metric aggregation.
Link state routing, however, may not be advantageous for interdomain
routing, for the following reasons:

- One advantage of intradomain link state routing is that it allows
  fairly detailed link state information to be used to compute paths on
  demand for flows requiring QoS. The state and metric aggregation used
  in interdomain routing, on the other hand, erodes this property to a
  great degree.

- The usefulness of keeping track of the abstract topology and metrics
  of a remote domain, or of the interconnection between remote domains,
  is not obvious. This is especially the case when the remote topology
  and metric encodings are lossy.

- ASs may not want to advertise any details of their internal topology
  or resource availability.

- Scalability in interdomain routing can be achieved only if information
  exchange between domains is relatively infrequent. Thus, it seems
  practical to limit information flow between domains as much as
  possible.

Compact information flow allows the implementation of QoS-enhanced
versions of existing interdomain protocols such as BGP-4. We look at the
interdomain routing issues in this context.

5.2 Interdomain Information Flow
    ----------------------------

The information flow between routing domains must enable certain basic
functions:

1. Determination of reachability to various destinations

2. Loop-free flow routes

3. Address aggregation whenever possible

4. Determination of the QoS that will be supported on the path to a
   destination. The QoS information should be relatively static,
   determined from the engineered topology and capacity of an AS rather
   than from ephemeral fluctuations in traffic load through the AS.
   Ideally, the QoS supported in a transit AS should be allowed to vary
   significantly only under exceptional circumstances, such as failures
   or focused overload.

5.
   Determination, optionally, of multiple paths for a given destination,
   based on service classes.

6. Expression of routing policies, including monetary cost, as a
   function of flow parameters, usage and administrative factors.

Items 1-3 are already part of existing interdomain routing. Item 5 is
also a straightforward extension of the current model. The main problem
areas are therefore items 4 and 6.

The QoS of an end-to-end path is obtained by composing the QoS available
in each transit AS. Thus, border routers must first determine the
locally available QoS in order to advertise routes to both internal and
external destinations. The determination of local "AS metrics"
(corresponding to link metrics in the intradomain case) should not be
subject to too much dynamism. Thus, the issue is how to define such
metrics, and what triggers the occasional change that results in
re-advertisement of routes.

The approach suggested in this document is not to compute paths based on
residual or instantaneous values of AS metrics (which can be dynamic),
but to utilize only the QoS capabilities engineered for aggregate
transit flows. Such engineering may be based on knowledge of the traffic
to be expected from each neighboring AS and the corresponding QoS needs.
This information may be obtained from contracts agreed upon prior to the
provisioning of services. The AS metric then corresponds to the QoS
capabilities of the "virtual path" engineered through the AS (for
transit traffic), and a different metric may be used for different
neighbors. This is illustrated in the following figure.
       AS1              AS2              AS3
   ___________    _____________    ____________
  |           |  |             |  |            |
  |           B--B1          B2--B             |
  |           |  |             |  |            |
   -----B-----    B3-----------    --B--------
         \       /
          \     /
       ____B___B____
      |             |
      |             |
      |             |
      |             |
       -------------
            AS4

Here, B1 may utilize an AS metric specific to AS1 when computing path
metrics to be advertised to AS1. This metric is based on the resources
engineered in AS2 for transit traffic from AS1. Similarly, B3 may
utilize a different metric when computing path metrics to be advertised
to AS4. It is assumed that, as long as the traffic flow into AS2 from
AS1 or AS4 does not exceed the engineered values, these path metrics
hold. Excess traffic due to transient fluctuations, however, may be
handled as best effort or marked with a discard bit.

Thus, this model is different from the intradomain model, where end
nodes pick a path dynamically based on the QoS needs of the flow to be
routed. Here, paths within ASs are engineered based on presumed,
measured or declared traffic and QoS requirements. Under this model, an
AS can contract for routes via multiple transit ASs with different QoS
requirements. For instance, AS4 above can use both AS1 and AS2 as
transits for the same or different destinations. Also, a QoS contract
between one AS and another may generate another contract between the
second AS and a third, and so forth.

An issue is what triggers the recomputation of path metrics within an
AS. Failures or other events that prevent engineered resource allocation
should certainly trigger recomputation. Recomputation should not be
triggered in response to the arrival of flows within the engineered
limit.

5.3 Path Computation
    ----------------

Path computation for an external destination at a border node is based
on reachability, path metrics and local policies of selection.
If there are multiple selection criteria (e.g., delay, bandwidth, cost,
etc.), multiple alternatives may have to be maintained, as well as
propagated by border nodes. Selection of a path from among many
alternatives would depend on the QoS requests of flows, as well as on
policies. Path computation may also utilize heuristics for optimizing
resource usage.

5.4 Flow Aggregation
    ----------------

An important issue in interdomain routing is the amount of flow state to
be processed by transit ASs. Reducing the flow state by aggregation
techniques must therefore be seriously considered. Flow aggregation
means that transit traffic through an AS is classified into a few
aggregated streams, rather than being routed at the individual flow
level. For example, an entry border router may classify the various
transit flows entering an AS into a few coarse categories, based on the
egress node and the QoS requirements of the flows. Then, the aggregated
stream for a given traffic class may be routed as a single flow inside
the AS to the exit border router. This router may then present
individual flows to different neighboring ASs, and the process repeats
at each entry border router. Under this scenario, it is essential that
entry border routers keep track of the resource requirements of each
transit flow and apply admission control to determine whether the
aggregate requirement from any neighbor exceeds the engineered limit. If
so, some policy must be invoked to deal with the excess traffic.
Otherwise, it may be assumed that aggregated flows are routed over paths
that have adequate resources to guarantee QoS for the member flows.
Finally, entry border routers at a transit AS may prefer not to
aggregate flows if finer-grain routing within the AS is more efficient
(e.g., to aid load balancing within the AS).
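The aggregation and admission steps described above might be sketched as follows; the engineered limits and flow parameters are invented for illustration:

```python
# Sketch of an entry border router that maps each transit flow to a
# coarse (egress, service class) aggregate and admits it only while the
# aggregate stays within the engineered limit.
class EntryBorderRouter:
    def __init__(self, engineered_limits):
        self.limits = engineered_limits          # (egress, class) -> bw
        self.aggregates = {k: 0 for k in engineered_limits}

    def admit_transit_flow(self, egress, svc_class, bw):
        key = (egress, svc_class)
        if key not in self.limits:
            return False                 # no engineered aggregate exists
        if self.aggregates[key] + bw > self.limits[key]:
            return False                 # excess-traffic policy applies
        self.aggregates[key] += bw       # flow joins the aggregated stream
        return True

ebr = EntryBorderRouter({("exit-B", "guaranteed"): 50})
print(ebr.admit_transit_flow("exit-B", "guaranteed", 30))  # True
print(ebr.admit_transit_flow("exit-B", "guaranteed", 30))  # False - over limit
```

Note that the router keeps only one counter per aggregate, not per-flow path state, which is the state reduction that motivates aggregation in the first place.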
5.5 Path Cost Determination
    -----------------------

It is hoped that the integrated services Internet architecture will
allow providers to charge for IP flows based on their QoS requirements.
A QoS-based routing architecture can aid in distributing information on
the expected costs of routing flows to various destinations via
different domains. Clearly, from a provider's point of view, there is a
cost incurred in guaranteeing QoS to flows. This cost could be a
function of several parameters, some related to flow parameters, others
based on policy. From a user's point of view, the consequence of
requesting a particular QoS for a flow is the cost incurred, and hence
the selection of providers may be based on cost. A routing scheme can
aid a provider in distributing the costs of routing to various
destinations, as a function of several parameters, to other providers or
to end users. In the interdomain routing model described earlier, the
costs to a destination change as routing updates are passed through a
transit domain. One of the goals of the routing scheme should be to
maintain uniform semantics for cost values (or functions) as they are
handled by intermediate domains. As an example, consider the cost
function generated by border node B1 in domain A and passed to node B2
in domain B below. The routing update may be injected into domain B by
B2 and finally passed to B4 in domain C by router B3. Domain B may
interpret the cost value received from domain A in any way it wants, for
instance, adding a locally significant component to it. But when this
cost value is passed to domain C, its meaning must be what domain A
intended, plus the incremental cost of transiting domain B, but not what
domain B uses internally.
    Domain A        Domain B        Domain C
   ____________    ___________    ____________
  |            |  |           |  |            |
  |           B1--B2         B3--B4           |
  |            |  |           |  |            |
   ------------    -----------    ------------

A problem with charging for a flow is determining the cost when the QoS
promised for the flow was not actually delivered. Clearly, when a flow
is routed via multiple domains, it must be determined whether each
domain delivers the QoS it declares possible for traffic through it.

6. QOS-BASED MULTICAST ROUTING

The goals of QoS-based multicast routing are as follows:

- Scalability to large groups with dynamic membership

- Robustness in the presence of topological changes

- Support for receiver-initiated, heterogeneous reservations

- Support for shared reservation styles, and

- Support for "global" admission control, i.e., administrative control
  of resource consumption by the multicast flow.

The RSVP multicast flow model is as follows. The sender of a multicast
flow advertises the traffic characteristics periodically to the
receivers. On receipt of an advertisement, a receiver may generate a
message to reserve resources along the flow path from the sender.
Receiver reservations may be heterogeneous. Other multicast models may
also be considered.

The multicast routing scheme attempts to determine a path from the
sender to each receiver that can accommodate the requested reservation.
The routing scheme may attempt to maximize network resource utilization
by minimizing the total bandwidth allocated to the multicast flow, or by
optimizing some other measure.

6.1 Scalability, Robustness and Heterogeneity
    -----------------------------------------

When addressing scalability, two aspects must be considered:

1. The overheads associated with receiver discovery.
   This overhead is incurred when determining the multicast tree for
   forwarding the best-effort sender traffic characterization to
   receivers.

2. The overheads associated with QoS-based multicast path computation.
   This overhead is incurred when flow-specific state information has to
   be collected by a router to determine QoS-accommodating paths to a
   receiver.

Depending on the multicast routing scheme, one or both of these aspects
become important. For instance, under the present RSVP model,
reservations are established on the same path over which sender traffic
characterizations are sent, and hence there is no path computation
overhead. On the other hand, under the proposed QOSPF model [ZSSC97] of
multicast source routing, receiver discovery overheads are incurred by
MOSPF [M94] receiver location broadcasts, and additional path
computation overheads are incurred due to the need to keep track of
existing flow paths. The scaling of QoS-based multicast depends on both
of these issues. However, scalable best-effort multicasting is really
not in the domain of QoS-based routing work (solutions for this are
being devised by the IDMR WG [BCF94, DEFV94]). QoS-based multicast
routing may build on these solutions to achieve overall scalability.

There are several options for QoS-based multicast routing. Multicast
source routing is one, under which multicast trees are computed by the
first-hop router from the source, based on sender traffic
advertisements. The advantage of this is that it blends nicely with the
present RSVP signaling model. Also, this scheme works well when receiver
reservations are homogeneous and the same as the maximum reservation
derived from the sender advertisement. The disadvantages of this scheme
are the extra effort needed to accommodate heterogeneous reservations
and the difficulties in optimizing resource allocation based on shared
reservations.
In these regards, a receiver-oriented multicast routing model seems to
have some advantages over multicast source routing. Under this model:

1. Sender traffic advertisements are multicast over a best-effort tree,
   which can be different from the QoS-accommodating tree for sender
   data.

2. Receiver discovery overheads are minimized by utilizing a scalable
   scheme (e.g., PIM, CBT) to multicast the sender traffic
   characterization.

3. Each receiver-side router independently computes a QoS-accommodating
   path from the source, based on the receiver reservation. This path
   can be computed based on unicast routing information only, or with
   additional multicast flow-specific state information. In any case,
   multicast path computation is broken up into multiple, concurrent
   unicast path computations.

4. Routers processing unicast reserve messages from receivers aggregate
   resource reservations from multiple receivers.

Flow-specific state information may be limited in Step 3 to achieve
scalability. In general, limiting flow-specific information in making
multicast routing decisions is important in any routing model. The
advantages of this model are the ease with which heterogeneous
reservations can be accommodated, and the ability to handle shared
reservations. The disadvantages are the incompatibility with the present
RSVP signaling model, and the need to rely on reverse paths when link
state routing is not used. Both multicast source routing and the
receiver-oriented routing model described above utilize per-source trees
to route multicast flows. Another possibility is the utilization of
shared, per-group trees for routing flows. The computation and usage of
such trees require further work.

Finally, scalability at the interdomain level may be achieved if
QoS-based multicast paths are computed independently in each domain.
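Step 3 of the receiver-oriented model decomposes multicast path computation into concurrent unicast computations; the resulting per-flow tree is then simply the union of the per-receiver paths. A minimal sketch with invented router names and paths:

```python
# Merge independently computed per-receiver unicast paths into a
# per-source multicast tree (a set of directed edges).
def merge_paths_into_tree(paths):
    # each path is a list of routers from the source to one receiver
    tree = set()
    for path in paths:
        tree.update(zip(path, path[1:]))   # edges used by this receiver
    return tree

receiver_paths = [["S", "R1", "R2", "rcvA"],
                  ["S", "R1", "R3", "rcvB"]]
tree = merge_paths_into_tree(receiver_paths)
print(sorted(tree))
# the shared edge S->R1 appears once, so reservations on it can be
# aggregated across the two receivers (Step 4 above)
```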
This principle is illustrated by the QOSPF multicast source routing
scheme, which allows independent path computation in different OSPF
areas. It is easy to incorporate this idea in the receiver-oriented
model as well. An evaluation of multicast routing strategies must take
into account the relative advantages and disadvantages of the various
approaches, in terms of scalability features and functionality
supported.

6.2 Multicast Admission Control
    ---------------------------

Higher level admission control, as defined for unicast, prevents
excessive resource consumption by flows when the traffic load is high.
Such an admission control strategy must be applied to multicast flows
whether the flow path computation is receiver-oriented or
sender-oriented. In essence, a router computing a path for a receiver
must determine whether the incremental resource allocation for the
receiver is excessive under some administratively determined admission
control policy. Other admission control criteria, based on the total
resource consumption of a tree, may be defined.

7. QOS-BASED ROUTING AND RESOURCE RESERVATION PROTOCOLS

There must clearly be a well-defined interface between routing and
resource reservation protocols. The nature of this interface, and the
interaction between routing and resource reservation, have to be
determined carefully to avoid incompatibilities. The importance of this
can be readily illustrated in the case of RSVP.

RSVP has been designed to operate independently of the underlying
routing scheme. Under this model, RSVP PATH messages establish the
reverse path for RESV messages. In essence, this model is not compatible
with QoS-based routing schemes that compute paths after receiver
reservations are received.
While this incompatibility can be resolved in a simple manner for unicast flows, multicast with heterogeneous receiver requirements is a more difficult case. For this, reconciliation between the RSVP and QoS-based routing models is necessary. Such a reconciliation, however, may require some changes to the RSVP model, depending on the QoS-based routing model [ZES97, ZSSC97, GOA97]. On the other hand, QoS-based routing schemes may be designed with RSVP compatibility as a necessary goal. How this affects scalability and other performance measures must be considered.

8. RELATED WORK

"Adaptive" routing, based on network state, has a long history, especially in circuit-switched networks. Such routing has also been implemented in early datagram and virtual circuit packet networks. More recently, this type of routing has been the subject of study in the context of ATM networks, where the traffic characteristics and topology are substantially different from those of circuit-switched networks [MMR96]. It is instructive to review the adaptive routing methodologies, both to understand the problems encountered and the possible solutions.

Fundamentally, there are two aspects to adaptive, network state-dependent routing:

1. Measuring and gathering network state information, and
2. Computing routes based on the available information.

Depending on how these two steps are implemented, a variety of routing techniques are possible.
These differ in the following respects:

- what state information is used
- whether local or global state is used
- what triggers the propagation of state information
- whether routes are computed in a distributed or centralized manner
- whether routes are computed on-demand, pre-computed, or in a hybrid manner
- what optimization criteria, if any, are used in computing routes
- whether source routing or hop-by-hop routing is used, and
- how alternate route choices are explored

It should be noted that most of the adaptive routing work has focused on unicast routing. Multicast routing is one of the areas that would become prominent with Internet QoS-based routing. We treat this separately, and the following review considers only unicast routing. This review is not exhaustive, but gives a brief overview of some of the approaches.

8.1 Optimization Criteria
    ---------------------

The most common optimization criteria used in adaptive routing are throughput maximization and delay minimization. A general formulation of the optimization problem is one in which the network revenue is maximized, given that there is a cost associated with routing a flow over a given path [MMR96, K88]. In general, global optimization solutions are difficult to implement, and they rely on a number of assumptions about the characteristics of the traffic being routed [MMR96]. Thus, the practical approach has been to treat the routing of each flow (VC, circuit, or packet stream to a given destination) independently of the routing of other flows. Many such routing schemes have been implemented.

8.2 Circuit-Switched Networks
    -------------------------

Many adaptive routing concepts have been proposed for circuit-switched networks. An example of a simple adaptive routing scheme is sequential alternate routing [T88].
This is a hop-by-hop, destination-based routing scheme in which only local state information is utilized. Under this scheme, a routing table is computed for each node, listing multiple output link choices for each destination. When a call set-up request is received by a node, it tries each output link choice in sequence until it finds one that can accommodate the call. Resources are reserved on this link, and the call set-up is forwarded to the next node. The set-up either reaches the destination or is blocked at some node. In the latter case, the set-up can be cranked back to the previous node, or a failure can be declared. Crankback allows the previous node to try an alternate path. The routing table under this scheme can be computed in a centralized or distributed manner, based only on the topology of the network. For instance, a k-shortest-path algorithm can be used to determine k alternate paths from a node with distinct initial links [T88]. Some mechanism must be implemented during path computation or call set-up to prevent looping.

Performance studies of this scheme illustrate some of the pitfalls of alternate routing in general, and crankback in particular [A84, M86, YS87]. Specifically, alternate routing improves throughput when the traffic load is relatively light, but adversely affects performance when the traffic load is heavy. Crankback can further degrade performance under these conditions. In general, uncontrolled alternate routing (with or without crankback) can be harmful in a heavily utilized network, since circuits tend to be routed along longer paths, thereby consuming more capacity. This is an obvious but important result that applies to QoS-based Internet routing as well.
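The sequential alternate routing procedure above can be sketched as follows. This is a minimal illustration, not the scheme of [T88] itself: the graph representation, link capacities, and function names are assumptions made for the example, and loop prevention is done naively by excluding nodes already on the path.

```python
# Illustrative sketch of sequential alternate routing with crankback.
# Each node holds an ordered list of output-link choices per destination
# (e.g., produced by a k-shortest-path computation); a call set-up reserves
# bandwidth hop by hop and cranks back to the previous node when blocked.

def try_setup(node, dest, bw, tables, capacity, path):
    """Recursively forward a call set-up; returns the reserved path or None."""
    if node == dest:
        return path
    for next_hop in tables[node][dest]:          # choices tried in sequence
        if next_hop in path:                     # simple loop prevention
            continue
        link = (node, next_hop)
        if capacity[link] >= bw:                 # can this link carry the call?
            capacity[link] -= bw                 # reserve resources on the link
            result = try_setup(next_hop, dest, bw, tables, capacity,
                               path + [next_hop])
            if result is not None:
                return result
            capacity[link] += bw                 # crankback: undo, try next choice
    return None                                  # blocked at this node

# Tiny example: A reaches C directly or via B.
tables = {"A": {"C": ["C", "B"]}, "B": {"C": ["C"]}, "C": {}}
capacity = {("A", "C"): 0, ("A", "B"): 5, ("B", "C"): 5}
print(try_setup("A", "C", 3, tables, capacity, ["A"]))
# the direct link A-C has no free capacity, so the call is alternate
# routed via B: ['A', 'B', 'C']
```

Note how, at high load, exactly this behavior consumes extra capacity: the blocked direct call occupies two links instead of one.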
The problem with alternate routing is that both direct-routed (i.e., over shortest paths) and alternate-routed calls compete for the same resources. At higher loads, allocating these resources to alternate-routed calls results in the displacement of direct-routed calls, and hence the alternate routing of those calls in turn. Therefore, many approaches have been proposed to limit the flow of alternate-routed calls under high traffic loads. These schemes are designed for the fully-connected logical topology of long distance telephone networks (i.e., there is a logical link between every pair of nodes). In this topology, direct-routed calls always traverse a 1-hop path to the destination and alternate-routed calls traverse at most a 2-hop path.

"Trunk reservation" is a scheme whereby a certain amount of bandwidth on each link is reserved for direct-routed calls [MS91]. Alternate-routed calls are allowed on a trunk only as long as the remaining trunk bandwidth is greater than the reserved capacity. Thus, alternate-routed calls cannot totally displace direct-routed calls on a trunk. This strategy has been shown to be very effective in preventing the adverse effects of alternate routing.

"Dynamic alternate routing" (DAR) is a strategy whereby alternate routing is controlled by limiting the number of choices, in addition to trunk reservation [MS91]. Under DAR, the source first attempts to use the direct link to the destination. When blocked, the source attempts to alternate route the call via a pre-selected neighbor. If the call is still blocked, a different neighbor is selected for alternate routing to this destination in the future, and the present call is dropped. DAR thus requires only local state information. Also, it "learns" of good alternate paths by random sampling and sticks to them as long as possible.
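The two controls can be sketched together. This is a simplified illustration under stated assumptions: [MS91] defines the schemes analytically, while here the reservation level, the bookkeeping dictionaries, and the class and function names are all invented for the example.

```python
import random

# Illustrative sketch of trunk reservation plus dynamic alternate routing
# (DAR) on a fully-connected topology.

RESERVE = 2   # bandwidth on each trunk held back for direct-routed calls

def admit(free_bw, bw, alternate):
    """Trunk reservation: alternate-routed calls may not dip into the reserve."""
    needed = bw + (RESERVE if alternate else 0)
    return free_bw >= needed

class DarNode:
    def __init__(self, name, neighbors, free_bw):
        self.name = name
        self.neighbors = neighbors
        self.free = free_bw        # free bandwidth per outgoing trunk
        self.tandem = {}           # current "sticky" tandem per destination

    def route_call(self, dest, bw, tandem_free):
        # 1. Try the direct trunk first.
        if admit(self.free[dest], bw, alternate=False):
            self.free[dest] -= bw
            return [self.name, dest]
        # 2. Try the current tandem node, subject to trunk reservation on
        #    both legs of the 2-hop alternate path.
        t = self.tandem.setdefault(dest, random.choice(
            [n for n in self.neighbors if n != dest]))
        if admit(self.free[t], bw, alternate=True) and \
           admit(tandem_free[(t, dest)], bw, alternate=True):
            self.free[t] -= bw
            tandem_free[(t, dest)] -= bw
            return [self.name, t, dest]
        # 3. Blocked: drop the call and resample a tandem for future calls.
        self.tandem[dest] = random.choice(
            [n for n in self.neighbors if n != dest])
        return None
```

A call blocked on both the direct trunk and the current tandem is simply dropped; the node "learns" only by picking a new tandem for subsequent calls, which is what keeps the required state purely local.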
More recent circuit-switched routing schemes utilize global state to select routes for calls. An example is AT&T's Real-Time Network Routing (RTNR) scheme [ACFH92]. Unlike schemes such as DAR, RTNR handles multiple classes of service, including voice and data at fixed rates. RTNR utilizes a sophisticated per-class trunk reservation mechanism with dynamic bandwidth sharing between classes. Also, when alternate routing a call, RTNR utilizes the loading on all trunks in the network to select a path. Because of the fully-connected topology, disseminating status information is simple under RTNR; each node simply exchanges status information directly with all others.

From the point of view of designing QoS-based Internet routing schemes, there is much to be learned from circuit-switched routing: for example, alternate routing and its control, and dynamic resource sharing among different classes of traffic. It is, however, not simple to apply some of the results to a general topology network with heterogeneous multirate traffic. Work in the area of ATM network routing, described next, illustrates this.

8.3 ATM Networks
    ------------

The VC routing problem in ATM networks presents issues similar to those encountered in circuit-switched networks. Not surprisingly, some extensions of circuit-switched routing have been proposed. The goal of these routing schemes is to achieve higher throughput as compared to traditional shortest-path routing. The flows considered usually have a single QoS requirement, i.e., bandwidth.

The first idea is to extend alternate routing with trunk reservation to general topologies [SD95]. Under this scheme, a distance vector routing protocol is used to build routing tables at each node with multiple choices of increasing hop count to each destination.
A VC set-up is first routed along the primary ("direct") path. If sufficient resources are not available along this path, alternate paths are tried in order of increasing hop count. A flag in the VC set-up message indicates primary or alternate routing, and bandwidth on links along an alternate path is allocated subject to trunk reservation. The trunk reservation values are determined based on some assumptions about the traffic characteristics. Because the scheme works only for a single data rate, its practical utility is limited.

The next idea is to import the notion of controlled alternate routing into traditional link state QoS-based routing [RSR95, GKR96]. To do this, each VC is first associated with a maximum permissible routing cost. This cost can be set based on the expected revenue in carrying the VC, or simply based on the length of the shortest path to the destination. Each link is associated with a metric that increases exponentially with its utilization. A switch computing a path for a VC simply determines a least-cost feasible path based on the link metric and the VC's QoS requirement. The VC is admitted if the cost of the path is less than or equal to the maximum permissible routing cost. This routing scheme thus limits the extent of "detour" a VC experiences, thereby preventing excessive resource consumption. This is a practical scheme, and the basic idea can be extended to hierarchical routing, but its performance has not been analyzed thoroughly. A similar notion of admission control based on the connection route was also incorporated in the routing scheme presented in [ACG92].
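The cost-bounded scheme above can be sketched as follows. The exponential metric constant, the topology, and the function names are assumptions made for illustration; [RSR95, GKR96] do not prescribe these particulars.

```python
import heapq
import math

# Illustrative sketch of cost-bounded QoS-based routing: each link's cost
# grows exponentially with its utilization, a least-cost feasible path is
# computed, and the VC is admitted only if the path cost stays within the
# maximum permissible routing cost.

def link_cost(used, capacity):
    return math.exp(5.0 * used / capacity)     # cost explodes as the link fills

def least_cost_feasible_path(links, src, dst, bw):
    """Dijkstra over the subgraph of links with enough free bandwidth."""
    graph = {}
    for (u, v), (used, cap) in links.items():
        if cap - used >= bw:                   # feasibility: prune saturated links
            graph.setdefault(u, []).append((v, link_cost(used, cap)))
    heap, done = [(0.0, src, [src])], {}
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in done and done[node] <= cost:
            continue
        done[node] = cost
        for nbr, c in graph.get(node, []):
            heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return math.inf, None

def admit_vc(links, src, dst, bw, max_cost):
    cost, path = least_cost_feasible_path(links, src, dst, bw)
    return path if cost <= max_cost else None  # reject over-long "detours"
```

With a small network in which the direct link is nearly full, the VC is routed around it only while the detour's cost remains under the bound; the same request with a tighter bound is rejected rather than allowed to consume extra capacity.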
Considering the ATM Forum PNNI protocol [PNNI96], a partial list of its stated characteristics is as follows:

o Scales to very large networks
o Supports hierarchical routing
o Supports QoS
o Uses source routed connection setup
o Supports multiple metrics and attributes
o Provides dynamic routing

The PNNI specification is sub-divided into two protocols: a signaling protocol and a routing protocol. The PNNI signaling protocol is used to establish point-to-point and point-to-multipoint connections, and supports source routing, crankback, and alternate routing. PNNI source routing allows loop-free paths. Also, it allows each implementation to use its own path computation algorithm. Furthermore, source routing is expected to support incremental deployment of future enhancements such as policy routing.

The PNNI routing protocol is a dynamic, hierarchical link state protocol that propagates topology information by flooding it through the network. The topology information is the set of resources (e.g., nodes, links, and addresses) which define the network. Resources are qualified by defined sets of metrics and attributes (delay, available bandwidth, jitter, etc.) which are grouped by supported traffic class. Since some of the metrics used will change frequently (e.g., available bandwidth), threshold algorithms are used to determine whether the change in a metric or attribute is significant enough to require propagation of updated information. Other features include auto-configuration of the routing hierarchy, connection admission control (as part of path calculation), and aggregation and summarization of topology and reachability information.

Despite its functionality, the PNNI routing protocol does not address the issues of multicast routing, policy routing, and control of alternate routing.
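The threshold mechanism for dynamic metrics can be sketched as follows. This is a generic illustration, not the PNNI algorithm itself: the proportional 20% threshold and the class name are assumptions for the example.

```python
# Illustrative sketch of threshold-triggered re-advertisement of a dynamic
# metric: a new value is flooded only when available bandwidth has drifted
# far enough, proportionally, from the last advertised value. The 20%
# threshold is an assumed parameter, not a value mandated by PNNI.

class LinkAdvertiser:
    def __init__(self, advertised_bw, threshold=0.2):
        self.advertised = advertised_bw
        self.threshold = threshold

    def significant_change(self, current_bw):
        """True when the advertised value has become stale enough to re-flood."""
        return abs(current_bw - self.advertised) > self.threshold * self.advertised

    def maybe_advertise(self, current_bw):
        if self.significant_change(current_bw):
            self.advertised = current_bw      # originate a new advertisement
            return True
        return False                          # suppress the update
```

Small fluctuations are absorbed silently, so flooding overhead stays bounded even though the underlying metric changes with every connection set-up and tear-down.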
A problem in general with link state QoS-based routing is that of efficient broadcasting of state information. While flooding is a reasonable choice with static link metrics, it may adversely impact performance with dynamic metrics.

Finally, Integrated PNNI [I-PNNI] has been designed from the start to take advantage of the QoS routing capabilities that are available in PNNI and integrate them with routing for layer 3. This would provide an integrated layer 2 and layer 3 routing protocol for networks that include PNNI in the ATM core. The I-PNNI specification has been under development in the ATM Forum and, at this time, has not yet incorporated QoS routing mechanisms for layer 3.

8.4 Packet Networks
    ---------------

Early attempts at adaptive routing in packet networks had the objective of delay minimization by dynamically adapting to network congestion. Alternate routing based on k-shortest-path tables, with route selection based on some local measure (e.g., shortest output queue), has been described [R76, YS81]. The original ARPAnet routing scheme was a distance vector protocol with a delay-based cost metric [MW77]. Such a scheme was shown to be prone to route oscillations [B82]. For this and other reasons, a link state delay-based routing scheme was later developed for the ARPAnet [MRR80]. This scheme demonstrated a number of techniques, such as triggered updates and flooding, which are being used in OSPF and PNNI routing today. Although none of these schemes can be called QoS-based routing schemes, they had features that are relevant to QoS-based routing.

IBM's System Network Architecture (SNA) introduced the concept of Class of Service (COS)-based routing [A79, GM79]. There were several classes of service: interactive, batch, and network control. In addition, users could define other classes.
When starting a data session, an application or device would request a COS. Routing would then map the COS into a statically configured route which marked a path across the physical network. Since SNA is connection oriented, a session was set up along this path, and the application's or device's data would traverse this path for the life of the session. Initially, the service delivered to a session was based on the network engineering and the current state of network congestion. Later, transmission priority was added to subarea SNA. Transmission priority allowed more important traffic (e.g., interactive) to proceed before less time-critical traffic (e.g., batch) and improved link and network utilization. The transmission priority of a session was based on its COS.

SNA later evolved to support multiple or alternate paths between nodes. But, although assisted by network design tools, the network administrator still had to statically configure routes. IBM later introduced SNA's Advanced Peer to Peer Networking (APPN) [B85]. APPN added new features to SNA, including dynamic routing based on a link state database. An application would use COS to indicate its traffic requirements, and APPN would calculate a path capable of meeting these requirements. Each COS was mapped to a table of acceptable metrics and parameters that qualified the nodes and links contained in the APPN topology database. Metrics and parameters used as part of the APPN route calculation include, but are not limited to: delay, cost per minute, node congestion, and security. The dynamic nature of APPN allowed it to route around failures and reduced network configuration.

The service delivered by APPN was still based on the network engineering, transmission priority, and network congestion. IBM later introduced an extension to APPN, High Performance Routing (HPR) [IBM97].
HPR uses a congestion avoidance algorithm called adaptive rate based (ARB) congestion control. Using predictive feedback methods, the ARB algorithm prevents congestion and improves network utilization. Most recently, an extension to the COS table has been defined so that HPR routing can recognize and take advantage of ATM QoS capabilities.

Considering IP routing, both IDRP [R92] and OSPF support type of service (TOS)-based routing. While the IP header has a TOS field, there is no standardized way of utilizing it for TOS specification and routing. It seems possible to make use of the IP TOS feature, along with TOS-based routing and proper network engineering, to do QoS-based routing. The emerging differentiated services model is generating renewed interest in TOS support. Among the newer schemes, Source Demand Routing (SDR) [ELRV96] allows on-demand path computation by routers and the implementation of strict and loose source routing. The Nimrod architecture [CCM96] has a number of concepts built in to handle scalability and specialized path computation. Recently, some work has been done on QoS-based routing schemes for the integrated services Internet. For example, in [M98], heuristic schemes for the efficient routing of flows with bandwidth and/or delay constraints are described and evaluated.

9. SUMMARY AND CONCLUSIONS

In this document, a framework for QoS-based Internet routing was defined. This framework adopts the traditional separation between intra- and interdomain routing. This approach is especially meaningful in the case of QoS-based routing, since there are many views on how QoS-based routing should be accomplished and many different needs.
The objective of this document was to encourage the development of different solution approaches for intradomain routing, subject to some broad requirements, while consensus on interdomain routing is achieved. To this end, the QoS-based routing issues were described, and some broad intradomain routing requirements and an interdomain routing model were defined. In addition, QoS-based multicast routing was discussed and a detailed review of related work was presented.

The deployment of QoS-based routing across multiple administrative domains requires both the development of intradomain routing schemes and a standard way for them to interact via a well-defined interdomain routing mechanism. This document, while outlining the issues that must be addressed, did not engage in the specification of the actual features of the interdomain routing scheme. This would be the next step in the evolution of wide-area, multidomain QoS-based routing.

REFERENCES

[A79] V. Ahuja, "Routing and Flow Control in SNA," IBM Systems Journal, Vol. 18, No. 2, pp. 298-314, 1979.

[A84] J. M. Akinpelu, "The Overload Performance of Engineered Networks with Non-Hierarchical Routing," AT&T Technical Journal, Vol. 63, pp. 1261-1281, 1984.

[ACFH92] G. R. Ash, J. S. Chen, A. E. Frey, and B. D. Huang, "Real-Time Network Routing in a Dynamic Class-of-Service Network," Proceedings of ITC 13, 1992.

[ACG92] H. Ahmadi, J. Chen, and R. Guerin, "Dynamic Routing and Call Control in High-Speed Integrated Networks," Proceedings of ITC 13, pp. 397-403, 1992.

[B82] D. P. Bertsekas, "Dynamic Behavior of Shortest Path Routing Algorithms for Communication Networks," IEEE Trans. Automatic Control, pp. 60-74, 1982.

[B85] A. E. Baratz, "SNA Networks of Small Systems," IEEE JSAC, May 1985.

[BBCD98] D. Black, S. Blake, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An Architecture for Differentiated Services," work in progress, May 1998.

[BCF94] A. Ballardie, J. Crowcroft, and P. Francis, "Core-Based Trees: A Scalable Multicast Routing Protocol," Proceedings of SIGCOMM '94.

[BCS94] R. Braden, D. Clark, and S. Shenker, "Integrated Services in the Internet Architecture: An Overview," RFC 1633, July 1994.

[BZ92] S. Bahk and M. El Zarki, "Dynamic Multi-Path Routing and How it Compares with Other Dynamic Routing Algorithms for High Speed Wide Area Networks," Proceedings of SIGCOMM '92, pp. 53-64, 1992.

[BZBH97] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification," RFC 2205, September 1997.

[C91] C-H. Chow, "On Multicast Path Finding Algorithms," Proceedings of IEEE INFOCOM '91, pp. 1274-1283, 1991.

[CCM96] I. Castineyra, J. N. Chiappa, and M. Steenstrup, "The Nimrod Routing Architecture," RFC 1992, August 1996.

[DEFV94] S. E. Deering, D. Estrin, D. Farinacci, V. Jacobson, C-G. Liu, and L. Wei, "An Architecture for Wide-Area Multicast Routing," Technical Report 94-565, ISI, University of Southern California, 1994.

[ELRV96] D. Estrin, T. Li, Y. Rekhter, K. Varadhan, and D. Zappala, "Source Demand Routing: Packet Format and Forwarding Specification (Version 1)," RFC 1940, May 1996.

[GKR96] R. Gawlick, C. R. Kalmanek, and K. G. Ramakrishnan, "On-Line Routing of Permanent Virtual Circuits," Computer Communications, March 1996.

[GPSS98] A. Ghanwani, J. W. Pace, V. Srinivasan, A. Smith, and M. Seaman, "A Framework for Providing Integrated Services over Shared and Switched IEEE 802 LAN Technologies," work in progress, March 1998.

[GM79] J. P. Gray and T. B. McNeil, "SNA Multi-System Networking," IBM Systems Journal, Vol. 18, No. 2, pp. 263-297, 1979.

[GOA97] Y. Goto, M. Ohta, and K. Araki, "Path QoS Collection for Stable Hop-by-Hop QoS Routing," Proceedings of INET '97, June 1997.

[GKOP98] R. Guerin, S. Kamat, A. Orda, T. Przygienda, and D. Williams, "QoS Routing Mechanisms and OSPF Extensions," work in progress, March 1998.

[IBM97] IBM Corp., SNA APPN - High Performance Routing Architecture Reference, Version 2.0, SV40-1018, February 1997.

[I-PNNI] ATM Forum Technical Committee, "Integrated PNNI (I-PNNI) v1.0 Specification," af-96-0987r1, September 1996.

[ISI81] USC-ISI, "Internet Protocol," RFC 791, September 1981.

[JMW83] J. M. Jaffe, F. H. Moss, and R. A. Weingarten, "SNA Routing: Past, Present, and Possible Future," IBM Systems Journal, pp. 417-435, 1983.

[K88] F. P. Kelly, "Routing in Circuit-Switched Networks: Optimization, Shadow Prices and Decentralization," Advances in Applied Probability, pp. 112-144, March 1988.

[L95] W. C. Lee, "Topology Aggregation for Hierarchical Routing in ATM Networks," ACM SIGCOMM Computer Communication Review, 1995.

[M86] L. G. Mason, "On the Stability of Circuit-Switched Networks with Non-hierarchical Routing," Proceedings of the 25th Conference on Decision and Control, pp. 1345-1347, 1986.

[M91] J. Moy, "OSPF Version 2," RFC 1247, July 1991.

[M94] J. Moy, "MOSPF: Analysis and Experience," RFC 1585, March 1994.

[M98] Q. Ma, "Quality-of-Service Routing in Integrated Services Networks," PhD thesis, Computer Science Department, Carnegie Mellon University, 1998.

[MMR96] D. Mitra, J. Morrison, and K. G. Ramakrishnan, "ATM Network Design and Optimization: A Multirate Loss Network Framework," Proceedings of IEEE INFOCOM '96, 1996.

[MRR80] J. M. McQuillan, I. Richer, and E. C. Rosen, "The New Routing Algorithm for the ARPANET," IEEE Trans. Communications, pp. 711-719, May 1980.

[MS91] D. Mitra and J. B. Seery, "Comparative Evaluations of Randomized and Dynamic Routing Strategies for Circuit Switched Networks," IEEE Trans. Communications, pp. 102-116, January 1991.

[MW77] J. M. McQuillan and D. C. Walden, "The ARPANET Design Decisions," Computer Networks, August 1977.

[NC94] R. Nair and D. Clemmensen, "Routing in Integrated Services Networks," Proceedings of the 2nd International Conference on Telecommunication Systems Modeling and Analysis, March 1994.

[PNNI96] ATM Forum PNNI subworking group, "Private Network-Network Interface Specification v1.0 (PNNI 1.0)," af-pnni-0055.000, March 1996.

[R76] H. Rudin, "On Routing and 'Delta Routing': A Taxonomy and Performance Comparison of Techniques for Packet-Switched Networks," IEEE Trans. Communications, pp. 43-59, January 1976.

[R92] Y. Rekhter, "IDRP Protocol Analysis: Storage Overhead," ACM Computer Communication Review, April 1992.

[R96] B. Rajagopalan, "Efficient Link State Routing," draft, available from braja@ccrl.nj.nec.com.

[RSR95] B. Rajagopalan, R. Srikant, and K. G. Ramakrishnan, "An Efficient ATM VC Routing Scheme," draft, 1995 (available from braja@ccrl.nj.nec.com).

[SD95] S. Sibal and A. Desimone, "Controlling Alternate Routing in General-Mesh Packet Flow Networks," Proceedings of ACM SIGCOMM, 1995.

[SPG97] S. Shenker, C. Partridge, and R. Guerin, "Specification of Guaranteed Quality of Service," RFC 2212, September 1997.

[T88] D. M. Topkis, "A k-Shortest-Path Algorithm for Adaptive Routing in Communications Networks," IEEE Trans. Communications, pp. 855-859, July 1988.

[W88] B. M. Waxman, "Routing of Multipoint Connections," IEEE JSAC, pp. 1617-1622, December 1988.

[W97] J. Wroclawski, "Specification of the Controlled-Load Network Element Service," RFC 2211, September 1997.

[WC96] Z. Wang and J. Crowcroft, "QoS Routing for Supporting Resource Reservation," IEEE JSAC, September 1996.

[YS81] T. P. Yum and M. Schwartz, "The Join-Based Queue Rule and its Application to Routing in Computer Communications Networks," IEEE Trans. Communications, pp. 505-511, 1981.

[YS87] T. P. Yum and M. Schwartz, "Comparison of Routing Procedures for Circuit-Switched Traffic in Nonhierarchical Networks," IEEE Trans. Communications, pp. 535-544, May 1987.

[ZES97] D. Zappala, D. Estrin, and S. Shenker, "Alternate Path Routing and Pinning for Interdomain Multicast Routing," USC Computer Science Technical Report 97-655, USC, 1997.

[ZSSC97] Z. Zhang, C. Sanchez, B. Salkewicz, and E. Crawley, "QoS Extensions to OSPF," work in progress, September 1997.

AUTHORS' ADDRESSES

Bala Rajagopalan
NEC USA, C&C Research Labs
4 Independence Way
Princeton, NJ 08540
U.S.A.
Ph: +1-609-951-2969
Email: braja@ccrl.nj.nec.com

Raj Nair
Arrowpoint
235 Littleton Rd.
Westford, MA 01886
U.S.A.
Ph: +1-508-692-5875, x29
Email: nair@arrowpoint.com

Hal Sandick
Bay Networks, Inc.
1009 Slater Rd., Suite 220
Durham, NC 27703
U.S.A.
Ph: +1-919-941-1739
Email: Hsandick@baynetworks.com

Eric S. Crawley
Argon Networks, Inc.
25 Porter Rd.
Littleton, MA 01460
U.S.A.
Ph: +1-508-486-0665
Email: esc@argon.com

***** This draft expires on October 27, 1998 *****