Internet Engineering Task Force                  Integrated Services WG
INTERNET-DRAFT                                            J. Wroclawski
draft-ietf-intserv-ctrl-load-svc-00.txt                         MIT LCS
                                                         November, 1995
                                                           Expires: 5/96

       Specification of the Controlled-Load Network Element Service

Status of this Memo

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress".

To learn the current status of any Internet-Draft, please check the
"1id-abstracts.txt" listing contained in the Internet-Drafts Shadow
Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).

This draft is a product of the Integrated Services Working Group of
the Internet Engineering Task Force.
Comments are solicited and should be addressed to the working group's
mailing list at int-serv@isi.edu and/or the author(s).

Abstract

This memo specifies the network element behavior required to deliver
Controlled-Load service in the Internet. The controlled-load service
provides the client data flow with a quality of service closely
approximating the QoS that same flow would receive from an unloaded
network element, but uses capacity (admission) control to assure that
this service is received even when the network element is overloaded.

Introduction

This document defines the requirements for network elements that
support the Controlled-Load service. This memo is one of a series of
documents that specify the network element behavior required to
support various qualities of service in IP internetworks. Services
described in these documents are useful both in the global Internet
and in private IP networks.

This document is based on the service specification template given in
[1]. Please refer to that document for definitions and additional
information about the specification of qualities of service within
the IP protocol family.

End-to-End Behavior

The end-to-end behavior provided to an application by a series of
network elements conforming to this specification tightly
approximates the behavior visible to applications receiving best-
effort service *under unloaded conditions* from the same series of
network elements. Assuming the network is functioning correctly,
these applications may assume that:

- A very high percentage of transmitted packets will be successfully
delivered by the network to the receiving end-nodes. (The percentage
of packets not successfully delivered must closely approximate the
basic packet error rate of the transmission medium.)
- The transit delay experienced by a very high percentage of the
delivered packets will not greatly exceed the minimum transit delay
experienced by any successfully delivered packet (the
"speed-of-light delay").

NOTE: the term "unloaded" above is used in the sense of "not heavily
loaded or congested" rather than in the sense of "no other network
traffic whatsoever".

To ensure that these conditions are met, clients requesting
controlled-load service provide the intermediate network elements
with an estimate of the data traffic they will generate: the TSpec.
In return, the service ensures that network element resources
adequate to process traffic falling within this descriptive envelope
will be available to the client. Should the client's traffic
generation properties fall outside of the region described by the
TSpec parameters, the QoS provided to the client may exhibit
characteristics indicative of overload, including large numbers of
delayed or dropped packets. The service definition does not require
that the precise characteristics of this overload behavior match
those which would be received by a best-effort data flow traversing
the same path under overloaded conditions.

Motivation

The controlled-load service is intended to support a broad class of
applications which have been developed for use in today's Internet,
but are highly sensitive to overloaded conditions. Important examples
of this class are the "adaptive real-time applications" currently
offered by a number of vendors and researchers. These applications
have been shown to work well on unloaded networks, but poorly on much
of today's overloaded Internet. A service which mimics unloaded
networks serves these applications well.

The controlled-load service is intentionally minimal, in that there
are no optional functions or capabilities in the specification.
The 108 service offers only a single function but system and application 109 designers can assume that all implementations will be indentical in 110 this respect. 112 Internally, the controlled-load service is suited to a wide range of 113 implementation techniques; including evolving scheduling and 114 admission control algorithms which allow sophisticated 115 implementations to be highly efficient in the use of network 116 resources. It is equally amenable to extremely simple implementation 117 in circumstances where maximum utilization of network resources is 118 not the only concern. 120 Network Element Data Handling Requirements 122 Each network element accepting a request for controlled-load service 123 must ensure that adequate bandwidth and packet processing resources 124 are available to handle the requested level of traffic, as given by 125 the requestor's TSpec. This must be accomplished through active 126 admission control. All resources important to the operation of the 127 network element must be considered when admitting a request. Common 128 examples of such resources include link bandwidth, router or switch 129 port buffer space, and computational capacity of the packet 130 forwarding engine. 132 The controlled-load service does not accept or make use of specific 133 target values for control parameters such as delay or loss. Instead, 134 acceptance of a request for controlled-load service is defined to 135 imply a commitment by the network element to provide the requestor 136 with service closely equivalent to that provided to uncontrolled 137 (best-effort) traffic under unloaded conditions. This definition may 138 be taken to include: 140 - Little or no average packet queueing delay over all timescales 141 significantly larger than the "burst time". 
The burst time is defined as the time required for the flow's
maximum size data burst to be transmitted at the flow's requested
transmission rate, where the burst size and rate are given by the
flow's TSpec, as described below.

- A very low level of congestion loss. In this context, congestion
loss includes packet losses due to shortage of any required
processing resource, such as buffer space or link bandwidth.
Although occasional congestion losses may occur, any substantial
sustained loss represents a failure of the admission control
algorithm.

NOTE:

Implementations of controlled-load service are not required to
provide any control of short-term packet delay jitter beyond that
described above. However, the use of packet scheduling algorithms
that provide additional jitter control is not prohibited by this
specification.

Packet losses due to non-congestion-related causes, such as link
errors, are not bounded by this service.

A network element may employ statistical approaches to decide whether
adequate capacity is available to accept a service request. For
example, a network element processing a number of flows with long-
term characteristics predicted through measurement may be able to
overallocate its resources to some extent without reducing the level
of service delivered to the flows.

A network element may employ any appropriate means to ensure that
admitted flows receive appropriate service.

Links are not permitted to fragment packets which receive the
controlled-load service. Packets larger than the MTU of the link must
be treated as nonconformant to the TSpec. This implies that they will
be policed according to the rules described in the Policing section
below.

The controlled-load service is invoked by specifying the data flow's
desired traffic parameters (TSpec) to the network element.
Requests placed for a new flow will be accepted if the network
element has the capacity to forward the flow's packets as described
above. Requests to change the TSpec for an existing flow should be
treated as a new invocation, in the sense that admission control must
be reapplied to the flow. Requests that reduce the TSpec for an
existing flow (in the sense that the new TSpec is strictly smaller
than the old TSpec according to the ordering rules given below)
should never be denied service.

The TSpec takes the form of a token bucket specification plus a
minimum policed unit (m) and a maximum packet size (M).

The token bucket specification includes a bucket rate r and a bucket
depth b. Both r and b must be positive. The rate, r, is measured in
bytes of IP datagrams per second. Values of this parameter may range
from 1 byte per second to 40 terabytes per second. Network elements
MUST return an error for requests containing values outside this
range. Network elements MUST return an error for any request
containing a value within this range which cannot be supported by
the element. In practice, only the first few digits of the r
parameter are significant, so the use of floating point
representations accurate to at least 0.1% is encouraged.

The bucket depth, b, is measured in bytes. Values of this parameter
may range from 1 byte to 250 gigabytes. Network elements MUST return
an error for requests containing values outside this range. Network
elements MUST return an error for any request containing a value
within this range which cannot be supported by the element. In
practice, only the first few digits of the b parameter are
significant, so the use of floating point representations accurate
to at least 0.1% is encouraged.

The range of values allowed for these parameters is intentionally
large to allow for future network technologies.
Any given network element is not expected to support the full range
of values.

The minimum policed unit, m, is an integer measured in bytes. All IP
datagrams less than size m will be counted against the token bucket
as being of size m. The maximum packet size, M, is the biggest packet
that will conform to the traffic specification; it is also measured
in bytes. Network elements MUST reject a service request if the
requested maximum packet size is larger than the MTU of the link.
Both m and M must be positive, and m must be less than or equal to M.

The preferred concrete representation for the TSpec is two floating
point numbers in single-precision IEEE floating point format followed
by two 32-bit integers in network byte order. The first value is the
rate (r), the second value is the bucket size (b), the third is the
minimum policed unit (m), and the fourth is the maximum packet size
(M).

Exported Information

The controlled-load service is assigned service_name 5.

The controlled-load service has no required characterization
parameters. Specific implementations may export appropriate
measurement and monitoring information.

Policing

The controlled-load service is suitable for use with multicast as
well as unicast data flows. This capability introduces some
complexity into the policing requirements.

Controlled-load traffic must be policed for conformance to its TSpec
at every network element. The TSpec's token bucket parameters require
that traffic obey the rule that, over all time periods, the amount of
data sent does not exceed rT+b, where r and b are the token bucket
parameters and T is the length of the time period. For the purposes
of this accounting, links must count packets that are smaller than
the minimum policed unit as being of size m.
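The TSpec rules above (parameter ranges, the preferred concrete
representation, and the rT+b accounting with the minimum policed
unit) can be sketched as follows. This is a non-normative
illustration; the class and method names are hypothetical, not part
of the specification.

```python
import struct

class TSpec:
    # Parameter bounds given in the specification.
    R_MIN, R_MAX = 1.0, 40e12     # bucket rate, bytes of IP datagrams/second
    B_MIN, B_MAX = 1.0, 250e9     # bucket depth, bytes

    def __init__(self, r, b, m, M):
        # Network elements MUST return an error for out-of-range values;
        # here that is modeled by raising ValueError.
        if not (self.R_MIN <= r <= self.R_MAX):
            raise ValueError("bucket rate r out of range")
        if not (self.B_MIN <= b <= self.B_MAX):
            raise ValueError("bucket depth b out of range")
        if not (0 < m <= M):
            raise ValueError("require 0 < m <= M")
        self.r, self.b, self.m, self.M = r, b, m, M

    def pack(self):
        # Preferred concrete representation: two single-precision IEEE
        # floats (r, b) followed by two 32-bit integers (m, M), all in
        # network byte order.
        return struct.pack("!ffII", self.r, self.b, self.m, self.M)

    def conformant(self, arrivals):
        # Token-bucket check of the rT+b rule: the bucket starts full
        # (b bytes) and refills at r bytes/second; packets smaller than
        # the minimum policed unit m are counted as m bytes.
        # `arrivals` is a time-ordered list of (time, size) pairs.
        tokens, last = self.b, 0.0
        for t, size in arrivals:
            tokens = min(self.b, tokens + (t - last) * self.r)
            last = t
            counted = max(size, self.m)
            if size > self.M or counted > tokens:
                return False   # nonconformant; see the Policing section
            tokens -= counted
        return True
```

A flow sending two back-to-back maximum-size packets against a TSpec
whose bucket depth covers both would pass this check; a third packet
at the same instant would fail, since it exceeds the rT+b bound.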
Packets that arrive at an element and cause a violation of the rT+b
bound are considered nonconformant.

At all policing points, non-conforming packets are treated as
BEST-EFFORT datagrams. (See the NOTEs below for further discussion of
this issue.)

If resources are available, it is desirable for the policing function
at points within the interior of the network (but *not* at edge
traffic entry points) to enforce slightly "relaxed" traffic
parameters to accommodate packet bursts somewhat larger than the
actual TSpec.

Other actions, such as reshaping the traffic stream (delaying packets
until they are compliant), are not allowed.

NOTE: RESHAPING. The prohibition on delaying packets is one of many
possible design choices. It may be better to permit some delaying of
a packet if that delay would allow it to pass the policing function
(in other words, to reshape the traffic). The challenge is to define
a viable reshaping function.

Intuitively, a plausible approach is to allow a delay of (roughly) up
to the maximum queueing delay experienced by completely conforming
packets before declaring that a packet has failed to pass the
policing function. The merit of this approach, and the precise
wording of the specification that describes it, require further
study.

NOTE: INTERACTION WITH BEST-EFFORT TRAFFIC. Implementors of this
service should clearly understand that in certain circumstances
(routers acting as the "split points" of a multicast distribution
tree supporting a shared reservation) large numbers of packets may
fail the policing test *as a matter of normal operation*. According
to the definition above, these packets should be processed as
best-effort packets.
If the network element's best-effort queueing algorithm does not
distinguish between these packets and elastic best-effort traffic
such as TCP flows, THESE PACKETS WILL "BACK OFF" THE ELASTIC TRAFFIC
AND DOMINATE THE BEST-EFFORT BANDWIDTH USAGE. The integrated services
framework does not currently address this issue. However, several
possible solutions to the problem are known [RED, xFQ]. Network
elements supporting the controlled-load service should also implement
some mechanism in their best-effort queueing path to discriminate
between classes of best-effort traffic and provide elastic traffic
with protection from inelastic best-effort flows.

NOTE: EDGE POLICING. The text above specifies that the policing
function treats non-conformant packets as best-effort at all points.
A possible alternative is to replace this with language reading:

   At points where traffic first enters the network (end-nodes),
   non-conforming packets are DROPPED. At these points, the
   reservation setup mechanism must ensure that the TSpec used is
   *no smaller* than the TSpec specified by the source for the
   traffic it is generating.

   At all other policing points, non-conforming packets are treated
   as BEST-EFFORT datagrams.

The effect of this change is significant. Under the non-dropping
model, it is possible for a source to vastly over-send its TSpec,
with the excess packets being delivered if conditions permit. The
service offered in this case has been described as
"best-effort-with-floor"; essentially a best-effort delivery service
with enough resources reserved for a certain minimum traffic level.

Under the dropping model, the service loses its
"best-effort-with-floor" characteristics, and becomes essentially a
fixed-traffic-level service.
In return, it offers significantly more protection against overload
of the network resources and degradation of other flows' QoS.

NOTE: ARCHITECTURAL OPTIONS. The text above specifies a functional
and consistent model for policing of controlled-load data which can
be implemented within the current IP protocols.

In this model, it is necessary to police at every network element
because the policing function does not actually drop traffic which
exceeds the TSpec, but instead carries it as best-effort. Since there
is no end-to-end mechanism in place to limit a controlled-load flow's
traffic to the TSpec value, every network element must perform this
function for itself. Since excess controlled-load traffic (traffic
above the TSpec) is not dropped, every network element should also
perform the best-effort service discrimination function described
above.

The alternative option of "marking" packets which have failed the
policing test at some node is not available within the current IP
protocol. If marking were available, it would be necessary to police
only at certain points within the network. In this case, the relevant
language above might be replaced with a paragraph reading:

   Policing is performed at the edge of the network, at all
   heterogeneous source branch points, and at all source merge
   points. A heterogeneous source branch point is a spot where the
   multicast distribution tree from a source branches to multiple
   distinct paths, and the TSpecs of the reservations on the various
   outgoing links are not all the same. Policing need only be done
   if the TSpec on the outgoing link is "less than" (in the sense
   described in the Ordering section) the TSpec reserved on the
   immediately upstream link. A source merge point occurs when the
   multicast distribution trees from two different sources (sharing
   the same reservation) merge.
   It is the responsibility of the invoker of the service (a setup
   protocol, local configuration tool, or similar mechanism) to
   identify points where policing is required. Policing is allowed
   at points other than those mentioned above.

Note that the best-effort traffic discrimination function described
above must still be performed at every network element. In this
case, the discrimination might be based in part on the mark bit.

At all network elements, packets bigger than the outgoing link MTU
must be considered nonconformant and classified as best effort (and
will then either be fragmented or dropped according to the element's
handling of best-effort traffic). It is expected that this situation
will not arise with any frequency, because flow setup mechanisms are
expected to notify the sending application of the appropriate path
MTU.

Ordering and Merging

The controlled-load service TSpec is ordered according to the
following rule: TSpec A is a substitute for ("as good or better
than") TSpec B if and only if

(1) both the token bucket depth and rate for TSpec A are greater
than or equal to those of TSpec B,

(2) the minimum policed unit m is at least as small for TSpec A as
it is for TSpec B, and

(3) the maximum packet size M is at least as large for TSpec A as it
is for TSpec B.

A merged TSpec may be calculated over a set of TSpecs by taking the
largest token bucket rate, largest bucket size, smallest minimum
policed unit, and largest maximum packet size across all members of
the set. This use of the word "merging" is similar to that in the
RSVP protocol; a merged TSpec is one that is adequate to describe
the traffic from any one of a number of flows.

The sum of n controlled-load service TSpecs is used when computing
the TSpec for a shared reservation of n flows.
It is computed by taking:

- The minimum across all TSpecs of the minimum policed unit
parameter m.

- The maximum across all TSpecs of the maximum packet size
parameter M.

- The sum across all TSpecs of the token bucket rate parameter r.

- The sum across all TSpecs of the token bucket size parameter b.

The perfect minimum of two TSpecs is defined as a TSpec which would
view as compliant any traffic flow that complied with both of the
original TSpecs, but would reject any flow that was non-compliant
with at least one of the original TSpecs. This perfect minimum can
be computed only when the two original TSpecs are ordered, in the
sense described above.

A definition for computing the minimum of two unordered TSpecs is:

- The minimum of the minimum policed units m.

- The maximum of the maximum packet sizes M.

- The minimum of the token bucket rates r.

- The maximum of the token bucket sizes b.

NOTE: The proper definition of the minimum TSpec function is a topic
of current discussion. The definition above is provisional and
subject to change.

Guidelines for Implementors

The intention of this service specification is that network elements
deliver a level of service closely approximating best-effort service
under unloaded conditions. As with best-effort service under these
conditions, it is not required that every single packet be
successfully delivered with zero queueing delay. Network elements
providing controlled-load service are permitted to oversubscribe the
available resources to some extent, in the sense that the bandwidth
and buffer requirements indicated by summing the TSpec token buckets
of all controlled-load flows may exceed the maximum capabilities of
the network element.
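The ordering, merge, sum, and provisional-minimum rules from the
Ordering and Merging section above can be sketched as small helper
functions. This is a non-normative sketch; TSpecs are represented as
plain (r, b, m, M) tuples and the function names are illustrative.

```python
# Non-normative sketch of the Ordering and Merging rules.
# A TSpec is a plain (r, b, m, M) tuple: rate, bucket depth,
# minimum policed unit, maximum packet size.

def substitutes(a, b_spec):
    """True if TSpec a is 'as good or better than' TSpec b_spec."""
    ra, ba, ma, Ma = a
    rb, bb, mb, Mb = b_spec
    return ra >= rb and ba >= bb and ma <= mb and Ma >= Mb

def merge(tspecs):
    """Merged TSpec: adequate to describe the traffic of any one flow."""
    return (max(t[0] for t in tspecs),   # largest token bucket rate
            max(t[1] for t in tspecs),   # largest bucket size
            min(t[2] for t in tspecs),   # smallest minimum policed unit
            max(t[3] for t in tspecs))   # largest maximum packet size

def tspec_sum(tspecs):
    """Sum of n TSpecs, used for a shared reservation of n flows."""
    return (sum(t[0] for t in tspecs),   # sum of rates
            sum(t[1] for t in tspecs),   # sum of bucket sizes
            min(t[2] for t in tspecs),   # minimum of policed units
            max(t[3] for t in tspecs))   # maximum of packet sizes

def tspec_min(a, b_spec):
    """Provisional minimum of two (possibly unordered) TSpecs."""
    return (min(a[0], b_spec[0]), max(a[1], b_spec[1]),
            min(a[2], b_spec[2]), max(a[3], b_spec[3]))
```

Note that `substitutes` also captures the "strictly smaller TSpec"
test for requests that reduce an existing reservation: a reduction
request is one where the old TSpec substitutes for the new one.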
However, this oversubscription may only be done in cases where the
element is quite sure that actual utilization is far less than the
sum of the token buckets would suggest. The most conservative
approach, rejection of new flows whenever the addition of their
traffic would cause the sum of the token buckets to exceed the
capacity of the network element, may be appropriate in other
circumstances.

Specific issues related to this subject are discussed in the
"Evaluation Criteria" and "Examples of Implementation" sections
below.

Implementors are encouraged (but not required) to implement policing
behavior (the behavior seen when a flow's actual traffic exceeds its
TSpec) which closely approximates the behavior of well-designed
best-effort services under overload. In particular, it is
undesirable to employ queueing models which lead to heavily bi-modal
delay distributions or large numbers of mis-ordered packet arrivals.

Evaluation Criteria

The basic requirement placed on an implementation of controlled-load
service is that, under all conditions, it provide accepted data
flows with service closely similar to the service that same flow
would receive using best-effort service under unloaded conditions.

This suggests a simple two-step evaluation strategy. Step one is to
compare the service given best-effort traffic and controlled-load
traffic under unloaded conditions.

- Measure the packet loss rate and delay characteristics of a test
flow using best-effort service and with no load on the network
element.

- Compare those measurements with measurements of the same flow
receiving controlled-load service with no load on the network
element.

Closer measurements indicate higher evaluation ratings.
A substantial difference in the delay characteristics, such as the
smoothing which would be seen in an implementation which scheduled
the controlled-load flow using a fixed, constant-bitrate algorithm,
should result in a somewhat lower rating.

Step two is to observe the change in service received by a
controlled-load flow as the load increases.

- Increase the background traffic load on the network element, while
continuing to measure the loss and delay characteristics of the
controlled-load flow. Characteristics which remain essentially
constant as the element is driven into overload indicate a high
evaluation rating. Minor changes in the delay distribution indicate
a somewhat lower rating. Significant increases in delay or loss
indicate a poor evaluation rating.

This simple model is not adequate to fully evaluate the performance
of controlled-load service. Three additional variables affect the
evaluation. The first is the short-term burstiness of the traffic
stream used to perform the tests outlined above. The second is the
degree of long-term change in the controlled-load traffic within the
bounds of its TSpec. (Changes in this characteristic will have great
effect on the effectiveness of certain admission control
algorithms.) The third is the ratio of controlled-load traffic to
other traffic at the network element (either best-effort or other
controlled services).

The third variable should be specifically evaluated using the
following procedure.

With no controlled-load flows in place, overload the network element
with best-effort traffic (as indicated by substantial packet loss
and queueing delay).

Execute requests for controlled-load service giving TSpecs with
increasingly large rate and burst parameters.
If the request is accepted, verify that traffic matching the TSpec
is in fact handled with characteristics closely approximating the
unloaded measurements taken above.

Repeat these experiments to determine the range of traffic parameter
(rate, burst size) values successfully handled by the network
element. The useful range of each parameter must be determined for
several settings of the other parameter, to map out a
two-dimensional "region" of successfully handled TSpecs. When
compared with network elements providing similar capabilities, this
region indicates the relative ability of the elements to provide
controlled-load service under high load. A larger region indicates a
higher evaluation rating.

Examples of Implementation

One possible implementation of controlled-load service is to provide
a queueing mechanism with two priority levels: a high priority level
for controlled-load traffic and a lower priority level for
best-effort traffic. An admission control algorithm is used to limit
the amount of traffic placed into the high-priority queue. This
algorithm may be based either on the specified characteristics of
the high-priority flows (using information provided by the TSpecs),
or on the measured characteristics of the existing high-priority
flows and the TSpec of the new request.

Another possible implementation of controlled-load service is based
on the existing capabilities of network elements which support
"traffic classes" based on mechanisms such as weighted fair queueing
or class-based queueing [xxx]. In this case, it is sufficient to map
data flows accepted for controlled-load service into an existing
traffic class with adequate capacity to avoid overload.
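The first implementation example above, two priority levels gated by
an admission control algorithm, might be sketched as follows. The
class name and the simple sum-of-token-bucket-rates admission test
are illustrative assumptions, not requirements of this
specification.

```python
# Illustrative sketch of the two-priority-level example; the
# conservative admission test (reject when the sum of admitted TSpec
# rates would exceed link capacity) is one of many possible choices.
from collections import deque

class TwoPriorityElement:
    def __init__(self, link_capacity):
        self.link_capacity = link_capacity   # bytes/second
        self.admitted_rate = 0.0             # sum of admitted TSpec rates
        self.high = deque()                  # controlled-load queue
        self.low = deque()                   # best-effort queue

    def admit(self, tspec_rate):
        # Admission control limits traffic entering the high-priority
        # queue; a measurement-based variant could overallocate here.
        if self.admitted_rate + tspec_rate > self.link_capacity:
            return False
        self.admitted_rate += tspec_rate
        return True

    def enqueue(self, packet, controlled_load):
        (self.high if controlled_load else self.low).append(packet)

    def dequeue(self):
        # Strict priority: controlled-load traffic is served first.
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

A measurement-based variant would replace the `admit` test with a
comparison against the measured utilization of the high-priority
class rather than the sum of declared TSpec rates.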
This requirement is enforced by an admission control algorithm which
considers the characteristics of the traffic class, the
characteristics of the traffic already admitted to the class, and
the TSpec of the new flow requesting service. Again, the admission
control algorithm may be based either on the TSpec-specified or the
measured characteristics of the existing traffic.

Admission control algorithms based on specified characteristics are
likely to be appropriate when the number of flows in the
high-priority class is small, or when the traffic characteristics of
the flows appear highly variable. In these situations the measured
behavior of the aggregate controlled-load traffic stream may not
serve as an effective predictor of future traffic, leading a
measurement-based admission control algorithm to produce incorrect
results. Conversely, in situations where the past behavior of the
aggregate controlled-load traffic *is* a good predictor of future
behavior, a measurement-based admission control algorithm may allow
more traffic to be admitted to the controlled-load service class
with no degradation in performance. An implementation may choose to
switch between these two approaches depending on the nature of the
traffic stream at a given time.

Examples of Use

The controlled-load service may be used by any application which can
make use of best-effort service, but is best suited to those
applications which can usefully characterize their traffic
requirements. Applications based on the transport of "continuous
media" data, such as digitized audio or video, are an important
example of this class.

The controlled-load service is not isochronous and does not provide
any explicit information about transmission delay.
For this reason, applications with end-to-end timing requirements,
including the continuous-media class mentioned above, must provide
an application-specific timing recovery mechanism, similar or
identical to the mechanisms required when these applications use
best-effort service. A protocol useful to applications requiring
this capability is the IETF Real-Time Transport Protocol [2].

Load-sensitive applications may choose to request controlled-load
service whenever they are run. Alternatively, these applications may
monitor their own performance and request controlled-load service
from the network only when best-effort service is not providing
acceptable performance. The first strategy provides higher assurance
that the level of quality delivered to the user will not change over
the lifetime of an application session. The second strategy provides
greater flexibility and offers cost savings in environments where
levels of service above best-effort incur a charge.

Security Considerations

Security considerations are not discussed in this memo.

References

[1] S. Shenker and J. Wroclawski. "Network Element Service
Specification Template", Internet Draft, June 1995.

[2] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson. "RTP:
A Transport Protocol for Real-Time Applications", Internet Draft,
March 1995.

Authors' Address:

John Wroclawski
MIT Laboratory for Computer Science
545 Technology Sq.
Cambridge, MA 02139
jtw@lcs.mit.edu
617-253-7885
617-253-2673 (FAX)