Internet Engineering Task Force                  Integrated Services WG
INTERNET-DRAFT              S. Shenker/C. Partridge/B. Davie/L. Breslau
draft-ietf-intserv-predictive-svc-01.txt        Xerox/BBN/Bellcore/Xerox
                                                                  ? 1995
Expires: ?/?/96

           Specification of Predictive Quality of Service

Status of this Memo

This document is an Internet-Draft. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as ``work in progress.''

To learn the current status of any Internet-Draft, please check the
``1id-abstracts.txt'' listing contained in the Internet-Drafts
Shadow Directories on ftp.is.co.za (Africa), nic.nordu.net (Europe),
munnari.oz.au (Pacific Rim), ds.internic.net (US East Coast), or
ftp.isi.edu (US West Coast).

This document is a product of the Integrated Services working group
of the Internet Engineering Task Force. Comments are solicited and
should be addressed to the working group's mailing list at
int-serv@isi.edu and/or the author(s).

Abstract

This memo describes the network element behavior required to deliver
Predictive service in the Internet. Predictive service is a real-time
service that provides low packet loss and a fairly reliable delay
bound. This service is intended for applications that are tolerant of
occasional late-arriving packets, but require substantial and
quantified levels of delay control from the network. Predictive
service is very similar to Controlled Delay service, and the two
specifications have a fair amount of shared language. The most
salient difference between the two services is that Predictive
service offers a delay bound and Controlled Delay does not. If no
characterizations are provided, then Predictive service is, from an
application's perspective, almost indistinguishable from Controlled
Delay; the delay bounds are of little use if the endpoints are not
aware of them. Thus, the distinction between Predictive and
Controlled Delay is important only in contexts where
characterizations are made available to endpoints. This
specification follows the service specification template described in
[1].

Introduction

This document defines the requirements for network elements that
support Predictive service. This memo is one of a series of documents
that specify the network element behavior required to support various
qualities of service in IP internetworks. Services described in these
documents are useful both in the global Internet and in private IP
networks.

This document is based on the service specification template given in
[1]. Please refer to that document for definitions and additional
information about the specification of qualities of service within
the IP protocol family.

This memo describes the specification for Predictive service in the
Internet. Predictive service is a real-time service that provides a
fairly reliable delay bound. That is, the large majority of packets
are delivered within the delay bound. This is in contrast to
Guaranteed service [2], which provides an absolute bound on packet
delay, and Controlled Delay service [3], which provides no
quantitative assurance about end-to-end delays. Predictive service is
intended for use by applications that require an upper bound on
end-to-end delay, but that can be tolerant of occasional violations
of that bound.

End-to-End Behavior

The end-to-end behavior provided by a series of network elements that
conform to this document provides three levels of delay control. Each
service level is associated with a fairly reliable delay bound, and
almost all packets are delivered within this delay bound.

Moreover, all three levels of predictive service will have average
delays that are no worse than best effort service, and the maximal
delays should be significantly better than best effort service when
there is significant load on the network. Packet losses are rare as
long as the offered traffic conforms to the specified traffic
characterization (see Invocation Information). This characterization
of the end-to-end behavior assumes that there are no hard failures in
the network elements or packet routing changes within the lifetime of
an application using the service.

   NOTE: While the per-hop delay bounds are exported by the service
   module (see Exported Information below), the mechanisms needed to
   collect per-hop bounds and make these end-to-end bounds known to
   the applications are not described in this specification. These
   functions, which can be provided by reservation setup protocols,
   routing protocols or by other network management functions, are
   outside the scope of this document.

The delay bounds are not absolutely firm. Some packets may arrive
after their delay bound, or they may be lost in transit. At the same
time, packets may often arrive well before the bound provided by the
service. No attempt to control jitter, beyond providing an upper
bound on delay, is required by network elements implementing this
service.
It is expected that most packets will experience delays well below
the actual delay bound and that only the tail of the delay
distribution will approach (or occasionally exceed) the bound.
Consequently, the average delay will also be well below the delay
bound.

This service is designed for use by playback applications that desire
a bound on end-to-end delay. Such applications may or may not be
delay adaptive. The delay bound is useful for those applications that
do not wish to adapt their playback point or that require an upper
bound on end-to-end delay. Note that the delay bound provided along
an end-to-end path should be stable. That is, it should not change as
long as the end-to-end path does not change.

This service is subject to admission control.

Motivation

Predictive service is designed for playback applications that desire
a reserved rate with low packet loss and a maximum bound on
end-to-end packet delay, and that are tolerant of occasional dropped
or late packets. The presence of delay bounds serves two functions.
First, they provide some characterization of the service so that a
non-service-adaptive application (that is, an application that does
not want to continually change its service request in response to
current conditions) can know beforehand the maximum delays its
packets will experience in a given service class. These bounds will
allow such applications to choose an appropriate service class.
Second, such delay bounds can help applications that are not
interested in adapting to current delays set their playback point.
For many noninteractive "playback" applications, fidelity is of more
importance than reducing the playback delay; the delay bound allows
the application to achieve high fidelity by having a stable playback
point with very few late packets.

Some real-time applications may want a service providing end-to-end
delay bounds. However, they may be willing to forgo the absolute
bound on delay provided by Guaranteed service [2]. By relaxing the
service commitment from a firm to a fairly reliable delay bound,
network elements will in many environments be able to accommodate
more flows using Predictive service while meeting the service
requirement. Thus, Predictive service relaxes the service commitment
in favor of higher utilization, when compared to Guaranteed service.

At the same time, these applications may require a higher level of
assurance, in the form of a quantitative delay bound, than Controlled
Delay service [3] provides. The use of Predictive service, rather
than Controlled Delay, may also allow applications to avoid adapting
their service requests to changing network performance.

In order to accommodate the requirements of different applications,
Predictive service provides multiple levels of service. Applications
can choose the level of service providing the most appropriate delay
bound.

For additional discussion of Predictive service, see [4,6].

Associated with this service are characterization parameters which
describe the delay bound and the current delays experienced in the
three service levels.
If the characterizations are provided to the endpoints, these will
provide some hint about the likely end-to-end delays that might
result from requesting a particular level of service, as well as
providing information about the end-to-end delay bound. This is
intended to aid applications in choosing the appropriate service
level. The delay bound information can also be used by applications
not wishing to adapt to current delays.

Predictive service is very similar to controlled delay service. The
only salient difference is that predictive service provides a fairly
reliable delay bound, whereas controlled delay does not have any
quantified service assurance. Note that if no characterizations are
provided, then this service is, from an application's perspective,
almost indistinguishable from controlled delay; the delay bounds are
of little use if the endpoints are not aware of them. Thus, the
distinction between predictive and controlled delay is important only
in contexts where characterizations are made available to endpoints.

Network Element Data Handling Requirements

The network element must ensure that packet delays are below a
specified delay bound. There can be occasional violations of the
delay bound, but these violations should be very rare. Similarly,
Predictive service must maintain a very low level of packet loss.
Although packets may be lost or experience delays in excess of the
delay bound, any substantial loss or delay bound violation represents
a "failure" of the admission control algorithm. However, vendors may
employ admission control algorithms with different levels of
conservativeness, resulting in very different levels of delay
violations and/or loss (delay bound violations might, for instance,
vary from 1 in 10^4 to 1 in 10^8).

This service must use admission control. Overprovisioning alone is
not sufficient to deliver predictive service; the network element
must be able to turn flows away if accepting them would cause the
network element to experience queueing delays in excess of the delay
bound.

There are three different logical levels of predictive service. A
network element may internally implement fewer actual levels of
service, but must map them into three levels at the predictive
service invocation interface, as the sketch below illustrates. Each
level of service is associated with a delay bound, with level 1
having the smallest delay and level 3 the largest. If the network
element implements different levels of service internally, the delay
bounds of the different service levels should differ substantially.
The actual choice of delays is left to the network element, and it is
expected that different network elements will select different delay
bounds for the same level of service.

All three levels of service should be given better service, i.e.,
more tightly controlled delay, than best effort traffic. The average
delays experienced by packets receiving different levels of
predictive service and best-effort service may not differ
significantly. However, the tails of the delay distributions, i.e.,
the maximum packet delays seen, for the levels of Predictive service
that are implemented and for best-effort service should be
significantly different when the network has substantial load.
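
As an illustration only (this sketch is not part of the
specification, and the queue names are ours), a hypothetical element
with just two internal priority queues might map the three external
service levels as follows:

   # A minimal sketch, assuming a hypothetical element with only two
   # internal priority queues; the three external service levels must
   # still be accepted at the invocation interface and mapped onto
   # them.  All names are ours.
   INTERNAL_QUEUE = {1: "high",   # level 1: smallest delay bound
                     2: "high",   # level 2 shares the tighter queue
                     3: "low"}    # level 3: largest delay bound

   def classify(service_level):
       # Map an invocation-interface level (1-3) to an internal queue.
       return INTERNAL_QUEUE[service_level]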

Predictive service does not require any control of delay jitter
(variation in network element transit delay between different packets
in the flow) beyond the limit imposed by the per-service-level delay
bound. Network element implementors who find it advantageous to do so
may use resource scheduling algorithms that exercise some jitter
control. See the guidelines for implementors section for more
discussion of this issue.

Links are not permitted to fragment packets as part of predictive
service. Packets larger than the MTU of the link must be treated as
nonconformant, which means that they will be policed according to the
rules described in the Policing section below.

Invocation Information

The Predictive service is invoked by specifying the traffic (TSpec)
and the desired service (RSpec) to the network element. A service
request for an existing flow that has a new TSpec and/or RSpec should
be treated as a new invocation, in the sense that admission control
must be reapplied to the flow. Flows that reduce their TSpec and/or
their RSpec (i.e., their new TSpec/RSpec is strictly smaller than the
old TSpec/RSpec according to the ordering rules described in the
section on Ordering below) should never be denied service.

The TSpec takes the form of a token bucket plus a minimum policed
unit (m) and a maximum packet size (M).

The token bucket has a bucket depth, b, and a bucket rate, r. Both b
and r must be positive. The rate, r, is measured in bytes of IP
datagrams per second, and can range from 1 byte per second to as
large as 40 terabytes per second (or about what is believed to be the
maximum theoretical bandwidth of a single strand of fiber). Clearly,
particularly for large bandwidths, only the first few digits are
significant, and so the use of floating point representations,
accurate to at least 0.1%, is encouraged.

The bucket depth, b, is also measured in bytes and can range from 1
byte to 250 gigabytes. Again, floating point representations accurate
to at least 0.1% are encouraged.

The range of values is intentionally large to allow for future
bandwidths. The range is not intended to imply that a network element
must support the entire range.

The minimum policed unit, m, is an integer measured in bytes. All IP
datagrams less than size m will be counted against the token bucket
as being of size m. The maximum packet size, M, is the biggest packet
that will conform to the traffic specification; it is also measured
in bytes. The flow must be rejected if the requested maximum packet
size is larger than the MTU of the link. Both m and M must be
positive, and m must be less than or equal to M.

The RSpec is a service level. The service level is specified by one
of the integers 1, 2, or 3. Implementations should internally choose
representations that leave a range of at least 256 service levels
undefined, for possible extension in the future.

The TSpec can be represented by two floating point numbers in
single-precision IEEE floating point format followed by two 32-bit
integers in network byte order. The first value is the rate (r), the
second value is the bucket size (b), the third is the minimum policed
unit (m), and the fourth is the maximum packet size (M).

The RSpec may be represented as an unsigned 16-bit integer carried in
network byte order.
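
As an illustration only, the representations above might be produced
as sketched below (not part of this specification; the function names
are ours):

   import struct

   def encode_tspec(r, b, m, M):
       # Two single-precision IEEE floats followed by two 32-bit
       # unsigned integers, all in network byte order ("!").
       return struct.pack("!ffII", r, b, m, M)

   def decode_tspec(wire):
       return struct.unpack("!ffII", wire)   # (r, b, m, M)

   def encode_rspec(service_level):
       # Unsigned 16-bit integer in network byte order; only the
       # values 1, 2 and 3 are currently defined.
       return struct.pack("!H", service_level)

   # Example: a 1 Mbyte/s rate, 8 kbyte bucket, 64-byte minimum
   # policed unit and 1500-byte maximum packet size, at level 2.
   tspec = encode_tspec(1.0e6, 8192.0, 64, 1500)
   rspec = encode_rspec(2)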

For all IEEE floating point values, the sign bit must be zero. (All
values must be positive.) Exponents less than 127 (i.e., 0) are
prohibited. Exponents greater than 162 (i.e., positive 35) are
discouraged.

Exported Information

Each predictive service module must export the following information.
All of the data elements described below are characterization
parameters.

For each logical level of service, the network element exports the
delay bound as well as three measurements of delay (thus making
twelve quantities in total). Each of the measured characterization
parameters is based on the maximal packet transit delay experienced
over some set of previous time intervals of length T; these delays do
not include discarded packets. The three time intervals T are 1
second, 60 seconds, and 3600 seconds. The exported parameters are
averages over some set of these previous time intervals.

There is no requirement that these characterization parameters be
based on exact measurements. In particular, these delay measurements
can be based on estimates of packet delays or aggregate measurements
of queue loading. This looseness is intended to avoid placing undue
burdens on network element designs in which obtaining precise delay
measurements is difficult.

These delay parameters (both the measured values and the bound) have
an additive composition rule. For each parameter the composition
function computes the sum, enabling a setup protocol to deliver the
cumulative sum along the path to the end nodes.

The characterization parameters are measured in units of one
microsecond. An individual element can advertise a delay value
between 1 and 2**28 (somewhat over two minutes) and the total delay
added across all elements can range as high as 2**32-1. Should the
sum of the values of individual network elements along a path exceed
2**32-1, the end-to-end advertised value should be 2**32-1.

Note that while the delay measurements are expressed in microseconds,
a network element is free to measure delays more loosely. The minimum
requirement is that the element estimate its delay accurately to the
nearest 100 microseconds. Elements that can measure more accurately
are encouraged to do so.

   NOTE: Measuring delays in milliseconds is not acceptable, as it
   may lead to composed delay values with unacceptably large errors
   along paths that are several hops long.

The characterization parameters may be represented as a sequence of
twelve 32-bit unsigned integers in network byte order. The first four
integers are the parameters for the delay bound and for the
measurement values for T=1, T=60 and T=3600 for level 1. The next
four integers are the parameters for the delay bound and for the
measurement values for T=1, T=60 and T=3600 for level 2. The last
four integers are the parameters for the delay bound and for the
measurement values for T=1, T=60 and T=3600 for level 3.

The following values are assigned from the characterization parameter
name space.

Predictive service is service_name 3.

The delay characterization parameters are parameter_number's one
through twelve, in the order given above. That is,

   parameter_name   definition
    1               Service Level = 1, Delay Bound
    2               Service Level = 1, Delay Measure, T = 1
    3               Service Level = 1, Delay Measure, T = 60
    4               Service Level = 1, Delay Measure, T = 3600
    5               Service Level = 2, Delay Bound
    6               Service Level = 2, Delay Measure, T = 1
    7               Service Level = 2, Delay Measure, T = 60
    8               Service Level = 2, Delay Measure, T = 3600
    9               Service Level = 3, Delay Bound
   10               Service Level = 3, Delay Measure, T = 1
   11               Service Level = 3, Delay Measure, T = 60
   12               Service Level = 3, Delay Measure, T = 3600

The end-to-end composed results are assigned parameter_names N+12,
where N is the value of the per-hop name given above.
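
A setup protocol might compose the exported parameters along a path
as sketched below (illustration only, not part of this specification;
names are ours). Each of the twelve parameters is summed hop by hop
and saturates at 2**32-1, as required above:

   MAX_EXPORTED = 2**32 - 1

   def compose(per_hop_values):
       # per_hop_values: one twelve-element sequence per network
       # element along the path, each value in microseconds.
       composed = [0] * 12
       for hop in per_hop_values:
           for i, value in enumerate(hop):
               composed[i] = min(composed[i] + value, MAX_EXPORTED)
       return composed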

No other exported data is required by this specification.

Policing

Policing is done at the edge of the network, at all heterogeneous
source branch points and at all source merge points. A heterogeneous
source branch point is a spot where the multicast distribution tree
from a source branches to multiple distinct paths, and the TSpecs of
the reservations on the various outgoing links are not all the same.
Policing need only be done if the TSpec on the outgoing link is "less
than" (in the sense described in the Ordering section) the TSpec
reserved on the immediately upstream link. A source merge point is
where the multicast distribution trees from two different sources
(sharing the same reservation) merge. It is the responsibility of the
invoker of the service (a setup protocol, local configuration tool,
or similar mechanism) to identify points where policing is required.
Policing is allowed at points other than those mentioned above.

The token bucket parameters require that traffic must obey the rule
that over all time periods, the amount of data sent cannot exceed
rT+b, where r and b are the token bucket parameters and T is the
length of the time period. For the purposes of this accounting, links
must count packets that are smaller than the minimum policed unit as
being of size m. Packets that arrive at an element and cause a
violation of the rT+b bound are considered nonconformant. At all
policing points, nonconforming packets are treated as best-effort
datagrams. [If and when a marking ability becomes available, these
nonconformant packets should be ``marked'' as being noncompliant and
then treated as best effort packets at all subsequent routers.] Other
actions, such as delaying packets until they are compliant, are not
allowed.

   NOTE: This point is open to discussion. The requirement given
   above may be too strict; it may be better to permit some delaying
   of a packet if that delay would allow it to pass the policing
   function. Intuitively, a plausible approach is to allow a delay of
   (roughly) up to the maximum queueing delay experienced by
   completely conforming packets before declaring that a packet has
   failed to pass the policing function and dropping it. The merit of
   this approach, and the precise wording of the specification that
   describes it, require further study.

A related issue is that at all network elements, packets bigger than
the MTU of the link must be considered nonconformant and should be
classified as best effort (and will then either be fragmented or
dropped according to the element's handling of best effort traffic).
[Again, if marking is available, these reclassified packets should be
marked.]
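
The rT+b policing rule can be illustrated by the following sketch
(not part of this specification; the class and its names are ours).
It assumes the simple case of a single policing point whose bucket
starts full:

   import time

   class TokenBucketPolicer:
       def __init__(self, r, b, m, link_mtu):
           self.r, self.b, self.m, self.link_mtu = r, b, m, link_mtu
           self.tokens = b                  # bucket starts full
           self.last = time.monotonic()

       def conformant(self, size):
           # Packets larger than the link MTU never conform.
           if size > self.link_mtu:
               return False
           now = time.monotonic()
           # Refill at rate r, never beyond the bucket depth b.
           self.tokens = min(self.b,
                             self.tokens + self.r * (now - self.last))
           self.last = now
           # Packets smaller than m are counted as being of size m.
           charge = max(size, self.m)
           if charge <= self.tokens:
               self.tokens -= charge
               return True
           return False   # treat (or mark) as best effort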

Ordering

TSpecs are ordered according to the following rule: TSpec A is a
substitute ("as good or better than") for TSpec B if (1) both the
token bucket depth and rate for TSpec A are greater than or equal to
those of TSpec B, (2) the minimum policed unit m is at least as small
for TSpec A as it is for TSpec B, and (3) the maximum packet size M
is at least as large for TSpec A as it is for TSpec B.

A merged TSpec may be calculated over a set of TSpecs by taking the
largest token bucket rate, largest bucket size, smallest minimum
policed unit, and largest maximum packet size across all members of
the set. This use of the word "merging" is similar to that in the
RSVP protocol; a merged TSpec is one that is adequate to describe the
traffic from any one of a number of flows.

Service request specifications (RSpecs) are ordered by their
numerical values (in inverse order); service level 1 is substitutable
for service level 2, and service level 2 is substitutable for service
level 3.

In addition, predictive service is related to controlled delay
service in the sense that a given level of predictive service is
considered at least as good as the same level of controlled delay
service. That is, predictive level 1 is substitutable for controlled
delay level 1, and so on. See additional comments in the guidelines
section.
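
These ordering and merging rules can be expressed compactly as
follows (a sketch, not part of this specification; names are ours):

   from collections import namedtuple

   TSpec = namedtuple("TSpec", ["r", "b", "m", "M"])

   def substitutes(ta, tb):
       # True if TSpec ta is "as good or better than" TSpec tb.
       return (ta.r >= tb.r and ta.b >= tb.b and
               ta.m <= tb.m and ta.M >= tb.M)

   def merge(tspecs):
       # A merged TSpec, adequate to describe the traffic from any
       # one of the given flows.
       return TSpec(r=max(t.r for t in tspecs),
                    b=max(t.b for t in tspecs),
                    m=min(t.m for t in tspecs),
                    M=max(t.M for t in tspecs))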

Guidelines for Implementors

It is expected that the service levels implemented at a particular
element will offer significantly different delay bounds. There seems
little advantage in offering levels whose delay bounds differ only
slightly. So, while a particular element may offer fewer than three
levels of service, the levels of service it does offer should have
notably different delay bounds. For example, appropriate delay bounds
for three levels of predictive service are 1, 10 and 100
milliseconds.

For each level of service, packet loss and violation of the delay
bound are expected to be very rare. As a preliminary guideline, we
suggest that over long term use (measured in hours or days), the
aggregate rate of delay bound violation and packet loss should be
less than 1 in 10,000 packets. Violations of the delay bound are
likely to be correlated. On shorter time scales, delay bound
violation rates should not exceed 1 in 1,000 during any 60 second
interval.

An additional service currently being considered is the controlled
delay service described in [3]. It is expected that if an element
offers both predictive service and controlled delay service, it
should not implement both but should use the predictive service as a
controlled delay service. This is allowed since (1) the required
behavior of predictive service meets all of the requirements of
controlled delay service, (2) the invocations are compatible, and (3)
the ordering relationships are such that a given level of predictive
service is at least as good as the same level of controlled delay
service.

Evaluation Criteria

Evaluating a network element's implementation of predictive service
is somewhat difficult, since the quality of service depends on the
overall traffic load and the traffic pattern presented. In this
section we sketch out a methodology for testing a network element's
predictive service.

The idea is that one chooses a particular traffic mix (for instance,
three parts level 1, one part level 2, two parts level 3 and one part
best-effort traffic) and loads the network element with progressively
higher levels of this traffic mix (i.e., 40% of capacity, then 50% of
capacity, and so on beyond 100% of capacity). For each load level,
one measures the utilization, mean delays, packet loss rate, and
delay bound violation rate for each level of service (including best
effort). Each test run at a particular load should involve enough
traffic that it is a reasonable predictor of the performance a
long-lived application such as a video conference would experience
(e.g., an hour or more of traffic).

This memo does not specify particular traffic mixes to test. However,
we expect that in the future, as the nature of real-time Internet
traffic is better understood, the traffic used in these tests will be
chosen to reflect current and future Internet load.

Examples of Implementation

One implementation of predictive service would be to have a queueing
mechanism with three priority levels, with level 1 packets being
highest priority and level 3 packets being lowest priority. Maximum
packet delays and link utilization would be measured for each class
over some relatively short interval, such as 10,000 packet
transmission times. The admission control algorithm would use these
measurements to determine whether or not to admit a new flow.
Specifically, a new flow would be admitted if the network element
expects to be able to meet the delay bounds of the packets in each
service class after admitting the new flow. For an example of an
admission control algorithm for Predictive service, see [5]. A
simplified sketch of such an admission test appears below.

Note that the viability of measurement-based admission control for
predictive service depends on link bandwidth and traffic patterns.
Specifically, with bursty traffic sources, sufficient multiplexing is
needed for measurements of existing traffic to be good predictors of
future traffic behavior. In environments where sufficient
multiplexing is not possible, parameter-based admission control may
be necessary.
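
The following sketch (not part of this specification) illustrates
only the flavor of a measurement-based admission test; the
load-scaling heuristic is deliberately naive and ours, and is no
substitute for the algorithm of [5]:

   def admit(new_rate, link_bw, measured_rate, measured_max_delay,
             delay_bound):
       # measured_max_delay and delay_bound map service level (1-3)
       # to seconds; measured_rate is the currently admitted load.
       if measured_rate + new_rate > link_bw:
           return False                     # link oversubscribed
       # Crudely scale each class's measured maximal delay by the
       # relative load increase and test it against that class's
       # delay bound.
       scale = (measured_rate + new_rate) / max(measured_rate, 1.0)
       return all(measured_max_delay[level] * scale <= delay_bound[level]
                  for level in (1, 2, 3))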
"Specification of 573 Controlled Delay Quality of Service",Internet Draft, June 1995, 574 576 [4] R. Braden, D. Clark and S. Shenker, "Integrated Services in the 577 Internet Architecture: an Overview", RFC 1633, June 1994. 579 [5] S. Jamin, P. Danzig, S. Shenker and L. Zhang, "A Measurement- 580 based Admission Control Algorithm for Integrated Services Packet 581 Networks", Sigcomm '95, September 1995. 583 [6] D. Clark, S. Shenker and L. Zhang, "Supporting Real-Time 584 Applications in an Integrated Services Packet Network: Architecture 585 and Mechanism", Sigcomm '92, October 1992. 587 Security Considerations 589 Security considerations are not discussed in this memo. 591 Authors' Address: 593 Scott Shenker 594 Xerox PARC 595 3333 Coyote Hill Road 596 Palo Alto, CA 94304-1314 597 shenker@parc.xerox.com 598 415-812-4840 599 415-812-4471 (FAX) 601 Craig Partridge 602 BBN 603 2370 Amherst St 604 Palo Alto CA 94306 605 craig@bbn.com 607 Bruce Davie 608 Bellcore 609 445 South St 610 Morristown, NJ, 07960 611 bsd@bellcore.com 613 Lee Breslau 614 Xerox PARC 615 3333 Coyote Hill Road 616 Palo Alto, CA 94304-1314 617 breslau@parc.xerox.com 618 415-812-4402 619 415-812-4471 (FAX)