1 Internet Draft Mick Seaman 2 Expires January 1998 3Com 3 draft-ietf-issll-is802-svc-mapping-00.txt Andrew Smith 4 Extreme Networks 5 Eric Crawley 6 Gigapacket Networks 7 July 1997 9 Integrated Service Mappings on IEEE 802 Networks 11 Status of this Memo 13 This document is an Internet Draft. Internet Drafts are working 14 documents of the Internet Engineering Task Force (IETF), its Areas, 15 and its Working Groups.
Note that other groups may also distribute 16 working documents as Internet Drafts. 18 Internet Drafts are draft documents valid for a maximum of six 19 months. Internet Drafts may be updated, replaced, or obsoleted by 20 other documents at any time. It is not appropriate to use Internet 21 Drafts as reference material or to cite them other than as a "working 22 draft" or "work in progress." 24 Please check the I-D abstract listing contained in each Internet 25 Draft directory to learn the current status of this or any other 26 Internet Draft. 28 Abstract 30 This document describes the support of IETF Integrated Services over 31 LANs built from IEEE 802 network segments which may be interconnected by 32 IEEE 802.1 MAC Bridges (switches) [1]. 34 It describes the practical capabilities and limitations of this 35 technology for supporting Controlled Load [8] and Guaranteed Service [9] 36 using the inherent capabilities of the relevant 802 technologies 37 [5],[6],[15],[16] etc. and the proposed 802.1p queuing features in 38 switches. IEEE P802.1p [2] is a superset of the existing IEEE 802.1D 39 bridging specification. This document provides a functional model for 40 the layer 3 to layer 2 and user-to-network dialogue which supports 41 admission control and defines requirements for interoperability between 42 switches. The special case of such networks where the sender and 43 receiver are located on the same segment is also discussed. 45 This scheme expands on the ISSLL over 802 LANs framework described in 46 [7]. It makes reference to a signaling protocol for admission control 47 developed by the ISSLL WG which is known as the "Subnet Bandwidth 48 Manager". This is an extension to the IETF's RSVP protocol [4] and is 49 described in a separate document [10]. 51 1. Introduction 53 The IEEE 802.1 Interworking Task Group is currently enhancing the basic 54 MAC Service provided in Bridged Local Area Networks (a.k.a. "switched 55 LANs"). 
As a supplement to the original IEEE MAC Bridges standard [1], 56 the update P802.1p [2] proposes differential traffic class queuing and 57 access to media on the basis of a "user_priority" signaled in frames. 59 In this document we 60 * review the meaning and use of user_priority in LANs and the frame 61 forwarding capabilities of a standard LAN switch. 62 * examine alternatives for identifying layer 2 traffic flows for 63 admission control. 64 * review the options available for policing traffic flows. 65 * derive requirements for consistent traffic class handling in a network 66 of switches and use these requirements to discuss queue handling 67 alternatives for 802.1p and the way in which these meet administrative 68 and interoperability goals. 69 * consider the benefits and limitations of this switch-based approach, 70 contrasting it with a full router-based RSVP implementation in terms of 71 complexity, utilisation of transmission resources and administrative 72 controls. 74 The model used is outlined in the "framework document" [7] which in 75 summary: 76 * partitions the admission control process into two separable 77 operations: 78 * an interaction between the user of the integrated service and the 79 local network elements ("provision of the service" in the terms of 80 802.1D) to confirm the availability of transmission resources for 81 traffic to be introduced. 82 * selection of an appropriate user_priority for that traffic on the 83 basis of the service and service parameters to be supported. 84 * distinguishes between the user-to-network interface above and the 85 mechanisms used by the switches ("support of the service"): these 86 include communication between the switches (network to network 87 signaling).
88 * describes a simple architecture for the provision and support of these 89 services, broken down into components with functional and interface 90 descriptions: 91 * "user" components: a layer-3 to layer-2 negotiation and translation 92 component for sending and receiving, with interfaces to other components 93 residing in the station. 94 * processes residing in a bridge/switch to handle admission control and 95 mapping requests, including proposals for actual traffic mappings to 96 user_priority values. 97 * identifies the requirements of a signaling protocol to carry admission 98 control requests between devices. 100 It will be noted that this document is written from the pragmatic 101 viewpoint that there will be a widely deployed network technology and we 102 are evaluating it for its ability to support some or all of the defined 103 IETF integrated services: this approach is intended to ensure 104 development of a system which can provide useful new capabilities in 105 existing (and soon to be deployed) network infrastructures. 107 2. Goals and Assumptions 109 It is assumed that typical subnetworks that are concerned about 110 quality-of-service will be "switch-rich": that is to say most 111 communication between end stations using integrated services support 112 will pass through at least one switch. The mechanisms and protocols 113 described will be trivially extensible to communicating systems on the 114 same shared media, but it is important not to allow problem 115 generalisation to complicate the practical application that we target: 116 the access characteristics of Ethernet and Token-Ring LANs are forcing a 117 trend to switch-rich topologies. In addition, there have been 118 developments in the area of MAC enhancements to ensure delay- 119 deterministic access on network links e.g. IEEE 802.12 [15] and other 120 proprietary schemes. 
122 Note that we illustrate most examples in this document using RSVP as an 123 "upper-layer" QoS signaling protocol but there are actually no real 124 dependencies on this protocol: RSVP could be replaced by some other 125 dynamic protocol or else the requests could be made by network 126 management or other policy entities. In particular, the SBM signaling 127 protocol [10], which is based upon RSVP, is designed to work seamlessly 128 in the service-mapping architecture described in this document and the 129 "Integrated Services over IEEE 802" framework [7]. 131 There may be a heterogeneous mixture of switches with different 132 capabilities, all compliant with IEEE 802.1p, but implementing queuing 133 and forwarding mechanisms in a range from simple 2-queue per port, 134 strict priority, up to more complex multi-queue (maybe even one 135 per-flow) WFQ or other algorithms. 137 The problem is broken down into smaller independent pieces: this may 138 lead to sub-optimal usage of the network resources but we contend that 139 such benefits are often equivalent to very small improvements in network 140 efficiency in a LAN environment. Therefore, it is a goal that the 141 switches in the network operate using a much simpler set of information 142 than the RSVP engine in a router. In particular, it is assumed that such 143 switches do not need to implement per-flow queuing and policing 144 (although they might do so). 146 It is a fundamental assumption of the int-serv model that flows are 147 isolated from each other throughout their transit across a network. 148 Intermediate queueing nodes are expected to police the traffic to ensure 149 that it conforms to the pre-agreed traffic flow specification. In the 150 architecture proposed here for mapping to layer-2, we diverge from that 151 assumption in the interests of simplicity: the policing function is 152 assumed to be implemented in the transmit schedulers of the layer-3 153 devices (end stations, routers).
In the LAN environments envisioned, it 154 is reasonable to assume that end stations are "trusted" to adhere to 155 their agreed contracts at the inputs to the network and that we can 156 afford to over-allocate resources at admission-control time to 157 compensate for the inevitable extra jitter/bunching introduced by the 158 switched network itself. 160 These divergences have some implications for the types of receiver 161 heterogeneity that can be supported and the statistical multiplexing 162 gains that might have been exploited, especially for Controlled Load 163 flows: this is discussed in a later section of this document. 165 3. Non-Goals 167 This document describes service mappings onto existing IEEE- and ANSI- 168 defined standard MAC layers and uses standard MAC-layer services as in 169 IEEE 802.1 bridging. It does not attempt to make use of or describe the 170 capabilities of other proprietary or standard MAC-layer protocols 171 although it should be noted that there exists published work regarding 172 MAC layers suitable for QoS mappings: these are outside the scope of the 173 IETF ISSLL working group charter. 175 4. User Priority and Frame Forwarding in IEEE 802 Networks 177 4.1 General IEEE 802 Service Model 179 User_priority is a value associated with the transmission and reception 180 of all frames in the IEEE 802 service model: it is supplied by the 181 sender which is using the MAC service. It is provided along with the 182 data to a receiver using the MAC service. It may or may not be actually 183 carried over the network: Token-Ring/802.5 carries this value (encoded 184 in its FC octet), basic Ethernet/802.3 does not, 802.12 may or may not 185 depending on the frame format in use. 802.1p defines a consistent way to 186 carry this value over the bridged network on Ethernet, Token Ring, 187 Demand-Priority, FDDI or other MAC-layer media using an extended frame 188 format.
The usage of user_priority is summarised below but is more fully 189 described in section 2.5 of 802.1D [1] and 802.1p [2] "Support of the 190 Internal Layer Service by Specific MAC Procedures" and readers are 191 referred to these documents for further information. 193 If the "user_priority" is carried explicitly in packets, its utility is 194 as a simple label in the data stream enabling packets in different 195 classes to be discriminated easily by downstream nodes without their 196 having to parse the packet in more detail. 198 Apart from making the job of desktop or wiring-closet switches easier, 199 an explicit field means they do not have to change hardware or software 200 as the rules for classifying packets evolve (e.g. based on new protocols 201 or new policies). More sophisticated layer-3 switches, perhaps deployed 202 towards the core of a network, can provide added value here by 203 performing the classification more accurately and, hence, utilising 204 network resources more efficiently or providing better protection of 205 flows from one another: this appears to be a good economic choice since 206 there are likely to be very many more desktop/wiring closet switches in 207 a network than switches requiring layer-3 functionality. 209 The IEEE 802 specifications make no assumptions about how user_priority 210 is to be used by end stations or by the network. In particular it can 211 only be considered a "priority" in a loose sense: although the current 212 802.1p draft defines static priority queuing as the default mode of 213 operation of switches that implement multiple queues (user_priority is 214 defined as a 3-bit quantity so strict priority queueing would give value 215 7 = high priority, 0 = low priority). 
The general switch algorithm is as 216 follows: packets are placed onto a particular queue based on the 217 received user_priority (from the packet if an 802.1p header or 802.5 218 network was used, invented according to some local policy if not). The 219 selection of queue is based on a mapping from user_priority 220 [0,1,2,3,4,5,6 or 7] onto the number of available queues. Note that 221 switches may implement any number of queues from 1 upwards and it may 222 not be visible externally, except through any advertised int-serv 223 parameters and the switch's admission control behaviour, which 224 user_priority values get mapped internally onto the same vs. different 225 queues. Other algorithms that a switch might implement include, 226 e.g., weighted fair queueing or round robin. 228 In particular, IEEE makes no recommendations about how a sender should 229 select the value for user_priority: one of the main purposes of this 230 current document is to propose such usage rules and how to communicate 231 the semantics of the values between switches, end-stations and routers. 232 In the remainder of this document we use the term "traffic class" 233 synonymously with user_priority. 235 4.2 Ethernet/802.3 237 There is no explicit traffic class or user_priority field carried in 238 Ethernet packets. This means that user_priority must be regenerated at a 239 downstream receiver or switch according to some defaults or by parsing 240 further into higher-layer protocol fields in the packet. Alternatively, 241 the IEEE 802.1Q encapsulation [11] may be used which provides an 242 explicit traffic class field on top of a basic MAC format.
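The regeneration step just described can be sketched as follows. This is an illustrative sketch only: the frame representation (a dict with hypothetical "tpid" and "tci" keys) and the classifier hook are our assumptions, not part of any standard API; the explicit-tag case uses the 802.1Q TPID value 0x8100 and takes the priority from the top three bits of the tag control information.

```python
# Illustrative sketch (hypothetical names): regenerating a user_priority
# for frames arriving on basic Ethernet, which carries no explicit
# traffic class field. A switch may take the value from an 802.1Q/802.1p
# tag when present, fall back to a per-input-port default, or parse
# further into higher-layer fields (modelled here as a callback).

DEFAULT_REGEN = 0  # default regenerated user_priority for untagged frames

def regenerate_user_priority(frame, port_default=DEFAULT_REGEN,
                             l3_classifier=None):
    """Return a user_priority (0-7) for a received Ethernet frame."""
    # 802.1Q/802.1p tagged frames carry the value explicitly (TPID 0x8100);
    # the priority is the top 3 bits of the tag control information (TCI).
    if frame.get("tpid") == 0x8100:
        return (frame["tci"] >> 13) & 0x7
    # Untagged: optionally classify on higher-layer protocol fields...
    if l3_classifier is not None:
        return l3_classifier(frame)
    # ...otherwise use the input port's configured default.
    return port_default
```

A tagged frame yields its carried priority directly, while an untagged frame is assigned a value by local policy, matching the two regeneration options described above.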
244 For the different IP packet encapsulations used over Ethernet/802.3, it 245 will be necessary to adjust any admission-control calculations 246 according to the framing and to the padding requirements: 248 Encapsulation Framing Overhead IP MTU 249 bytes/pkt bytes 251 IP EtherType (ip_len<=46 bytes) 64-ip_len 1500 252 (1500>=ip_len>=46 bytes) 18 1500 254 IP EtherType over 802.1p/Q (ip_len<=42) 64-ip_len 1500* 255 (1500>=ip_len>=42 bytes) 22 1500* 257 IP EtherType over LLC/SNAP (ip_len<=40) 64-ip_len 1492 258 (1500>=ip_len>=40 bytes) 24 1492 260 * note that the draft IEEE 802.1Q specification exceeds the current IEEE 261 802.3 maximum packet length values by 4 bytes although work is 262 proceeding within IEEE to address this issue. 264 4.3 Token-Ring/802.5 266 The token ring standard [6] provides a priority mechanism that can be 267 used to control both the queuing of packets for transmission and the 268 access of packets to the shared media. The priority mechanisms are 269 implemented using bits within the Access Control (AC) and the Frame 270 Control (FC) fields of an LLC frame. The first three bits of the AC 271 field, the Token Priority bits, together with the last three bits of the 272 AC field, the Reservation bits, regulate which stations get access to 273 the ring. The last three bits of the FC field of an LLC frame, the User 274 Priority bits, are obtained from the higher layer in the user_priority 275 parameter when it requests transmission of a packet. This parameter also 276 establishes the Access Priority used by the MAC. The user_priority value 277 is conveyed end-to-end by the User Priority bits in the FC field and is 278 typically preserved through Token-Ring bridges of all types. In all 279 cases, 0 is the lowest priority. 281 Token-Ring also uses a concept of Reserved Priority: this relates to the 282 value of priority which a station uses to reserve the token for the next 283 transmission on the ring.
When a free token is circulating, only a 284 station having an Access Priority greater than or equal to the Reserved 285 Priority in the token will be allowed to seize the token for 286 transmission. Readers are referred to [14] for further discussion of 287 this topic. 289 A token ring station is theoretically capable of separately queuing each 290 of the eight levels of requested user priority and then transmitting 291 frames in order of priority. A station sets Reservation bits according 292 to the user priority of frames that are queued for transmission in the 293 highest priority queue. This allows the access mechanism to ensure that 294 the frame with the highest priority throughout the entire ring will be 295 transmitted before any lower priority frame. Annex I to the IEEE 802.5 296 token ring standard recommends that stations send/relay frames as 297 follows: 299 Application user_priority 301 non-time-critical data 0 302 - 1 303 - 2 304 - 3 305 LAN management 4 306 time-sensitive data 5 307 real-time-critical data 6 308 MAC frames 7 310 To reduce frame jitter associated with high-priority traffic, the annex 311 also recommends that only one frame be transmitted per token and that 312 the maximum information field size be 4399 octets whenever delay- 313 sensitive traffic is traversing the ring. Most existing implementations 314 of token ring bridges forward all LLC frames with a default access 315 priority of 4. Annex I recommends that bridges forward LLC frames that 316 have user priorities greater than 4 with a reservation equal to the 317 user priority (although the draft IEEE P802.1p [2] permits network 318 management to override this behaviour). The capabilities provided by token 319 ring's user and reservation priorities and by IEEE 802.1p can provide 320 effective support for Integrated Services flows that request QoS using 321 RSVP.
These mechanisms can provide, with few or no additions to the 322 token ring architecture, bandwidth guarantees with the network flow 323 control necessary to support such guarantees. 325 For the different IP packet encapsulations used over Token Ring/802.5, 326 it will be necessary to adjust any admission-control calculations 327 according to the framing requirements: 329 Encapsulation Framing Overhead IP MTU 330 bytes/pkt bytes 332 IP EtherType over 802.1p/802.1Q 29 4370* 333 IP EtherType over LLC/SNAP 25 4370* 335 *the suggested MTU from RFC 1042 [13] is 4464 bytes but there are issues 336 related to discovering the maximum supported MTU between any two 337 points both within and between Token Ring subnets. We recommend here an 338 MTU consistent with the 802.5 Annex I recommendation. 340 4.4 FDDI 342 The Fiber Distributed Data Interface standard [16] provides a priority 343 mechanism that can be used to control both the queuing of packets for 344 transmission and the access of packets to the shared media. The priority 345 mechanisms are implemented using similar mechanisms to Token-Ring 346 described above. The standard also makes provision for "Synchronous" 347 data traffic with strict media access and delay guarantees - this mode 348 of operation is not discussed further here: this is an area within the 349 scope of the ISSLL WG that requires further work. In the remainder of 350 this document we treat FDDI as a 100Mbps Token Ring (which it is) using 351 a service interface compatible with IEEE 802 networks. 353 4.5 Demand-Priority/802.12 355 IEEE 802.12 [15] is a standard for a shared 100Mbit/s LAN. Data packets 356 are transmitted using either 802.3 or 802.5 frame formats. The MAC 357 protocol is called Demand Priority. Its main characteristics with respect 358 to QoS are the support of two service priority levels (normal- and 359 high-priority) and the service order: data packets from all network 360 nodes (e.g.
end-hosts and bridges/switches) are served using a simple 361 round robin algorithm. 363 If the 802.3 frame format is used for data transmission then 364 user_priority is encoded in the starting delimiter of the 802.12 data 365 packet. If the 802.5 frame format is used then the priority is 366 additionally encoded in the YYY bits of the AC field in the 802.5 packet 367 header (see also section 4.3). Furthermore, the 802.1p/Q encapsulation 368 may also be applied in 802.12 networks with its own user_priority field. 369 Thus, in all cases, switches are able to recover any user_priority 370 supplied by a sender. 372 The same rules apply for 802.12 user_priority mapping through a bridge 373 as with other media types: the only additional information is that 374 "normal" priority is used by default for user_priority values 0 through 375 4 inclusive and "high" priority is used for user_priority levels 5 376 through 7: this ensures that the default Token-Ring user_priority level 377 of 4 for 802.5 bridges is mapped to "normal" on 802.12 segments. 379 The medium access in 802.12 LANs is deterministic: the demand priority 380 mechanism ensures that, once the normal priority service has been 381 pre-empted, all high priority packets have strict priority over packets with 382 normal priority. In the abnormal situation that a normal-priority packet 383 has been waiting at the front of a MAC transmit queue for a time period 384 longer than PACKET_PROMOTION (200 - 300 ms [15]), its priority is 385 automatically 'promoted' to high priority. Thus, even normal-priority 386 packets have a maximum guaranteed access time to the medium. 388 Integrated Services can be built on top of the 802.12 medium access 389 mechanism. When combined with admission control and bandwidth 390 enforcement mechanisms, delay guarantees as required for a Guaranteed 391 Service can be provided without any changes to the existing 802.12 MAC 392 protocol.
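The default mapping just described can be sketched as follows; this is an illustration of the rule stated above (values 0 through 4 map to "normal", 5 through 7 to "high"), and the function name is ours, not anything defined by 802.12:

```python
# Minimal sketch of the default 802.12 mapping described above:
# user_priority values 0-4 map to the "normal" demand-priority service
# level and 5-7 map to "high", so the default Token-Ring bridge
# user_priority of 4 stays "normal" on 802.12 segments.

def dp_service_level(user_priority: int) -> str:
    """Map a 3-bit user_priority onto the two 802.12 service levels."""
    if not 0 <= user_priority <= 7:
        raise ValueError("user_priority is a 3-bit quantity")
    return "high" if user_priority >= 5 else "normal"
```

Note that the break between "normal" and "high" is placed between 4 and 5 precisely so that 802.5 bridge defaults do not pre-empt the normal-priority service.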
394 Since the 802.12 standard supports the 802.3 and 802.5 frame formats, 395 the same framing overhead as reported in sections 4.2 and 4.3 must be 396 considered in the admission control equations for 802.12 links. 398 5. Integrated services through layer-2 switches 400 5.1 Summary of switch characteristics 402 For the sake of illustration, we divide layer-2 bridges/switches into 403 several categories, based on the level of sophistication of their QoS 404 and software protocol capabilities: these categories are not intended to 405 represent all possible implementation choices but, instead, to aid 406 discussion of what QoS capabilities can be expected from a network made 407 of these devices (the basic "class 0" device is included for 408 completeness but cannot really provide useful integrated service). 410 Class 0 411 - 802.1D MAC bridging 412 - single queue per output port, no separation of 413 traffic classes 414 - Spanning-Tree to remove topology loops (single active path) 416 Class I 417 - 802.1p priority queueing between traffic classes. 418 - No multicast heterogeneity. 419 - 802.1p GARP/GMRP pruning of individual multicast addresses. 421 Class II As (I) plus: 422 - can map received user_priority on a per-input-port basis 423 to some internal set of canonical values. 424 - can map internal canonical values onto transmitted 425 user_priority on a per-output-port basis giving some 426 limited form of multicast heterogeneity. 428 - maybe implements IGMP snooping for pruning. 430 Class III As (II) plus: 431 - per-flow classification 432 - maybe per-flow policing and/or reshaping 433 - more complex transmit scheduling (probably not per-flow) 435 5.2 Queueing 437 Connectionless packet-based networks in general, and LAN-switched 438 networks in particular, work today because of scaling choices in network 439 provisioning.
Consciously or (more usually) unconsciously, enough excess 440 bandwidth and buffering is provisioned in the network to absorb the 441 traffic sourced by higher-layer protocols or cause their transmission 442 windows to run out, on a statistical basis, so that the network is only 443 overloaded for a short duration and the average expected loading is less 444 than 60% (usually much less). 446 With the advent of time-critical traffic, such over-provisioning has 447 become far less easy to achieve. Time critical frames may find 448 themselves queued for annoyingly long periods of time behind temporary 449 bursts of file transfer traffic, particularly at network bottleneck 450 points, e.g. at the 100 Mb/s to 10 Mb/s transition that might occur 451 between the riser to the wiring closet and the final link to the user 452 from a desktop switch. In this case, however, if it is known (guaranteed 453 by application design, merely expected on the basis of statistics, or 454 just that this is all that the network guarantees to support) that the 455 time critical traffic is a small fraction of the total bandwidth, it 456 suffices to give it strict priority over the "normal" traffic. The worst 457 case delay experienced by the time critical traffic is roughly the 458 maximum transmission time of a maximum length non-time-critical frame - 459 less than a millisecond for 10 Mb/s Ethernet, and well below an end-to- 460 end budget based on human perception times. 462 When more than one "priority" service is to be offered by a network 463 element, e.g. it supports Controlled-Load as well as Guaranteed Service, 464 the queuing discipline becomes more complex. In order to provide the 465 required isolation between the service classes, it will probably be 466 necessary to queue them separately. There is then an issue of how to 467 service the queues - a combination of admission control and more 468 intelligent queueing disciplines, e.g.
weighted fair queuing, may be 469 required in such cases. As with the service specifications themselves, 470 it is not the place for this document to specify queuing algorithms, 471 merely to observe that the external behaviour meets the services' 472 requirements. 474 5.3 Multicast Heterogeneity 475 At layer-3, the int-serv model allows heterogeneous multicast flows 476 where different branches of a tree can have different types of 477 reservations for a given multicast destination. It also supports the 478 notion that trees may have some branches with reserved flows and some 479 using best effort (default) service. If we were to treat a layer-2 480 subnet as a single "network element", as defined in [3], then all of the 481 branches of the distribution tree that lie within the subnet could be 482 assumed to require the same QoS treatment and be treated as an atomic 483 unit as regards admission control etc. With this assumption, the model 484 and protocols already defined by int-serv and RSVP provide 485 sufficient support for multicast heterogeneity. Note, though, that an 486 admission control request may well be rejected because just one link in 487 the subnet has reached its traffic limit and that this will lead to 488 rejection of the request for the whole subnet. 490 The above approach would, therefore, provide very sub-optimal 491 utilisation of resources given the size and complexity of the layer-2 492 subnets envisioned by this document. Therefore, it is desirable to 493 support the ability of layer-2 switches to apply QoS differently on 494 different egress branches of a tree that divides at that switch: this is 495 discussed in the following paragraphs. 497 IEEE 802.1D and 802.1p specify a basic model for multicast whereby a 498 switch performs multicast routing decisions based on the destination 499 address: this would produce a list of output ports to which the packet 500 should be forwarded.
In its default mode, such a switch would use the 501 user_priority value in received packets (or a value regenerated on a 502 per-input-port basis in the absence of an explicit value) to enqueue the 503 packets at each output port. All of the classes of switch identified 504 above can support this operation. 506 If a switch is selecting per-port output queues based only on the 507 incoming user_priority, as described by 802.1p, it must treat all 508 branches of all multicast sessions within that user_priority class with 509 the same queuing mechanism: no heterogeneity is then possible and this 510 could well lead to the failure of an admission control request for the 511 whole multicast session due to a single link being at its maximum 512 allocation, as described above. Note that, in the layer-2 case as 513 distinct from the layer-3 case with RSVP/int-serv, the option of having 514 some receivers getting the session with the requested QoS and some 515 getting it best effort does not exist as the Class I switches are unable 516 to re-map the user_priority on a per- link basis: this could well become 517 an issue with heavy use of dynamic multicast sessions. If a switch were 518 to implement a separate user_priority mapping at each output port, as 519 described under "Class II switch" above, then some limited form of 520 receiver heterogeneity can be supported e.g. forwarding of traffic as 521 user_priority 4 on one branch where receivers have performed admission 522 control reservations and as user_priority 0 on one where they have not. 524 We assume that per-user_priority queuing without taking account of input 525 or output ports is the minimum standard functionality for switches in a 526 LAN environment (Class I switch, as defined above) but that more 527 functional layer-2 or even layer-3 switches (a.k.a. 
routers) can be used if even more flexible forms of heterogeneity are considered necessary to achieve more efficient resource utilisation: note that the behaviour of layer-3 switches in this context is already well standardised by the IETF.

5.4 Override of incoming user_priority

In some cases, a network administrator may not trust the user_priority values contained in packets from a source and may wish to map these into some more suitable set of values. Alternatively, due perhaps to equipment limitations or transition periods, values may need to be mapped to/from different regions of a network.

Some switches may implement such a function on input, mapping received user_priority into some internal set of values (this table is known in 802.1p as the "user_priority regeneration table"). These values can then be mapped, using the output table described above, onto outgoing user_priority values: these same mappings must also be used when applying admission control to requests that use the user_priority values (see e.g. [10]). More sophisticated approaches may also be envisioned where a device polices traffic flows and adjusts their onward user_priority based on their conformance to the admitted traffic flow specifications.

5.5 Remapping of non-conformant aggregated flows

One other topic under discussion in the int-serv context is how to handle the traffic for data flows from sources that are exceeding their currently agreed traffic contract with the network.
An approach that shows some promise is to treat such traffic with "somewhat less than best effort" service, in order to protect traffic that is normally given "best effort" service from having to back off (such traffic is often "adaptive", using TCP or other congestion control algorithms, and it would be unfair to penalise it due to badly behaved traffic from reserved flows, which are often set up by non-adaptive applications).

One solution here might be to assign normal best-effort traffic to one user_priority and to label excess non-conformant traffic with a "lower" user_priority, although the re-ordering problems that might arise from doing this may make this solution undesirable, particularly if the flows are using TCP: for this reason the Controlled Load service recommends dropping excess traffic rather than re-mapping it to a lower priority. This topic is further discussed below.

6. Selecting traffic classes

One fundamental question is "who gets to decide what the classes mean and who gets access to them?" One approach would be for the meanings of the classes to be "well-known": we would then need to standardise a set of classes, e.g. 1 = best effort, 2 = controlled load, 3 = guaranteed (loose delay bound, high bandwidth), 4 = guaranteed (slightly tighter delay), etc. The choice of values to encode in such a table in end stations, in isolation from the network to which they are connected, is problematical: one approach could be to define one user_priority value per int-serv service and leave it at that (reserving the rest of the combinations for future traffic classes - there are sure to be plenty!).

We propose here a more flexible mapping: clients ask "the network" which user_priority traffic class to use for a given traffic flow, as categorised by its flow-spec and layer-2 endpoints.
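The ask-the-network model can be sketched as follows. This is a minimal illustration only: `ToyNetwork`, its admission policy, and the returned class value are all invented for this sketch and are not part of any specification.

```python
class ToyNetwork:
    """Trivial stand-in for the first network element downstream of the client."""
    def admit(self, flowspec, src_mac, dst_mac):
        # Invented policy: admit anything under 10 Mb/s and assign it
        # user_priority 4 (an arbitrary reserved-traffic class for the sketch).
        return (flowspec["rate_bps"] <= 10_000_000, 4)

def request_traffic_class(network, flowspec, src_mac, dst_mac):
    """One-pass query: the admission decision and the assigned class arrive together."""
    admitted, user_priority = network.admit(flowspec, src_mac, dst_mac)
    if not admitted:
        return None          # flow refused: send best effort or give up
    return user_priority     # tag all frames of this flow with this value

net = ToyNetwork()
assert request_traffic_class(net, {"rate_bps": 1_500_000}, "a", "b") == 4
assert request_traffic_class(net, {"rate_bps": 20_000_000}, "a", "b") is None
```

Because the class assignment rides on the admission control exchange, the client never caches a mapping that the network cannot later honour.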
The network provides a value back to the requester which is appropriate to the current network topology, load conditions, other admitted flows etc. The task of configuring switches with this mapping (e.g. through network management, a switch-switch protocol or via some network-wide QoS-mapping directory service) is an order of magnitude less complex than performing the same function in end stations. Also, when new services (or other network reconfigurations) are added to such a network, the network elements will typically be the ones to be upgraded with new queueing algorithms etc. and can be provided with new mappings at this time.

Given the need for a new session or "flow" requiring some QoS support, a client then needs answers to the following questions:

1. Which traffic class do I add this flow to?

The client needs to know how to label the packets of the flow as it places them into the network.

2. Who do I ask/tell?

The proposed model is that a client asks "the network" which user_priority traffic class to use for a given traffic flow. This has several benefits as compared to a model which allows clients to select a class for themselves.

3. How do I ask/tell them?

A request/response protocol is needed between client and network: in fact, the request can be piggy-backed onto an admission control request, and the response onto an admission control acknowledgment. This "one pass" assignment has the benefit of completing the admission control in a timely way and of reducing the exposure to changing conditions which could occur if clients cached the knowledge for extensive periods.

The network (i.e. the first network element encountered downstream from the client) must then answer the following questions:

1. Which traffic class do I add this flow to?
This is a packing problem, difficult to solve in general, but many simplifying assumptions can be made: presumably some simple form of allocation can be done without a more complex scheme able to dynamically shift flows around between classes.

2. Which traffic class has worst-case parameters which meet the needs of this flow?

This might be an ordering/comparison problem: which of two service classes is "better" than another? Again, we can make this tractable by observing that all of the current int-serv classes can be ranked (best effort <= Controlled Load <= Guaranteed Service) in a simple manner. If any classes are implemented in the future that cannot be simply ranked, then the issue can be finessed either by a priori knowledge about what classes are supported or by configuration.

It must then return the chosen user_priority value to the client.

Note that the client may be an end station, a router, or a first switch acting as a proxy for a client which does not participate in these protocols for whatever reason. Note also that a device, e.g. a server or router, may choose to implement both the "client" and the "network" portions of this model so that it can select its own user_priority values: such an implementation would, however, be discouraged unless the device really does have a close tie-in with the network topology and resource allocation policies, but it would work in some cases where there is known over-provisioning of resources.

7. Flow Identification

Some models for int-serv over lower layers treat layer-2 switches very much as a special case of routers: in particular, that switches along the data path will make packet handling decisions based on the RSVP flow and filter specifications and use them to classify the corresponding data packets.
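The difference in classification granularity can be sketched as follows (packet fields and values are invented for illustration): per-flow classification keeps state proportional to the number of flows, whereas user_priority classification is bounded by the eight available classes.

```python
def per_flow_key(pkt):
    """Router-style: classify on RSVP filter-spec-like fields."""
    return (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
            pkt["src_port"], pkt["dst_port"])

def aggregate_key(pkt):
    """Switch-style: classify on the 802.1p user_priority alone."""
    return pkt["user_priority"]

# 100 distinct flows, all carrying the same user_priority value.
flows = [{"src_ip": f"10.0.0.{i}", "dst_ip": "10.0.1.1", "proto": 17,
          "src_port": 5000 + i, "dst_port": 5004, "user_priority": 5}
         for i in range(100)]

assert len({per_flow_key(p) for p in flows}) == 100  # state grows per flow
assert len({aggregate_key(p) for p in flows}) == 1   # bounded by 8 classes
```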
However, filtering to the per-flow level becomes difficult with increasing switch speed: devices with such filtering capabilities are unlikely to have a very different implementation complexity from IP routers, and there already exist protocol specifications for those devices.

This document argues that "aggregated flow" identification based on user_priority is a useful intermediate point between no QoS and full router-type integrated services, and that this be the minimum flow classification capability required of switches.

8. Reserving Network Resources - Admission Control

So far we have not discussed admission control. In fact, without admission control it is possible to assemble a layer-2 LAN of some size capable of supporting real-time services, provided that the traffic fits within certain scaling constraints (relative link speeds, numbers of ports etc. - see below). This is not surprising, since it is possible to run a fair approximation to real-time services on small LANs today with no admission control or help from encoded priority bits.

As an example, imagine a campus network providing dedicated 10 Mb/s connections to each user. Each floor of each building supports up to 96 users, organised into groups of 24, with each group being supported by a 100 Mb/s downlink to a basement switch which concentrates 5 floors (20 x 100 Mb/s) and a data center (4 x 100 Mb/s) onto a 1 Gb/s link to an 8 Gb/s central campus switch, which in turn hooks 6 buildings together (with 2 x 1 Gb/s full-duplex links to support a corporate server farm). Such a network could support 1.5 Mb/s of voice/video from every user to any other user or (for half the population) to the server farm, provided the video ran high priority: this gives 3000 users, all with desktop video conferencing running alongside file transfer/email etc.
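A quick arithmetic check of the example above (a sketch using the stated figures only) shows why the high-priority video load fits comfortably at each aggregation point:

```python
VIDEO_BPS = 1.5e6                      # per-user voice/video load

users_per_group, group_uplink = 24, 100e6       # group -> basement switch
users_per_building, building_uplink = 5 * 96, 1e9  # building -> campus switch

group_load = users_per_group * VIDEO_BPS        # 36 Mb/s on a 100 Mb/s link
building_load = users_per_building * VIDEO_BPS  # 720 Mb/s on a 1 Gb/s link

assert group_load / group_uplink < 1       # 36% utilisation at the group uplink
assert building_load / building_uplink < 1 # 72% utilisation at the riser
```

So the video traffic alone leaves headroom at every hop, which is why priority queueing without admission control can already behave well here.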
In such a network, a discussion as to the best service policy to apply to high- and low-priority queues may prove academic: while it is true that "normal" traffic may be delayed by bunches of high-priority frames, queueing theory tells us that the average queue occupancy in the high-priority queue at any switch port will be somewhat less than 1; with real user behaviour (i.e. not all watching video conferences all the time) it should be far less. A cheaper alternative to buying equipment with a fancy queue service policy may be to buy equipment with more bandwidth and lower the average link utilisation by a few per cent.

In practice, a number of objections can be made to such a simple solution. There may be long-established, expensive equipment in the network which does not provide all the bandwidth required. There will be considerable concern over who is allowed to say what traffic is high priority. There may be a wish to give some form of "prioritised" service to crucial business applications, above that given to experimental video-conferencing: in this context, admission control needs to provide administrative control to some level, without making that control so elaborate to implement that it is simply rejected in favor of providing yet more bandwidth instead.

The proposed admission control mechanism requires a query-response interaction, with the network returning a "YES/NO" answer and, if successful, a user_priority value with which to tag the data frames of this flow.

The relevant int-serv specifications describe the parameters which need to be considered when making an admission control decision at each node in the network path between sender and receiver.
We discuss how to calculate these parameters for different network technologies below, but we do not specify admission control algorithms, or mechanisms for how to progress the admission control process across the network. The proposed IETF protocol for this purpose, the "Subnet Bandwidth Manager" (SBM), is defined in [10].

Where there are multiple mechanisms in use for allocating resources, e.g. some combination of SBM and network management, it will be necessary to ensure that network resources are partitioned amongst the different mechanisms in some way: this could be by configuration, or maybe by having the mechanisms allocate from a common resource pool within any device.

9. Mapping of integrated services to layer-2 in layer-3 devices

9.1 Layer-3 Client Model

We assume the same client model as int-serv and RSVP, where we use the term "client" to mean the entity handling QoS in the layer-3 device at each end of a layer-2 hop (e.g. end station, router). The sending client itself is responsible for local admission control and for scheduling packets onto its link in accordance with the service agreed. As with the current int-serv model, this involves per-flow scheduling (a.k.a. traffic shaping) in every such originating source.

The client runs an RSVP process which presents a session establishment interface to applications, signals over the network, programs a scheduler and classifier in the driver, and interfaces to a policy control module. In particular, RSVP also interfaces to a local admission control module: it is this entity that we focus on here.
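As a minimal sketch of such a local admission control module (the class name and the 90% headroom figure are invented assumptions, not taken from any specification):

```python
class LocalAdmissionControl:
    """Accounts for reserved bandwidth on the client's own outgoing link."""
    def __init__(self, link_bps, headroom=0.9):
        # Keep some slack below line rate for best-effort traffic (assumed 10%).
        self.capacity = link_bps * headroom
        self.reserved = 0.0

    def admit(self, rate_bps):
        """Admit the flow only if the link budget still covers it."""
        if self.reserved + rate_bps > self.capacity:
            return False
        self.reserved += rate_bps
        return True

lac = LocalAdmissionControl(10e6)   # 10 Mb/s Ethernet link: 9 Mb/s budget
assert lac.admit(1.5e6)             # first video flow fits
assert not lac.admit(9e6)           # this one would exceed the budget
```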
The following diagram is taken from the RSVP specification [4]:

   [Figure 1 - RSVP in Sending Hosts: the Application and the RSVP
   process interface to Policy Control and Admission Control modules;
   the application's data passes through a Classifier and Packet
   Scheduler onto the link.]

Note that we illustrate examples in this document using RSVP as the "upper-layer" signaling protocol, but there are no actual dependencies on this protocol: RSVP could be replaced by some other dynamic protocol, or else the requests could be made by network management or other policy entities.

9.2 Requests to layer-2 ISSLL

The local admission control entity within a client is responsible for mapping these layer-3 session-establishment requests into layer-2 language.

The upper-layer entity makes a request, in generalised terms, to ISSLL of the form:

   "May I reserve for traffic with <traffic characteristics> with
   <performance requirements> from <here> to <there> and how should
   I label it?"

where

   <traffic characteristics>  = Sender Tspec
                                (e.g. bandwidth, burstiness, MTU)
   <performance requirements> = FlowSpec
                                (e.g. latency, jitter bounds)
   <here>                     = IP address(es)
   <there>                    = IP address(es) - may be multicast

9.3 At the Layer-3 Sender

The ISSLL functionality in the sender is illustrated below, and the functions of the box labeled "SBM client" may be summarised as:

* maps the endpoints of the conversation to layer-2 addresses in the LAN, so that the client can figure out what traffic is really going where (probably makes reference to the ARP protocol cache for unicast, or an algorithmic mapping for multicast destinations).
* applies local admission control on the outgoing link and driver.
* formats an SBM request to the network with the mapped addresses and filter/flow specs.
* receives the response from the network and reports the YES/NO admission control answer back to the upper-layer entity, along with any negotiated modifications to the session parameters.
* saves any returned user_priority, to be associated with this session in an "802 header" table: this will be used when adding the layer-2 header before sending any future data packet belonging to this session. This table might, for example, be indexed by the RSVP flow identifier.

   [Figure 2 - ISSLL in End-station Sender: the SBM client takes input
   from IP and RSVP, consults an address mapping module and the "802
   header" table, applies Local Admission Control, and programs the
   Classifier and Packet Scheduler on the data path.]

9.4 At the Layer-3 Receiver

The ISSLL functionality in the receiver is a good deal simpler. It is summarised below and illustrated by the following picture:

* handles any received SBM protocol indications.
* applies local admission control to see if a request can be supported with appropriate local receive resources.
* passes indications up to RSVP if OK.
* accepts confirmations from RSVP and relays them back via SBM signaling towards the requester.
* may program a receive classifier and scheduler, if any is used, to identify traffic classes of received packets and accord them appropriate treatment, e.g.
reserve some buffers for particular traffic classes.
* programs the receiver to strip any 802 header information from received packets.

   [Figure 3 - ISSLL in End-station Receiver: the SBM client applies
   Local Admission Control and passes indications up to RSVP; received
   data passes through a Classifier and Packet Scheduler and an "802
   header" stripping module before being delivered to IP.]

10. Layer-2 Switch Functions

10.1 Switch Model

The model of layer-2 switch behaviour described here uses the terminology of the SBM protocol [10] as an example of an admission control protocol: the model is equally applicable when other mechanisms, e.g. static configuration or network management, are in use for admission control. We define the following entities within the switch:

* Local admission control - one of these on each port accounts for the available bandwidth on the link attached to that port. For half-duplex links, this involves taking account of the resources allocated to both transmit and receive flows. For full-duplex links, the input port accountant's task is trivial.

* Input SBM module - one instance on each port performs the "network" side of the signaling protocol for peering with clients or other switches. It also holds knowledge of the mappings of int-serv classes to user_priority.

* SBM propagation - relays requests that have passed admission control at the input port to the relevant output ports' SBM modules. This will require access to the switch's forwarding table (the layer-2 "routing table", cf. the RSVP model) and port spanning-tree states.
* Output SBM module - forwards requests to the next layer-2 or layer-3 network hop.

* Classifier, Queueing and Scheduler - these functions are basically as described by the Forwarding Process of IEEE 802.1p (see section 3.7 of [2]). The Classifier module identifies the relevant QoS information from incoming packets and uses this, together with the normal bridge forwarding database, to decide to which output queue of which output port to enqueue the packet. In Class I switches, this information is the "regenerated user_priority" parameter, which has already been decoded by the receiving MAC service and potentially re-mapped by the 802.1p forwarding process (see the description in section 3.7.3 of [2]). This does not preclude more sophisticated classification rules, which may be applied in more complex Class III switches, e.g. matching on individual int-serv flows.

  The Queueing and Scheduler module holds the output queues for ports and provides the algorithm for servicing the queues for transmission onto the output link, in order to provide the promised int-serv service. Switches will implement one or more output queues per port, and all will implement at least a basic strict-priority dequeueing algorithm as their default, in accordance with 802.1p.

* Ingress traffic class mapper and policing - as described in 802.1p section 3.7. This optional module may check whether the data within traffic classes conform to the patterns currently agreed: switches may police this and discard or re-map packets. The default behaviour is to pass things through unchanged.

* Egress traffic class mapper - as described in 802.1p section 3.7. This optional module may apply re-mapping of traffic classes, e.g. on a per-output-port basis. The default behaviour is to pass things through unchanged.
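The way the per-port accountants combine for a single request can be sketched as follows. All names are hypothetical, and a real implementation would also roll back earlier reservations when a later port refuses:

```python
class PortAccountant:
    """Per-port local admission control: tracks reserved link bandwidth."""
    def __init__(self, link_bps):
        self.capacity, self.reserved = link_bps, 0.0

    def try_reserve(self, rate_bps):
        if self.reserved + rate_bps > self.capacity:
            return False
        self.reserved += rate_bps
        return True

def switch_admit(ingress, egress_ports, rate_bps):
    """Admit only if the input port and every selected output port agree."""
    if not ingress.try_reserve(rate_bps):
        return False
    # Each outbound link gets a chance to veto (no rollback in this sketch).
    return all(p.try_reserve(rate_bps) for p in egress_ports)

in_port = PortAccountant(100e6)
out_a, out_b = PortAccountant(100e6), PortAccountant(10e6)
assert switch_admit(in_port, [out_a, out_b], 5e6)      # fits everywhere
assert not switch_admit(in_port, [out_a, out_b], 8e6)  # 10 Mb/s port vetoes
```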
These entities are shown in the following diagram, which is a superset of the IEEE 802.1D/802.1p bridge model:

   [Figure 4 - ISSLL in Switches: per-port IN and OUT SBM modules,
   connected by an SBM propagation module, each consult a Local
   Admission Control entity; on the data path, an ingress traffic class
   mapper and policer feeds the Filtering Database and the Queueing and
   Packet Scheduler, followed by an egress traffic class mapper.]

10.2 Admission Control

On reception of an admission control request, a switch performs the following actions, again using SBM as an example. The behaviour differs depending on whether or not the "Designated SBM" (DSBM) for this segment is within this switch - see [10] for a more detailed specification of the DSBM/SBM actions:

* If the ingress SBM is the "Designated SBM" for this link/segment, it translates any received user_priority, or else selects a layer-2 traffic class which appears compatible with the request and whose use does not violate any administrative policies in force. In effect, it matches up the requested service with those available in each of the user_priority classes and chooses the "best" one. It ensures that, if this reservation is successful, the selected value is passed back to the client.
* The ingress DSBM observes the current state of allocation of resources on the input port/link and then determines whether the new resource allocation from the mapped traffic class would be excessive. The request is passed to the reservation propagator if accepted so far.
* If the ingress SBM is not the "Designated SBM" for this link/segment, it passes the request on directly to the reservation propagator.
* The reservation propagator relays the request to the bandwidth accountants on each of the switch's outbound links to which this reservation would apply (an implied interface to the routing/forwarding database).
* The egress bandwidth accountant observes the current state of allocation of queueing resources on its outbound port, and of bandwidth on the link itself, and determines whether the new allocation would be excessive. Note that this is only the local decision of this switch hop: each further layer-2 hop through the network gets a chance to veto the request as it passes along.
* The request, if accepted by this switch, is then passed on down the line on each output link selected. Any user_priority described in the forwarded request must be translated according to any egress mapping table.
* If accepted, the switch must notify the client of the user_priority to use for packets belonging to this flow. Note that this is a "provisional YES" - we assume an optimistic approach here: later switches can still say "NO".
* If this switch wishes to reject the request, it can do so by notifying the original client (by means of its layer-2 address).

11. Mappings from int-serv service models to IEEE 802

It is assumed that admission control will be applied when deciding whether or not to admit a new flow through a given network element, and that a device sending onto a link will be proxying the parameters and admission control decisions on behalf of that link: this process will require the device to be able to determine (by estimation, measurement or calculation) several parameters. It is assumed that details of the potential flow are provided to the device by some means (e.g. a signaling protocol, or network management). The service definition specifications themselves provide some implementation guidance as to how to calculate some of these quantities.

The accuracy of calculation of these parameters may not be very critical: indeed, it is an assumption of this model's use with relatively simple Class I switches that they merely provide values to describe the device and admit flows conservatively.

11.1 General characterisation parameters

There are some general parameters that a device will need to use and/or supply for all service types:

* Ingress link.
* Egress links and their MTUs, framing overheads and minimum packet sizes (see the media-specific information presented above).
* Available path bandwidth: updated hop-by-hop by any device along the path of the flow.
* Minimum latency.

11.2 Parameters to implement Guaranteed Service

A network element must be able to determine the following parameters:

* Constant delay bound through this device (in addition to any value provided by "minimum latency" above) and up to the receiver at the next network element for the packets of this flow, if it were to be admitted: this would include any access latency bound to the outgoing link as well as propagation delay across that link.
* Rate-proportional delay bound through this device and up to the receiver at the next network element for the packets of this flow, if it were to be admitted.
* Receive resources that would need to be associated with this flow (e.g. buffering, bandwidth) if it were to be admitted and not suffer packet loss while keeping within its supplied Tspec/Rspec.
* Transmit resources that would need to be associated with this flow (e.g. buffering, bandwidth, constant- and rate-proportional delay bounds) if it were to be admitted.

11.3 Parameters to implement Controlled Load

A network element must be able to determine the following parameters, which can be extracted from [8]:

* Receive resources that would need to be associated with this flow (e.g. buffering) if it were to be admitted.
* Transmit resources that would need to be associated with this flow (e.g. buffering) if it were to be admitted.

11.4 Parameters to implement Best Effort

For a network element to implement best-effort service there are no explicit parameters that need to be characterised.

11.5 Mapping to IEEE 802 user_priority

There are many options available for mapping aggregations of flows described by int-serv service models (Best Effort, Controlled Load and Guaranteed are the services considered here) onto user_priority classes. There currently exists very little practical experience with particular mappings to help determine the "best" mapping. In that spirit, the following options are presented in order to stimulate experimentation in this area. Note that this does not dictate what mechanisms/algorithms a network element (e.g. an Ethernet switch) needs to perform to implement these mappings: this is an implementation choice and does not matter so long as the requirements for the particular service model are met.
Having said that, we do explore below the ability of a switch implementing strict priority queueing to support some or all of the service types under discussion: this is worthwhile because it is likely to be the most widely deployed dequeueing algorithm in simple switches, as it is the default specified in 802.1p.

In order to reduce the administrative problems, such a mapping table is held by *switches* (and routers if desired), but generally not by end-station hosts, and is a read-write table. The values proposed below are defaults and can be overridden by management control, so long as all switches agree to some extent (the required level of agreement requires further analysis).

It is possible that some form of network-wide lookup service could be implemented that serviced requests from clients, e.g. traffic_class = getQoSbyName("H.323 video"), and notified switches of what sorts of traffic categories they were likely to encounter and how to allocate those requests into traffic classes: such mechanisms are for further study.

Example: A Simple Scheme

   user_priority  Service

   0              "less than" Best Effort
   1              Best Effort
   2              reserved
   3              reserved
   4              Controlled Load
   5              Guaranteed Service, 100ms bound
   6              Guaranteed Service, 10ms bound
   7              reserved

   Table 1 - Example user_priority to service mappings

In this proposal, all traffic that uses the Controlled Load service is mapped to a single 802.1p user_priority, whilst that for Guaranteed Service is placed into one of two user_priority classes with different delay bounds. Unreserved best-effort traffic is mapped to another. The use of classes 4, 5 and 6 for Controlled Load and Guaranteed Service is somewhat arbitrary as long as they are increasing.
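Table 1 can be captured directly as a lookup table; a brief sketch follows (the service labels are informal names invented here, and the values are the overridable defaults from the table):

```python
# Default user_priority assignments from Table 1.
USER_PRIORITY = {
    "less-than-best-effort": 0,
    "best-effort": 1,
    "controlled-load": 4,
    "guaranteed-100ms": 5,
    "guaranteed-10ms": 6,
}
RESERVED_CLASSES = {2, 3, 7}   # left free for future traffic classes

# The scheme only requires that the classes be increasing in "goodness".
assert USER_PRIORITY["controlled-load"] > USER_PRIORITY["best-effort"]
assert USER_PRIORITY["guaranteed-100ms"] > USER_PRIORITY["controlled-load"]
assert USER_PRIORITY["guaranteed-10ms"] > USER_PRIORITY["guaranteed-100ms"]
```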
Any two classes 1125 greater than Best Effort can be used as long as GS is "greater" than CL 1126 although those proposed here have the advantage that, for transit 1127 through 802.1p switches with only two-level strict priority queuing, 1128 they both get "high priority" treatment (the current 802.1p default 1129 split is 0-3 and 4-7 for a device with 2 queues). The choice of delay 1130 bound is also arbitrary but potentially very significant: this can lead 1131 to a much more efficient allocation of resources as well as greater 1132 (though still not very good) isolation between flows. 1134 The "less than best effort" class might be useful for devices that wish 1135 to tag packets that are exceeding a committed network capacity and can 1136 be optionally discarded by a downstream device. Note, this is not 1137 *required* by any current int-serv models but is under study. 1139 The advantage to this approach is that it puts some real delay bounds on 1140 the Guaranteed Service without adding any additional complexity to the 1141 other services. It still ignores the amount of *bandwidth* available 1142 for each class. This should behave reasonably well as long as all 1143 traffic for CL and GS flows does not exceed any resource capacities in 1144 the device. Some isolation between very delay-critical GS and less 1145 critical GS flows is provided but there is still an overall assumption 1146 that flows will in general be well- behaved. In addition, this mapping 1147 still leaves room for future service models. 1149 Expanding the number of classes for CL service is not as appealing since 1150 there is no need to map to a particular delay bound. There may be cases 1151 where an administrator might map CL onto more classes for particular 1152 bandwidths or policy levels. It may also be desirable to further 1153 subdivide CL traffic in cases where the it is frequently non-conformant 1154 for certain applications. 1156 12. 
Network Topology Scenarios

12.1 Switched networks using priority scheduling algorithms

In general, the int-serv standards work has tried to avoid any
specification of scheduling algorithms, instead relying on implementers
to deduce appropriate algorithms from the service definitions and on
users to apply measurable benchmarks to check for conformance.
However, since one standards body has chosen to specify a single
default scheduling algorithm for switches [2], it seems appropriate to
examine, to some degree, how well this "implementation" might actually
support some or all of the int-serv services.

If the mappings of Table 1 above are applied in a switch implementing
strict priority queueing between the 8 traffic classes (7 = highest)
then the result will be that all Guaranteed Service packets will be
transmitted in preference to any other service. Controlled Load
packets will be transmitted next, with everything else waiting until
both of these queues are empty. If the admission control algorithms in
use on the switch ensure that the sum of the "promised" bandwidth of
all of the GS and CL sessions is never allowed to exceed the available
link bandwidth then the promised service can be maintained.

12.2 Full-duplex switched networks

We have up to now ignored the MAC access protocol. On a full-duplex
switched LAN (of either Ethernet or Token-Ring types - the MAC
algorithm is, by definition, unimportant) this can be factored in to
the characterisation parameters advertised by the device since the
access latency is well controlled (jitter = one largest packet time).
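On a full-duplex link the worst-case access latency is simply the time
to serialise one maximum-length packet. As an illustrative check of
the Ethernet figures in Table 2 (a sketch only: the function name and
the ~1500-byte maximum frame size are assumptions for the example, not
values taken from this document):

```python
def packet_time_s(max_pkt_bytes, speed_bps):
    """Worst-case full-duplex access latency: one largest packet time."""
    return max_pkt_bytes * 8 / speed_bps

# Ethernet with a ~1500-byte maximum frame:
t_10m = packet_time_s(1500, 10_000_000)      # 1.2e-3 s  (1.2ms)
t_100m = packet_time_s(1500, 100_000_000)    # 1.2e-4 s  (120us)
t_1g = packet_time_s(1500, 1_000_000_000)    # 1.2e-5 s  (12us)
```

These values match the "Max Access Latency" column for Ethernet in
Table 2; the speed-of-light link delays quoted below are small in
comparison.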
Some example characteristics (approximate):

   Type             Speed     Max Pkt   Max Access
                              Length    Latency

   Ethernet         10Mbps    1.2ms     1.2ms
                    100Mbps   120us     120us
                    1Gbps     12us      12us
   Token-Ring       4Mbps     9ms       9ms
                    16Mbps    9ms       9ms
   FDDI             100Mbps   360us     8.4ms
   Demand-Priority  100Mbps   120us     253us

        Table 2 - Full-duplex switched media access latency

These delays should also be considered in the context of
speed-of-light delays of e.g. ~400ns for typical 100m UTP links and
~7us for typical 2km multimode fibre links.

Therefore we see full-duplex switched network topologies as offering
good QoS capabilities for both Controlled Load and Guaranteed Service
when supported by suitable queueing strategies in the switch nodes.

12.3 Shared-media Ethernet networks

We have not mentioned the difficulty of dealing with allocation on a
single shared CSMA/CD segment: as soon as any CSMA/CD algorithm is
introduced, the ability to provide any form of Guaranteed Service is
seriously compromised in the absence of any tight coupling between the
multiple senders on the link. There are a number of reasons for not
offering a better solution for this issue.

Firstly, we do not believe this is a truly solvable problem: it would
seem to require a new MAC protocol. There have been proposals for
enhancements to the MAC layer protocols, e.g. BLAM and enhanced
flow-control in IEEE 802.3; IEEE 802.1 has examined research showing
disappointing simulation results for performance guarantees on shared
CSMA/CD Ethernet without MAC enhancements. However, any solution
involving a new "software MAC" running above the traditional 802.3 MAC
or other proprietary MAC protocols is clearly outside the scope of the
work of the ISSLL WG and this document.

Secondly, we are not convinced that it is really an interesting
problem.
While not everyone in the world is buying desktop switches today, and
there will be end stations living on repeated segments for some time
to come, the number of switches is going up and the number of stations
on repeated segments is going down. This trend is proceeding to the
point that we may be happy with a solution which assumes that any
network conversation requiring resource reservations will take place
through at least one switch (be it layer-2 or layer-3). Put another
way, the easiest QoS upgrade to a layer-2 network is to install
segment switching: only when this has been done is it worthwhile to
investigate more complex solutions involving admission control.

Thirdly, in the core of the network (as opposed to at the edges),
there does not seem to be wide deployment of repeated segments as
opposed to switched solutions. There may be special circumstances in
the future (e.g. Gigabit buffered repeaters) but these have differing
characteristics to existing CSMA/CD repeaters anyway.

   Type             Speed     Max Pkt   Max Access
                              Length    Latency

   Ethernet         10Mbps    1.2ms     unbounded
                    100Mbps   120us     unbounded
                    1Gbps     12us      unbounded

           Table 3 - Shared Ethernet media access latency

12.4 Half-duplex switched Ethernet networks

Many of the same arguments for sub-optimal support of Guaranteed
Service apply to half-duplex switched Ethernet as to shared media: in
essence, this topology is a medium that *is* shared between at least
two senders contending for each packet transmission opportunity.
Unless these are tightly coupled and cooperative, there is always the
chance that the best-effort traffic of one will interfere with the
important traffic of the other. Such coupling would seem to need some
form of modification to the MAC protocol (see above).
Notwithstanding the above, half-duplex switched topologies do seem to
offer the chance to provide Controlled Load service: with the
knowledge that there are only a small, limited number (e.g. two) of
potential senders that are both using prioritisation for their CL
traffic over best effort (with admission control for those CL flows
based on the knowledge of the number of potential senders), the media
access characteristics, whilst not deterministic in the true
mathematical sense, are somewhat predictable. This is probably a close
enough approximation to CL to be useful.

   Type             Speed     Max Pkt   Max Access
                              Length    Latency

   Ethernet         10Mbps    1.2ms     unbounded
                    100Mbps   120us     unbounded
                    1Gbps     12us      unbounded

    Table 4 - Half-duplex switched Ethernet media access latency

12.5 Half-duplex and shared Token Ring networks

In a shared Token Ring network, the network access time for high
priority traffic at any station is bounded and is given by
(N+1)*THTmax, where N is the number of stations sending high priority
traffic and THTmax is the maximum token holding time [14]. This
assumes that network adapters have priority queues so that reservation
of the token is done for traffic with the highest priority currently
queued in the adapter. It is easy to see that access times can be
improved by reducing N or THTmax. The recommended default for THTmax
is 10ms [6]. N is an integer from 2 to 256 for a shared ring and 2 for
a switched half-duplex topology. A similar analysis applies for FDDI.
Using default values gives:

   Type        Speed              Max Pkt   Max Access
                                  Length    Latency

   Token-Ring  4/16Mbps shared    9ms       2570ms
               4/16Mbps switched  9ms       30ms
   FDDI        100Mbps            360us     8ms

   Table 5 - Half-duplex and shared Token-Ring media access latency

Given that access time is bounded, it is possible to provide an upper
bound for end-to-end delays, as required by Guaranteed Service,
assuming that traffic of this class uses the highest priority
allowable for user traffic. The actual number of stations that send
traffic mapped into the same traffic class as GS may vary over time
but, from an admission control standpoint, this value is needed a
priori. The admission control entity must therefore use a fixed value
for N, which may be the total number of stations on the ring or some
lower value if it is desired to keep the offered delay guarantees
smaller. If the value of N used is lower than the total number of
stations on the ring, admission control must ensure that the number of
stations sending high priority traffic never exceeds this number. This
approach allows admission control to estimate worst case access delays
assuming that all of the N stations are sending high priority data
even though, in most cases, this will mean that delays are
significantly overestimated.

Assuming that Controlled Load flows use a traffic class lower than
that used by GS, no upper bound on access latency can be provided for
CL flows. However, CL flows will receive better service than best
effort flows.

Note that, on many existing shared token rings, bridges will transmit
frames using an Access Priority (see section 4.3) value of 4,
irrespective of the user_priority carried in the frame control field
of the frame. Therefore, existing bridges would need to be
reconfigured or modified before the above access time bounds can
actually be used.
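The Token-Ring bounds in Table 5 follow directly from the (N+1)*THTmax
formula with the default THTmax of 10ms. A minimal sketch of the
worst-case calculation and the fixed-N admission check described above
(the function names are illustrative only, not from any standard API):

```python
THT_MAX_MS = 10  # recommended default token holding time [6]

def worst_case_access_ms(n_stations, tht_max_ms=THT_MAX_MS):
    """Bounded high-priority Token-Ring access time: (N+1)*THTmax."""
    return (n_stations + 1) * tht_max_ms

def admit_high_priority(n_active, n_fixed):
    """Admission control must pick N a priori: admit another
    high-priority sender only while the active count stays below the
    fixed N used when the delay bound was advertised."""
    return n_active < n_fixed

worst_case_access_ms(256)  # 2570 ms: shared ring, all 256 stations
worst_case_access_ms(2)    # 30 ms: switched half-duplex (Table 5)
```

Choosing a fixed N below the ring's station count tightens the
advertised bound at the cost of refusing senders once n_active reaches
n_fixed.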
12.6 Half-duplex and shared Demand-Priority networks

In 802.12 networks, communication between end-nodes and hubs, and
between the hubs themselves, is based on the exchange of link control
signals. These signals are used to control the shared medium access.
If a hub, for example, receives a high-priority request while another
hub is in the process of serving normal-priority requests, then the
service of the latter hub can effectively be pre-empted in order to
serve the high-priority request first. After the network has processed
all high-priority requests, it resumes the normal-priority service at
the point in the network at which it was interrupted.

The time needed to pre-empt normal-priority network service (the
high-priority network access time) is bounded: the bound depends on
the physical layer and on the topology of the shared network. The
physical layer has a significant impact when operating in half-duplex
mode, e.g. as used across unshielded twisted-pair cabling (UTP) links,
because link control signals cannot be exchanged while a packet is
transmitted over the link. The network topology also has to be
considered since, in larger shared networks, the link control signals
must potentially traverse several links (and hubs) before they can
reach the hub which possesses the network control. This may delay the
pre-emption of the normal-priority service and hence increase the
upper bound that may be guaranteed.

Upper bounds on the high-priority access time are given below for a
UTP physical layer and a cable length of 100m between all end-nodes
and hubs, using a maximum propagation delay of 570ns as defined in
[15]. These values consider the worst case signaling overhead and
assume the transmission of maximum-sized normal-priority data packets
while the normal-priority service is being pre-empted.
   Type             Speed                   Max Pkt   Max Access
                                            Length    Latency

   Demand Priority  100Mbps, 802.3pkt, UTP  120us     253us
                             802.5pkt, UTP  360us     733us

   Table 6 - Half-duplex switched Demand-Priority UTP access latency

Shared 802.12 topologies can be classified using the hub cascading
level "N". The simplest topology is the single hub network (N = 1).
For a UTP physical layer, a maximum cascading level of N = 5 is
supported by the standard. Large shared networks with many hundreds of
nodes can, however, already be built with a level 2 topology. The
bandwidth manager could be informed about the actual cascading level
by using network management mechanisms and use this information in its
admission control algorithms.

   Type             Speed              Max Pkt   Max Access   Topology
                                       Length    Latency

   Demand Priority  100Mbps, 802.3pkt  120us     262us        N=1
                                       120us     554us        N=2
                                       120us     878us        N=3
                                       120us     1.24ms       N=4
                                       120us     1.63ms       N=5

   Demand Priority  100Mbps, 802.5pkt  360us     722us        N=1
                                       360us     1.41ms       N=2
                                       360us     2.32ms       N=3
                                       360us     3.16ms       N=4
                                       360us     4.03ms       N=5

          Table 7 - Shared Demand-Priority UTP access latency

In contrast to UTP, the fibre-optic physical layer operates in dual
simplex mode. Upper bounds for the high-priority access time are given
below for 2km multimode fibre links with a propagation delay of 10us.

   Type             Speed                     Max Pkt   Max Access
                                              Length    Latency

   Demand Priority  100Mbps, 802.3pkt, Fibre  120us     139us
                             802.5pkt, Fibre  360us     379us

  Table 8 - Half-duplex switched Demand-Priority Fibre access latency

For shared media with distances of 2km between all end-nodes and hubs,
the 802.12 standard allows a maximum cascading level of 2. Higher
levels of cascaded topologies are supported but require a reduction of
the distances [15].
   Type             Speed              Max Pkt   Max Access   Topology
                                       Length    Latency

   Demand Priority  100Mbps, 802.3pkt  120us     160us        N=1
                                       120us     202us        N=2

   Demand Priority  100Mbps, 802.5pkt  360us     400us        N=1
                                       360us     682us        N=2

         Table 9 - Shared Demand-Priority Fibre access latency

The bounded access delay and deterministic network access allow the
support of the service commitments required for Guaranteed Service and
Controlled Load, even on shared-media topologies. The support of just
two priority levels in 802.12, however, limits the number of services
that can simultaneously be implemented across the network.

13. Signaling protocol

The mechanisms described in this document make use of a signaling
protocol for devices to communicate their admission control requests
across the network: the service definitions to be provided by such a
protocol, e.g. [10], are described below. We illustrate the primitives
and information that need to be exchanged with such a signaling
protocol entity - in all these examples, appropriate delete/cleanup
mechanisms will also have to be provided for when sessions are torn
down.

13.1 Client service definitions

The following interfaces can be identified from Figures 2 and 3:

* SBM <-> Address mapping

This is a simple lookup function which may cause ARP protocol
interactions, may be just a lookup of an existing ARP cache entry, or
may be an algorithmic mapping. The layer-2 addresses are needed by SBM
for inclusion in its signaling messages to/from switches, which avoids
the switches having to perform the mapping and, hence, have knowledge
of layer-3 information for the complete subnet:

   l2_addr = map_address( ip_addr )

* SBM <-> Session/802 header

This is for notifying the transmit path of how to add layer-2 header
information, e.g.
user_priority values to the traffic of each outgoing flow: the
transmit path will provide the user_priority value when it requests a
MAC-layer transmit operation for each packet (user_priority is one of
the parameters passed in the packet transmit primitive defined by the
IEEE 802 service model):

   bind_l2_header( flow_id, user_priority )

* SBM <-> Classifier/Scheduler

This is for notifying the transmit classifier/scheduler of any
additional layer-2 information associated with scheduling the
transmission of a flow's packets. This primitive may be unused in some
implementations, or it may be used, for example, to provide
information to a transmit scheduler that is performing
per-traffic_class scheduling in addition to the per-flow scheduling
required by int-serv: the l2_header may be a pattern (additional to
the FilterSpec) to be used to identify the flow's traffic.

   bind_l2schedulerinfo( flow_id, l2_header, traffic_class )

* SBM <-> Local Admission Control

For applying local admission control for a session, e.g. is there
enough transmit bandwidth still uncommitted for this potential new
session? Are there sufficient receive buffers? This should commit the
necessary resources if OK: it will be necessary to release these
resources at a later stage if the session setup process fails. This
call would be made by a segment's Designated SBM, for example:

   status = admit_l2session( flow_id, Tspec, FlowSpec )

* SBM <-> RSVP - this is outlined above in section 9.2 and fully
described in [10].

* Management Interfaces

Some or all of the modules described by this model will also require
configuration management: it is expected that details of the
manageable objects will be specified by future work in the ISSLL WG.
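Taken together, a host might sequence these primitives roughly as
follows when setting up one flow. This is an illustrative sketch only:
the primitive names come from the definitions above, but the Python
signatures, stub behaviour, addresses and bandwidth figures are
invented for the example.

```python
def map_address(ip_addr):
    # Stub: a real implementation would consult an ARP cache or use an
    # algorithmic IP -> MAC mapping.
    return {"192.0.2.1": "00:00:0c:00:00:01"}.get(ip_addr)

COMMITTED_BPS = 0          # transmit bandwidth already promised
LINK_BPS = 10_000_000      # assumed 10Mbps link

def admit_l2session(flow_id, tspec_bps, flowspec):
    # Stub local admission control: commit resources only if the sum
    # of promised bandwidth stays within the link capacity (cf. 12.1).
    global COMMITTED_BPS
    if COMMITTED_BPS + tspec_bps > LINK_BPS:
        return False
    COMMITTED_BPS += tspec_bps
    return True

def bind_l2_header(flow_id, user_priority):
    # Stub: record the user_priority the transmit path should place
    # in the 802 header of this flow's packets.
    return (flow_id, user_priority)

# Setup for one Controlled Load flow (user_priority 4 per Table 1):
l2_addr = map_address("192.0.2.1")
if admit_l2session(flow_id=1, tspec_bps=2_000_000, flowspec="CL"):
    bind_l2_header(flow_id=1, user_priority=4)
```

If setup fails at any step, the committed resources would have to be
released again, as noted above for session teardown.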
13.2 Switch service definitions

The following interfaces are identified from Figure 4:

* SBM <-> Classifier

This is for notifying the receive classifier of how to match up
incoming layer-2 information with the associated traffic class: it may
in some cases consist of a set of read-only default mappings:

   bind_l2classifierinfo( flow_id, l2_header, traffic_class )

* SBM <-> Queue and Packet Scheduler

This is for notifying the transmit scheduler of additional layer-2
information associated with a given traffic class (it may be unused in
some cases - see the discussion in the previous section):

   bind_l2schedulerinfo( flow_id, l2_header, traffic_class )

* SBM <-> Local Admission Control

As for the host above.

* SBM <-> Traffic Class Map and Police

Optional configuration of any user_priority remapping that might be
implemented on ingress to and egress from the ports of a switch (note
that, for Class I switches, it is likely that these mappings will have
to be consistent across all ports):

   bind_l2ingressprimap( inport, in_user_pri, internal_priority )
   bind_l2egressprimap( outport, internal_priority, out_user_pri )

Optional configuration of any layer-2 policing function to be applied
on a per-class basis to traffic matching the l2_header.
If the switch is capable of per-flow policing then existing
int-serv/RSVP models will provide a service definition for that
configuration:

   bind_l2policing( flow_id, l2_header, Tspec, FlowSpec )

* SBM <-> Filtering Database

SBM propagation rules need access to the layer-2 forwarding database
to determine where to forward SBM messages (analogous to the RSRR
interface in L3 RSVP):

   output_portlist = lookup_l2dest( l2_addr )

* Management Interfaces

Some or all of the modules described by this model will also require
configuration management: it is expected that details of the
manageable objects will be specified by future work in the ISSLL WG.

14. Compatibility and Interoperability with existing equipment

Switches using layer-2-only standards (e.g. 802.1p) will have to
cooperate with routers and layer-3 switches. Wide deployment of such
802.1p switches will occur in a number of roles in the network:
"desktop switches" provide dedicated 10/100Mbps links to end stations
and high speed core switches will act as central campus switching
points for layer-3 devices. Layer-2 devices will have to operate in
all of the following scenarios:

* every device along a network path is layer-3 capable and intrusive
  into the full data stream

* only the edge devices are pure layer-2

* every alternate device lacks layer-3 functionality

* most devices lack layer-3 functionality except for some key control
  points such as router firewalls, for example

Of course, where int-serv flows pass through equipment which is
ignorant of priority queueing and which places all packets through the
same queueing/overload-dropping path, it is obvious that some of the
characteristics of the flow get more difficult to support.
Suitable courses of action in the cases where sufficient bandwidth or
buffering is not available are of the form:

* buy more (and bigger) routers
* buy more capable switches
* rearrange the network topology: 802.1Q VLANs [11] may help here
* buy more bandwidth

It would also be possible to pass more information between switches
about the capabilities of their neighbours and to route around
non-QoS-capable switches: such methods are for further study.

15. Justification

An obvious comment is that this is all too complex: it is what RSVP is
doing already, so why do we think we can do better by reinventing the
solution to this problem at layer-2?

The key is that there are a number of simple layer-2 scenarios that
cover a considerable proportion of the real QoS problems that will
occur, and a solution that covers nearly all of the problems at
significantly lower cost is beneficial: full RSVP/int-serv with
per-flow queueing in strategically-positioned high-function switches
or routers may be needed to completely solve all issues, but devices
implementing the architecture described in this document will allow a
significantly simpler network.

16. References

[1]  ISO/IEC 10038, ANSI/IEEE Std 802.1D-1993, "MAC Bridges"

[2]  "Supplement to MAC Bridges: Traffic Class Expediting and Dynamic
     Multicast Filtering", IEEE P802.1p/D6, May 1997

[3]  "Integrated Services in the Internet Architecture: an Overview",
     RFC 1633, June 1994

[4]  "Resource Reservation Protocol (RSVP) - Version 1 Functional
     Specification", Internet Draft, June 1997

[5]  "Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
     Access Method and Physical Layer Specifications", ANSI/IEEE Std
     802.3-1985
[6]  "Token-Ring Access Method and Physical Layer Specifications",
     ANSI/IEEE Std 802.5-1995

[7]  "A Framework for Providing Integrated Services Over Shared and
     Switched LAN Technologies", Internet Draft, May 1997

[8]  "Specification of the Controlled-Load Network Element Service",
     Internet Draft, May 1997

[9]  "Specification of Guaranteed Quality of Service", Internet Draft,
     February 1997

[10] "SBM (Subnet Bandwidth Manager): A Proposal for Admission Control
     over Ethernet", Internet Draft, July 1997

[11] "Draft Standard for Virtual Bridged Local Area Networks",
     IEEE P802.1Q/D6, May 1997

[12] "General Characterization Parameters for Integrated Service
     Network Elements", Internet Draft, July 1997

[13] "A Standard for the Transmission of IP Datagrams over IEEE 802
     Networks", RFC 1042, February 1988

[14] C. Bisdikian, B. V. Patel, F. Schaffa, and M. Willebeek-LeMair,
     "The Use of Priorities on Token-Ring Networks for Multimedia
     Traffic", IEEE Network, Nov/Dec 1995

[15] "Demand Priority Access Method, Physical Layer and Repeater
     Specification for 100Mbit/s", IEEE Std 802.12-1995

[16] "Fiber Distributed Data Interface", ANSI Std X3.139-1987

17. Security Considerations

Implementation of the model described in this memo creates no known
new avenues for malicious attack on the network infrastructure,
although readers are referred to section 2.8 of the RSVP specification
for a discussion of the impact of the use of admission control
signaling protocols on network security.

18. Acknowledgments

This document draws heavily on the work of the ISSLL WG of the IETF
and the IEEE P802.1 Interworking Task Group. In particular, it relies
on previous work on Token-Ring/802.5 from Anoop Ghanwani, Wayne Pace
and Vijay Srinivasan, and on Demand-Priority/802.12 from Peter Kim.

19.
Authors' addresses

   Mick Seaman
   3Com Corp.
   5400 Bayfront Plaza
   Santa Clara CA 95052-8145
   USA
   +1 (408) 764 5000
   mick_seaman@3com.com

   Andrew Smith
   Extreme Networks
   10460 Bandley Drive
   Cupertino CA 95014
   USA
   +1 (408) 863 2821
   andrew@extremenetworks.com

   Eric Crawley
   Gigapacket Networks
   25 Porter Rd.
   Littleton MA 01460
   USA
   +1 (508) 486 0665
   esc@gigapacket.com