Internet Engineering Task Force                          Y. Bernet
Diffserv Working Group                                   Microsoft
INTERNET-DRAFT                                           S. Blake
Expires May 2001                                         Ericsson
draft-ietf-diffserv-model-05.txt                         D. Grossman
                                                         Motorola
                                                         A. Smith
                                                         Allegro Networks
                                                         November 2000

          An Informal Management Model for Diffserv Routers

          ***** Preliminary Authors' Review DRAFT *****

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC 2026. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups. Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft
Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

This document is a product of the IETF's Differentiated Services working
group. Comments should be addressed to the WG's mailing list at
diffserv@ietf.org. The charter for Differentiated Services may be found
at http://www.ietf.org/html.charters/diffserv-charter.html

Copyright (C) The Internet Society (2000). All Rights Reserved.

Distribution of this memo is unlimited.

Abstract

This document proposes an informal management model of Differentiated
Services (Diffserv) routers for use in their management and
configuration. This model defines functional datapath elements (e.g.
classifiers, meters, actions (such as marking, absolute dropping,
counting and multiplexing), algorithmic droppers, queues and
schedulers).
It describes possible configuration parameters for these elements and
how they might be interconnected to realize the range of traffic
conditioning and per-hop behavior (PHB) functionalities described in the
Diffserv Architecture [DSARCH].

The model is intended to be abstract and capable of representing the
configuration parameters important to Diffserv functionality for a
variety of specific router implementations. It is not intended as a
guide to system implementation nor as a formal modelling description.
This model serves as the rationale for the design of an SNMP MIB [DSMIB]
and for other configuration interfaces (e.g. other policy-management
protocols) and, possibly, more detailed formal models (e.g.
[QOSDEVMOD]): these should all be consistent with this model.

1. Introduction

Differentiated Services (Diffserv) [DSARCH] is a set of technologies
which allow network service providers to offer services with different
kinds of network quality-of-service (QoS) objectives to different
customers and their traffic streams. This document uses terminology
defined in [DSARCH] and other work-in-progress from the IETF's Diffserv
working group (some of these definitions are included here in Section 2
for completeness).

The premise of Diffserv networks is that routers within the core of the
network handle packets in different traffic streams by forwarding them
using different per-hop behaviors (PHBs). The PHB to be applied is
indicated by a Diffserv codepoint (DSCP) in the IP header of each packet
[DSFIELD]. The DSCP markings are applied either by a trusted customer or
by the edge routers on entry to the Diffserv network.

The advantage of such a scheme is that many traffic streams can be
aggregated to one of a small number of behavior aggregates (BA) which
are each forwarded using the same PHB at the router, thereby simplifying
the processing and associated storage. In addition, there is no
signaling other than what is carried in the DSCP of each packet, and no
other related processing is required in the core of the Diffserv network
since QoS is invoked on a packet-by-packet basis.

The Diffserv architecture enables a variety of possible services which
could be deployed in a network. These services are reflected to
customers at the edges of the Diffserv network in the form of a Service
Level Specification (SLS - see Section 2). The ability to provide these
services depends on the availability of cohesive management and
configuration tools that can be used to provision and monitor a set of
Diffserv routers in a coordinated manner. To facilitate the development
of such configuration and management tools it is helpful to define a
conceptual model of a Diffserv router that abstracts away implementation
details of particular Diffserv routers from the parameters of interest
for configuration and management. The purpose of this document is to
define such a model.

The basic forwarding functionality of a Diffserv router is defined in
other specifications; e.g., [DSARCH, DSFIELD, AF-PHB, EF-PHB].

This document is not intended in any way to constrain or to dictate the
implementation alternatives of Diffserv routers. It is expected that
router implementers will demonstrate a great deal of variability in
their implementations.
To the extent that implementers are able to model their implementations
using the abstractions described in this document, configuration and
management tools will more readily be able to configure and manage
networks incorporating Diffserv routers of assorted origins.

The remainder of this document is organized as follows:

o  Section 3 starts by describing the basic high-level blocks of a
   Diffserv router. It explains the concepts used in the model,
   including the hierarchical management model for these blocks, which
   uses low-level functional datapath elements such as Classifiers,
   Actions and Queues.

o  Section 4 describes Classifier elements.

o  Section 5 discusses Meter elements.

o  Section 6 discusses Action elements.

o  Section 7 discusses the basic queueing elements of Algorithmic
   Droppers, Queues and Schedulers and their functional behaviors
   (e.g. traffic shaping).

o  Section 8 shows how the low-level elements can be combined to build
   modules called Traffic Conditioning Blocks (TCBs) which are useful
   for management purposes.

o  Section 9 discusses security concerns.

o  Appendix A contains a brief discussion of the token bucket and
   leaky bucket algorithms used in this model and some of the
   practical effects of the use of token buckets within the Diffserv
   architecture.

2. Glossary

This document uses terminology which is defined in [DSARCH]. There is
also current work-in-progress on this terminology in the IETF and some
of the definitions provided here are taken from that work. Some of the
terms from these other references are defined again here in order to
provide additional detail, along with some new terms specific to this
document.

Absolute Dropper    A functional datapath element which simply discards
                    all packets arriving at its input.

Algorithmic Dropper A functional datapath element which selectively
                    discards packets that arrive at its input, based on
                    a discarding algorithm. It has one data input and
                    one output.

Classifier          A functional datapath element which consists of
                    filters that select matching and non-matching
                    packets. Based on this selection, packets are
                    forwarded along the appropriate datapath within the
                    router. A classifier, therefore, splits a single
                    incoming traffic stream into multiple outgoing
                    streams.

Counter             A functional datapath element which updates a
                    packet counter and also an octet counter for every
                    packet that passes through it.

Datapath            A conceptual path taken by packets with particular
                    characteristics through a Diffserv router.
                    Decisions as to the path taken by a packet are made
                    by functional datapath elements such as Classifiers
                    and Meters.

Filter              A set of wildcard, prefix, masked, range and/or
                    exact match conditions on the content of a packet's
                    headers or other data, and/or on implicit or
                    derived attributes associated with the packet. A
                    filter is said to match only if each condition is
                    satisfied.

Functional Datapath A basic building block of the conceptual router.
Element             Typical elements are Classifiers, Meters, Actions,
                    Algorithmic Droppers, Queues and Schedulers.

Multiplexer (Mux)   A multiplexor.

Multiplexor (Mux)   A functional datapath element that merges multiple
                    traffic streams (datapaths) into a single traffic
                    stream (datapath).

Non-work-conserving A property of a scheduling algorithm such that it
                    services packets no sooner than a scheduled
                    departure time, even if this means leaving packets
                    queued while the output (e.g. a network link or
                    connection to the next element) is idle.

Policing            The process of comparing the arrival of data
                    packets against a temporal profile and forwarding,
                    delaying or dropping them so as to make the output
                    stream conformant to the profile.

Queueing Block      A combination of functional datapath elements that
                    modulates the transmission of packets belonging to
                    a traffic stream and determines their ordering,
                    possibly storing them temporarily or discarding
                    them.

Scheduling          An algorithm which determines which queue of a set
algorithm           of queues to service next. This may be based on the
                    relative priority of the queues, on a weighted fair
                    bandwidth sharing policy or some other policy. Such
                    an algorithm may be either work-conserving or
                    non-work-conserving.

Service-Level       A set of parameters and their values which together
Specification (SLS) define the service offered to a traffic stream by a
                    Diffserv domain.

Shaping             The process of delaying packets within a traffic
                    stream to cause it to conform to some defined
                    temporal profile. Shaping can be implemented using
                    a queue serviced by a non-work-conserving
                    scheduling algorithm.

Traffic             A logical datapath entity consisting of a number of
Conditioning        functional datapath elements interconnected in such
Block (TCB)         a way as to perform a specific set of traffic
                    conditioning functions on an incoming traffic
                    stream. A TCB can be thought of as an entity with
                    one input and one or more outputs and a set of
                    control parameters.

Traffic             A set of parameters and their values which together
Conditioning        specify a set of classifier rules and a traffic
Specification (TCS) profile. A TCS is an integral element of an SLS.

Work-conserving     A property of a scheduling algorithm such that it
                    services a packet, if one is available, at every
                    transmission opportunity.

3. Conceptual Model

This section introduces a block diagram of a Diffserv router and
describes the various components illustrated in Figure 1. Note that a
Diffserv core router is likely to require only a subset of these
components: the model presented here is intended to cover the case of
both Diffserv edge and core routers.

3.1. Components of a Diffserv Router

The conceptual model includes abstract definitions for the following:

o  Traffic Classification elements.

o  Metering functions.

o  Actions of Marking, Absolute Dropping, Counting and Multiplexing.

o  Queueing elements, including capabilities of algorithmic dropping
   and scheduling.

o  Certain combinations of the above functional datapath elements
   into higher-level blocks known as Traffic Conditioning Blocks
   (TCBs).

The components and combinations of components described in this document
form building blocks that need to be manageable by Diffserv
configuration and management tools. One of the goals of this document is
to show how a model of a Diffserv device can be built using these
component blocks.
This model is in the form of a connected directed acyclic graph (DAG) of
functional datapath elements that describes the traffic conditioning and
queueing behaviors that any particular packet will experience when
forwarded to the Diffserv router. Figure 1 illustrates the major
functional blocks of a Diffserv router.

                        +---------------+
                        |   Diffserv    |
          Mgmt          | configuration |
     <----+------------>| & management  |--------------------+
     SNMP,|             |   interface   |                    |
     COPS |             +---------------+                    |
     etc. |                     |                            |
          |                     |                            |
          |                     v                            v
          |             +-------------+              +-------------+
          |             | ingress i/f |  +---------+ | egress i/f  |
     ------------------>|  classify,  |->| routing |>|  classify,  |---->
     data |             |   meter,    |  |  core   | |   meter     | data
     in   |             |   action,   |  +---------+ |   action,   | out
          |             |  queueing   |              |  queueing   |
          |             +-------------+              +-------------+
          |                     ^                            ^
          |                     |                            |
          |                     |                            |
          |             +-------------+                      |
          +------------>|  QOS agent  |                      |
     ------------------>|  (optional) |----------------------+
     QOS                | (e.g. RSVP) |
     cntl               +-------------+
     msgs

           Figure 1: Diffserv Router Major Functional Blocks

3.1.1. Datapath

An ingress interface, routing core and egress interface are illustrated
at the center of the diagram. In actual router implementations, there
may be an arbitrary number of ingress and egress interfaces
interconnected by the routing core. The routing core element serves as
an abstraction of a router's normal routing and switching functionality.
The routing core moves packets between interfaces according to policies
outside the scope of Diffserv (note: it is possible that such policies
for output-interface selection might involve use of packet fields such
as the DSCP but this is outside the scope of this model). The actual
queueing delay and packet loss behavior of a specific router's switching
fabric/backplane is not modeled by the routing core; these should be
modeled using the functional datapath elements described later. The
routing core of this model can be thought of as an infinite bandwidth,
zero-delay backplane connecting interfaces. Properties like the
behaviour of the core when overloaded need to be reflected back into the
queueing elements that are modelled around it: e.g. when too much
traffic is directed across the core at an egress interface, the excess
must either be dropped or queued somewhere, and the elements performing
these functions must be modelled on one of the interfaces involved.

The components of interest at the ingress to and egress from interfaces
are the functional datapath elements (e.g. Classifiers, Queueing
elements) that support Diffserv traffic conditioning and per-hop
behaviors [DSARCH]. These are the fundamental components comprising a
Diffserv router and are the focal point of this model.

3.1.2. Configuration and Management Interface

Diffserv operating parameters are monitored and provisioned through this
interface. Monitored parameters include statistics regarding traffic
carried at various Diffserv service levels. These statistics may be
important for accounting purposes and/or for tracking compliance to
Traffic Conditioning Specifications (TCSs) negotiated with customers.
Provisioned parameters are primarily the TCS parameters for Classifiers
and Meters and the associated PHB configuration parameters for Actions
and Queueing elements.
The network administrator interacts with the Diffserv configuration and
management interface via one or more management protocols, such as SNMP
or COPS, or through other router configuration tools such as serial
terminal or telnet consoles.

Specific policy rules and goals governing the Diffserv behaviour of a
router are presumed to be installed by policy management mechanisms.
However, Diffserv routers are always subject to implementation limits
which scope the kinds of policies which can be successfully implemented
by the router. External reporting of such implementation capabilities is
considered out of scope for this document.

3.1.3. Optional QoS Agent Module

Diffserv routers may snoop or participate in either per-microflow or
per-flow-aggregate signaling of QoS requirements [E2E], e.g. using the
RSVP protocol. Snooping of RSVP messages may be used, for example, to
learn how to classify traffic without actually participating as an RSVP
protocol peer. Diffserv routers may reject or admit RSVP reservation
requests to provide a means of admission control to Diffserv-based
services or they may use these requests to trigger provisioning changes
for a flow-aggregation in the Diffserv network. A flow-aggregation in
this context might be equivalent to a Diffserv BA or it may be more
fine-grained, relying on an MF classifier [DSARCH]. Note that the
conceptual model of such a router implements the Integrated Services
Model as described in [INTSERV], applying the control plane controls to
the data classified and conditioned in the data plane, as described in
[E2E].

Note that a QoS Agent component of a Diffserv router, if present, might
be active only in the control plane and not in the data plane. In this
scenario, RSVP could be used merely to signal reservation state without
installing any actual reservations in the data plane of the Diffserv
router: the data plane could still act purely on Diffserv DSCPs and
provide PHBs for handling data traffic without the normal per-microflow
handling expected to support some Intserv services.

3.2. Diffserv Functions at Ingress and Egress

This document focuses on the Diffserv-specific components of the router.
Figure 2 shows a high-level view of ingress and egress interfaces of a
router. The diagram illustrates two Diffserv router interfaces, each
having a set of ingress and a set of egress elements. It shows
classification, metering, action and queueing functions which might be
instantiated at each interface's ingress and egress.

In principle, if one were to construct a network entirely out of two-
port routers (connected by LANs or similar media), then it might be
necessary for each router to perform four QoS control functions in the
datapath on traffic in each direction:

-  Classify each message according to some set of rules, possibly just
   a "match everything" rule.

-  If necessary, determine whether the data stream the message is part
   of is within or outside its rate by metering the stream.

-  Perform a set of resulting actions, including applying a drop
   policy appropriate to the classification and queue in question and
   perhaps additionally marking the traffic with a Differentiated
   Services Code Point (DSCP) [DSFIELD].

        Interface A                                  Interface B
     +-------------+        +---------+        +-------------+
     |  ingress:   |        |         |        |   egress:   |
     |  classify,  |        |         |        |  classify,  |
 --->|   meter,    |------->|         |------->|   meter,    |--->
     |   action,   |        |         |        |   action,   |
     |  queueing   |        | routing |        |  queueing   |
     +-------------+        |  core   |        +-------------+
     |   egress:   |        |         |        |  ingress:   |
     |  classify,  |        |         |        |  classify,  |
 <---|   meter,    |<-------|         |<-------|   meter,    |<---
     |   action,   |        |         |        |   action,   |
     |  queueing   |        +---------+        |  queueing   |
     +-------------+                           +-------------+

        Figure 2. Traffic Conditioning and Queueing Elements

-  Enqueue the traffic for output in the appropriate queue. The
   scheduling of output from this queue may lead to shaping of the
   traffic or may simply cause it to be forwarded with some minimum
   rate or maximum latency assurance.

If the network is now built out of N-port routers, the expected behavior
of the network should be identical. Therefore, this model must provide
for essentially the same set of functions at the ingress as on the
egress of a router's interfaces. The one point of difference in the
model between the ingress and the egress is that all traffic at the
egress of an interface is queued, while traffic at the ingress to an
interface is likely to be queued only for shaping purposes, if at all.
Therefore, equivalent functional datapath elements may be modelled at
both the ingress to and egress from an interface.

Note that it is not mandatory that each of these functional datapath
elements be implemented at both ingress and egress; equally, the model
allows that multiple sets of these elements may be placed in series
and/or in parallel at ingress or at egress. The arrangement of elements
is dependent on the service requirements on a particular interface on a
particular router. By modelling these elements at both ingress and
egress, it is not implied that they must be implemented in this way in a
specific router. For example, a router may implement all shaping and PHB
queueing at the interface egress or may instead implement it only at the
ingress. Furthermore, the classification needed to map a packet to an
egress queue (if present) need not be implemented at the egress but
instead might be implemented at the ingress, with the packet passed
through the routing core with in-band control information to allow for
egress queue selection.

Specifically, some interfaces will be at the outer "edge" and some will
be towards the "core" of the Diffserv domain. It is to be expected (from
the general principles guiding the motivation of Diffserv) that "edge"
interfaces, or at least the routers that contain them, will contain more
complexity and require more configuration than those in the core.

3.3. Shaping and Policing

Diffserv nodes may apply shaping, policing and/or marking to traffic
streams that exceed the bounds of their TCS in order to prevent one
traffic stream from seizing more than its share of resources from a
Diffserv network. In this model, Shaping, sometimes considered as a
traffic conditioning (TC) action, is treated as a function of queueing
elements - see Section 7. Algorithmic Dropping techniques (e.g. RED) are
similarly treated since these are often closely associated with queues.
Policing is modelled as either a concatenation of a Meter with an
Absolute Dropper or as a concatenation of an Algorithmic Dropper with a
Scheduler. These elements will discard packets which exceed the TCS.
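
To make this composition concrete, the following is a minimal sketch in
Python of policing expressed as the first of these two formulations: a
Meter whose non-conforming output feeds an Absolute Dropper. The class
and method names (Meter, receive() and so on) are illustrative
assumptions of the sketch, not notation defined by this model, and the
conformance test shown is a placeholder rather than a real temporal
profile.

      class AbsoluteDropper:
          """Terminating element: every packet sent here is discarded."""
          def receive(self, packet):
              pass  # packet is dropped; nothing is forwarded

      class Fifo:
          """Stand-in for a downstream queueing element (Section 7)."""
          def __init__(self):
              self.packets = []
          def receive(self, packet):
              self.packets.append(packet)

      class Meter:
          """Fan-out element: routes each packet to the output associated
          with its measured conformance level (Section 5)."""
          def __init__(self, conformance_test, outputs):
              self.conformance_test = conformance_test  # packet -> level
              self.outputs = outputs                    # level -> element
          def receive(self, packet):
              self.outputs[self.conformance_test(packet)].receive(packet)

      # Policing: non-conforming packets are discarded; conforming
      # packets continue to a queue. The test below is a placeholder.
      policer = Meter(
          lambda p: "conforming" if len(p) < 1500 else "non-conforming",
          {"conforming": Fifo(), "non-conforming": AbsoluteDropper()})
      policer.receive(b"\x00" * 64)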

3.4. Hierarchical View of the Model

From a device-level configuration management perspective, the following
hierarchy exists:

   At the lowest level considered here are individual functional
   datapath elements, each with their own configuration parameters and
   management counters and flags.

   At the next level, the network administrator manages groupings of
   these functional datapath elements interconnected in a DAG. These
   functional datapath elements are organized in self-contained TCBs
   which are used to implement some desired network policy (see
   Section 8). One or more TCBs may be instantiated at each
   interface's ingress or egress; they may be connected in series
   and/or in parallel configurations on the multiple outputs of a
   preceding TCB. A TCB can be thought of as a "black box" with one
   input and one or more outputs (in the data path). Each interface
   may have a different TCB configuration, and so may each direction
   (ingress or egress).

   At the topmost level considered here, the network administrator
   manages interfaces. Each interface has ingress and egress
   functionality, with each of these expressed as one or more TCBs.
   This level of the hierarchy is what was illustrated in Figure 2.

Further levels may be built on top of this hierarchy, in particular ones
for aiding in the repetitive configuration tasks likely for routers with
many interfaces: some such "template" tools for Diffserv routers are
outside the scope of this model but are under study by other working
groups within the IETF.

4. Classifiers

4.1. Definition

Classification is performed by a classifier element. Classifiers are 1:N
(fan-out) devices: they take a single traffic stream as input and
generate N logically separate traffic streams as output. Classifiers are
parameterized by filters and output streams. Packets from the input
stream are sorted into various output streams by filters which match the
contents of the packet or possibly match other attributes associated
with the packet. Various types of classifiers using different filters
are described in the following sections. Figure 3 illustrates a
classifier, where the outputs connect to succeeding functional datapath
elements.

The simplest possible Classifier element is one that matches all packets
that are applied at its input. In this case, the Classifier element is
just a no-op and may be omitted.

Note that we allow a Multiplexor (see Section 6.3) before the Classifier
to allow input from multiple traffic streams. For example, if traffic
streams originating from multiple ingress interfaces feed through a
single Classifier then the interface number could be one of the packet
classification keys used by the Classifier. This optimization may be
important for scalability in the management plane. Classifiers may also
be cascaded in sequence to perform more complex lookup operations whilst
still maintaining such scalability.

Another example of a packet attribute could be an integer representing
the BGP community string associated with the packet's best-matching
route. Other contextual information may also be used by a Classifier,
e.g. knowledge that a particular interface faces a Diffserv domain or a
legacy IPTOS domain [DSARCH] could be used when determining whether a
DSCP is present or not.

The following classifier separates traffic into one of three output
streams based on three filters:

      Filter Matched              Output Stream
      --------------              -------------
      Filter1                     A
      Filter2                     B
      no match                    C

where Filter1 and Filter2 are defined to be the following BA filters
([DSARCH], Section 4.2.1):

      Filter     DSCP
      ------     ------
      1          101010
      2          111111
      3          ****** (wildcard)

      unclassified                    classified
      traffic                         traffic
                 +------------+
                 |            |--> match Filter1 --> OutputA
         ------->| classifier |--> match Filter2 --> OutputB
                 |            |--> no match      --> OutputC
                 +------------+

               Figure 3. An Example Classifier

4.1.1. Filters

A filter consists of a set of conditions on the component values of a
packet's classification key (the header values, contents, and attributes
relevant for classification). In the BA classifier example above, the
classification key consists of one packet header field, the DSCP, and
both Filter1 and Filter2 specify exact-match conditions on the value of
the DSCP. Filter3 is a wildcard default filter which matches every
packet, but which is only selected in the event that no other more
specific filter matches.

In general there is a set of possible component conditions including
exact, prefix, range, masked and wildcard matches. Note that ranges can
be represented (with less efficiency) as a set of prefixes and that
prefix matches are just a special case of both masked and range matches.

In the case of an MF classifier [DSARCH], the classification key
consists of a number of packet header fields. The filter may specify a
different condition for each key component, as illustrated in the
example below for an IPv4/TCP classifier:

      Filter     IP Src Addr    IP Dest Addr   TCP SrcPort  TCP DestPort
      ------     -------------  -------------  -----------  ------------
      Filter4    172.31.8.1/32  172.31.3.X/24  X            5003

In this example, the fourth octet of the destination IPv4 address and
the source TCP port are wildcard or "don't care".

MF classification of fragmented packets is impossible if the filter uses
transport-layer port numbers, e.g. TCP port numbers. MTU-discovery is
therefore a prerequisite for proper operation of a Diffserv network that
uses such classifiers.
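
To illustrate how the component conditions of such a filter combine,
here is a minimal Python sketch of matching a packet against Filter4;
the dictionary field names and the use of "X" for wildcard components
are assumptions of the sketch, not part of the model.

      import ipaddress

      def addr_match(addr, prefix):
          # Prefix/masked condition on an IPv4 address.
          return ipaddress.ip_address(addr) in ipaddress.ip_network(prefix)

      def mf_match(pkt, flt):
          # A filter matches only if every component condition holds;
          # "X" stands in for a wildcard ("don't care") component.
          return (addr_match(pkt["src"], flt["src"]) and
                  addr_match(pkt["dst"], flt["dst"]) and
                  (flt["sport"] == "X" or pkt["sport"] == flt["sport"]) and
                  (flt["dport"] == "X" or pkt["dport"] == flt["dport"]))

      # Filter4 above: the fourth destination octet is covered by the
      # /24 prefix and the source port is wildcarded.
      filter4 = {"src": "172.31.8.1/32", "dst": "172.31.3.0/24",
                 "sport": "X", "dport": 5003}
      pkt = {"src": "172.31.8.1", "dst": "172.31.3.77",
             "sport": 4711, "dport": 5003}
      assert mf_match(pkt, filter4)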

4.1.2. Overlapping Filters

Note that it is easy to define sets of overlapping filters in a
classifier. For example:

      Filter5:
         Type:      Masked-DSCP
         Value:     111000 (binary)
         Mask:      111000 (binary)

      Filter6:
         Type:      Masked-DSCP
         Value:     000111 (binary)
         Mask:      000111 (binary)

A packet containing DSCP = 111111 cannot be uniquely classified by this
pair of filters and so a precedence must be established between Filter5
and Filter6 in order to break the tie. This precedence must be
established either (a) by a manager which knows that the router can
accomplish this particular ordering, e.g. by means of reported
capabilities, or (b) by the router along with a mechanism to report to a
manager which precedence is being used. Such precedence mechanisms must
be supported in any translation of this model into specific syntax for
configuration and management protocols.

As another example, one might want first to disallow certain
applications from using the network at all, or to classify some
individual traffic streams that are not Diffserv-marked. Traffic that is
not classified by those tests might then be inspected for a DSCP. The
word "then" implies sequence and this must be specified by means of
precedence.

An unambiguous classifier requires that every possible classification
key match at least one filter (possibly the wildcard default) and that
any ambiguity between overlapping filters be resolved by precedence.
Therefore, the classifiers on any given interface must be "complete" and
will often include an "everything else" filter as the lowest precedence
element in order for the result of classification to be deterministic.
Note that this completeness is only required of the first classifier
that incoming traffic will meet as it enters an interface - subsequent
classifiers on an interface only need to handle the traffic that they
are known to receive.

This model of classifier operation assumes that all filters of the same
precedence are applied simultaneously. Whilst convenient from a
modelling point-of-view, this may or may not be how the classifier is
actually implemented - this assumption is not intended to dictate how
the implementation actually handles this, merely to clearly define the
required end result.

4.2. Examples

4.2.1. Behavior Aggregate (BA) Classifier

The simplest Diffserv classifier is a behavior aggregate (BA) classifier
[DSARCH]. A BA classifier uses only the Diffserv codepoint (DSCP) in a
packet's IP header to determine the logical output stream to which the
packet should be directed. We allow only an exact-match condition on
this field because the assigned DSCP values have no structure, and
therefore no subset of DSCP bits is significant.

The following defines a possible BA filter:

      Filter8:
         Type:      BA
         Value:     111000

4.2.2. Multi-Field (MF) Classifier

Another type of classifier is a multi-field (MF) classifier [DSARCH].
This classifies packets based on one or more fields in the packet
(possibly including the DSCP). A common type of MF classifier is a
6-tuple classifier that classifies based on six fields from the IP and
TCP or UDP headers (destination address, source address, IP protocol,
source port, destination port, and DSCP). MF classifiers may classify on
other fields such as MAC addresses, VLAN tags, link-layer traffic class
fields or other higher-layer protocol fields.

The following defines a possible MF filter:

      Filter9:
         Type:               IPv4-6-tuple
         IPv4DestAddrValue:  0.0.0.0
         IPv4DestAddrMask:   0.0.0.0
         IPv4SrcAddrValue:   172.31.8.0
         IPv4SrcAddrMask:    255.255.255.0
         IPv4DSCP:           28
         IPv4Protocol:       6
         IPv4DestL4PortMin:  0
         IPv4DestL4PortMax:  65535
         IPv4SrcL4PortMin:   20
         IPv4SrcL4PortMax:   20

A similar type of classifier can be defined for IPv6.

4.2.3. Free-form Classifier

A Free-form classifier is made up of a set of user-definable arbitrary
filters, each made up of {bit-field size, offset (from head of packet),
mask}:

      Classifier2:
         Filter12:   OutputA
         Filter13:   OutputB
         Default:    OutputC

      Filter12:
         Type:       FreeForm
         SizeBits:   3 (bits)
         Offset:     16 (bytes)
         Value:      100 (binary)
         Mask:       101 (binary)

      Filter13:
         Type:       FreeForm
         SizeBits:   12 (bits)
         Offset:     16 (bytes)
         Value:      100100000000 (binary)
         Mask:       111111111111 (binary)

Free-form filters can be combined into filter groups to form very
powerful filters.
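
The overlapping-filter and precedence rules of Section 4.1.2 can be
illustrated with a small Python sketch in which filters are evaluated in
precedence order, the first match wins, and a trailing wildcard keeps
the classifier "complete"; the tuple layout and names are assumptions of
the sketch.

      def masked_dscp_match(dscp, value, mask):
          # Masked-DSCP condition: the masked bits must equal the value.
          return (dscp & mask) == (value & mask)

      def classify(dscp, filters):
          # Filters are evaluated in precedence order; first match wins.
          for _name, value, mask, output in filters:
              if masked_dscp_match(dscp, value, mask):
                  return output

      # Filter5 is given precedence over Filter6; the trailing wildcard
      # (mask 000000) makes the classifier "complete".
      filters = [
          ("Filter5", 0b111000, 0b111000, "OutputA"),
          ("Filter6", 0b000111, 0b000111, "OutputB"),
          ("default", 0b000000, 0b000000, "OutputC"),
      ]
      assert classify(0b111111, filters) == "OutputA"  # tie broken by precedence
      assert classify(0b000000, filters) == "OutputC"  # caught by the default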

4.2.4. Other Possible Classifiers

Classification may also be performed based on information at the
datalink layer below IP (e.g. VLAN or datalink-layer priority) or
perhaps on the ingress or egress IP, logical or physical interface
identifier (e.g. the incoming channel number on a channelized
interface). A classifier that filters based on IEEE 802.1p Priority and
on 802.1Q VLAN-ID might be represented as:

      Classifier3:
         Filter14 AND Filter15:   OutputA
         Default:                 OutputB

      Filter14:                   -- priority 4 or 5
         Type:       Ieee8021pPriority
         Value:      100 (binary)
         Mask:       110 (binary)

      Filter15:                   -- VLAN 2304
         Type:       Ieee8021QVlan
         Value:      100100000000 (binary)
         Mask:       111111111111 (binary)

Such classifiers may be the subject of other standards or may be
proprietary to a router vendor but they are not discussed further here.

5. Meters

Metering is defined in [DSARCH]. Diffserv network providers may choose
to offer services to customers based on a temporal (i.e., rate) profile
within which the customer submits traffic for the service. In this
event, a meter might be used to trigger real-time traffic conditioning
actions (e.g., marking) by routing a non-conforming packet through an
appropriate next-stage action element. Alternatively, by counting
conforming and/or non-conforming traffic using a Counter element
downstream of the Meter, it might also be used to help in collecting
data for out-of-band management functions such as billing applications.

Meters are logically 1:N (fan-out) devices (although a multiplexor can
be used in front of a meter). Meters are parameterized by a temporal
profile and by conformance levels, each of which is associated with a
meter's output. Each output can be connected to another functional
element.

Note that this model of a meter differs slightly from that described in
[DSARCH]. In that description the meter is not a datapath element but is
instead used to monitor the traffic stream and send control signals to
action elements to dynamically modulate their behavior based on the
conformance of the packet. Figure 4 illustrates a meter with 3 levels of
conformance.

      unmetered          metered
      traffic            traffic
               +---------+
               |         |--------> conformance A
      -------->|  meter  |--------> conformance B
               |         |--------> conformance C
               +---------+

           Figure 4. A Generic Meter

In some Diffserv examples (e.g. [AF-PHB]), three levels of conformance
are discussed in terms of colors, with green representing conforming,
yellow representing partially conforming and red representing
non-conforming. These different conformance levels may be used to
trigger different queueing, marking or dropping treatment later on in
the processing. Other example meters use a binary notion of conformance;
in the general case N levels of conformance can be supported. In general
there is no constraint on the type of functional datapath element
following a meter output, but care must be taken not to inadvertently
configure a datapath that results in packet reordering that is not
consistent with the requirements of the relevant PHB specification.

A meter, according to this model, measures the rate at which packets
making up a stream of traffic pass it, compares the rate to some set of
thresholds and produces some number of potential results (two or more):
a given packet is said to be "conformant" to a level of the meter if, at
the time that the packet is being examined, the stream appears to be
within the rate limit for the profile associated with that level. A
fuller discussion of conformance to meter profiles (and the associated
requirements that this places on the schedulers upstream) is provided in
Appendix A.

5.1. Examples

The following are some examples of possible meters.

5.1.1. Average Rate Meter

An example of a very simple meter is an average rate meter. This type of
meter measures the average rate at which packets are submitted to it
over a specified averaging time.

An average rate profile may take the following form:

      Meter1:
         Type:                 AverageRate
         Profile:              Profile1
         ConformingOutput:     Queue1
         NonConformingOutput:  Counter1

      Profile1:
         Type:                 AverageRate
         AverageRate:          120 kbps
         Delta:                100 msec

A Meter measuring against this profile would continually maintain a
count that indicates the total number and/or cumulative byte-count of
packets arriving between time T (now) and time T - 100 msecs. So long as
an arriving packet does not push the count over 12 kbits in the last 100
msec then the packet would be deemed conforming. Any packet that pushes
the count over 12 kbits would be deemed non-conforming. Thus, this Meter
deems packets to correspond to one of two conformance levels: conforming
or non-conforming, and sends them on for the appropriate subsequent
treatment.

5.1.2. Exponential Weighted Moving Average (EWMA) Meter

The EWMA form of Meter is easy to implement in hardware and can be
parameterized as follows:

      avg_rate(t) = (1 - Gain) * avg_rate(t') + Gain * rate(t)
      t           = t' + Delta

For a packet arriving at time t:

      if (avg_rate(t) > AverageRate)
         non-conforming
      else
         conforming

"Gain" controls the time constant (e.g. frequency response) of what is
essentially a simple IIR low-pass filter. "rate(t)" measures the number
of incoming bytes in a small fixed sampling interval, Delta. Any packet
that arrives and pushes the average rate over a predefined rate
AverageRate is deemed non-conforming. An EWMA Meter profile might look
something like the following:

      Meter2:
         Type:                 ExpWeightedMovingAvg
         Profile:              Profile2
         ConformingOutput:     Queue1
         NonConformingOutput:  AbsoluteDropper1

      Profile2:
         Type:                 ExpWeightedMovingAvg
         AverageRate:          25 kbps
         Delta:                10 usec
         Gain:                 1/16
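
The recurrence above translates directly into code. The following Python
sketch is one illustrative reading of it, using Profile2's parameters:
rate(t) is accumulated over each Delta interval and folded into avg_rate
at the interval boundary. The class name, method names and
bytes-per-second units are assumptions of the sketch.

      class EwmaMeter:
          """Illustrative EWMA meter; not normative for this model."""
          def __init__(self, average_rate, delta, gain):
              self.average_rate = average_rate  # threshold, bytes/sec
              self.delta = delta                # sampling interval, sec
              self.gain = gain                  # IIR low-pass filter gain
              self.avg_rate = 0.0               # avg_rate(t), bytes/sec
              self.bytes_this_interval = 0

          def end_of_interval(self):
              # avg_rate(t) = (1 - Gain) * avg_rate(t') + Gain * rate(t)
              rate = self.bytes_this_interval / self.delta
              self.avg_rate = (1 - self.gain) * self.avg_rate + self.gain * rate
              self.bytes_this_interval = 0

          def meter(self, packet_len):
              # Called per arriving packet; judged against AverageRate.
              self.bytes_this_interval += packet_len
              if self.avg_rate > self.average_rate:
                  return "non-conforming"
              return "conforming"

      # Profile2: AverageRate 25 kbps, Delta 10 usec, Gain 1/16.
      meter2 = EwmaMeter(average_rate=25_000 / 8, delta=10e-6, gain=1 / 16)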

5.1.3. Two-Parameter Token Bucket Meter

A more sophisticated Meter might measure loose conformance to a token
bucket (TB) profile (see above and Appendix A for discussions of loose
and strict conformance to a token bucket). A TB profile generally has
two parameters, an average token rate and a burst size. TB Meters
compare the arrival rate of packets to the average rate specified by the
TB profile. Logically, tokens accumulate in a bucket at the average
rate, up to a maximum credit which is the burst size. Packets of length
L bytes are considered conforming if any tokens are available in the
bucket at the time of packet arrival: up to L bytes may then be borrowed
from future token allocations. Packets are allowed to exceed the average
rate in bursts up to the burst size. Packets which arrive to find a
bucket with no tokens in it are deemed non-conforming. A two-parameter
TB meter has exactly two possible conformance levels (conforming,
non-conforming). Note that "strict" conformance meters are also useful -
see e.g. [SRTCM] and [TRTCM].

A two-parameter TB meter might appear as follows:

      Meter3:
         Type:                 SimpleTokenBucket
         Profile:              Profile3
         ConformingOutput:     Queue1
         NonConformingOutput:  AbsoluteDropper1

      Profile3:
         Type:                 SimpleTokenBucket
         AverageRate:          200 kbps
         BurstSize:            100 kbytes
         ConformanceType:      loose

5.1.4. Multi-Stage Token Bucket Meter

More complicated TB meters might define multiple burst sizes and more
conformance levels. Packets found to exceed the larger burst size are
deemed non-conforming. Packets found to exceed the smaller burst size
are deemed partially conforming. Packets exceeding neither are deemed
conforming. Token bucket meters designed for Diffserv networks are
described in more detail in [SRTCM, TRTCM, GTC]; in some of these
references, three levels of conformance are discussed in terms of colors
with green representing conforming, yellow representing partially
conforming and red representing non-conforming. Note that these
multiple-conformance-level meters can sometimes be implemented using an
appropriate sequence of multiple two-parameter TB meters.

A profile for a multi-stage TB meter with three levels of conformance
might look as follows:

      Meter4:
         Type:                 TwoRateTokenBucket
         ProfileA:             Profile4
         ConformingOutputA:    Queue1
         ProfileB:             Profile5
         ConformingOutputB:    Marker1
         NonConformingOutput:  AbsoluteDropper1

      Profile4:
         Type:                 SimpleTokenBucket
         AverageRate:          100 kbps
         BurstSize:            20 kbytes

      Profile5:
         Type:                 SimpleTokenBucket
         AverageRate:          100 kbps
         BurstSize:            100 kbytes
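
The loose-conformance behavior described in Section 5.1.3 might be
sketched in Python as follows, using Profile3's parameters;
caller-supplied timestamps and bytes-per-second units are assumptions of
the sketch. As noted above, a multi-stage meter can sometimes be built
by cascading two such meters with different burst sizes.

      class SimpleTokenBucketMeter:
          """Two-parameter token bucket with "loose" conformance: a
          packet of L bytes conforms if any tokens are present on
          arrival and may borrow up to L bytes from future allocations
          (so the token count may go negative). Illustrative only."""
          def __init__(self, average_rate, burst_size):
              self.rate = average_rate   # token accumulation, bytes/sec
              self.burst = burst_size    # maximum credit, bytes
              self.tokens = burst_size   # start with a full bucket
              self.last_time = 0.0

          def meter(self, now, packet_len):
              # Accumulate tokens at the average rate, capped at the
              # burst size.
              self.tokens = min(self.burst,
                                self.tokens + (now - self.last_time) * self.rate)
              self.last_time = now
              if self.tokens > 0:            # loose test: any tokens?
                  self.tokens -= packet_len  # may borrow up to L bytes
                  return "conforming"
              return "non-conforming"

      # Profile3: AverageRate 200 kbps, BurstSize 100 kbytes.
      meter3 = SimpleTokenBucketMeter(average_rate=200_000 / 8,
                                      burst_size=100_000)
      assert meter3.meter(0.0, 1500) == "conforming"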

5.1.5. Null Meter

A null meter has only one output: always conforming, and no associated
temporal profile. Such a meter is useful to define in the event that the
configuration or management interface does not have the flexibility to
omit a meter in a datapath segment.

      Meter5:
         Type:    NullMeter
         Output:  Queue1

6. Action Elements

The classifiers and meters described up to this point are fan-out
elements which are generally used to determine the appropriate action to
apply to a packet. The set of possible actions that can then be applied
include:

-  Marking

-  Absolute Dropping

-  Multiplexing

-  Counting

-  Null action - do nothing

The corresponding action elements are described in the following
sections.

6.1. DSCP Marker

DSCP Markers are 1:1 elements which set a codepoint (e.g. the DSCP in an
IP header). DSCP Markers may also act on unmarked packets (e.g. those
submitted with DSCP of zero) or may re-mark previously marked packets.
In particular, the model supports the application of marking based on a
preceding classifier match. The mark set in a packet will determine its
subsequent PHB treatment in downstream nodes of a network and possibly
also in subsequent processing stages within this router.

DSCP Markers for Diffserv are normally parameterized by a single
parameter: the 6-bit DSCP to be marked in the packet header.

      Marker1:
         Type:    DSCPMarker
         Mark:    010010

6.2. Absolute Dropper

Absolute Droppers simply discard packets. There are no parameters for
these droppers. Because this Absolute Dropper is a terminating point of
the datapath and has no outputs, it is probably desirable to forward the
packet through a Counter Action first for instrumentation purposes.

      AbsoluteDropper1:
         Type:    AbsoluteDropper

Absolute Droppers are not the only elements that can cause a packet to
be discarded: another element is an Algorithmic Dropper element (see
Section 7.1.3). However, since this element's behavior is closely tied
to the state of one or more queues, we choose to distinguish it as a
separate functional datapath element.

6.3. Multiplexor

It is occasionally necessary to multiplex traffic streams into a
functional datapath element with a single input. An M:1 (fan-in)
multiplexor is a simple logical device for merging traffic streams. It
is parameterized by its number of incoming ports.

      Mux1:
         Type:    Multiplexor
         Output:  Queue2

6.4. Counter

One passive action is to account for the fact that a data packet was
processed. The statistics that result might be used later for customer
billing, service verification or network engineering purposes. Counters
are 1:1 functional datapath elements which update an octet counter by L
and a packet counter by 1 every time an L-byte packet passes through
them. Counters can be used to count packets about to be dropped by an
Absolute Dropper or to count packets arriving at or departing from some
other functional element.

      Counter1:
         Type:    Counter
         Output:  Queue1

6.5. Null Action

A null action has one input and one output. The element performs no
action on the packet. Such an element is useful to define in the event
that the configuration or management interface does not have the
flexibility to omit an action element in a datapath segment.

      Null1:
         Type:    Null
         Output:  Queue1
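
As Section 6.2 suggests, an Absolute Dropper is usefully preceded by a
Counter so that discarded packets are counted. A minimal Python sketch
of that chain follows; the element names and the receive() convention
are assumptions of the sketch, carried over from the earlier sketches.

      class AbsoluteDropper:
          """Terminating element: discards every packet (Section 6.2)."""
          def receive(self, packet):
              pass

      class Counter:
          """1:1 element: adds L to an octet counter and 1 to a packet
          counter for each L-byte packet, then forwards it (Section
          6.4)."""
          def __init__(self, output):
              self.output = output
              self.packets = 0
              self.octets = 0
          def receive(self, packet):
              self.packets += 1
              self.octets += len(packet)
              self.output.receive(packet)

      # Instrumented drop point: Counter1 feeding AbsoluteDropper1.
      counter1 = Counter(AbsoluteDropper())
      counter1.receive(b"\x00" * 64)
      assert (counter1.packets, counter1.octets) == (1, 64)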

7. Queueing Elements

Queueing elements modulate the transmission of packets belonging to the
different traffic streams and determine their ordering, possibly storing
them temporarily or discarding them. Packets are usually stored either
because there is a resource constraint (e.g., available bandwidth) which
prevents immediate forwarding, or because the queueing block is being
used to alter the temporal properties of a traffic stream (i.e.
shaping). Packets are discarded either because of buffering limitations,
because a buffer threshold is exceeded (including when shaping is
performed), as a feedback control signal to reactive control protocols
such as TCP, or because a meter exceeds a configured profile (i.e.
policing).

The queueing elements in this model represent a logical abstraction of a
queueing system, which is used to configure PHB-related parameters. The
model can be used to represent a broad variety of possible
implementations. However, it need not necessarily map one-to-one with
physical queueing systems in a specific router implementation.
Implementors should map the configurable parameters of the
implementation's queueing systems to these queueing element parameters
as appropriate to achieve equivalent behaviors.

7.1. Queueing Model

Queueing is a function which lends itself to innovation. It must be
modelled to allow a broad range of possible implementations to be
represented using common structures and parameters. This model uses
functional decomposition as a tool to permit the needed latitude.

Queueing systems perform three distinct, but related, functions: they
store packets, they modulate the departure of packets belonging to
various traffic streams and they selectively discard packets. This model
decomposes queueing into the component elements that perform each of
these functions: Queues, Schedulers and Algorithmic Droppers,
respectively. These elements may be connected together as part of a TCB,
as described in Section 8.

The remainder of this section discusses FIFO Queues: typically, the
Queue element of this model will be implemented as a FIFO data
structure. However, this does not preclude implementations which are not
strictly FIFO, in that they also support operations that remove or
examine packets (e.g., for use by discarders) other than at the head or
tail. However, such operations MUST NOT have the effect of reordering
packets belonging to the same microflow.

Note that the term FIFO has multiple different common usages: it is
sometimes taken to mean, among other things, a data structure that
permits items to be removed only in the order in which they were
inserted or a service discipline which is non-reordering.

7.1.1. FIFO Queue

In this model, a FIFO Queue element is a data structure which at any
time may contain zero or more packets. It may have one or more
thresholds associated with it. A FIFO has one or more inputs and exactly
one output. It must support an enqueue operation to add a packet to the
tail of the queue and a dequeue operation to remove a packet from the
head of the queue. Packets must be dequeued in the order in which they
were enqueued. A FIFO has a current depth, which indicates the number of
packets and/or bytes that it contains at a particular time. FIFOs in
this model are modelled without inherent limits on their depth -
obviously this does not reflect the reality of implementations: FIFO
size limits are modelled here by an algorithmic dropper associated with
the FIFO, typically at its input. It is quite likely that every FIFO
will be preceded by an algorithmic dropper. One exception might be the
case where the packet stream has already been policed to a profile that
can never exceed the scheduler bandwidth available at the FIFO's output
- this would not need an algorithmic dropper at the input to the FIFO.

This representation of a FIFO allows for one common type of depth limit,
one that results from a FIFO supplied from a limited pool of buffers,
shared between multiple FIFOs.

In an implementation, packets are presumably stored in one or more
buffers. Buffers are allocated from one or more free buffer pools. If
there are multiple instances of a FIFO, their packet buffers may or may
not be allocated out of the same free buffer pool. Free buffer pools may
also have one or more thresholds associated with them, which may affect
discarding and/or scheduling. Other than this, buffering mechanisms are
implementation specific and not part of this model.

A FIFO might be represented using the following parameters:

      Queue1:
         Type:    FIFO
         Output:  Scheduler1

Note that a FIFO must provide triggers and/or current state information
to other elements upstream and downstream from it: in particular, it is
likely that the current depth will need to be used by Algorithmic
Dropper elements placed before or after the FIFO. It will also likely
need to provide an implicit "I have packets for you" signal to
downstream Scheduler elements.
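
The division of labor just described - a FIFO with no inherent depth
limit plus an Algorithmic Dropper at its input that models the size
limit - might be sketched in Python as follows. The tail-drop algorithm
and the depth threshold measured in packets are illustrative choices of
the sketch, not requirements of the model.

      from collections import deque

      class Fifo:
          """FIFO Queue element: no inherent depth limit (Section
          7.1.1)."""
          def __init__(self):
              self.q = deque()
          def enqueue(self, packet):
              self.q.append(packet)      # add at the tail
          def dequeue(self):
              # Remove from the head; packets leave in arrival order.
              return self.q.popleft() if self.q else None
          def depth(self):
              return len(self.q)         # current depth, in packets here

      class TailDropper:
          """Algorithmic Dropper at the FIFO's input: enforces the
          FIFO's size limit by discarding arrivals once the depth
          reaches a threshold (one possible discarding algorithm)."""
          def __init__(self, fifo, max_depth):
              self.fifo = fifo
              self.max_depth = max_depth
          def receive(self, packet):
              if self.fifo.depth() >= self.max_depth:
                  return                 # discard: depth threshold hit
              self.fifo.enqueue(packet)

      queue1 = Fifo()
      dropper1 = TailDropper(queue1, max_depth=1000)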

7.1.2. Scheduler

A scheduler is an element which gates the departure of each packet that
arrives at one of its inputs, based on a service discipline. It has one
or more inputs and exactly one output. Each input has an upstream
element to which it is connected, and a set of parameters that affects
the scheduling of packets received at that input.

The service discipline (also known as a scheduling algorithm) is an
algorithm which might take any of the following as its input(s):

a) static parameters such as relative priority associated with each of
   the scheduler's inputs.

b) absolute token bucket parameters for maximum or minimum rates
   associated with each of the scheduler's inputs.

c) parameters, such as packet length or DSCP, associated with the
   packet currently present at its input.

d) absolute time and/or local state.

Possible service disciplines fall into a number of categories, including
(but not limited to) first come, first served (FCFS), strict priority,
weighted fair bandwidth sharing (e.g. WFQ), rate-limited strict priority
and rate-based. Service disciplines can be further distinguished by
whether they are work-conserving or non-work-conserving (see Glossary).
Non-work-conserving schedulers can be used to shape traffic streams to
match some profile by delaying packets that might be deemed non-
conforming by some downstream node: a packet is delayed until such time
as it would conform to a downstream meter using the same profile.

[DSARCH] defines PHBs without specifying required scheduling algorithms.
However, PHBs such as the class selectors [DSFIELD], EF [EF-PHB] and AF
[AF-PHB] have descriptions or configuration parameters which strongly
suggest the sort of scheduling discipline needed to implement them. This
document discusses a minimal set of queue parameters to enable
realization of these PHBs. It does not attempt to specify an all-
embracing set of parameters to cover all possible implementation models.
A minimal set includes:

a) a minimum service rate profile which allows rate guarantees for
   each traffic stream as required by EF and AF without specifying the
   details of how excess bandwidth between these traffic streams is
   shared. Additional parameters to control this behavior should be
   made available, but are dependent on the particular scheduling
   algorithm implemented.

b) a service priority, used only after the minimum rate profiles of
   all inputs have been satisfied, to decide how to allocate any
   remaining bandwidth.

c) a maximum service rate profile, for use only with a non-work-
   conserving service discipline.

Any one of these profiles is composed, for the purposes of this model,
of both a rate (in suitable units of bits, bytes or larger chunks in
some unit of time) and a burst size, as discussed further in Appendix A.
By way of example, for an implementation of the EF PHB using a strict
priority scheduling algorithm that assumes that the aggregate EF rate
has been appropriately bounded by upstream policing to avoid
starvation of other BAs, the service rate profiles are not used: the
minimum service rate profile would default to zero and the maximum
service rate profile would effectively be the "line rate". Such an
implementation, with multiple priority classes, could also be used
for the Diffserv class selectors [DSFIELD].

Alternatively, setting the service priority values for each input to
the scheduler to the same value enables the scheduler to satisfy the
minimum service rates for each input, so long as the sum of all
minimum service rates is less than or equal to the line rate.

For example, a non-work-conserving scheduler, allocating spare
bandwidth equally between all its inputs, might be represented using
the following parameters:

    Scheduler1:
        Type: Scheduler2Input

        Input1:
            MaxRateProfile: Profile1
            MinRateProfile: Profile2
            Priority:       none

        Input2:
            MaxRateProfile: Profile3
            MinRateProfile: Profile4
            Priority:       none

A work-conserving scheduler might be represented using the following
parameters:

    Scheduler2:
        Type: Scheduler3Input

        Input1:
            MaxRateProfile: WorkConserving
            MinRateProfile: Profile5
            Priority:       1

        Input2:
            MaxRateProfile: WorkConserving
            MinRateProfile: Profile6
            Priority:       2

        Input3:
            MaxRateProfile: WorkConserving
            MinRateProfile: none
            Priority:       3

7.1.3. Algorithmic Dropper

An Algorithmic Dropper is an element which selectively discards
packets that arrive at its input, based on a discarding algorithm. It
has one data input and one output. In this model (but not necessarily
in a real implementation), a packet enters the dropper at its input
and either its buffer is returned to a free buffer pool or the packet
exits the dropper at the output.

Alternatively, an Algorithmic Dropper can be thought of as invoking
operations on a FIFO which selectively remove a packet and return its
buffer to the free buffer pool based on a discarding algorithm. In
this case, the operation could be modelled as a side-effect on the
FIFO upon which it operates, rather than as having a discrete input
and output. The two treatments are equivalent and we choose the one
described in the previous paragraph for this model.

The Algorithmic Dropper is modelled as having a single input. It is
possible that packets which were classified differently by a
Classifier in this TCB will end up passing through the same dropper.
The dropper's algorithm may need to apply different calculations
based on characteristics of the incoming packet (e.g. its DSCP). So
there is a need, in implementations of this model, to be able to
relate information about which classifier element was matched by a
packet from a Classifier to an Algorithmic Dropper.
In the rare cases where this linkage is required, the chosen model is
to insert another Classifier element at this point in the flow and
for it to feed multiple Algorithmic Dropper elements, each one
implementing a drop calculation that is independent of any
classification keys of the packet: this will likely require the
creation of a new TCB to contain the Classifier and the Algorithmic
Dropper elements.

    NOTE: There are many other formulations of a model that could
    represent this linkage, different from the one described above:
    one formulation would have been to have a pointer from one of
    the drop probability calculation algorithms inside the dropper
    to the original Classifier element that selects this algorithm.
    Another way would have been to have multiple "inputs" to the
    Algorithmic Dropper element fed from the preceding elements,
    leading eventually back to the Classifier elements that matched
    the packet. Yet another formulation might have been for the
    Classifier to (logically) include some sort of "classification
    identifier" along with the packet along its path, for use by
    any subsequent element. And yet another could have been to
    include a classifier inside the dropper, in order for it to
    pick out the drop algorithm to be applied. These other
    approaches could be used by implementations but were deemed to
    be less clear than the approach taken here.

An Algorithmic Dropper, illustrated in Figure 5, has one or more
triggers that cause it to make a decision whether or not to drop one
(or possibly more than one) packet. A trigger may be internal (the
arrival of a packet at the input to the dropper) or it may be
external (resulting from one or more state changes at another
element, such as a FIFO depth crossing a threshold or a scheduling
event). It is likely that an instantaneous FIFO depth will need to be
smoothed over some averaging interval. Some dropping algorithms may
require several trigger inputs feeding back from events elsewhere in
the system, e.g. depth-smoothing functions that calculate averages
over more than one time interval. Smoothing functions are outside the
scope of this document and are not modelled here; we merely indicate
where they might be added in the model.

A trigger may be a boolean combination of events (e.g. a FIFO depth
exceeding a threshold OR a buffer pool depth falling below a
threshold).

     +--------------------------------------+
     |  +------------+       +-----------+  | Algorithmic
     |  | smoothing  |   n   | trigger & |  | Dropper
     |  | function(s)|---/-->| discard   |  |
     |  | (optional) |       | calc.     |  |
     |  +------------+       +-----------+  |
     |        ^          TailDrop|  |HeadDrop
     +--------|------------------|--|-------+
              |                  |  |
              | Depth            v  v
  Input   ----+---------------------------          Output
  ------->    |x|x|x|x|x|x|x|              --------------->
          --------------------------------
                   FIFO             |
                                    v  bit-bucket
                                  +---+

            Figure 5. Algorithmic Dropper + Queue
The dropping algorithm makes a decision on whether to forward or to
discard a packet and, if discarding, whether to discard it from the
head, tail or other part of the associated queue. It takes as its
parameters some set of dynamic parameters (e.g. smoothed or
instantaneous FIFO depth), some set of static parameters (e.g.
thresholds) and possibly other parameters associated with the packet.
It may also have internal state and is likely to keep counters
regarding the dropped packets (there is no appropriate place here to
include a Counter Action element). Note that, although an Algorithmic
Dropper may require knowledge of data fields in a packet, as
discovered by a Classifier in the same TCB, it may not modify the
packet (i.e. it is not a marker).

RED, RED-on-In-and-Out (RIO) and Drop-on-threshold are examples of
dropping algorithms. Tail-dropping and head-dropping are effected by
the location of the dropper relative to the FIFO.

For example, a dropper using a RIO algorithm might be represented
using two Algorithmic Droppers with the following parameters:

    AlgorithmicDropper1: (for in-profile traffic)
        Type:         AlgorithmicDropper
        Discipline:   RED, discard from tail
        Trigger:      Internal
        Output:       Fifo1
        MinThresh:    Fifo1.Depth > 20 kbyte
        MaxThresh:    Fifo1.Depth > 30 kbyte
        SampleWeight: .002
        MaxDropProb:  1%

    AlgorithmicDropper2: (for out-of-profile traffic)
        Type:         AlgorithmicDropper
        Discipline:   RED, discard from tail
        Trigger:      Internal
        Output:       Fifo1
        MinThresh:    Fifo1.Depth > 10 kbyte
        MaxThresh:    Fifo1.Depth > 20 kbyte
        SampleWeight: .002
        MaxDropProb:  2%

Another form of Algorithmic Dropper, a threshold-dropper, might be
represented using the following parameters:

    AlgorithmicDropper3:
        Type:       AlgorithmicDropper
        Discipline: Drop-on-threshold, discard from tail
        Trigger:    Fifo2.Depth > 20 kbyte
        Output:     Fifo1
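A RED-like discard calculation of the kind configured for
AlgorithmicDropper1 and AlgorithmicDropper2 above might be sketched,
non-normatively, as follows. The smoothing step is the simple EWMA
suggested by the SampleWeight parameter; all names are illustrative
and this is a sketch of one plausible realization, not a definition:

    import random

    class RedLikeDropper:
        # Sketch of an internally-triggered Algorithmic Dropper
        # using a RED-like drop-probability calculation (7.1.3).
        def __init__(self, min_thresh, max_thresh, max_p,
                     sample_weight=0.002):
            self.min_thresh = min_thresh    # bytes
            self.max_thresh = max_thresh    # bytes
            self.max_p = max_p              # drop prob. at max_thresh
            self.sample_weight = sample_weight
            self.avg_depth = 0.0            # smoothed FIFO depth

        def should_drop(self, fifo_depth_bytes):
            # Smooth the instantaneous depth (the optional
            # "smoothing function" of Figure 5).
            self.avg_depth += self.sample_weight * (
                fifo_depth_bytes - self.avg_depth)
            if self.avg_depth < self.min_thresh:
                return False                # below MinThresh: keep
            if self.avg_depth >= self.max_thresh:
                return True                 # above MaxThresh: drop
            # Between the thresholds the drop probability rises
            # linearly from 0 to max_p.
            frac = ((self.avg_depth - self.min_thresh) /
                    (self.max_thresh - self.min_thresh))
            return random.random() < self.max_p * frac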
7.2. Sharing load among traffic streams using queueing

Queues are used, in Differentiated Services, for a number of
purposes. In essence, they are simply places to store traffic until
it is transmitted. However, when several queues are used together in
a queueing system, they can also achieve effects beyond that for
given traffic streams. They can be used to limit variation in delay
or to impose a maximum rate (shaping), to permit several streams to
share a link in a semi-predictable fashion (load sharing), or to move
variation in delay from some streams to other streams.

Traffic shaping is often used to condition traffic such that packets
arriving in a burst will be "smoothed" and deemed conforming by
subsequent downstream meters in this or other nodes. In [DSARCH] a
shaper is described as a queueing element controlled by a meter which
defines its temporal profile. However, this representation of a
shaper differs substantially from typical shaper implementations.

In the model described here, a shaper is realized by using a
non-work-conserving Scheduler. Some implementations may elect to have
queues whose sole purpose is shaping, while others may integrate the
shaping function with other buffering, discarding and scheduling
associated with access to a resource. Shapers operate by delaying the
departure of packets that would be deemed non-conforming by a meter
configured to the shaper's maximum service rate profile. The packet
is scheduled to depart no sooner than such time that it would become
conforming.

7.2.1. Load Sharing

Load sharing is the traditional use of queues. It was theoretically
explored in a paper by Floyd [FLOYD] in 1993, but has been in use in
communications systems since the 1970s.

[DSARCH] discusses load sharing as dividing an interface among
traffic classes predictably, or applying a minimum rate to each of a
set of traffic classes, which might be measured as an absolute lower
bound on the rate a traffic stream achieves or as a fraction of the
rate an interface offers. It is generally implemented as some form of
weighted queueing algorithm among a set of FIFO queues, i.e. a WFQ
scheme. This has interesting side-effects.

A key effect sought is to ensure that the mean rate the traffic in a
stream experiences is never lower than some threshold when there is
at least that much traffic to send. When there is less traffic than
this, the queue tends to be starved of traffic, meaning that the
queueing system will not delay its traffic by very much. When there
is significantly more traffic and the queue starts filling, packets
in this class will be delayed significantly more than traffic in
other classes that are under-using their available capacity. This
form of queueing system therefore tends to move delay and variation
in delay from under-used classes of traffic to heavier users, as well
as managing the rates of the traffic streams.

A side-effect of a WRR or WFQ implementation is that, between any two
packets in a given traffic class, the scheduler may emit one or more
packets from each of the other classes in the queueing system. In
cases where average behavior is in view, this is perfectly
acceptable. In cases where traffic is very intolerant of jitter and
there are a number of competing classes, this may have undesirable
consequences.
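One simple, non-normative realization of the weighted sharing
discussed in section 7.2.1 is a weighted round-robin over a set of
FIFOs. The sketch below serves each class in proportion to a
configured weight; it ignores packet lengths, which practical WRR and
WFQ implementations must not do, and all names are illustrative:

    from collections import deque

    def weighted_round_robin(queues, weights):
        # Yield packets from each class in proportion to its weight.
        while any(queues):
            for q, weight in zip(queues, weights):
                for _ in range(weight):
                    if q:
                        yield q.popleft()

    # Class A is promised ~2/3 of the service, class B ~1/3.
    class_a = deque(["a1", "a2", "a3", "a4"])
    class_b = deque(["b1", "b2"])
    print(list(weighted_round_robin([class_a, class_b], [2, 1])))
    # -> ['a1', 'a2', 'b1', 'a3', 'a4', 'b2']

The printed sequence also illustrates the side-effect noted above:
packets of class B are interleaved between packets of class A.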
7.2.2. Traffic Priority

Traffic Prioritization is a special case of load sharing, wherein a
certain traffic class is deemed so jitter-intolerant that, if it has
traffic present, that traffic must be sent at the earliest possible
time. By extension, several priorities might be defined, such that
traffic in each of several classes is given preferential service over
any traffic of a lower class. It is the obvious implementation of IP
Precedence as described in [RFC 791], of 802.1p traffic classes
[802.1D] and of other similar technologies.

Priority is often abused in real networks; people tend to think that
traffic which has a high business priority deserves this treatment
and talk more about the business imperatives than the actual
application requirements. This can have severe consequences: networks
have been configured which placed business-critical traffic at a
higher priority than routing-protocol traffic, resulting in collapse
of the network's management or control systems. Priority may,
however, have a legitimate use for services based on an Expedited
Forwarding (EF) PHB, where it is known with certainty, thanks to
policing at all possible traffic entry points, that a traffic stream
does not exceed its contracted rate and where the application is
indeed jitter-intolerant enough to merit this type of handling. Note
that, even in cases with well-policed ingress points, there is still
the possibility of unexpected traffic loops within an unpoliced core
part of the network causing such a collapse.

8. Traffic Conditioning Blocks (TCBs)

The Classifier, Meter, Action, Algorithmic Dropper, Queue and
Scheduler functional datapath elements described above can be
combined into Traffic Conditioning Blocks (TCBs). A TCB is an
abstraction of a set of functional datapath elements that may be used
to facilitate the definition of specific traffic conditioning
functionality, e.g. it might be likened to a template which can be
replicated multiple times for different traffic streams or different
customers. It has no likely physical representation in the
implementation of the data path: it is invented purely as an
abstraction for use by management tools.

This model describes the configuration and management of a Diffserv
interface in terms of a TCB that contains, by definition, zero or
more Classifier, Meter, Action, Algorithmic Dropper, Queue and
Scheduler elements. These elements are arranged according to the
policy being expressed, but always in the order given here. Traffic
may be classified; classified traffic may be metered; each stream of
traffic identified by a combination of classifiers and meters may
have some set of actions performed on it, followed by drop
algorithms; packets of the traffic stream may ultimately be stored
into a queue and then be scheduled out to the next TCB or physical
interface. It is permissible to omit elements or include null
elements of any type, or to concatenate multiple functional datapath
elements of the same type.

When the Diffserv treatment for a given packet needs to have such
building blocks repeated, this is performed by cascading multiple
TCBs: an output of one TCB may drive the input of a succeeding one.
For example, consider the case where traffic of a set of classes is
shaped to a set of rates, but the total output rate of the group of
classes must also be limited to a rate. One might imagine a set of
network news feeds, each with a certain maximum rate, and a policy
that their aggregate may not exceed some figure. This may be simply
accomplished by cascading two TCBs. The first classifies the traffic
into its separate feeds and queues each feed separately. The feeds
(or a subset of them) are then fed into a second TCB, which places
all input (these news feeds) into a single queue with a certain
maximum rate. In implementation, one could imagine this as several
literal queues, as a CBQ or WFQ system with an appropriate (and
complex) weighting scheme, or as a number of other approaches; all
would have the same externally measurable effect on the traffic as if
they had been literally implemented with separate TCBs.

8.1. TCB

A generalised TCB might consist of the following stages:

    - Classification stage
    - Metering stage
    - Action stage (involving Markers, Absolute Droppers,
      Counters and Multiplexors)
    - Queueing stage (involving Algorithmic Droppers, Queues
      and Schedulers)

where each stage may consist of a set of parallel datapaths
consisting of pipelined elements.

A Classifier or a Meter is typically a 1:N element; an Action,
Algorithmic Dropper or Queue is typically a 1:1 element; and a
Scheduler is an N:1 element. A complete TCB should, however, result
in a 1:1 or 1:N abstract element. Note that the fan-in or fan-out of
an element is not an important defining characteristic of this
taxonomy.
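The staged structure can be mirrored directly in code. The following
non-normative sketch composes stub stages in the canonical order;
every function here is an illustrative placeholder for the concrete
elements that a real TCB would wire together per branch:

    def classify(packet):
        # Classification stage: select a branch, e.g. on the DSCP.
        return packet.get("dscp", "other")

    def meter(packet, branch):
        # Metering stage: stubbed to always report "conforming".
        return "conforming"

    def tcb(packet):
        # A TCB as the pipeline of section 8.1: classify -> meter
        # -> actions -> queueing.  Stages may be null or repeated.
        branch = classify(packet)
        conformance = meter(packet, branch)
        if conformance == "non-conforming":
            packet["dscp"] = "001000"   # action stage: re-mark
        # Queueing stage (dropper/queue/scheduler) elided here.
        return branch, packet

    print(tcb({"dscp": "001001"}))

Cascading TCBs, as described above, then amounts to feeding the
output of one such pipeline into the input of the next.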
8.1.1. Building blocks for Queueing

Some particular rules are applied to the ordering of elements within
a Queueing stage within a TCB: elements of the same type may appear
more than once, either in parallel or in series. Typically, a
queueing stage will have relatively many elements in parallel and few
in series. Iteration and recursion are not supported constructs (the
elements are arranged in an acyclic graph). The following
inter-connections of elements are allowed:

1) The input of a Queue may be the input of the queueing block or it
   may be connected to the output of an Algorithmic Dropper or to an
   output of a Scheduler.

2) Each input of a Scheduler may be connected to the output of a
   Queue, to the output of an Algorithmic Dropper or to the output
   of another Scheduler.

3) The input of an Algorithmic Dropper may be the input of the
   queueing block or it may be connected to the output of another
   Algorithmic Dropper.

4) The output of the queueing block may be the output of a Queue, an
   Algorithmic Dropper or a Scheduler.

Note, in particular, that Schedulers may operate in series such that
a packet at the head of a Queue feeding the concatenated Schedulers
is serviced only after all of the scheduling criteria are met. For
example, a Queue which carries EF traffic streams may be served first
by a non-work-conserving Scheduler to shape the stream to a maximum
rate, then by a work-conserving Scheduler to mix EF traffic streams
with other traffic streams. Alternatively, there might be a Queue
and/or a dropper between the two Schedulers.

8.2. An Example TCB

An SLS is presumed to have been negotiated between the customer and
the provider which specifies the handling of the customer's traffic
(as defined by a TCS) by the provider's network. The agreement might
be of the following form:

    DSCP    PHB   Profile   Treatment
    ----    ---   -------   ---------------------------------------
    001001  EF    Profile4  Discard non-conforming.
    001100  AF11  Profile5  Shape to profile, tail-drop when full.
    001101  AF21  Profile3  Re-mark non-conforming to DSCP 001000,
                            tail-drop when full.
    other   BE    none      Apply RED-like dropping.

This SLS specifies that the customer may submit packets marked for
DSCP 001001, which will get EF treatment so long as they remain
conforming to Profile4 and will be discarded if they exceed this
profile. The discarded packets are counted in this example, perhaps
for use by the provider's sales department in convincing the customer
to buy a larger SLS. Packets marked for DSCP 001100 will be shaped to
Profile5 before forwarding. Packets marked for DSCP 001101 will be
metered to Profile3, with non-conforming packets "downgraded" by
being re-marked with a DSCP of 001000. It is implicit in this
agreement that conforming packets are given the PHB originally
indicated by the packets' DSCP field.

Figures 6 and 7 illustrate a TCB that might be used to handle this
SLS at an ingress interface at the customer/provider boundary.
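Purely as a non-normative illustration, such an agreement might be
captured by a management tool as a simple structure keyed on the
submitted DSCP; the Python mapping below mirrors the table above and
its names carry no special meaning:

    # The example SLS/TCS, keyed by submitted DSCP.
    SLS = {
        "001001": dict(phb="EF",   profile="Profile4",
                       treatment="discard non-conforming"),
        "001100": dict(phb="AF11", profile="Profile5",
                       treatment="shape, tail-drop when full"),
        "001101": dict(phb="AF21", profile="Profile3",
                       treatment="re-mark non-conforming to 001000"),
        "other":  dict(phb="BE",   profile=None,
                       treatment="RED-like dropping"),
    }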
The Classification stage of this example consists of a single BA
classifier. The BA classifier is used to separate traffic based on
the Diffserv service level requested by the customer (as indicated by
the DSCP in each submitted packet's IP header). We illustrate three
DSCP filter values: A, B and C. The 'X' in the BA classifier is a
wildcard filter that matches every packet not otherwise matched.

The path for DSCP 001100 proceeds directly to AlgDropper1, whilst the
paths for DSCP 001001 and 001101 include a metering stage. All other
traffic is passed directly on to AlgDropper3. There is a separate
meter for each set of packets corresponding to classifier outputs A
and C. Each meter uses a specific profile, as specified in the TCS,
for the corresponding Diffserv service level. The meters in this
example each indicate one of two conformance levels: conforming or
non-conforming.

Following the Metering stage is an Action stage in some of the
branches. Packets submitted for DSCP 001001 (Classifier output A)
that are deemed non-conforming by Meter1 are counted and discarded,
while packets that are conforming are passed on to Queue1. Packets
submitted for DSCP 001101 (Classifier output C) that are deemed
non-conforming by Meter2 are re-marked, and then both conforming and
non-conforming packets are multiplexed together before being passed
on to AlgDropper2 and Queue3.

                           +-----+
                           |    A|--------------------------> to Queue1
                      +--->|     |
                      |    |    B|--+   +-----+     +-----+
                      |    +-----+  |   |     |     |     |
                      |    Meter1   +-->|     |---->|     |
                      |                 |     |     |     |
                      |                 +-----+     +-----+
                      |                 Counter1    Absolute
  submitted  +-----+  |                             Dropper1
  traffic    |    A|--+
  ---------->|    B|----------------------------------> to AlgDropper1
             |    C|--+
             |    X|--|--+
             +-----+  |  |
          Classifier1 |  |  +-----+                   +-----+
             (BA)     |  |  |    A|------------------>|A    |
                      +--|->|     |                   |     |--> to AlgDropper2
                         |  |    B|--+   +-----+  +-->|B    |
                         |  +-----+  +-->|     |--+   +-----+
                         |   Meter2      |     |       Mux1
                         |               +-----+
                         |               Marker1
                         +-----------------------------> to AlgDropper3

          Figure 6: An Example Traffic Conditioning Block (Part 1)

The Algorithmic Dropping, Queueing and Scheduling stages are realised
as follows, as illustrated in Figure 7. Note that the figure does not
show any of the implicit control linkages between elements that
allow, e.g., an Algorithmic Dropper to sense the current state of a
succeeding Queue. Conforming DSCP 001001 packets from Meter1 are
passed directly to Queue1: given a configuration of the following
Scheduler that matches the metering, there is no way for these
packets to overflow the depth of Queue1, so there is no requirement
for dropping at this point. Packets marked for DSCP 001100 must be
passed through a tail-dropper, AlgDropper1, which serves to limit the
depth of the following queue, Queue2: packets that arrive to a full
queue will be discarded. This is likely to be an error case: the
customer is evidently not keeping to its agreed profile. Similarly,
all packets from the original DSCP 001101 stream (some may have been
re-marked by this stage) are passed to AlgDropper2 and Queue3.
Packets marked for all other DSCPs are passed to AlgDropper3, which
is a RED-like Algorithmic Dropper: based on feedback of the current
depth of Queue4, this dropper is expected to discard enough packets
from its input stream to keep the queue depth under control.
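The two-output AverageRate meters of this example (Meter1 and Meter2)
could be sketched, non-normatively, as follows. The averaging-window
behaviour shown is one plausible reading of an average-rate meter,
and all names are illustrative:

    class AverageRateMeter:
        # Sketch of a meter with Conforming/NonConforming outputs:
        # traffic conforms while the bits seen in the current
        # averaging window stay within rate_bps * window_s.
        def __init__(self, rate_bps, window_s):
            self.rate_bps = rate_bps
            self.window_s = window_s
            self.window_start = None
            self.bits_seen = 0

        def conforms(self, packet_len_bytes, now_s):
            # Start a fresh window when the previous one expires.
            if (self.window_start is None or
                    now_s - self.window_start >= self.window_s):
                self.window_start, self.bits_seen = now_s, 0
            self.bits_seen += 8 * packet_len_bytes
            # True  -> ConformingOutput (e.g. Queue1)
            # False -> NonConformingOutput (e.g. Counter1/Marker1)
            return self.bits_seen <= self.rate_bps * self.window_s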
These four Queue elements are then serviced by a Scheduler element,
Scheduler1: this must be configured to give each of its inputs an
appropriate priority and/or bandwidth share. Inputs A and C are given
guarantees of bandwidth, as appropriate for the contracted profiles.
Input B is given a limit on the bandwidth it can use (i.e. a
non-work-conserving discipline) in order to achieve the desired
shaping of this stream. Input D is given no limits or guarantees but
a lower priority than the other queues, appropriate for its
best-effort status. Traffic then exits the Scheduler in a single
orderly stream.

   from Meter1                 +-----+
   --------------------------->|     |---+
                               |     |   |
                               +-----+   |
                               Queue1    |
                                         |
   from Classifier1  +-----+   +-----+   |    +-----+
   ----------------->|     |-->|     |   +--->|A    |
                     |     |   |     |------->|B    |--------->
                     +-----+   +-----+   +--->|C    |  exiting
                    AlgDropper1 Queue2   | +->|D    |  traffic
                                         | |  +-----+
   from Mux1         +-----+   +-----+   | |  Scheduler1
   ----------------->|     |-->|     |---+ |
                     |     |   |     |     |
                     +-----+   +-----+     |
                    AlgDropper2 Queue3     |
                                           |
   from Classifier1  +-----+   +-----+     |
   ----------------->|     |-->|     |-----+
                     |     |   |     |
                     +-----+   +-----+
                    AlgDropper3 Queue4

          Figure 7: An Example Traffic Conditioning Block (Part 2)

The interconnections of the TCB elements illustrated in Figures 6 and
7 can be represented textually as follows:

    TCB1:

    Classifier1:
        FilterA: Meter1
        FilterB: AlgDropper1
        FilterC: Meter2
        Default: AlgDropper3

    Meter1:
        Type:                AverageRate
        Profile:             Profile4
        ConformingOutput:    Queue1
        NonConformingOutput: Counter1

    Counter1:
        Output: AbsoluteDropper1

    Meter2:
        Type:                AverageRate
        Profile:             Profile3
        ConformingOutput:    Mux1.InputA
        NonConformingOutput: Marker1

    Marker1:
        Type:   DSCPMarker
        Mark:   001000
        Output: Mux1.InputB

    Mux1:
        Output: AlgDropper2

    AlgDropper1:
        Type:       AlgorithmicDropper
        Discipline: Drop-on-threshold
        Trigger:    Queue2.Depth > 10 kbyte
        Output:     Queue2

    AlgDropper2:
        Type:       AlgorithmicDropper
        Discipline: Drop-on-threshold
        Trigger:    Queue3.Depth > 20 kbyte
        Output:     Queue3

    AlgDropper3:
        Type:       AlgorithmicDropper
        Discipline: RED93
        Trigger:    Internal
        Output:     Queue4
        MinThresh:  Queue4.Depth > 20 kbyte
        MaxThresh:  Queue4.Depth > 40 kbyte

    Queue1:
        Type:   FIFO
        Output: Scheduler1.InputA

    Queue2:
        Type:   FIFO
        Output: Scheduler1.InputB

    Queue3:
        Type:   FIFO
        Output: Scheduler1.InputC

    Queue4:
        Type:   FIFO
        Output: Scheduler1.InputD

    Scheduler1:
        Type: Scheduler4Input

        InputA:
            MaxRateProfile: none
            MinRateProfile: Profile4
            Priority:       20

        InputB:
            MaxRateProfile: Profile5
            MinRateProfile: none
            Priority:       40

        InputC:
            MaxRateProfile: none
            MinRateProfile: Profile3
            Priority:       20

        InputD:
            MaxRateProfile: none
            MinRateProfile: none
            Priority:       10

8.3. An Example TCB to Support Multiple Customers

The TCB described above can be installed on an ingress interface to
implement a provider/customer TCS if the interface is dedicated to
the customer. However, if a single interface is shared between
multiple customers, then the TCB above will not suffice, since it
does not differentiate among traffic from different customers: its
classification stage uses only BA classifiers.
The configuration is readily modified to support the case of multiple
customers per interface, as follows. First, a TCB is defined for each
customer to reflect the TCS with that customer: TCB1, defined above,
is the TCB for customer 1. Similar elements are created for TCB2 and
for TCB3, which reflect the agreements with customers 2 and 3
respectively. These three TCBs may or may not contain similar
elements and parameters.

Finally, a classifier is added to the front end to separate the
traffic from the three different customers. This forms a new TCB,
TCB4, which is illustrated in Figure 8.

A representation of this multi-customer TCB might be:

    TCB4:

    Classifier4:
        Filter1:  to TCB1
        Filter2:  to TCB2
        Filter3:  to TCB3
        No Match: AbsoluteDropper4

    AbsoluteDropper4:
        Type: AbsoluteDropper

    TCB1:
        (as defined above)

    TCB2:
        (similar to TCB1, perhaps with different elements or
        numeric parameters)

    TCB3:
        (similar to TCB1, perhaps with different elements or
        numeric parameters)

   submitted  +-----+
   traffic    |    A|--------> TCB1
   ---------->|    B|--------> TCB2
              |    C|--------> TCB3
              |    X|-----+    +-----+
              +-----+     +--->|     |
             Classifier4       +-----+
                            AbsoluteDropper4

          Figure 8: An Example of a Multi-Customer TCB

and the filters, based on each customer's source MAC address, could
be defined as follows:

    Filter1:
        Type:      MacAddress
        SrcValue:  01-02-03-04-05-06 (source MAC address of
                   customer 1)
        SrcMask:   FF-FF-FF-FF-FF-FF
        DestValue: 00-00-00-00-00-00
        DestMask:  00-00-00-00-00-00

    Filter2:
        (similar to Filter1 but with customer 2's source MAC
        address as SrcValue)

    Filter3:
        (similar to Filter1 but with customer 3's source MAC
        address as SrcValue)

In this example, Classifier4 separates traffic submitted from
different customers based on the source MAC address in submitted
packets. Those packets with recognized source MAC addresses are
passed to the TCB implementing the TCS with the corresponding
customer. Those packets with unrecognized source MAC addresses are
passed to a dropper. TCB4 thus consists of a Classifier stage and an
Action element stage which drops all unmatched traffic.

8.4. TCBs Supporting Microflow-based Services

The TCB illustrated above describes a configuration that might be
suitable for enforcing an SLS at a router's ingress. It assumes that
the customer marks its own traffic for the appropriate service level.
It then limits the rate of aggregate traffic submitted at each
service level, thereby protecting the resources of the Diffserv
network. It does not provide any isolation between the customer's
individual microflows.

A more complex example might be a TCB configuration that offers
additional functionality to the customer. It recognizes individual
customer microflows and marks each one independently. It also
isolates the customer's individual microflows from each other in
order to prevent a single microflow from seizing an unfair share of
the resources available to the customer at a certain service level.
This is illustrated in Figure 9.

Suppose that the customer has an SLS which specifies two service
levels, to be identified to the provider by DSCP A and DSCP B.
Traffic is first directed to an MF classifier which classifies
traffic based on miscellaneous classification criteria, to a
granularity sufficient to identify individual customer microflows.
Each microflow can then be marked for a specific DSCP. The metering
elements limit the contribution of each of the customer's microflows
to the service level for which it was marked. Packets exceeding the
allowable limit for the microflow are dropped.

                  +-----+    +-----+
               +->|     |--->|     |------------------+
               |  |     |    |     |--+               |
               |  +-----+    +-----+  |   +-----+     |
               |  Marker1    Meter1   +-->|     |     |
   +-----+     |                          +-----+     |
   |    A|-----+                    AbsoluteDropper1  |
-->|    B|-----+                                      |   +-----+
   |    C|---+ |  +-----+    +-----+                  +-->|A    |
   |    X|-+ | +->|     |--->|     |--------------------->|B    |---> to TCB2
   +-----+ | |    |     |    |     |--+               +-->|C    |
Classifier1| |    +-----+    +-----+  |   +-----+     |   +-----+
   (MF)    | |    Marker2    Meter2   +-->|     |     |    Mux1
           | |                            +-----+     |
           | |                      AbsoluteDropper2  |
           | |    +-----+    +-----+                  |
           | +--->|     |--->|     |------------------+
           |      |     |    |     |--+
           |      +-----+    +-----+  |   +-----+
           |      Marker3    Meter3   +-->|     |
           |                              +-----+
           |                        AbsoluteDropper3
           v
          etc.

      Figure 9: An Example of a Marking and Traffic Isolation TCB

This TCB could be formally specified as follows:

    TCB1:

    Classifier1: (MF)
        FilterA: Marker1
        FilterB: Marker2
        FilterC: Marker3
        etc.

    Marker1:
        Output: Meter1

    Marker2:
        Output: Meter2

    Marker3:
        Output: Meter3

    Meter1:
        ConformingOutput:    Mux1.InputA
        NonConformingOutput: AbsoluteDropper1

    Meter2:
        ConformingOutput:    Mux1.InputB
        NonConformingOutput: AbsoluteDropper2

    Meter3:
        ConformingOutput:    Mux1.InputC
        NonConformingOutput: AbsoluteDropper3

    etc.

    Mux1:
        Output: to TCB2

Note that the detailed traffic element declarations are not shown
here. Traffic is either dropped by TCB1 or emerges marked for one of
two DSCPs. This traffic is then passed to TCB2, which is illustrated
in Figure 10.

                  +-----+
                  |     |----------------> to Queue1
               +->|     |    +-----+
               |  |     |--->|     |
   +-----+     |  +-----+    +-----+
   |    A|-----+  Meter5     AbsoluteDropper4
-->|     |
   |    B|-----+  +-----+
   +-----+     |  |     |----------------> to Queue2
  Classifier2  +->|     |    +-----+
     (BA)         |     |--->|     |
                  +-----+    +-----+
                  Meter6     AbsoluteDropper5

               Figure 10: Additional Example: TCB2

TCB2 could then be specified as follows:

    Classifier2: (BA)
        FilterA: Meter5
        FilterB: Meter6

    Meter5:
        ConformingOutput:    Queue1
        NonConformingOutput: AbsoluteDropper4

    Meter6:
        ConformingOutput:    Queue2
        NonConformingOutput: AbsoluteDropper5

8.5. Cascaded TCBs

Nothing in this model prevents more complex scenarios in which one
microflow TCB precedes another (e.g. for TCBs implementing separate
TCSs for the source and for a set of destinations).

9. Security Considerations

Security vulnerabilities of Diffserv network operation are discussed
in [DSARCH]. This document describes an abstract functional model of
Diffserv router elements. Certain denial-of-service attacks, such as
those resulting from resource starvation, may be mitigated by
appropriate configuration of these router elements; for example, by
rate-limiting certain traffic streams or by authenticating traffic
marked for higher quality-of-service.
One particular theft- or denial-of-service issue may arise where a
token-bucket meter, with an absolute dropper for non-conforming
traffic, is used in a TCB to police a stream to a given TCS: the
definition of the token-bucket meter in section 5 indicates that it
should be lenient in accepting a packet whenever any bits of the
packet would have been within the profile; the definition of the
leaky-bucket scheduler is conservative, in that a packet is to be
transmitted only if the whole packet fits within the profile. This
difference may be exploited by a malicious scheduler, either to
obtain QoS treatment for more octets than allowed in the TCS or to
disrupt (perhaps only slightly) the QoS guarantees promised to other
traffic streams.

10. Acknowledgments

Concepts, terminology, and text have been borrowed liberally from
[POLTERM], as well as from other IETF work on MIBs and
policy-management. We wish to thank the authors of some of those
documents: Fred Baker, Michael Fine, Keith McCloghrie, John Seligson,
Kwok Chan, Scott Hahn and Andrea Westerinen for their contributions.

This document has benefitted from the comments and suggestions of
several participants of the Diffserv working group, particularly John
Strassner and Walter Weiss.

11. References

[AF-PHB]
    J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, "Assured
    Forwarding PHB Group", RFC 2597, June 1999.

[DSARCH]
    M. Carlson, W. Weiss, S. Blake, Z. Wang, D. Black and E. Davies,
    "An Architecture for Differentiated Services", RFC 2475,
    December 1998.

[DSFIELD]
    K. Nichols, S. Blake, F. Baker and D. Black, "Definition of the
    Differentiated Services Field (DS Field) in the IPv4 and IPv6
    Headers", RFC 2474, December 1998.

[DSMIB]
    F. Baker, A. Smith and K. Chan, "Differentiated Services MIB",
    Internet Draft, November 2000.

[E2E]
    Y. Bernet, R. Yavatkar, P. Ford, F. Baker, L. Zhang, M. Speer,
    K. Nichols, R. Braden, B. Davie, J. Wroclawski and E. Felstaine,
    "Integrated Services Operation over Diffserv Networks", Internet
    Draft, March 2000.

[EF-PHB]
    V. Jacobson, K. Nichols and K. Poduri, "An Expedited Forwarding
    PHB", RFC 2598, June 1999.

[FLOYD]
    S. Floyd, "General Load Sharing", 1993.

[GTC]
    L. Lin, J. Lo and F. Ou, "A Generic Traffic Conditioner",
    Internet Draft, August 1999.

[INTSERV]
    R. Braden, D. Clark and S. Shenker, "Integrated Services in the
    Internet Architecture: an Overview", RFC 1633, June 1994.

[POLTERM]
    A. Westerinen et al., "Policy Terminology", Internet Draft,
    July 2000.

[QOSDEVMOD]
    J. Strassner, A. Westerinen and B. Moore, "Information Model for
    Describing Network Device QoS Mechanisms", Internet Draft,
    July 2000.

[QUEUEMGMT]
    B. Braden et al., "Recommendations on Queue Management and
    Congestion Avoidance in the Internet", RFC 2309, April 1998.

[RFC 791]
    J. Postel, "Internet Protocol", STD 5, RFC 791, September 1981.

[SRTCM]
    J. Heinanen and R. Guerin, "A Single Rate Three Color Marker",
    RFC 2697, September 1999.

[TRTCM]
    J. Heinanen and R. Guerin, "A Two Rate Three Color Marker",
    RFC 2698, September 1999.

[VIC]
    S. McCanne and V. Jacobson, "vic: A Flexible Framework for
    Packet Video", ACM Multimedia '95, November 1995, San Francisco,
    CA, pp. 511-522.
[802.1D]
    "Information technology - Telecommunications and information
    exchange between systems - Local and metropolitan area networks -
    Common specifications - Part 3: Media Access Control (MAC)
    Bridges: Revision. This is a revision of ISO/IEC 10038: 1993,
    802.1j-1992 and 802.6k-1992. It incorporates P802.11c, P802.1p
    and P802.12e.", ISO/IEC 15802-3: 1998.

12. Appendix A. Discussion of Token Buckets and Leaky Buckets

The concept used for rate-control in several architectures, including
ATM, Frame Relay, Integrated Services and Differentiated Services,
consists of "leaky buckets" and/or "token buckets". Both of these
are, by definition, theoretical relationships between some defined
burst_size, rate and interval:

    rate = burst_size/interval

Thus, a token bucket or leaky bucket might specify an information
rate of 1.2 Mbps with a burst size of 1500 bytes. In this case, the
token rate is 1,200,000 bits per second, the token burst is 12,000
bits and the token interval is 10 milliseconds. The specification
says that conforming traffic will, in the worst case, come in 100
bursts per second of 1500 bytes each, at an average rate not
exceeding 1.2 Mbps.

A.1 Leaky Buckets

A leaky bucket algorithm is primarily used for shaping traffic as it
leaves an interface onto the network (handled under Queues and
Schedulers in this model). Traffic theoretically departs from an
interface at a rate of one bit every so many time units (in the
example, one bit every 0.83 microseconds) but, in fact, departs in
multi-bit units (packets) at a rate approximating the theoretical
one, as measured over a longer interval. In the example, it might
send one 1500-byte packet every 10 ms or perhaps one 500-byte packet
every 3.3 ms. It is also possible to build multi-rate leaky buckets
in which traffic departs from the interface at varying rates
depending on recent activity or inactivity.

Implementations generally seek as constant a transmission rate as is
achievable. In theory, a 10 Mbps shaped transmission stream from an
algorithmic implementation and a stream which is running at 10 Mbps
because its bottleneck link has been a 10 Mbps Ethernet link should
be indistinguishable. Depending on configuration, the approximation
to theoretical smoothness may vary by moving as much as an MTU from
one token interval to another. Traffic may also be jostled by other
traffic competing for the same transmission resources.

A.2 Token Buckets

A token bucket, on the other hand, measures the arrival rate of
traffic from another device. This traffic may originally have been
shaped using a leaky bucket shaper or its equivalent. The token
bucket determines whether the traffic (still) conforms to the
specification. Multi-rate token buckets (e.g. token buckets with both
a peak rate and a mean rate, and sometimes more) are commonly used,
such as those described in [SRTCM] and [TRTCM]. In this case,
absolute smoothness is not expected, but conformance to one or more
of the specified rates is.

Simplistically, a data stream is said to conform to a simple token
bucket parameterised by {rate, burst_size} if the system receives, in
any time interval t, an amount of data not exceeding
(rate * t) + burst_size.
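This conformance test translates directly into code. The
non-normative Python sketch below implements the two-parameter token
bucket in both the "strict" and the "loose" interpretations discussed
in section A.3 below; units and names are illustrative only:

    class TokenBucket:
        # Two-parameter token bucket {rate, burst_size}.  'strict'
        # accepts a packet only if every byte fits the profile;
        # 'loose' accepts it if any byte would fit, borrowing up to
        # one MTU from subsequent intervals (see A.3).
        def __init__(self, rate, burst_size, strict=True):
            self.rate = rate           # token accumulation, bytes/s
            self.burst = burst_size    # bucket size BS, bytes
            self.strict = strict
            self.tokens = burst_size   # bucket starts full: T(0)=BS
            self.last = 0.0

        def conforms(self, size, now):
            # Refill:  T(t-) = min(BS, T(t') + v*R)
            self.tokens = min(
                self.burst,
                self.tokens + (now - self.last) * self.rate)
            self.last = now
            ok = (self.tokens >= size if self.strict
                  else self.tokens > 0)
            if ok:
                self.tokens -= size    # may go negative when loose
            return ok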
For the multi-rate token bucket case, the data stream is said to
conform if, for each of the rates, the stream conforms to the
token-bucket profile appropriate for traffic of that class. For
example, received traffic that arrives pre-classified as one of the
"excess" rates (e.g. AF12 or AF13 traffic for a device implementing
the AF1x PHB) is only compared to the relevant "excess" token bucket
profile.

A.3 Some consequences

When used as a leaky bucket shaper, the above definition interacts
with clock granularity in ways one might not expect. A leaky bucket
releases a packet only when all of its bits would have been allowed:
it does not borrow from future capacity. If the clock is very
fine-grained, on the order of the bit rate or faster, this is not an
issue. But if the clock is relatively slow (and millisecond or
multi-millisecond clocks are not unusual in networking equipment),
this can introduce jitter to the shaped stream.

The fact that data is organized into variable-length packets
introduces some uncertainty into the conformance decision made by a
downstream Meter that is attempting to determine conformance to a
traffic profile. Theoretically, in this case, a token bucket accepts
a packet only if all of its bits would have been accepted and does
not borrow the required excess capacity from future capacity; this is
referred to as a "strict" token bucket and is consistent with [SRTCM]
and [TRTCM]. In real-world deployment, however, where MTUs are often
larger than the burst size offered by a link-layer network service
provider and where TCP is more commonly ACK-paced than shaped using a
leaky bucket, a "loose" or "lenient" token bucket definition, which
would accept a packet if any of its bits were within the profile,
offers a solution to the practical problems that may arise from use
of a strict meter.

Internet Protocol (IP) packets are of variable length, but
theoretical token buckets operate using fixed-length time intervals
or pieces of data. This leaves an implementor of a token bucket
scheme with a dilemma. When the amount of bandwidth tokens, TB, left
in the token bucket is positive but less than the size of the packet
being operated on, one of three things can be done:

(1) The whole size of the packet can be subtracted from the bucket,
    leaving it negative, remembering that the token bucket size must
    be added to TB rather than TB simply being set to "full". This
    potentially puts more than the token bucket size into this token
    bucket interval and less into the next. It does, however, make
    the average amount accepted per token bucket interval equal to
    the token burst. This approach accepts traffic if any bit in the
    packet would be accepted and borrows up to one MTU of capacity
    from one or more subsequent intervals when necessary. Such a
    token bucket implementation is said to be a "loose" token
    bucket.

(2) Alternatively, the amount can be left unchanged (and maybe an
    attempt could be made to accept the packet under another
    threshold in another bucket), again remembering that the token
    bucket size must be added to the TB variable rather than TB
    simply being set to "full". This potentially puts less than the
    token bucket size into this token bucket interval and more into
    the next.
    Like the first option, it makes the average amount accepted per
    token bucket interval equal to the token burst. This approach
    accepts traffic if every bit in the packet would be accepted and
    borrows up to one MTU of capacity from one or more previous
    intervals when necessary. Such a token bucket implementation is
    said to be a "strict" (or perhaps "stricter") token bucket.

(3) The TB variable can be set to zero to account for the first part
    of the packet and the remainder of the packet size can be taken
    out of the next-colored bucket. This, of course, has another
    bug: the same packet cannot have both conforming and
    non-conforming components in the Diffserv architecture, and so
    this option is not really appropriate here.

Unfortunately, the one thing that cannot be done is to exactly fit
the token burst specification with randomly sized packets: token
buckets in a variable-length packet environment therefore always show
some variance from the theoretical ideal. This has also been observed
for the ATM Guaranteed Frame Rate (GFR) service category
specification and in Frame Relay.

Some find the behavior of a "loose" token bucket unacceptable, as it
is significantly different from the token bucket description for ATM
and for Frame Relay. However, the "strict" token bucket approach has
three characteristics which are important to keep in mind:

(1) First, if the maximum token burst is smaller than the MTU, it is
    possible that traffic never matches the specification. This may
    be avoided by not allowing such a specification.

(2) Second, the strict token bucket specifications [SRTCM] and
    [TRTCM], as specified, are subject to a persistent under-run.
    These specifications accumulate burst capacity over time, up to
    the maximum burst size. Suppose that the maximum burst size is
    exactly the size of the packets being sent - which one might
    call the "strictest" token bucket implementation. In such a
    case, when one packet has been accepted, the token depth becomes
    zero and starts to accumulate again. If the next packet is
    received any time earlier than a full token interval later, it
    will not be accepted. If the next packet arrives exactly on
    time, it will be accepted and the token depth again set to zero.
    If it arrives later, however, the token depth will have stopped
    accumulating, as it is capped by the maximum burst size, and the
    tokens that would have accumulated between the end of that token
    interval and the actual arrival of the packet are lost. As a
    result, natural jitter in the network conspires against the
    algorithm to reduce the actual acceptance rate. Overcoming this
    error requires the maximum token bucket size to be significantly
    greater than the MTU.

(3) Third, operationally, a strict token bucket is reasonable for
    traffic which has been shaped by a leaky bucket shaper or a
    serial line. However, traffic in the Internet is rarely shaped
    in that way. TCP applies no shaping to its traffic, but rather
    depends on longer-range ACK-clocking behavior to help it
    approximate a certain rate and explicitly sends traffic bursts
    during slow start, retransmission and fast recovery. Video-on-IP
    implementations such as [VIC] may have a leaky bucket shaper
    available to them, but often do not, and simply enqueue the
    output of their codec for transmission on the appropriate
    interface.
    As a result, in each of these cases, a strict token bucket may
    reject traffic in the short term (a single token interval) which
    it would have accepted had it taken a longer time interval into
    view, and which it needs to accept for the application to work
    properly. To work around this, the token interval must
    approximate or exceed the RTT of the session or sessions in
    question and the burst size must accommodate the largest burst
    that the originator might send.

A.4 Mathematics

The behavior defined in [SRTCM] and [TRTCM] is not mandatory for
compliance, but we give here a mathematical definition of
two-parameter token bucket operation which is consistent with those
documents and which can also be used to define a shaping profile.

Define a token bucket with bucket size BS, token accumulation rate R
and instantaneous token occupancy T(t). Assume that T(0) = BS.

Then, after an arbitrary interval with no packet arrivals, T(t) will
not change since the bucket is already full of tokens. Assume a
packet of size B bytes arrives at time t'. The bucket occupancy is
still T(t'-) = BS. Then, as long as B <= BS, the packet conforms to
the meter, and afterwards

    T(t') = BS - B.

Assume now that an interval v = t - t' elapses before the next
packet, of size C <= BS, arrives. T(t-) is given by the following
equation:

    T(t-) = min { BS, T(t') + v*R }

(the bucket refills at rate R, up to a maximum of BS tokens).

If T(t-) - C >= 0, the packet conforms and T(t) = T(t-) - C.
Otherwise, the packet does not conform and T(t) = T(t-).

This function can be used to define a shaping profile. If a packet of
size C arrives at time t, it will be eligible for transmission at a
time te given as follows (we still assume C <= BS):

    te = max { t, t" }

where t" = (C - T(t') + t'*R)/R is the time at which T(t") = C, i.e.
the time when C credits have accumulated in the bucket and when the
packet would conform if the token bucket were a meter. te != t" only
if t > t".

13. Authors' Addresses

Yoram Bernet
Microsoft
One Microsoft Way
Redmond, WA  98052
Phone: +1 425 936 9568
E-mail: yoramb@microsoft.com

Steven Blake
Ericsson
920 Main Campus Drive, Suite 500
Raleigh, NC  27606
Phone: +1 919 472 9913
E-mail: slblake@torrentnet.com

Daniel Grossman
Motorola Inc.
20 Cabot Blvd.
Mansfield, MA  02048
Phone: +1 508 261 5312
E-mail: dan@dma.isg.mot.com

Andrew Smith (editor)
Allegro Networks
6399 San Ignacio Ave.
San Jose, CA  95119
FAX: +1 415 345 1827
E-mail: andrew@allegronetworks.com

Table of Contents

1 Introduction
2 Glossary
3 Conceptual Model
3.1 Components of a Diffserv Router
3.1.1 Datapath
3.1.2 Configuration and Management Interface
3.1.3 Optional QoS Agent Module
3.2 Diffserv Functions at Ingress and Egress
3.3 Shaping and Policing
3.4 Hierarchical View of the Model
4 Classifiers
4.1 Definition
11 2364 4.1.1 Filters ..................................................... 13 2365 4.1.2 Overlapping Filters ......................................... 13 2366 4.2 Examples ...................................................... 15 2367 4.2.1 Behaviour Aggregate (BA) Classifier ......................... 15 2368 4.2.2 Multi-Field (MF) Classifier ................................. 15 2369 4.2.3 Free-form Classifier ........................................ 16 2370 4.2.4 Other Possible Classifiers .................................. 16 2371 5 Meters .......................................................... 17 2372 5.1 Examples ...................................................... 18 2373 5.1.1 Average Rate Meter .......................................... 18 2374 5.1.2 Exponential Weighted Moving Average (EWMA) Meter ............ 19 2375 5.1.3 Two-Parameter Token Bucket Meter ............................ 19 2376 5.1.4 Multi-Stage Token Bucket Meter .............................. 20 2377 5.1.5 Null Meter .................................................. 21 2378 6 Action Elements ................................................. 21 2379 6.1 DSCP Marker ................................................... 22 2380 6.2 Absolute Dropper .............................................. 22 2381 6.3 Multiplexor ................................................... 22 2382 6.4 Counter ....................................................... 23 2383 6.5 Null Action ................................................... 23 2384 7 Queueing Elements ............................................... 23 2385 7.1 Queueing Model ................................................ 24 2386 7.1.1 FIFO Queue .................................................. 24 2387 7.1.2 Scheduler ................................................... 25 2388 7.1.3 Algorithmic Dropper ......................................... 27 2389 7.2 Sharing load among traffic streams using queueing ............. 31 2390 7.2.1 Load Sharing ................................................ 31 2391 7.2.2 Traffic Priority ............................................ 32 2392 8 Traffic Conditioning Blocks (TCBs) .............................. 32 2393 8.1 TCB ........................................................... 33 2394 8.1.1 Building blocks for Queueing ................................ 34 2395 8.2 An Example TCB ................................................ 34 2396 8.3 An Example TCB to Support Multiple Customers .................. 39 2397 8.4 TCBs Supporting Microflow-based Services ...................... 41 2398 8.5 Cascaded TCBs ................................................. 44 2399 9 Security Considerations ......................................... 45 2400 10 Acknowledgments ................................................ 45 2401 11 References ..................................................... 45 2402 12 Appendix A. Discussion of Token Buckets and Leaky Buckets ...... 47 2403 13 Authors' Addresses ............................................. 52 2404 14. Full Copyright 2406 Copyright (C) The Internet Society (2000). All Rights Reserved. 2408 This document and translations of it may be copied and furnished to 2409 others, and derivative works that comment on or otherwise explain it 2410 or assist in its implmentation may be prepared, copied, published and 2411 distributed, in whole or in part, without restriction of any kind, 2412 provided that the above copyright notice and this paragraph are 2413 included on all such copies and derivative works. 
However, this document itself may not be modified in any way, such as
by removing the copyright notice or references to the Internet
Society or other Internet organizations, except as needed for the
purpose of developing Internet standards in which case the procedures
for copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.

The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.