1 Internet Engineering Task Force K. Nichols 2 Differentiated Services Working Group Packet Design 3 Internet Draft B. Carpenter 4 Expires in June, 2001 IBM 5 draft-ietf-diffserv-pdb-def-02 December, 2000
7 Definition of Differentiated Services Per Domain Behaviors 8 and Rules for their Specification
12 Status of this Memo
14 This document is an Internet-Draft and is in full conformance with 15 all provisions of Section 10 of RFC2026. Internet-Drafts are working 16 documents of the Internet Engineering Task Force (IETF), its areas, 17 and its working groups. Note that other groups may also distribute 18 working documents as Internet-Drafts.
20 Internet-Drafts are draft documents valid for a maximum of six months 21 and may be updated, replaced, or obsoleted by other documents at any 22 time. It is inappropriate to use Internet-Drafts as reference material 23 or to cite them other than as "work in progress."
25 This document is a product of the Diffserv working group. Comments 26 on this draft should be directed to the Diffserv mailing list. 27 The list of current Internet-Drafts can be accessed 28 at www.ietf.org/ietf/1id-abstracts.txt. The list of Internet-Draft 29 Shadow Directories can be accessed at www.ietf.org/shadow.html. 30 Distribution of this memo is unlimited.
32 Copyright Notice
34 Copyright (C) The Internet Society (2000). All Rights Reserved.
36 Abstract
38 The differentiated services framework enables quality-of-service 39 provisioning within a network domain by applying rules at the edges 40 to create traffic aggregates and coupling each of these with a 41 specific forwarding path treatment in the domain through use of 42 a codepoint in the IP header, as defined in RFC 2474.
The diffserv WG has defined 43 the general architecture for differentiated services in RFC 2475 44 and has focused on the forwarding path behavior required in routers, 45 known as "per-hop forwarding behaviors" (or PHBs), defined in RFCs 46 2474, 2597, and 2598. The WG has also discussed functionality required 47 at diffserv (DS) domain edges to select (classifiers) and condition 48 (e.g., policing and shaping) traffic according to the rules, as described 49 in RFC 2475 and in the WG's informal router model and MIB documents. Short-term changes in the QoS goals for a DS domain 50 are implemented by changing only the configuration of these edge 51 behaviors without necessarily reconfiguring the behavior of interior 52 network nodes.
54 The next step is to formulate examples of how forwarding path components 55 (PHBs, classifiers, and traffic conditioners) can be used to compose 56 traffic aggregates whose packets experience specific forwarding 57 characteristics as they transit a differentiated services domain. 58 The WG has decided to use the term per-domain behavior, or PDB, 59 to describe the behavior experienced by a particular set of packets 60 as they cross a DS domain. A PDB is characterized by specific metrics 61 that quantify the treatment a set of packets with a particular 62 DSCP (or set of DSCPs) will receive as it crosses a DS domain. 63 A PDB specifies a forwarding path treatment for a traffic aggregate 64 and, due to the role that particular choices of edge and PHB 65 configuration play in its resulting attributes, it is where the 66 forwarding path and the control plane interact. The measurable 67 parameters of a PDB should be suitable for use in Service Level 68 Specifications at the network edge.
70 This document defines and discusses Per-Domain Behaviors in detail 71 and lays out the format and required content for contributions 72 to the Diffserv WG on PDBs and the procedure that will be applied 73 for individual PDB specifications to advance as WG products. This 74 format is specified to expedite working group review of PDB submissions.
76 A pdf version of this document is available at: 77 www.packetdesign.com/ietf/diffserv/pdb_def.pdf.
79 1 Introduction
81 Differentiated Services allows an approach to IP Quality of Service 82 that is modular, incrementally deployable, and scalable while 83 introducing minimal per-node complexity [RFC2475]. From the end user's 84 point of view, QoS should be supported end-to-end between any pair of 85 hosts. However, this goal is not immediately attainable. It will 86 require interdomain QoS support, and many untaken steps remain 87 on the road to achieving this. One essential step, the evolution 88 of the business models for interdomain QoS, will necessarily develop 89 outside of the IETF. A goal of the diffserv WG is to provide the 90 firm technical foundation that allows these business models to 91 develop. The first major step will be to support edge-to-edge or 92 intradomain QoS between the ingress and egress of a single network, 93 i.e., a DS Domain in the terminology of RFC 2474. The intention 94 is that this edge-to-edge QoS should be composable, in a purely 95 technical sense, to a quantifiable QoS across a DS Region composed 96 of multiple DS domains.
98 The Diffserv WG has finished the first phase of standardizing the 99 behaviors required in the forwarding path of all network nodes, 100 the per-hop forwarding behaviors or PHBs. The PHBs defined in RFCs 101 2474, 2597 and 2598 give a rich toolbox for differential packet 102 handling by individual boxes.
The general architectural model for 103 diffserv has been documented in RFC 2475. An informal router model 104 [MODEL] describes a model of traffic conditioning and other forwarding 105 behaviors. However, technical issues remain in moving "beyond the 106 box" to intradomain QoS models. 108 The ultimate goal of creating scalable end-to-end QoS in the Internet 109 requires that we can identify and quantify behavior for a group 110 of packets that is preserved when they are aggregated with other 111 packets as they traverse the Internet. The step of specifying forwarding 112 path attributes on a per-domain basis for a set of packets distinguished 113 only by the mark in the DS field of individual packets is critical 114 in the evolution of Diffserv QoS and should provide the technical 115 input that will aid in the construction of business models. This 116 document defines and specifies the term "Per-Domain Behavior" or 117 PDB to describe QoS attributes across a DS domain. 119 Diffserv classification and traffic conditioning are applied to 120 packets arriving at the boundary of a DS domain to impose restrictions 121 on the composition of the resultant traffic aggregates, as distinguished 122 by the DSCP marking , inside the domain. The classifiers and traffic 123 conditioners are set to reflect the policy and traffic goals for 124 that domain and may be specified in a TCA (Traffic Conditioning 125 Agreement). Once packets have crossed the DS boundary, adherence 126 to diffserv principles makes it possible to group packets solely 127 according to the behavior they receive at each hop (as selected 128 by the DSCP). This approach has well-known scaling advantages, 129 both in the forwarding path and in the control plane. Less well 130 recognized is that these scaling properties only result if the 131 per-hop behavior definition gives rise to a particular type of 132 invariance under aggregation. Since the per-hop behavior must be 133 equivalent for every node in the domain, while the set of packets 134 marked for that PHB may be different at every node, PHBs should 135 be defined such that their characteristics do not depend on the 136 traffic volume of the associated BA on a router's ingress link 137 nor on a particular path through the DS domain taken by the packets. 138 Specifically, different streams of traffic that belong to the same 139 traffic aggregate merge and split as they traverse the network. 140 If the properties of a PDB using a particular PHB hold regardless 141 of how the temporal characteristics of the marked traffic aggregate 142 change as it traverses the domain, then that PDB scales. (Clearly 143 this assumes that numerical parameters such as bandwidth allocated 144 to the particular PDB may be different at different points in the 145 network, and may be adjusted dynamically as traffic volume varies.) 146 If there are limits to where the properties hold, that translates 147 to a limit on the size or topology of a DS domain that can use 148 that PDB. Although useful single-link DS domains might exist, PDBs 149 that are invariant with network size or that have simple relationships 150 with network size and whose properties can be recovered by reapplying 151 rules (that is, forming another diffserv boundary or edge to re-enforce 152 the rules for the traffic aggregate) are needed for building scalable 153 end-to-end quality of service. 
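To give a feel for how temporal characteristics can change under such merging, the toy sketch below (Python; the burst value P, unit packet sizes, and single shared interior link are assumptions made only for this illustration and are not part of any PDB definition) merges two streams that each conform to an edge rule permitting bursts of P back-to-back packets and shows the interior queue seeing a run of roughly 2 x P packets. Section 6.2 works through this burst-accumulation issue in more detail.

    # Toy illustration only: burst growth when two edge-conformant streams merge.
    # Each stream is allowed a burst of P back-to-back (unit-size) packets by its
    # edge conditioner; after merging onto one interior link, the worst-case run
    # of queued packets is roughly 2 * P, so attributes stated for bursts of P
    # at the edge do not automatically hold in the interior.

    def max_backlog(arrival_times, service_interval):
        """Worst-case queue depth for unit-size packets served one per interval."""
        backlog, worst, next_departure = 0, 0, None
        for t in sorted(arrival_times):
            # Drain any packets that finished service before this arrival.
            while backlog and next_departure <= t:
                backlog -= 1
                next_departure += service_interval
            backlog += 1
            if backlog == 1:
                next_departure = t + service_interval
            worst = max(worst, backlog)
        return worst

    P = 5                              # burst allowed per stream (assumed)
    stream_a = [0.0] * P               # P packets arriving back to back
    stream_b = [0.0] * P               # a second stream bursting at the same time

    print(max_backlog(stream_a, 1.0))             # about P for one stream
    print(max_backlog(stream_a + stream_b, 1.0))  # about 2 * P after the merge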
155 There is a clear distinction between the definition of a Per-Domain 156 Behavior in a DS domain and a service that might be specified in 157 a Service Level Agreement. The PDB definition is a technical building 158 block that permits the coupling of classifiers, traffic conditioners, 159 specific PHBs, and particular configurations with a resulting set 160 of specific observable attributes which may be characterized in 161 a variety of ways. These definitions are intended to be useful 162 tools in configuring DS domains, but the PDB (or PDBs) used by 163 a provider is not expected to be visible to customers any more 164 than the specific PHBs employed in the provider's network would 165 be. Network providers are expected to select their own measures 166 to make customer-visible in contracts and these may be stated quite 167 differently from the technical attributes specified in a PDB definition, 168 though the configuration of a PDB might be taken from a Service 169 Level Specification (SLS). Similarly, specific PDBs are intended 170 as tools for ISPs to construct differentiated services offerings; 171 each may choose different sets of tools, or even develop their 172 own, in order to achieve particular externally observable metrics. 173 Nevertheless, the measurable parameters of a PDB are expected to 174 be among the parameters cited directly or indirectly in the Service 175 Level Specification component of a corresponding SLA.
177 This document defines Differentiated Services Per-Domain Behaviors 178 and specifies the format that must be used for submissions of particular 179 PDBs to the Diffserv WG.
181 2 Definitions
183 The following definitions are stated in RFCs 2474 and 2475 and are 184 repeated here for easy reference:
186 o Behavior Aggregate: a collection of packets with the same codepoint 187 crossing a link in a particular direction.
189 o Differentiated Services Domain: a contiguous portion of the Internet 190 over which a consistent set of differentiated services policies 191 are administered in a coordinated fashion. A differentiated services 192 domain can represent different administrative domains or autonomous 193 systems, different trust regions, different network technologies 194 (e.g., cell/frame), hosts and routers, etc. Also DS domain.
196 o Differentiated Services Boundary: the edge of a DS domain, where 197 classifiers and traffic conditioners are likely to be deployed. 198 A differentiated services boundary can be further sub-divided into 199 ingress and egress nodes, where the ingress/egress nodes are the 200 downstream/upstream nodes of a boundary link in a given traffic 201 direction. A differentiated services boundary typically is found 202 at the ingress to the first-hop differentiated services-compliant 203 router (or network node) that a host's packets traverse, or at 204 the egress of the last-hop differentiated services-compliant router 205 or network node that packets traverse before arriving at a host. 206 This is sometimes referred to as the boundary at a leaf router. 207 A differentiated services boundary may be co-located with a host, 208 subject to local policy. Also DS boundary.
210 To these we add:
212 o Traffic Aggregate: a collection of packets with a codepoint that 213 maps to the same PHB, usually in a DS domain or some subset of 214 a DS domain. A traffic aggregate marked for the foo PDB is referred 215 to as the "foo traffic aggregate" or "foo aggregate" interchangeably.
216 This generalizes the concept of Behavior Aggregate from a link 217 to a network.
219 o Per-Domain Behavior: the expected treatment that an identifiable 220 or target group of packets will receive from "edge-to-edge" of 221 a DS domain. (Also PDB.) A particular PHB (or, if applicable, list 222 of PHBs) and traffic conditioning requirements are associated with 223 each PDB.
225 o A Service Level Specification (SLS) is a set of parameters and 226 their values which together define the service offered to a traffic 227 stream by a DS domain. It is expected to include specific values 228 or bounds for PDB parameters.
230 3 The Value of Defining Edge-to-Edge Behavior
232 As defined in section 2, a PDB describes the edge-to-edge behavior 233 across a DS domain's "cloud." Specification of the transit expectations 234 of packets matching a target for a particular diffserv behavior 235 across a DS domain will both assist in the deployment of single-domain 236 QoS and will help enable the composition of end-to-end, cross-domain 237 services. Networks of DS domains can be connected to create end-to-end 238 services by building on the PDB characteristics without regard 239 to the particular PHBs used. This level of abstraction makes it 240 easier to compose cross-domain services as well as making it possible 241 to hide details of a network's internals while exposing information 242 sufficient to enable QoS.
244 Today's Internet is composed of multiple independently administered 245 domains or Autonomous Systems (ASs), represented by the "clouds" 246 in figure 1. To deploy ubiquitous end-to-end quality of service 247 in the Internet, business models must evolve that include issues 248 of charging and reporting that are not in scope for the IETF. In 249 the meantime, there are many possible uses of quality of service 250 within an AS and the IETF can address the technical issues in creating 251 an intradomain QoS within a Differentiated Services framework. 252 In fact, this approach is quite amenable to incremental deployment 253 strategies.
255 ---------------------------------------- 256 | AS2 | 257 | | 258 ------- | ------------ ------------ | 259 | AS1 |------|-----X | | | | 260 ------- | | | Y | | ------- 261 | | | /| X----|--------| AS3 | 262 | | | / | | | ------- 263 | | | / ------------ | 264 | | Y | | 265 | | | \ ------------ | 266 ------- | | | \ | | | 267 | AS4 |------|-----X | \| | | 268 ------- | | | Y X----|------ 269 | | | | | | 270 | ------------ ------------ | 271 | | 272 | | 273 ----------------------------------------
275 Figure 1: Interconnection of ASs and DS Domains
277 Where DS domains are independently administered, the evolution of 278 the necessary business agreements and future signaling arrangements 279 will take some time; thus, early deployments will be within a single 280 administrative domain. Putting aside the business issues, the same 281 technical issues that arise in interconnecting DS domains with 282 homogeneous administration will arise in interconnecting the autonomous 283 systems (ASs) of the Internet.
285 A single AS (e.g., AS2 in figure 1) may be composed of subnetworks 286 and, as the definition allows, these can be separate DS domains. 287 An AS might have multiple DS domains for a number of reasons, the most 288 notable being to follow topological and/or technological boundaries 289 and to separate the allocation of resources.
If we confine ourselves 290 to the DS boundaries between these "interior" DS domains, we avoid 291 the non-technical problems of setting up a service and can address 292 the issues of creating characterizable PDBs.
294 The incentive structure for differentiated services is based on 295 upstream domains ensuring their traffic conforms to the Traffic 296 Conditioning Agreements (TCAs) with downstream domains and downstream 297 domains enforcing that TCA; thus, metrics associated with PDBs can 298 be sensibly computed. The rectangular boxes in figure 1 represent 299 the DS boundary routers containing traffic conditioners that ensure 300 and enforce conformance (e.g., shapers and policers). Although 301 policers and shapers are expected at the DS boundaries of ASs (dark 302 rectangles), they might appear anywhere, or nowhere, inside the 303 AS. Specifically, the boxes at the DS boundaries internal to the 304 AS (shaded rectangles) may or may not condition traffic. Technical 305 guidelines for the placement and configuration of DS boundaries 306 should derive from the attributes of a particular PDB under aggregation 307 and multiple hops.
309 This definition of PDB continues the separation of forwarding path 310 and control plane described in RFC 2474. The forwarding path 311 characteristics are addressed by considering how the behavior at every 312 hop of a packet's path is affected by the merging and branching of 313 traffic aggregates through multiple hops. Per-hop behaviors in nodes are 314 configured infrequently, representing a change in network 315 infrastructure. More frequent quality-of-service changes come from 316 employing control plane functions in the configuration of the DS 317 boundaries. A PDB provides a link between the DS domain level at which 318 control is exercised to form traffic aggregates with quality-of-service 319 goals across the domain and the per-hop and per-link treatments packets 320 receive that result in meeting the quality-of-service goals.
322 4 Understanding PDBs
324 4.1 Defining PDBs
326 RFCs 2474 and 2475 define a Differentiated Services Behavior Aggregate 327 as "a collection of packets with the same DS codepoint crossing 328 a link in a particular direction" and further state that packets 329 with the same DSCP get the same per-hop forwarding treatment (or 330 PHB) everywhere inside a single DS domain. Note that even if multiple 331 DSCPs map to the same PHB, this must hold for each DSCP individually. 332 In section 2 of this document, we introduced a more general definition 333 of a traffic aggregate in the diffserv sense so that we might easily 334 refer to the packets which are mapped to the same PHB everywhere 335 within a DS domain. Section 2 also presented a short definition 336 of PDBs which we expand upon in this section:
338 Per-Domain Behavior: the expected treatment that an identifiable 339 or target group of packets will receive from "edge to edge" of 340 a DS domain. A particular PHB (or, if applicable, list of PHBs) 341 and traffic conditioning requirements are associated with each 342 PDB.
344 Each PDB has measurable, quantifiable attributes that can be used 345 to describe what happens to its packets as they enter and cross 346 the DS domain. These derive from the characteristics of the traffic aggregate that results from application of classification and traffic 347 conditioning during the entry of packets into the DS domain and 348 the forwarding treatment (PHB) the packets get inside the domain, 349 but can also depend on the entering traffic loads and the domain's 350 topology. PDB attributes may be absolute or statistical and they 351 may be parameterized by network properties. For example, a loss 352 attribute might be expressed as "no more than 0.1% of packets will 353 be dropped when measured over any time period larger than T", and a 354 delay attribute might be expressed as "50% of delivered packets 355 will see less than a delay of d milliseconds, 30% will see a delay 356 less than 2d ms, 20% will see a delay of less than 3d ms." A wide 357 range of metrics is possible. In general they will be expressed 358 as bounds or percentiles rather than as absolute values.
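To make the flavor of such statistical attributes concrete, the short sketch below (Python) checks a set of hypothetical edge-to-edge measurements against bounds stated in the style above. The sample values, the parameter d, and the helper functions are assumptions introduced purely for illustration; they are not part of any PDB definition in this document.

    # Illustrative only: testing measured samples against statistical attributes.

    def loss_bound_holds(sent, delivered, max_loss_fraction):
        """True if the observed loss fraction over the measurement period is in bound."""
        return (sent - delivered) / sent <= max_loss_fraction

    def delay_percentile(delays_ms, fraction):
        """Delay (ms) below which the given fraction of delivered packets fall."""
        ordered = sorted(delays_ms)
        index = max(0, int(len(ordered) * fraction) - 1)
        return ordered[index]

    # Hypothetical measurements for one interval; d is the attribute's parameter.
    delays_ms = [2.1, 2.4, 2.8, 3.0, 3.9, 4.4, 5.6, 6.1, 7.9, 11.2]
    d = 4.0

    print(loss_bound_holds(sent=10_000, delivered=9_993, max_loss_fraction=0.001))
    print(delay_percentile(delays_ms, 0.50) < d)      # "50% see less than d ms"
    print(delay_percentile(delays_ms, 0.80) < 2 * d)  # a further 30% below 2d ms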
361 A PDB is applied to a target group of packets arriving at the edge 362 of the DS domain. The target group is distinguished from all arriving 363 packets by use of packet classifiers [RFC2475] (where the classifier 364 may be "null"). The action of the PDB on the target group has two 365 parts. The first part is the use of traffic conditioning to 366 create a traffic aggregate. During traffic conditioning, conformant 367 packets are marked with a DSCP for the PHB associated with the 368 PDB (see figure 2). The second part is the treatment experienced 369 by packets from the same traffic aggregate transiting the interior 370 of a DS domain, between and inside of DS domain boundaries. The 371 following subsections further discuss these two effects on the 372 target group that arrives at the DS domain boundary.
374 ----------- ------------ -------------------- foo 375 arriving _|classifiers|_|target group|_|traffic conditioning|_ traffic 376 packets | | |of packets | |& marking (for foo) | aggregate 377 ----------- ------------ --------------------
379 Figure 2: Relationship of the traffic aggregate associated with 380 a PDB to arriving packets
382 4.1.1 Crossing the DS edge: the effects of traffic conditioning on the 383 target group
385 This effect is quantified by the relationship of the emerging traffic 386 aggregate to the entering target group. That relationship can depend 387 on the arriving traffic pattern as well as the configuration of 388 the traffic conditioners. For example, if the EF PHB [RFC2598] 389 and a strict policer of rate R are associated with the foo PDB, 390 then the first part of characterizing the foo PDB is to write the 391 relationship between the arriving target packets and the departing 392 foo traffic aggregate. In this case, "the rate of the emerging 393 foo traffic aggregate is less than or equal to the smaller of R 394 and the arrival rate of the target group of packets" and additional 395 temporal characteristics of the packets (e.g., burst) may be specified 396 as desired. Thus, there is a "loss rate" on the arriving target 397 group that results from sending too much traffic or traffic 398 with the wrong temporal characteristics. This loss rate should 399 be entirely preventable (or controllable) by the upstream sender 400 conforming to the traffic conditioning associated with the PDB 401 specification.
403 The issue of "who is in control" of the loss (or demotion) rate 404 helps to clearly delineate this component of PDB performance from 405 that associated with transiting the domain. The latter is completely 406 under control of the operator of the DS domain and the former is 407 used to ensure that the entering traffic aggregate conforms to 408 the traffic profile to which the operator has provisioned the network. 409 Further, the effects of traffic conditioning on the target group 410 can usually be expressed more simply than the effects of transiting 411 the DS domain on the traffic aggregate's traffic profile.
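A minimal sketch of this edge component appears below (Python), in the spirit of the foo-PDB-with-EF example above: a strict token-bucket policer admits conformant packets, marks them with the PDB's DSCP (here the EF codepoint of RFC 2598), and drops the excess, so the emerging aggregate's long-term rate cannot exceed the smaller of R and the arrival rate. The rate, bucket depth, packet sizes, and Packet type are assumptions made only for this illustration; a real PDB specification states its own conditioning rules, which might remark or shape rather than drop.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        arrival_time: float   # seconds
        size_bits: int
        dscp: int = 0

    R = 1_000_000         # policed rate, bits per second (assumed)
    BUCKET_BITS = 15_000  # burst tolerance, bits (assumed)
    FOO_DSCP = 0b101110   # EF codepoint [RFC2598], since the example above uses EF

    def police_and_mark(packets):
        """Return the conformant (marked) foo aggregate; excess packets are dropped."""
        tokens, last = BUCKET_BITS, 0.0
        admitted = []
        for pkt in sorted(packets, key=lambda p: p.arrival_time):
            tokens = min(BUCKET_BITS, tokens + R * (pkt.arrival_time - last))
            last = pkt.arrival_time
            if pkt.size_bits <= tokens:
                tokens -= pkt.size_bits
                pkt.dscp = FOO_DSCP
                admitted.append(pkt)
        return admitted

    # Offer 12 Mb/s for 0.1 s against a 1 Mb/s policer: only about R's worth is marked.
    arrivals = [Packet(n * 0.001, 12_000) for n in range(100)]
    admitted = police_and_mark(arrivals)
    print(len(admitted), "of", len(arrivals), "packets admitted to the foo aggregate")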
413 A PDB may also apply traffic conditioning at DS domain egress. The 414 effect of this conditioning on the overall PDB attributes would 415 be treated similarly to the ingress characteristics (the authors 416 may develop more text on this in the future, but it does not materially 417 affect the ideas presented in this document.)
419 4.1.2 Crossing the DS domain: transit effects
421 The second component of PDB performance is the metrics that characterize 422 the transit of a packet of the PDB's traffic aggregate between 423 any two edges of the DS domain boundary shown in figure 3. Note 424 that the DS domain boundary runs through the DS boundary routers 425 since the traffic aggregate is generally formed in the boundary 426 router before the packets are queued and scheduled for output. 427 (In most cases, this distinction is expected to be important.)
429 DSCPs should not change in the interior of a DS domain as there 430 is no traffic conditioning being applied. If it is necessary to 431 reapply the kind of traffic conditioning that could result in remarking, 432 there should be a DS domain boundary at that point, though such 433 an "interior" boundary can have "lighter weight" rules in its TCA. 434 Thus, when measuring attributes between locations as indicated 435 in figure 3, the DSCP at the egress side can be assumed to have 436 held throughout the domain.
438 ------------- 439 | | 440 -----X | 441 | | 442 | DS | 443 | domain X---- 444 | | 445 -----X | 446 | | 447 -------------
449 Figure 3: Range of applicability of attributes of a traffic 450 aggregate associated with a PDB (is between the points 451 marked "X")
453 Though a DS domain may be as small as a single node, more complex 454 topologies are expected to be the norm; thus, the PDB definition 455 must hold as its traffic aggregate is split and merged on the interior 456 links of a DS domain. Packet flow in a network is not part of the 457 PDB definition; the application of traffic conditioning as packets 458 enter the DS domain and the consistent PHB through the DS domain 459 must suffice. A PDB's definition does not have to hold for arbitrary 460 topologies of networks, but the limits on the range of applicability 461 for a specific PDB must be clearly specified.
463 In general, a PDB operates between N ingress points and M egress 464 points at the DS domain boundary. Even in the degenerate case where 465 N=M=1, PDB attributes are more complex than the definition of PHB 466 attributes since the concatenation of the behavior of intermediate 467 nodes affects the former. A complex case with N and M both greater 468 than one involves splits and merges in the traffic path and is 469 non-trivial to analyze. Analytic, simulation, and experimental 470 work will all be necessary to understand even the simplest PDBs.
472 4.2 Constructing PDBs
474 A DS domain is configured to meet the network operator's traffic 475 engineering goals for the domain independently of the performance 476 goals for a particular flow of a traffic aggregate. Once the interior 477 routers are configured for the number of distinct traffic aggregates 478 that the network will handle, each PDB's allocation at the edge 479 comes from meeting the desired performance goals for the PDB's 480 traffic aggregate subject to that configuration of packet schedulers 481 and bandwidth capacity. The configuration of traffic conditioners 482 at the edge may be altered by provisioning or admission control 483 but the decision about which PDB to use and how to apply classification 484 and traffic conditioning comes from matching performance to goals.
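The sketch below (Python) shows, in a deliberately simplified form, the kind of bookkeeping this implies: it checks only that the long-term rates admitted to a PDB at each ingress fit within the capacity share configured for that PDB's traffic aggregate on each interior link. The topology, rates, and routing are invented for illustration, and the check ignores the burst and loss-bound issues discussed next; it is not a provisioning algorithm taken from this document.

    # Illustrative only: do the edge allocations fit the interior configuration?

    ingress_rate_mbps = {"A": 20, "B": 15, "C": 10}   # edge policer rates (assumed)

    # For each interior link: which ingresses can route the PDB's traffic across
    # it under the assumed routing, and the capacity share configured for it.
    links = {
        ("n1", "n2"): {"ingresses": {"A", "B"}, "share_mbps": 40},
        ("n2", "n3"): {"ingresses": {"A", "B", "C"}, "share_mbps": 45},
    }

    def overloaded_links(ingress_rate, links):
        bad = []
        for link, info in links.items():
            offered = sum(ingress_rate[i] for i in info["ingresses"])
            if offered > info["share_mbps"]:
                bad.append((link, offered, info["share_mbps"]))
        return bad

    print(overloaded_links(ingress_rate_mbps, links))  # [] means the averages fit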
486 For example, consider the DS domain of figure 3. A PDB with an explicit 487 bound on loss must apply traffic conditioning at the boundary to 488 ensure that on the average no more packets are admitted than can 489 emerge. Though queueing internal to the network may result in 490 a difference between input and output traffic over some timescales, 491 the averaging timescale should not exceed what might be expected 492 for reasonably sized buffering inside the network. Thus if bursts 493 are allowed to arrive into the interior of the network, there must 494 be enough capacity to ensure that losses don't exceed the bound. 495 Note that explicit bounds on the loss level can be particularly 496 difficult as the exact way in which packets merge inside the network 497 affects the burstiness of the PDB's traffic aggregate and hence, 498 loss.
500 PHBs give explicit expressions of the treatment a traffic aggregate 501 can expect at each hop. For a PDB, this behavior must apply to 502 merging and diverging traffic aggregates; thus, characterizing a 503 PDB requires understanding what happens to a PHB under aggregation. 504 That is, PHBs recursively applied must result in a known behavior. 505 As an example, since maximum burst sizes grow with the number of 506 microflows or traffic aggregate streams merged, a PDB specification 507 must address this. A clear advantage of constructing behaviors 508 that aggregate is the ease of concatenating PDBs so that the associated 509 traffic aggregate has known attributes that span interior DS domains 510 and, eventually, farther. For example, in figure 1 assume that 511 we have configured the foo PDB on the interior DS domains of AS2. 512 Then traffic aggregates associated with the foo PDB in each interior 513 DS domain of AS2 can be merged at the shaded interior boundary 514 routers. If the same (or fewer) traffic conditioners as applied 515 at the entrance to AS2 are applied at these interior boundaries, 516 the attributes of the foo PDB should continue to be used to quantify 517 the expected behavior. Explicit expressions of what happens to 518 the behavior under aggregation, possibly parameterized by node 519 in-degrees or network diameters, are necessary to determine what 520 to do at the internal aggregation points. One approach might be 521 to completely reapply the traffic conditioning at these points; 522 another might employ some limited rate-based remarking only.
524 Multiple PDBs may use the same PHB. The specification of a PDB can 525 contain a list of PHBs and their required configuration, all of 526 which would result in the same PDB. In operation, it is expected 527 that a single domain will use a single PHB to implement a particular 528 PDB, though different domains may select different PHBs. Recall 529 that in the diffserv definition [RFC2474], a single PHB might be 530 selected within a domain by a list of DSCPs.
Multiple PDBs might 531 use the same PHB in which case the transit performance of traffic 532 aggregates of these PDBs will, of necessity, be the same. Yet, 533 the particular characteristics that the PDB designer wishes to 534 claim as attributes may vary, so two PDBs that use the same PHB 535 might not be specified with the same list of attributes. 537 The specification of the transit expectations of PDBs across domains 538 both assists in the deployment of QoS within a DS domain and helps 539 enable the composition of end-to-end, cross-domain services to 540 proceed by making it possible to hide details of a domain's internals 541 while exposing characteristics necessary for QoS. 543 4.3 PDBs using PHB Groups 545 The use of PHB groups to construct PDBs can be done in several ways. 546 A single PHB member of a PHB group might be used to construct a 547 single PDB. For example, a PDB could be defined using just one 548 of the Class Selector Compliant PHBs [RFC2474]. The traffic conditioning 549 for that PDB and the required configuration of the particular PHB 550 would be defined in such a way that there was no dependence or 551 relationship with the manner in which other PHBs of the group are 552 used or, indeed, whether they are used in that DS domain. In this 553 case, the reasonable approach would be to specify this PDB alone in 554 a document which expressly called out the conditions and configuration 555 of the Class Selector PHB required. 557 A single PDB can be constructed using more than one PHB from the 558 same PHB group. For example, the traffic conditioner described 559 in RFC 2698 might be used to mark a particular entering traffic 560 aggregate for one of the three AF1x PHBs [RFC2597] while the transit 561 performance of the resultant PDB is specified, statistically, across 562 all the packets marked with one of those PHBs. 564 A set of related PDBs might be defined using a PHB group. In this 565 case, the related PDBs should be defined in the same document. 566 This is appropriate when the traffic conditioners that create the 567 traffic aggregates associated with each PDB have some relationships 568 and interdependencies such that the traffic aggregates for these 569 PDBs should be described and characterized together. The transit 570 attributes will depend on the PHB associated with the PDB and will 571 not be the same for all PHBs in the group, though there may be 572 some parameterized interrelationship between the attributes of 573 each of these PDBs. In this case, each PDB should have a clearly 574 separate description of its transit attributes (delineated in a 575 separate subsection) within the document. For example, the traffic 576 conditioner described in RFC 2698 might be used to mark arriving 577 packets for three different AF1x PHBs, each of which is to be treated 578 as a separate traffic aggregate in terms of transit properties. 579 Then a single document could be used to define and quantify the 580 relationship between the arriving packets and the emerging traffic 581 aggregates as they relate to one another. The transit characteristics 582 of packets of each separate AF1x traffic aggregate should be described 583 separately within the document. 585 Another way in which a PHB group might be used to create one PDB 586 per PHB might have decoupled traffic conditioners, but some relationship 587 between the PHBs of the group. 
For example, a set of PDBs might 588 be defined using Class Selector Compliant PHBs [RFC2474] in such 589 a way that the traffic conditioners that create the traffic aggregates 590 are not related, but the transit performance of each traffic aggregate 591 has some parametric relationship to the other. If it makes sense 592 to specify them in the same document, then the author(s) should 593 do so. 595 4.4 Forwarding path vs. control plane 597 A PDB's associated PHB, classifiers, and traffic conditioners are 598 all in the packet forwarding path and operate at line rates. PHBs, 599 classifiers, and traffic conditioners are configured in response 600 to control plane activity which takes place across a range of time 601 scales, but, even at the shortest time scale, control plane actions 602 are not expected to happen per-packet. Classifiers and traffic 603 conditioners at the DS domain boundary are used to enforce who 604 gets to use the PDB and how the PDB should behave temporally. 605 Reconfiguration of PHBs might occur monthly, quarterly, or only when 606 the network is upgraded. Classifiers and traffic conditioners might be 607 reconfigured at a few regular intervals during the day or might happen 608 in response to signalling decisions thousands of times a day. Much of 609 the control plane work is still evolving and is outside the charter of 610 the Diffserv WG. We note that this is quite appropriate since the 611 manner in which the configuration is done and the time scale at which 612 it is done should not affect the PDB attributes. 614 5 Format for Specification of Diffserv Per-Domain Behaviors 616 PDBs arise from a particular relationship between edge and interior 617 (which may be parameterized). The quantifiable characteristics 618 of a PDB must be independent of whether the network edge is configured 619 statically or dynamically. The particular configuration of traffic 620 conditioners at the DS domain edge is critical to how a PDB performs, 621 but the act(s) of configuring the edge is a control plane action 622 which can be separated from the specification of the PDB. 624 The following sections must be present in any specification of a 625 Differentiated Services PDB. Of necessity, their length and content 626 will vary greatly. 628 5.1 Applicability Statement 630 All PDB specs must have an applicability statement that outlines 631 the intended use of this PDB and the limits to its use. 633 5.2 Technical specification 635 This section specifies the rules or guidelines to create this PDB, 636 each distinguished with "may", "must" and "should." The technical 637 specification must list the classification and traffic conditioning 638 required (if any) and the PHB (or PHBs) to be used with any additional 639 requirements on their configuration beyond that contained in RFCs. 640 Classification can reflect the results of an admission control 641 process. Traffic conditioning may include marking, traffic shaping, 642 and policing. A Service Provisioning Policy might be used to describe 643 the technical specification of a particular PDB. 645 5.3 Attributes 647 A PDB's attributes tell how it behaves under ideal conditions if 648 configured in a specified manner (where the specification may be 649 parameterized). These might include drop rate, throughput, delay 650 bounds measured over some time period. 
They may be bounds, statistical 651 bounds, or percentiles (e.g., "90% of all packets measured over 652 intervals of at least 5 minutes will cross the DS domain in less 653 than 5 milliseconds"). A wide variety of characteristics may be 654 used but they must be explicit, quantifiable, and defensible. Where 655 particular statistics are used, the document must be precise about 656 how they are to be measured and about how the characteristics were 657 derived.
659 Advice to a network operator would be to use these as guidelines 660 in creating a service specification rather than using them directly. 661 For example, a "loss-free" PDB would probably not be sold as such, 662 but rather as a service with a very small packet loss probability.
664 5.4 Parameters
666 The definition and characteristics of a PDB may be parameterized 667 by network-specific features; for example, maximum number of hops, 668 minimum bandwidth, total number of entry/exit points of the PDB 669 to/from the diffserv network, maximum transit delay of network 670 elements, minimum buffer size available for the PDB at a network 671 node, etc.
673 5.5 Assumptions
675 In most cases, PDBs will be specified assuming lossless links, no 676 link failures, and relatively stable routing. This is reasonable 677 since otherwise it would be very difficult to quantify behavior 678 and these are the operating conditions for which most operators strive. 679 However, these assumptions must be clearly stated. Since PDBs with 680 specific bandwidth parameters require that bandwidth to be available, 681 the assumptions to be stated may include standby capacity. Some 682 PDBs may be specifically targeted for cases where these assumptions 683 do not hold, e.g., for high loss rate links, and such targeting 684 must also be made explicit. If additional restrictions, especially 685 specific traffic engineering measures, are required, these must 686 be stated.
688 Further, if any assumptions are made about the allocation of resources 689 within a diffserv network in the creation of the PDB, these must 690 be made explicit.
692 5.6 Example Uses
694 A PDB specification must give example uses to motivate the understanding 695 of ways in which a diffserv network could make use of the PDB although 696 these are not expected to be detailed. For example, "A bulk handling 697 PDB may be used for all packets which should not take any resources 698 from the network unless they would otherwise go unused. This might 699 be useful for Netnews traffic or for traffic rejected from some 700 other PDB by traffic policers."
702 5.7 Environmental Concerns (media, topology, etc.)
704 Note that it is not necessary for a provider to expose which PDB 705 (if a commonly defined one) is being used, nor is it necessary for 706 a provider to specify a service by the PDB's attributes. For example, 707 a service provider might use a PDB with a "no queueing loss" 708 characteristic in order to specify a "very low loss" service.
710 This section is to inject realism into the characteristics described 711 above. Detail the assumptions made there and the constraints they 712 place on topology, type of physical media, or allocation.
714 6 On PDB Attributes
716 As discussed in section 4, measurable, quantifiable attributes 717 associated with each PDB can be used to describe what will happen to 718 packets using that PDB as they cross the domain.
In its role as a build- 719 ing block for the construction of interdomain quality-of-service, a 720 PDB specification should provide the answer to the question: Under 721 what conditions can we join the output of this domain to another 722 under the same traffic conditioning and expectations? Although 723 there are many ways in which traffic might be distributed, creating 724 quantifiable, realizable PDBs that can be concatenated into multi-domain 725 services limits the realistic scenarios. A PDB's attributes with 726 a clear statement of the conditions under which the attributes 727 hold is critical to the composition of multi-domain services. 729 There is a clear correlation between the strictness of the traffic 730 conditioning and the quality of the PDB's attributes. As indicated 731 earlier, numerical bounds are likely to be statistical or expressed 732 as a percentile. Parameters expressed as strict bounds will require 733 very precise mathematical analysis, while those expressed statistically 734 can to some extent rely on experiment. Section 7 gives the example 735 of a PDB without strict traffic conditioning and concurrent work 736 on a PDB with strict traffic conditioning and attributes is also 737 in front of the WG [VW]. This section gives some general considerations 738 for characterizing PDB attributes. 740 There are two ways to characterize PDBs with respect to time. First 741 are properties over "long" time periods, or average behaviors. 742 A PDB specification should report these as the rates or throughput 743 seen over some specified time period. In addition, there are properties 744 of "short" time behavior, usually expressed as the allowable burstiness 745 in a traffic aggregate. The short time behavior is important in 746 understanding buffering requirements (and associated loss character- 747 istics) and for metering and conditioning considerations at DS 748 boundaries. For short-time behavior, we are interested primarily in 749 two things: 1) how many back-to-back packets of the PDB's traffic 750 aggregate will we see at any point (this would be metered as a burst) 751 and 2) how large a burst of packets of this PDB's traffic aggregate 752 can appear in a queue at once (gives queue overflow and loss). 753 If other PDBs are using the same PHB within the domain, that must 754 be taken into account. 756 6.1 Considerations in specifying long-term or average PDB attributes 758 To characterize the average or long-term behavior for the foo PDB 759 we must explore a number of questions, for instance: Can the DS 760 domain handle the average foo traffic flow? Is that answer topology de- 761 pendent or are there some specific assumptions on routing which must 762 hold for the foo PDB to preserve its "adequately provisioned" 763 capability? In other words, if the topology of D changes suddenly, 764 will the foo PDB's attributes change? Will its loss rate dramatically 765 increase? 767 Let domain D in figure 4 be an ISP ringing the U.S. with links of 768 bandwidth B and with N tails to various metropolitan areas. Inside 769 D, if the link between the node connected to A and the node connected 770 to Z goes down, all the foo traffic aggregate between the two nodes 771 must transit the entire ring: Would the bounded behavior of the 772 foo PDB change? If this outage results in some node of the ring 773 now having a larger arrival rate to one of its links than the capacity 774 of the link for foo's traffic aggregate, clearly the loss rate 775 would change dramatically. 
In this case, topological assumptions 776 were made about the path of the traffic from A to Z that affected 777 the characteristics of the foo PDB. If these topological assumptions 778 no longer hold, the loss rate of packets of the foo traffic aggregate 779 transiting the domain could change; for example, a characteristic 780 such as "loss rate no greater than 1% over any interval larger 781 than 10 minutes" might no longer hold. A PDB specification should spell out the assumptions 782 made on preserving the attributes.
784 ____X________X_________X___________ / 785 / \ L | 786 A<---->X X<----->| E 787 | | | 788 | D | \ 789 Z<---->X | 790 | | 791 \___________________________________/ 792 X X
794 Figure 4: ISP and DS domain D connected in a ring and connected 795 to DS domain E
797 6.2 Considerations in specifying short-term or bursty PDB attributes
799 Next, consider the short-time behavior of the traffic aggregate 800 associated with a PDB, specifically whether permitting the maximum 801 bursts to add in the same manner as the average rates will lead 802 to properties that aggregate, or under what conditions it will 803 do so. In our example, if domain D 804 allows each of the uplinks to burst p packets into the foo traffic 805 aggregate, the bursts could accumulate as they transit the ring. 806 Packets headed for link L can come from both directions of the 807 ring and back-to-back packets from foo's traffic aggregate can 808 arrive at the same time. If the bandwidth of link L is the same 809 as the links of the ring, this probably does not present a buffering 810 problem. If there are two input links that can send packets to 811 queue for L, at worst, two packets can arrive simultaneously for 812 L. If the bandwidth of link L equals or exceeds twice B, the packets 813 won't accumulate. Further, if p is limited to one, and the bandwidth 814 of L exceeds the rate of arrival (over the longer term) of foo 815 packets (required for bounding the loss) then the queue of foo 816 packets for link L will empty before new packets arrive. If the 817 bandwidth of L is equal to B, one foo packet must queue while the 818 other is transmitted. This would result in N x p back-to-back 819 packets of this traffic aggregate arriving over L during the same 820 time scale as the bursts of p were permitted on the uplinks. Thus, 821 configuring the PDB so that link L can handle the sum of the rates 822 that ingress to the foo PDB doesn't guarantee that L can handle 823 the sum of the N bursts into the foo PDB.
825 If the bandwidth of L is less than B, then the link must buffer 826 N x p x (B-L)/B foo packets to avoid loss. If the PDB is getting less 827 than the full bandwidth L, this number is larger. For probabilistic 828 bounds, a smaller buffer might do if the probability of exceeding 829 it can be bounded.
831 More generally, for router indegree of d, bursts of foo packets 832 might arrive on each input. Then, in the absence of any additional 833 traffic conditioning, it is possible that d x p x (number of uplinks) 834 back-to-back foo packets can be sent across link L to domain E. Thus the DS 835 domain E must permit these much larger bursts into the foo PDB 836 than domain D permits on the N uplinks or else the foo traffic 837 aggregate must be made to conform to the TCA for entering E (e.g., 838 by shaping).
840 What conditions should be imposed on a PDB and on the associated 841 PHB in order to ensure PDBs can be concatenated, as across the 842 interior DS domains of figure 1?
Traffic conditioning for constructing 843 a PDB that has certain attributes across a DS domain should apply 844 independently of the origin of the packets. With reference to the 845 example we've been exploring, the TCA for the PDB's traffic aggregate 846 entering link L into domain E should not depend on the number of 847 uplinks into domain D. 849 6.3 Remarks 851 This section has been provided as motivational food for thought 852 for PDB specifiers. It is by no means an exhaustive catalog of 853 possible PDB attributes or what kind of analysis must be done. 854 We expect this to be an interesting and evolutionary part of the 855 work of understanding and deploying differentiated services in 856 the Internet. There is a potential for much interesting research 857 work. However, in submitting a PDB specification to the Diffserv 858 WG, a PDB must also meet the test of being useful and relevant 859 by a deployment experience, described in section 8. 861 7 A Reference Per-Domain Behavior 863 The intent of this section is to define as a reference a Best Effort 864 PDB, a PDB that has little in the way of rules or expectations. 866 7.1 Best Effort Behavior PDB 868 7.1.1 Applicability 870 A Best Effort (BE) PDB is for sending "normal internet traffic" 871 across a diffserv network. That is, the definition and use of this 872 PDB is to preserve, to a reasonable extent, the pre-diffserv delivery 873 expectation for packets in a diffserv network that do not require 874 any special differentiation. Although the PDB itself does not include 875 bounds on availability, latency, and packet loss, this does not 876 preclude Service Providers from engineering their networks so as 877 to result in commercially viable bounds on services that utilize 878 the BE PDB. This would be analogous to the Service Level Guarantees 879 that are provided in today's single-service Internet. 881 In the present single-service commercial Internet, Service Level 882 Guarantees for availability, latency, and packet delivery can be 883 found on the web sites of ISPs [WCG, PSI, UU]. For example, a typical 884 North American round-trip latency bound is 85 milliseconds, with 885 each service provider's site information specifying the method 886 of measurement of the bounds and the terms associated with these 887 bounds contractually. 889 7.1.2 TCS and PHB configurations 891 There are no restrictions governing rate and bursts of packets beyond 892 the limits imposed by the ingress link. The network edge ensures 893 that packets using the PDB are marked for the Default PHB (as defined 894 in [RFC2474]), but no other traffic conditioning is required. Interior 895 network nodes apply the Default PHB on these packets. 897 7.1.3 Attributes of this PDB 899 "As much as possible as soon as possible". 901 Packets of this PDB will not be completely starved and when resources 902 are available (i.e., not required by packets from any other traffic 903 aggregate), network elements should be configured to permit packets 904 of this PDB to consume them. 906 Network operators may bound the delay and loss rate for services 907 constructed from this PDB given knowledge about their network, 908 but such attributes are not part of the definition. 910 7.1.4 Parameters 912 None. 914 7.1.5 Assumptions 916 A properly functioning network, i.e. packets may be delivered from 917 any ingress to any egress. 919 7.1.6 Example uses 921 1. For the normal Internet traffic connection of an organization. 923 2. 
For the "non-critical" Internet traffic of an organization.
925 3. For standard domestic consumer connections.
927 8 Guidelines for writing PDB specifications
929 G1. Following the format given in this document, write a draft and 930 submit it as an Internet Draft. The document should have "diffserv" 931 as some part of the name. Either as an appendix to the draft, or 932 in a separate document, provide details of deployment experience 933 with measured results on a network of non-trivial size carrying 934 realistic traffic and/or convincing simulation results (simulation 935 of a range of modern traffic patterns and network topologies as 936 applicable). The document should be brought to the attention of 937 the diffserv WG mailing list, if active.
939 G2. Initial discussion should focus primarily on the merits of the 940 PDB, though comments and questions on the claimed attributes are 941 reasonable. This is in line with the Differentiated Services goal 942 to put relevance before academic interest in the specification 943 of PDBs. Academically interesting PDBs are encouraged, but would 944 be more appropriate for technical publications and conferences, 945 not for submission to the IETF. (An "academically interesting" 946 PDB might become a PDB of interest for deployment over time.)
948 The implementation of the following guidelines varies, depending 949 on whether there is an active diffserv working group or not.
951 Active Diffserv Working Group path:
953 G3. Once consensus has been reached that a version of a draft describes 954 a useful PDB and that its characteristics "appear" to be 955 correct (i.e., are not egregiously wrong), that version of the draft 956 goes to a review panel the WG co-chairs set up to audit and report 957 on the characteristics. The review panel will be given a deadline 958 for the review. The exact timing of the deadline will be set on 959 a case-by-case basis by the co-chairs to reflect the complexity 960 of the task and other constraints (IETF meetings, major holidays) 961 but is expected to be in the 4-8 week range. During that time, 962 the panel may correspond with the authors directly (cc'ing the 963 WG co-chairs) to get clarifications. This process should result 964 in a revised draft and/or a report to the WG from the panel that 965 either endorses or disputes the claimed characteristics.
967 G4. If/when endorsed by the panel, that draft goes to WG Last Call. 968 If not endorsed, the author(s) can give an itemized response to 969 the panel's report and ask for a WG Last Call.
971 G5. If/when the draft passes Last Call, it goes to the ADs for publication 972 as a WG Informational RFC in the "PDB series".
974 If no active Diffserv Working Group exists:
976 G3. Following discussion on relevant mailing lists, the authors 977 should revise the Internet Draft and contact the IESG for "Expert 978 Review" as defined in section 2 of RFC 2434 [RFC2434].
980 G4. Subsequent to the review, the IESG may recommend publication 981 of the Draft as an RFC, request revisions, or decline to publish 982 as an Informational RFC in the "PDB series".
984 9 Acknowledgements
986 The ideas in this document have been heavily influenced by the Diffserv 987 WG and, in particular, by discussions with Van Jacobson, Dave Clark, 988 Lixia Zhang, Geoff Huston, Scott Bradner, Randy Bush, Frank Kastenholz, 989 Aaron Falk, and a host of other people who should be acknowledged 990 for their useful input but not be held accountable for our mangling 991 of it.
Grenville Armitage coined the term "per domain behavior (PDB)", though 992 some have suggested similar terms prior to that. Dan Grossman, 993 Bob Enger, Jung-Bong Suk, and John Dullaert reviewed the document 994 and commented so as to improve its form.
996 References
998 [RFC2474] RFC 2474, "Definition of the Differentiated Services Field 999 (DS Field) in the IPv4 and IPv6 Headers", K. Nichols, S. Blake, F. 1000 Baker, D. Black, www.ietf.org/rfc/rfc2474.txt
1002 [RFC2475] RFC 2475, "An Architecture for Differentiated Services", 1003 S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, 1004 www.ietf.org/rfc/rfc2475.txt
1006 [RFC2597] RFC 2597, "Assured Forwarding PHB Group", F. Baker, J. 1007 Heinanen, W. Weiss, J. Wroclawski, www.ietf.org/rfc/rfc2597.txt
1009 [RFC2598] RFC 2598, "An Expedited Forwarding PHB", V. Jacobson, 1010 K. Nichols, K. Poduri, www.ietf.org/rfc/rfc2598.txt
1012 [RFC2698] RFC 2698, "A Two Rate Three Color Marker", J. Heinanen, R. 1013 Guerin, www.ietf.org/rfc/rfc2698.txt
1015 [MODEL] "An Informal Management Model for Diffserv Routers", 1016 draft-ietf-diffserv-model-05.txt, Y. Bernet, S. Blake, D. Grossman, 1017 A. Smith
1019 [MIB] "Management Information Base for the Differentiated Services 1020 Architecture", draft-ietf-diffserv-mib-06.txt, F. Baker, K. Chan, 1021 A. Smith
1023 [VW] "The 'Virtual Wire' Per-Domain Behavior", 1024 draft-ietf-diffserv-pdb-vw-00.txt, V. Jacobson, K. Nichols, and 1025 K. Poduri
1027 [WCG] Worldcom, "Internet Service Level Guarantee", 1028 http://www.worldcom.com/terms/service_level_guarantee/t_sla_terms.phtml
1030 [PSI] PSINet, "Service Level Agreements", http://www.psinet.com/sla/
1032 [UU] UUNET USA Web site, "Service Level Agreements", 1033 http://www.us.uu.net/support/sla/
1035 [RFC2434] RFC 2434, "Guidelines for Writing an IANA Considerations 1036 Section in RFCs", T. Narten, H. Alvestrand, www.ietf.org/rfc/rfc2434.txt
1038 Authors' Addresses
1040 Kathleen Nichols
Packet Design, Inc.
66 Willow Place
Menlo Park, CA 94025
USA
email: nichols@packetdesign.com

Brian Carpenter
IBM
c/o iCAIR, Suite 150
1890 Maple Avenue
Evanston, IL 60201
USA
email: brian@icair.org