Network Working Group                                          Jerry Ash
Internet Draft                                                      AT&T
Category: Experimental
Expiration Date: December 2003
                                                               June, 2003

      Max Allocation with Reservation Bandwidth Constraint Model for
                MPLS/DiffServ TE & Performance Comparisons

Status of this Memo

This document is an Internet-Draft and is in full conformance with
all provisions of Section 10 of RFC2026.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF), its areas, and its working groups. Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

This document complements the DiffServ-aware MPLS TE (DSTE) requirements
document by giving a functional specification for the Maximum Allocation
with Reservation (MAR) bandwidth constraint model. Assumptions,
applicability, and examples of the operation of the MAR bandwidth
constraint model are presented. MAR performance is analyzed relative to
the criteria for selecting a bandwidth constraint model, in order to
provide guidance to users implementing the model in their networks.

Table of Contents

1. Introduction
2. Definitions
3. Assumptions & Applicability
4. Functional Specification of the MAR Bandwidth Constraint Model
5. Setting Bandwidth Constraints
6. Example of MAR Operation
7. Summary
8. Security Considerations
9. Acknowledgements
10. References
11. Authors' Addresses
ANNEX A. MAR Operation & Performance Analysis

1. Introduction

DiffServ-aware MPLS traffic engineering (DSTE) requirements and protocol
extensions are specified in [DSTE-REQ, DSTE-PROTO]. A requirement for
DSTE implementation is the specification of bandwidth constraint models
for use with DSTE. The bandwidth constraint model provides the 'rules'
to support the allocation of bandwidth to individual class types (CTs).
CTs are groupings of service classes in the DSTE model, which are
provided separate bandwidth allocations, priorities, and QoS objectives.
Several CTs can share a common bandwidth pool on an integrated,
multiservice MPLS/DiffServ network.

This document is intended to complement the DSTE requirements document
[DSTE-REQ] by giving a functional specification for the Maximum
Allocation with Reservation (MAR) bandwidth constraint model. Examples
of the operation of the MAR bandwidth constraint model are presented.
MAR performance is analyzed relative to the criteria for selecting a
bandwidth constraint model, in order to provide guidance to users
implementing the model in their networks.

Two other bandwidth constraint models are being specified for use in
DSTE:

1. Maximum allocation model (MAM) [MAM1, MAM2] - the maximum allowable
bandwidth usage of each CT is explicitly specified.
2. Russian doll model (RDM) [RDM] - the maximum allowable bandwidth
usage is specified cumulatively, by grouping successive CTs according to
priority classes.

MAR is similar to MAM in that a maximum bandwidth allocation is given to
each CT. However, through the use of bandwidth reservation and
protection mechanisms, CTs are allowed to exceed their bandwidth
allocations under conditions of no congestion but revert to their
allocated bandwidths when overload and congestion occur.

All bandwidth constraint models should meet these objectives:
1. Applies equally when preemption is either enabled or disabled (when
preemption is disabled, the model still works 'reasonably' well),
2. Bandwidth efficiency, i.e., good bandwidth sharing among CTs under
both normal and overload conditions,
3. Bandwidth isolation, i.e., a CT cannot hog the bandwidth of another
CT under overload conditions,
4. Protection against QoS degradation, at least of the high-priority CTs
(e.g., high-priority voice, high-priority data, etc.), and
5. Reasonably simple, i.e., does not require additional IGP extensions
and minimizes signaling load processing requirements.

In Annex A, modeling analysis is presented which shows that the MAR
model meets all these objectives and provides good network performance
relative to MAM and full sharing models, under normal and abnormal
operating conditions. It is demonstrated that MAR simultaneously
achieves bandwidth efficiency, bandwidth isolation, and protection
against QoS degradation, without the use of preemption.

In Section 3 we give the assumptions and applicability, in Section 4 a
functional specification of the MAR bandwidth constraint model, in
Section 5 the rules for setting bandwidth constraints, and in Section 6
an example of its operation. In Annex A, MAR performance is analyzed
relative to the criteria for selecting a bandwidth constraint model, in
order to provide guidance to users implementing the model in their
networks.

2. Definitions

For readability a number of definitions from [DSTE-REQ, DSTE-PROTO] are
repeated here:

Traffic Trunk: an aggregation of traffic flows of the same class (i.e.,
which are to be treated equivalently from the DSTE perspective) which
are placed inside an LSP.

Class-Type (CT): the set of Traffic Trunks crossing a link that is
governed by a specific set of bandwidth constraints. CT is used for the
purposes of link bandwidth allocation, constraint-based routing, and
admission control. A given Traffic Trunk belongs to the same CT on all
links.

Up to 8 CTs (MaxCT = 8) are supported. They are referred to as CTc, 0
<= c <= MaxCT-1 = 7. Each CT is assigned either a Bandwidth
Constraint, or a set of Bandwidth Constraints. Up to 8 Bandwidth
Constraints (MaxBC = 8) are supported and they are referred to as BCc,
0 <= c <= MaxBC-1 = 7.

TE-Class: A pair of: i. a CT, and ii. a preemption priority allowed for
that CT. This means that an LSP transporting a Traffic Trunk from that
CT can use that preemption priority as the set-up priority, as the
holding priority, or both.

MAX_RESERVABLE_BWk: maximum reservable bandwidth on link k; specifies
the maximum bandwidth that may be reserved. This may be greater than the
maximum link bandwidth, in which case the link may be oversubscribed
[KATZ-YEUNG].

RESERVED_BWck: reserved bandwidth-in-progress on CTc on link k (0 <= c
<= MaxCT-1); RESERVED_BWck = sum of the bandwidth reserved by all
established LSPs which belong to CTc.

UNRESERVED_BWk: unreserved bandwidth on link k; specifies the amount of
link bandwidth not yet reserved for any CT, UNRESERVED_BWk =
MAX_RESERVABLE_BWk - sum [RESERVED_BWck (0 <= c <= MaxCT-1)].

BCck: bandwidth constraint for CTc on link k = allocated (minimum
guaranteed) bandwidth for CTc on link k (see Section 4).

RBW_THRESk: reservation bandwidth threshold for link k (see Section 4).
3. Assumptions & Applicability

In general, DSTE is a bandwidth allocation mechanism for different
classes of traffic allocated to various CTs (e.g., voice, normal data,
best-effort data). Network operations functions such as capacity
design, bandwidth allocation, routing design, and network planning are
normally based on measured and forecast traffic load [ASH1].

As such, the following assumptions are made regarding the operation of
MAR:

1. Connection admission control (CAC) allocates bandwidth for network
flows/LSPs according to the traffic load assigned to each CT, based on
traffic measurement and forecast.
2. CAC could allocate bandwidth per flow, per LSP, per traffic trunk, or
otherwise. That is, no assumption is made about a particular CAC
method; only that CT bandwidth allocation is related to the
measured/forecast traffic load, as per assumption #1.
3. CT bandwidth allocation is adjusted up or down according to
measured/forecast traffic load. No specific time period is assumed for
this adjustment; it could be short term (hours), daily, weekly, monthly,
or otherwise.
4. Capacity management and CT bandwidth allocation thresholds (e.g.,
BCc) are designed according to traffic load, and are based on traffic
measurement and forecast. Again, no specific time period is assumed for
this adjustment; it could be short term (hours), daily, weekly, monthly,
or otherwise.
5. No assumption is made on the order in which traffic is allocated to
various CTs; again, traffic allocation is assumed to be based only on
traffic load as it is measured and/or forecast.
6. If link bandwidth is exhausted on a given path for a flow/LSP/traffic
trunk, alternate paths may be attempted to satisfy CT bandwidth
allocation.

Note that the above assumptions are not unique to MAR, but are generic,
common assumptions for all BC models.

4. Functional Specification of the MAR Bandwidth Constraint Model

In the MAR bandwidth constraint model, the bandwidth allocation control
for each CT is based on estimated bandwidth needs, bandwidth use, and
status of links. The LER makes needed bandwidth allocation changes, and
uses [RSVP-TE], for example, to determine if link bandwidth can be
allocated to a CT. Bandwidth allocated to individual CTs is protected as
needed but otherwise shared. Under normal non-congested network
conditions, all CTs/services fully share all available bandwidth. When
congestion occurs for a particular CTc, bandwidth reservation acts to
prohibit traffic from other CTs from seizing the allocated capacity for
CTc.

On a given link k, a small amount of bandwidth RBW_THRESk, the
reservation bandwidth threshold for link k, is reserved and governs the
admission control on link k. Also associated with each CTc on link k
are the allocated bandwidth constraints BCck to govern bandwidth
allocation and protection. The reservation bandwidth on a link,
RBW_THRESk, can be accessed when a given CTc has bandwidth-in-use
RESERVED_BWck below its allocated bandwidth constraint BCck. However,
if RESERVED_BWck exceeds its allocated bandwidth constraint BCck, then
the reservation bandwidth RBW_THRESk cannot be accessed. In this way,
bandwidth can be fully shared among CTs if available, but is otherwise
protected by bandwidth reservation methods.
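As an informal illustration of the admission behavior just described
(formalized as Table 1 below), the following sketch restates the check
in Python. It is only a sketch under simplifying assumptions: bandwidth
is treated as a plain number, and the function name mar_admit and its
parameters are illustrative, not taken from this document.

   def mar_admit(dbw, reserved_bw_c, bc_c, unreserved_bw, rbw_thres):
       """Admission check for a request of DBW for CTc on one link."""
       if reserved_bw_c <= bc_c:
           # CTc is within its allocated constraint BCck, so it may also
           # use the reservation bandwidth RBW_THRESk.
           return dbw <= unreserved_bw
       # CTc already exceeds BCck: it must leave RBW_THRESk untouched.
       return dbw <= unreserved_bw - rbw_thres

For a best-effort priority CTc, BCck = 0 and admission of its packets is
left to DiffServ queuing, so no such per-LSP bandwidth check applies.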
Bandwidth can be accessed for a bandwidth request = DBW for CTc on a
given link k based on the following rules:

Table 1: Rules for Admitting LSP Bandwidth Request = DBW on Link k

For an LSP on a high priority or normal priority CTc:
If RESERVED_BWck <= BCck: admit if DBW <= UNRESERVED_BWk
If RESERVED_BWck > BCck: admit if DBW <= UNRESERVED_BWk - RBW_THRESk

For an LSP on a best-effort priority CTc:
allocated bandwidth BCck = 0;
DiffServ queuing admits BE packets only if there is available link
bandwidth.

The normal semantics of setup and holding priority are applied in the
MAR bandwidth constraint model, and cross-CT preemption is permitted
when preemption is enabled.

The bandwidth allocation rules defined in Table 1 are illustrated with
an example in Section 6 and simulation analysis in Annex A.

5. Setting Bandwidth Constraints

For a normal priority CTc, the bandwidth constraints BCck on link k are
set by allocating the maximum reservable bandwidth (MAX_RESERVABLE_BWk)
in proportion to the forecast or measured traffic load bandwidth
TRAF_LOAD_BWck for CTc on link k. That is:

PROPORTIONAL_BWck = TRAF_LOAD_BWck/[sum {TRAF_LOAD_BWck, c=0,MaxCT-1}] X
MAX_RESERVABLE_BWk

For normal priority CTc:
BCck = PROPORTIONAL_BWck

For a high priority CT, the bandwidth constraint BCck is set to a
multiple of the proportional bandwidth. That is:

For high priority CTc:
BCck = FACTOR X PROPORTIONAL_BWck

where FACTOR is the multiplier applied to the proportional bandwidth
(e.g., FACTOR = 2 or 3 is typical). This results in some
'over-allocation' of the maximum reservable bandwidth, and gives
priority to the high priority CTs. Normally the bandwidth allocated to
high priority CTs should be a relatively small fraction of the total
link bandwidth, a maximum of 10-15 percent being a reasonable guideline.

As stated in Section 4, the bandwidth allocated to a best-effort
priority CTc should be set to zero. That is:

For best-effort priority CTc:
BCck = 0

6. Example of MAR Operation

In the example, assume there are three class-types: CT0, CT1, CT2. We
consider a particular link with

MAX_RESERVABLE_BW = 100

and with the allocated bandwidth constraints set as follows:

BC0 = 30
BC1 = 20
BC2 = 20

These bandwidth constraints are based on the normal traffic loads, as
discussed in Section 5. With MAR, any of the CTs is allowed to exceed
its bandwidth constraint BCc as long as there are at least RBW_THRES
(reservation bandwidth threshold on the link) units of spare bandwidth
remaining. Let's assume

RBW_THRES = 10

Now assume that under overload the reserved bandwidth reaches

RESERVED_BW0 = 50
RESERVED_BW1 = 30
RESERVED_BW2 = 10

Therefore, for this loading

UNRESERVED_BW = 100 - 50 - 30 - 10 = 10

CT0 and CT1 can no longer increase their bandwidth on the link, since
they are above their BC values and there are only RBW_THRES = 10 units
of spare bandwidth left on the link. But CT2 can take the additional
bandwidth (up to 10 units) if the demand arrives, since it is below its
BC value.

As also discussed in Section 4, if best effort traffic is present, it
can always seize whatever spare bandwidth is available on the link at
the moment, but is subject to being lost at the queues in favor of the
higher priority traffic.

Let's say an LSP arrives for CT0 needing 5 units of bandwidth (i.e., DBW
= 5). We need to decide based on Table 1 whether to admit this LSP or
not. Since for CT0

RESERVED_BW0 > BC0 (50 > 30), and
DBW > UNRESERVED_BW - RBW_THRES (i.e., 5 > 10 - 10)

Table 1 says the LSP is rejected/blocked.

Now let's say an LSP arrives for CT2 needing 5 units of bandwidth (i.e.,
DBW = 5). We need to decide based on Table 1 whether to admit this
LSP or not. Since for CT2

RESERVED_BW2 < BC2 (10 < 20), and
DBW <= UNRESERVED_BW (i.e., 5 <= 10)

Table 1 says to admit the LSP.

Hence, in the above example, in the current state of the link and the
current CT loading, CT0 and CT1 can no longer increase their bandwidth
on the link, since they are above their BCc values and there are only
RBW_THRES = 10 units of spare bandwidth left on the link. But CT2 can
take the additional bandwidth (up to 10 units) if the demand arrives,
since it is below its BCc value.
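The two decisions above can also be written as a short, runnable check.
This is only a restatement of the Table 1 rule with the link state used
in this example; the helper name admit is illustrative.

   MAX_RESERVABLE_BW = 100
   BC = {0: 30, 1: 20, 2: 20}            # BCc per class-type
   RESERVED_BW = {0: 50, 1: 30, 2: 10}   # bandwidth-in-progress per CT
   RBW_THRES = 10

   UNRESERVED_BW = MAX_RESERVABLE_BW - sum(RESERVED_BW.values())  # = 10

   def admit(ct, dbw):
       if RESERVED_BW[ct] <= BC[ct]:
           return dbw <= UNRESERVED_BW
       return dbw <= UNRESERVED_BW - RBW_THRES

   print(admit(0, 5))   # False: CT0 exceeds BC0 and 5 > 10 - 10
   print(admit(2, 5))   # True:  CT2 is below BC2 and 5 <= 10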
7. Summary

The proposed MAR bandwidth constraint model includes the following: a)
allocate bandwidth to individual CTs, b) protect allocated bandwidth by
bandwidth reservation methods, as needed, but otherwise fully share
bandwidth, c) differentiate high-priority, normal-priority, and
best-effort priority services, and d) provide admission control to
reject connection requests when needed to meet performance objectives.
Modeling results presented in Annex A show that MAR bandwidth allocation
a) achieves greater efficiency in bandwidth sharing while still
providing bandwidth isolation and protection against QoS degradation,
and b) achieves service differentiation for high-priority,
normal-priority, and best-effort priority services.

8. Security Considerations

No new security considerations are raised by this document; they are the
same as in the DSTE requirements document [DSTE-REQ].

9. Acknowledgements

DSTE and bandwidth constraint models have been an active area of
discussion in the TEWG. I would like to thank Wai Sum Lai for his
support and review of this draft. I also appreciate helpful discussions
with Francois Le Faucheur.

10. References

[AKI] Akinpelu, J. M., "The Overload Performance of Engineered Networks
with Nonhierarchical & Hierarchical Routing", BSTJ, Vol. 63, 1984.
[ASH1] Ash, G. R., Dynamic Routing in Telecommunications Networks,
McGraw-Hill, 1998.
[ASH2] Ash, G. R., et al., "Routing Evolution in Multiservice Integrated
Voice/Data Networks", Proceedings of ITC-16, Edinburgh, June 1999.
[ASH3] Ash, G. R., "Traffic Engineering & QoS Methods for IP-, ATM-, &
TDM-Based Multiservice Networks", work in progress.
[BUR] Burke, P. J., "Blocking Probabilities Associated with Directional
Reservation", unpublished memorandum, 1961.
[DIFF-MPLS] Le Faucheur, F., et al., "MPLS Support of Diff-Serv", RFC
3270, May 2002.
[DSTE-REQ] Le Faucheur, F., et al., "Requirements for Support of
Diff-Serv-aware MPLS Traffic Engineering", work in progress.
[DSTE-PROTO] Le Faucheur, F., et al., "Protocol Extensions for Support
of Diff-Serv-aware MPLS Traffic Engineering", work in progress.
[DIFFSERV] Blake, S., et al., "An Architecture for Differentiated
Services", RFC 2475, December 1998.
[E.360.1 --> E.360.7] ITU-T Recommendations, "QoS Routing & Related
Traffic Engineering Methods for Multiservice TDM-, ATM-, & IP-Based
Networks".
[KATZ-YEUNG] Katz, D., Yeung, D., Kompella, K., "Traffic Engineering
Extensions to OSPF Version 2", work in progress.
[KEY] Bradner, S., "Key words for Use in RFCs to Indicate Requirement
Levels", RFC 2119, March 1997.
[KRU] Krupp, R. S., "Stabilization of Alternate Routing Networks",
Proceedings of ICC, Philadelphia, 1982.
[LAI] Lai, W., "Traffic Engineering for MPLS", Internet Performance and
Control of Network Systems III Conference, SPIE Proceedings Vol. 4865,
pp. 256-267, Boston, Massachusetts, USA, 29 July-1 August 2002
(http://www.columbia.edu/~ffl5/waisum/bcmodel.pdf).
[MAM1] Lai, W., "Maximum Allocation Bandwidth Constraints Model for
Diffserv-TE & Performance Comparisons", work in progress.
[MAM2] Lai, W., Le Faucheur, F., "Maximum Allocations Bandwidth
Constraints Model for Diff-Serv-aware MPLS Traffic Engineering", work in
progress.
[MUM] Mummert, V. S., "Network Management and Its Implementation on the
No. 4ESS", International Switching Symposium, Japan, 1976.
[NAK] Nakagome, Y., Mori, H., "Flexible Routing in the Global
Communication Network", Proceedings of ITC-7, Stockholm, 1973.
[MPLS-ARCH] Rosen, E., et al., "Multiprotocol Label Switching
Architecture", RFC 3031, January 2001.
[RDM] Le Faucheur, F., "Russian Dolls Bandwidth Constraints Model for
Diff-Serv-aware MPLS Traffic Engineering", work in progress.
[RFC2026] Bradner, S., "The Internet Standards Process -- Revision 3",
BCP 9, RFC 2026, October 1996.
[RSVP-TE] Awduche, D., et al., "RSVP-TE: Extensions to RSVP for LSP
Tunnels", RFC 3209, December 2001.

11. Authors' Addresses

Jerry Ash
AT&T
Room MT D5-2A01
200 Laurel Avenue
Middletown, NJ 07748, USA
Phone: +1 732-420-4578
Email: gash@att.com

ANNEX A - MAR Operation & Performance Analysis

A.1 MAR Operation

In the MAR bandwidth constraint model, the bandwidth allocation control
for each CT is based on estimated bandwidth needs, bandwidth use, and
status of links. The LER makes needed bandwidth allocation changes, and
uses [RSVP-TE], for example, to determine if link bandwidth can be
allocated to a CT. Bandwidth allocated to individual CTs is protected as
needed but otherwise shared. Under normal non-congested network
conditions, all CTs/services fully share all available bandwidth. When
congestion occurs for a particular CTc, bandwidth reservation acts to
prohibit traffic from other CTs from seizing the allocated capacity for
CTc. Associated with each CT is the allocated bandwidth constraint
(BCc) to govern bandwidth allocation and protection; these parameters
are illustrated with examples in this Annex.

In performing MAR bandwidth allocation for a given flow/LSP, the LER
first determines the egress LSR address, service-identity, and CT. The
connection request is allocated an equivalent bandwidth to be routed on
a particular CT. The LER then accesses the CT priority, QoS/traffic
parameters, and routing table between the LER and egress LSR, and sets
up the connection request using the MAR bandwidth allocation rules. The
LER selects a first choice path and determines if bandwidth can be
allocated on the path based on the MAR bandwidth allocation rules given
in Section 4. If the first choice path has insufficient bandwidth, the
LER may then try alternate paths, and again applies the MAR bandwidth
allocation rules, as described below.
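The per-path use of these rules can be sketched as follows. The data
layout (a path as a list of per-link state dictionaries) and the
function names are assumptions made only for this illustration; the
admission test applied on each link is the Section 4 / Table 1 rule.

   def link_admits(link, ct, dbw):
       """Table 1 check for class-type ct on a single link."""
       if link["reserved_bw"][ct] <= link["bc"][ct]:
           return dbw <= link["unreserved_bw"]
       return dbw <= link["unreserved_bw"] - link["rbw_thres"]

   def path_admits(path, ct, dbw):
       return all(link_admits(link, ct, dbw) for link in path)

   def route_request(candidate_paths, ct, dbw):
       """Try the first-choice path, then any alternate paths in turn."""
       for path in candidate_paths:
           if path_admits(path, ct, dbw):
               return path
       return None   # no candidate path can accept the request

For example, candidate_paths could hold the first-choice path A-B-E
followed by the alternate path A-C-D-E discussed later in this Annex.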
MAR bandwidth allocation is done on a per-CT basis, in which aggregated
CT bandwidth is managed to meet the overall bandwidth requirements of CT
service needs. Individual flows/LSPs are allocated bandwidth in the
corresponding CT according to CT bandwidth availability. A fundamental
principle applied in MAR bandwidth allocation methods is the use of
bandwidth reservation techniques.

Bandwidth reservation gives preference to the preferred traffic by
allowing it to seize any idle bandwidth on a link, while allowing the
non-preferred traffic to seize bandwidth only if a minimum level of idle
bandwidth, called the reservation bandwidth threshold RBW_THRES, is
still available. Burke [BUR] first analyzed bandwidth reservation
behavior from the solution of the birth-death equations for the
bandwidth reservation model. Burke's model showed the relative
lost-traffic level for preferred traffic, which is not subject to
bandwidth reservation restrictions, as compared to non-preferred
traffic, which is subject to the restrictions. Bandwidth reservation
protection is robust to traffic variations and provides significant
dynamic protection of particular streams of traffic. It is widely used
in large-scale network applications [ASH1, MUM, AKI, KRU, NAK].

Bandwidth reservation is used in MAR bandwidth allocation to control
sharing of link bandwidth across different CTs. On a given link, a
small amount of bandwidth RBW_THRES is reserved (say 1% of the total
link bandwidth), and the reservation bandwidth can be accessed when a
given CT has reserved bandwidth-in-progress RESERVED_BW below its
allocated bandwidth BC. That is, if the available link bandwidth
(unreserved idle link bandwidth UNRESERVED_BW) exceeds RBW_THRES, then
any CT is free to access the available bandwidth on the link. However,
if UNRESERVED_BW is less than RBW_THRES, then a CT can utilize the
available bandwidth only if its current bandwidth usage is below the
allocated amount BC. In this way, bandwidth can be fully shared among
CTs if available, but is protected by bandwidth reservation if below the
reservation level.

Through the bandwidth reservation mechanism, MAR bandwidth allocation
also gives preference to high-priority CTs, in comparison to
normal-priority and best-effort priority CTs.

Hence, bandwidth allocated to each CT is protected by bandwidth
reservation methods, as needed, but otherwise shared. Each LER monitors
the bandwidth use of each CT, and determines if connection requests can
be allocated to the CT bandwidth. For example, for a bandwidth request
of DBW on a given flow/LSP, the LER determines the CT priority (high,
normal, or best-effort), CT bandwidth-in-use, and CT bandwidth
allocation thresholds, and uses these parameters to determine the
allowed load state threshold to which capacity can be allocated. In
allocating bandwidth DBW to a CT on a given LSP, say A-B-E, each link in
the path is checked for available bandwidth in comparison to the allowed
load state. If bandwidth is unavailable on any link in path A-B-E,
another LSP could be tried, such as A-C-D-E. Hence determination of the
link load state is necessary for MAR bandwidth allocation, and two link
load states are distinguished: available (non-reserved) bandwidth
(ABW_STATE), and reserved-bandwidth (RBW_STATE). Management of CT
capacity uses the link state and the allowed load state threshold to
determine if a bandwidth allocation request can be accepted on a given
CT.
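The two link load states and the resulting per-CT decision can be
summarized in the same style. Again, this is only a sketch; the state
names are spelled out here purely for readability.

   ABW_STATE = "available-bandwidth"  # idle bandwidth above the reservation level
   RBW_STATE = "reserved-bandwidth"   # only the reservation bandwidth remains idle

   def link_load_state(unreserved_bw, rbw_thres):
       return ABW_STATE if unreserved_bw > rbw_thres else RBW_STATE

   def ct_may_take_bandwidth(state, reserved_bw_c, bc_c):
       # In ABW_STATE any CT may take idle bandwidth; in RBW_STATE only a
       # CT still below its allocated constraint BC may do so.
       return state == ABW_STATE or reserved_bw_c < bc_c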
A.2 Analysis of MAR Performance

In this Annex, modeling analysis is presented in which MAR bandwidth
allocation is shown to provide good network performance relative to full
sharing models, under normal and abnormal operating conditions. A
large-scale MPLS/DiffServ TE simulation model is used, in which several
CTs with different priority classes share the pool of bandwidth on a
multiservice, integrated voice/data network. MAR methods have also been
analyzed in practice for TDM-based networks [ASH1], and in modeling
studies for IP-based networks [ASH2, ASH3, E.360].

All bandwidth constraint models should meet these objectives:

1. Applies equally when preemption is either enabled or disabled (when
preemption is disabled, the model still works 'reasonably' well),
2. Bandwidth efficiency, i.e., good bandwidth sharing among CTs under
both normal and overload conditions,
3. Bandwidth isolation, i.e., a CT cannot hog the bandwidth of another
CT under overload conditions,
4. Protection against QoS degradation, at least of the high-priority CTs
(e.g., high-priority voice, high-priority data, etc.), and
5. Reasonably simple, i.e., does not require additional IGP extensions
and minimizes signaling load processing requirements.

The use of any given bandwidth constraint model has significant impacts
on the performance of a network, as explained later. Therefore, the
criteria used to select a model must enable us to evaluate how a
particular model delivers its performance, relative to other models. Lai
[LAI, MAM1] has analyzed the MAM and RDM models and provided valuable
insights into the relative performance of these models under various
network conditions.

In environments where preemption is not used, MAM is attractive because
a) it is good at achieving isolation, and b) it achieves reasonable
bandwidth efficiency with some QoS degradation of lower classes. When
preemption is used, RDM is attractive because it can achieve bandwidth
efficiency under normal load. However, RDM cannot provide service
isolation under high load or when preemption is not used.

Our performance analysis of MAR bandwidth allocation methods is based on
a full-scale, 135-node simulation model of a national network together
with a multiservice traffic demand model to study various scenarios and
tradeoffs [ASH3]. Three levels of traffic priority -- high, normal, and
best-effort -- are given across 5 CTs: normal priority voice, high
priority voice, normal priority data, high priority data, and best
effort data.

The performance analyses for overloads and failures include a) the MAR
bandwidth constraint model, as specified in Section 4, b) the MAM
bandwidth constraint model, and c) the No-DSTE bandwidth constraint
model.
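Before the specific settings are listed, here is a small illustration of
how the Section 5 constraint-setting rule can be computed. The traffic
loads and the FACTOR value below are made-up numbers for the example,
not the values used in the simulations.

   MAX_RESERVABLE_BW = 100.0
   FACTOR = 2.0            # multiplier for high priority CTs (Section 5)

   # forecast/measured traffic load and priority per CT (illustrative)
   traffic_load = {"CT0": 40.0, "CT1": 40.0, "CT2": 20.0}
   priority     = {"CT0": "normal", "CT1": "normal", "CT2": "high"}

   total_load = sum(traffic_load.values())

   bc = {}
   for ct, load in traffic_load.items():
       proportional = load / total_load * MAX_RESERVABLE_BW
       if priority[ct] == "high":
           bc[ct] = FACTOR * proportional
       elif priority[ct] == "normal":
           bc[ct] = proportional
       else:                        # best-effort priority CTs
           bc[ct] = 0.0

   print(bc)   # {'CT0': 40.0, 'CT1': 40.0, 'CT2': 40.0}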
The allocated bandwidth constraints for MAR are as described in Section
5:

Normal priority CTs: BCck = PROPORTIONAL_BWck
High priority CTs: BCck = FACTOR X PROPORTIONAL_BWck
Best-effort priority CTs: BCck = 0

In the MAM bandwidth constraint model, the bandwidth constraints for
each CT are set to a multiple of the proportional bandwidth allocation:

Normal priority CTs: BCck = FACTOR1 X PROPORTIONAL_BWck
High priority CTs: BCck = FACTOR2 X PROPORTIONAL_BWck
Best-effort priority CTs: BCck = 0

Simulations show that for MAM, the sum of the BCck values should exceed
MAX_RESERVABLE_BWk for better efficiency, as follows:

1. For the normal priority CTs, the BCck values need to be
over-allocated to get reasonable performance. It was found that
over-allocating by 100%, that is, setting FACTOR1 = 2, gave reasonable
performance.
2. The high priority CTs can be over-allocated by a larger multiple
FACTOR2 in MAM, and this gives better performance.

The rather large amount of over-allocation improves efficiency but
somewhat defeats the 'bandwidth protection/isolation' needed with a BC
model, since one CT can now invade the bandwidth allocated to another
CT. As in normal operation of MAM, each CT is still restricted to its
allocated bandwidth constraint BCck, which is the maximum level of
bandwidth allocated to each CT on each link.

In the No-DSTE bandwidth constraint model, no reservation or protection
of CT bandwidth is applied, and bandwidth allocation requests are
admitted if bandwidth is available. Furthermore, no queueing priority
is applied to any of the CTs in the No-DSTE bandwidth constraint model.

Table 2 gives performance results for a six-times overload on a single
network node at Oakbrook, IL. The numbers given in the table are the
total network percent lost (blocked) or delayed traffic. Note that in
the focused overload scenario studied here, the percent lost/delayed
traffic on the Oakbrook node is much higher than the network-wide
average values given.

Table 2
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
6X Focused Overload on Oakbrook (Total Network % Lost/Delayed Traffic)

Class Type                   MAR BC    MAM BC    No-DSTE BC
                             Model     Model     Model
NORMAL PRIORITY VOICE         0.00      1.97     10.30
HIGH PRIORITY VOICE           0.00      0.00      7.05
NORMAL PRIORITY DATA          0.00      6.63     13.30
HIGH PRIORITY DATA            0.00      0.00      7.05
BEST EFFORT PRIORITY DATA    12.33     11.92      9.65

Clearly the performance is better with MAR bandwidth allocation, and the
results show that performance improves when bandwidth reservation is
used. The poor performance of the No-DSTE model, which lacks bandwidth
reservation, is due to the lack of protection of allocated bandwidth.
If we add the bandwidth reservation mechanism, then performance of the
network is greatly improved.

The simulations showed that the performance of MAM is quite sensitive to
the over-allocation factors discussed above.
For example, if the BCc values are proportionally allocated with
FACTOR1 = 1, then the results are much worse, as shown in Table 3:

Table 3
Performance Comparison for MAM Bandwidth Constraint Model
with Different Over-allocation Factors
6X Focused Overload on Oakbrook (Total Network % Lost/Delayed Traffic)

Class Type                   (FACTOR1 = 1)   (FACTOR1 = 2)
NORMAL PRIORITY VOICE            31.69            1.97
HIGH PRIORITY VOICE               0.00            0.00
NORMAL PRIORITY DATA             31.22            6.63
HIGH PRIORITY DATA                0.00            0.00
BEST EFFORT PRIORITY DATA         8.76           11.92

Table 4 illustrates the performance of the MAR, MAM, and No-DSTE
bandwidth constraint models for a high-day network load pattern with a
50% general overload. The numbers given in the table are the total
network percent lost (blocked) or delayed traffic.

Table 4
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
50% General Overload (Total Network % Lost/Delayed Traffic)

Class Type                   MAR BC    MAM BC    No-DSTE BC
                             Model     Model     Model
NORMAL PRIORITY VOICE         0.02      0.13      7.98
HIGH PRIORITY VOICE           0.00      0.00      8.94
NORMAL PRIORITY DATA          0.00      0.26      6.93
HIGH PRIORITY DATA            0.00      0.00      8.94
BEST EFFORT PRIORITY DATA    10.41     10.39      8.40

Again, we can see that the performance is always better when MAR
bandwidth allocation and reservation are used.

Table 5 illustrates the performance of the MAR, MAM, and No-DSTE
bandwidth constraint models for a single link failure scenario (3
OC-48). The numbers given in the table are the total network percent
lost (blocked) or delayed traffic.

Table 5
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
Single Link Failure (3 OC-48s)
(Total Network % Lost/Delayed Traffic)

Class Type                   MAR BC    MAM BC    No-DSTE BC
                             Model     Model     Model
NORMAL PRIORITY VOICE         0.00      0.62      0.58
HIGH PRIORITY VOICE           0.00      0.31      0.29
NORMAL PRIORITY DATA          0.00      0.48      0.46
HIGH PRIORITY DATA            0.00      0.31      0.29
BEST EFFORT PRIORITY DATA     0.12      0.72      0.66

Again, we can see that the performance is always better when MAR
bandwidth allocation and reservation are used.

Table 6 illustrates the performance of the MAR, MAM, and No-DSTE
bandwidth constraint models for a multiple link failure scenario (3
links with 3 OC-48, 3 OC-3, 4 OC-3 capacity, respectively). The numbers
given in the table are the total network percent lost (blocked) or
delayed traffic.

Table 6
Performance Comparison for MAR, MAM, & No-DSTE
Bandwidth Constraint (BC) Models
Multiple Link Failure (3 Links with 3 OC-48, 3 OC-3, 4 OC-3,
Respectively)
(Total Network % Lost/Delayed Traffic)

Class Type                   MAR BC    MAM BC    No-DSTE BC
                             Model     Model     Model
NORMAL PRIORITY VOICE         0.00      0.91      0.86
HIGH PRIORITY VOICE           0.00      0.44      0.42
NORMAL PRIORITY DATA          0.00      0.70      0.64
HIGH PRIORITY DATA            0.00      0.44      0.42
BEST EFFORT PRIORITY DATA     0.14      1.03      0.98

Again, we can see that the performance is always better when MAR
bandwidth allocation and reservation are used.

Lai's results [LAI, MAM1] show the trade-off between bandwidth sharing
and service protection/isolation, using an analytic model of a single
link. He shows that RDM has a higher degree of sharing than MAM.
Furthermore, for a single link, the overall loss probability is the
smallest under full sharing and largest under MAM, with RDM being
intermediate.
Hence, on a single link, Lai shows that the full sharing
model yields the highest link efficiency and MAM the lowest, and that
full sharing has the poorest service protection capability.

The results of the present study show that when considering a network
context, in which there are many links and multiple-link routing paths
are used, full sharing does not necessarily lead to maximum network-wide
bandwidth efficiency. In fact, the results in Table 4 show that the
No-DSTE model not only degrades total network throughput, but also
degrades the performance of every CT that should be protected. Allowing
more bandwidth sharing may improve performance up to a point, but can
severely degrade performance if care is not taken to protect allocated
bandwidth under congestion.

Both Lai's study and this study show that increasing the degree of
bandwidth sharing among the different CTs leads to a tighter coupling
between CTs. Under normal loading conditions, there is adequate capacity
for each CT, which minimizes the effect of such coupling. Under overload
conditions, when there is a scarcity of capacity, such coupling can
cause severe degradation of service, especially for the lower priority
CTs.

Thus, the objective of maximizing efficient bandwidth usage, as stated
in bandwidth constraint model objectives, must be exercised with care.
Due consideration needs to be given also to achieving bandwidth
isolation under overload, in order to minimize the effect of
interactions among the different CTs. The proper tradeoff of bandwidth
sharing and bandwidth isolation needs to be achieved in the selection of
a bandwidth constraint model. Bandwidth reservation supports greater
efficiency in bandwidth sharing while still providing bandwidth
isolation and protection against QoS degradation.

In summary, the proposed MAR bandwidth constraint model includes the
following: a) allocate bandwidth to individual CTs, b) protect allocated
bandwidth by bandwidth reservation methods, as needed, but otherwise
fully share bandwidth, c) differentiate high-priority, normal-priority,
and best-effort priority services, and d) provide admission control to
reject connection requests when needed to meet performance objectives.

In the modeling results, the MAR bandwidth constraint model compares
favorably with methods that do not use bandwidth reservation. In
particular, some of the conclusions from the modeling are as follows:

o MAR bandwidth allocation is effective in improving performance over
methods that lack bandwidth reservation and that allow more bandwidth
sharing under congestion,
o MAR achieves service differentiation for high-priority,
normal-priority, and best-effort priority services,
o bandwidth reservation supports greater efficiency in bandwidth sharing
while still providing bandwidth isolation and protection against QoS
degradation, and is critical to stable and efficient network
performance.

Full Copyright Statement

Copyright (C) The Internet Society (2003). All Rights Reserved.
This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it or
assist in its implementation may be prepared, copied, published and
distributed, in whole or in part, without restriction of any kind,
provided that the above copyright notice and this paragraph are included
on all such copies and derivative works.

However, this document itself may not be modified in any way, such as by
removing the copyright notice or references to the Internet Society or
other Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be followed,
or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS
IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK
FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT
LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT
INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.