Network Working Group                                          Jerry Ash
Internet Draft                                                      AT&T
Category: Experimental                                        June, 2003
Expiration Date: December 2003

        Max Allocation with Reservation Bandwidth Constraint Model
             for MPLS/DiffServ TE & Performance Comparisons

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

This document complements the DiffServ-aware MPLS TE (DSTE) requirements document by giving a functional specification for the Maximum Allocation with Reservation (MAR) bandwidth constraint model. Assumptions, applicability, and examples of the operation of the MAR bandwidth constraint model are presented. MAR performance is analyzed relative to the criteria for selecting a bandwidth constraint model, in order to provide guidance to users implementing the model in their networks.

Table of Contents

   1. Introduction
   2. Definitions
   3. Assumptions & Applicability
   4. Functional Specification of the MAR Bandwidth Constraint Model
   5. Setting Bandwidth Constraints
   6. Example of MAR Operation
   7. Summary
   8. Security Considerations
   9. Acknowledgements
   10. References
   11. Authors' Addresses
   ANNEX A. MAR Operation & Performance Analysis

1. Introduction

DiffServ-aware MPLS traffic engineering (DSTE) requirements and protocol extensions are specified in [DSTE-REQ, DSTE-PROTO]. A requirement for DSTE implementation is the specification of bandwidth constraint models for use with DSTE. The bandwidth constraint model provides the 'rules' to support the allocation of bandwidth to individual class types (CTs). CTs are groupings of service classes in the DSTE model, which are provided separate bandwidth allocations, priorities, and QoS objectives. Several CTs can share a common bandwidth pool on an integrated, multiservice MPLS/DiffServ network.

This document is intended to complement the DSTE requirements document [DSTE-REQ] by giving a functional specification for the Maximum Allocation with Reservation (MAR) bandwidth constraint model. Examples of the operation of the MAR bandwidth constraint model are presented. MAR performance is analyzed relative to the criteria for selecting a bandwidth constraint model, in order to provide guidance to users implementing the model in their networks.

Two other bandwidth constraint models are being specified for use in DSTE:

   1. maximum allocation model (MAM) [MAM1, MAM2] - the maximum allowable bandwidth usage of each CT is explicitly specified.
   2. Russian doll model (RDM) [RDM] - the maximum allowable bandwidth usage is specified cumulatively by grouping successive CTs according to priority classes.

MAR is similar to MAM in that a maximum bandwidth allocation is given to each CT.
However, through the use of bandwidth reservation and protection mechanisms, CTs are allowed to exceed their bandwidth allocations under conditions of no congestion, but revert to their allocated bandwidths when overload and congestion occur.

All bandwidth constraint models should meet these objectives:

   1. applies equally when preemption is either enabled or disabled (when preemption is disabled, the model still works 'reasonably' well),
   2. bandwidth efficiency, i.e., good bandwidth sharing among CTs under both normal and overload conditions,
   3. bandwidth isolation, i.e., a CT cannot hog the bandwidth of another CT under overload conditions,
   4. protection against QoS degradation, at least of the high-priority CTs (e.g., high-priority voice, high-priority data, etc.), and
   5. reasonably simple, i.e., does not require additional IGP extensions and minimizes signaling load processing requirements.

Modeling analysis presented in Annex A shows that the MAR model meets all of these objectives and provides good network performance relative to MAM and full-sharing models, under normal and abnormal operating conditions. It is demonstrated that MAR simultaneously achieves bandwidth efficiency, bandwidth isolation, and protection against QoS degradation without preemption.

Section 3 gives the assumptions and applicability, Section 4 a functional specification of the MAR bandwidth constraint model, Section 5 guidance on setting bandwidth constraints, and Section 6 an example of MAR operation. In Annex A, MAR performance is analyzed relative to the criteria for selecting a bandwidth constraint model, in order to provide guidance to users implementing the model in their networks.

2. Definitions

For readability a number of definitions from [DSTE-REQ, DSTE-PROTO] are repeated here:

Traffic Trunk: an aggregation of traffic flows of the same class (i.e., which are to be treated equivalently from the DSTE perspective) which are placed inside an LSP.

Class-Type (CT): the set of Traffic Trunks crossing a link that is governed by a specific set of bandwidth constraints. CT is used for the purposes of link bandwidth allocation, constraint-based routing, and admission control. A given Traffic Trunk belongs to the same CT on all links. Up to 8 CTs (MaxCT = 8) are supported. They are referred to as CTc, 0 <= c <= MaxCT-1 = 7. Each CT is assigned either a Bandwidth Constraint or a set of Bandwidth Constraints. Up to 8 Bandwidth Constraints (MaxBC = 8) are supported, and they are referred to as BCc, 0 <= c <= MaxBC-1 = 7.

TE-Class: a pair consisting of:
   i.  a CT, and
   ii. a preemption priority allowed for that CT. This means that an LSP transporting a Traffic Trunk from that CT can use that preemption priority as the set-up priority, as the holding priority, or both.

MAX_RESERVABLE_BWk: maximum reservable bandwidth on link k; specifies the maximum bandwidth that may be reserved. This may be greater than the maximum link bandwidth, in which case the link may be oversubscribed [KATZ-YEUNG].

RESERVED_BWck: reserved bandwidth-in-progress on CTc on link k (0 <= c <= MaxCT-1); RESERVED_BWck = sum of the bandwidth reserved by all established LSPs which belong to CTc.

UNRESERVED_BWck: unreserved link bandwidth on CTc on link k; specifies the amount of bandwidth not yet reserved for CTc. UNRESERVED_BWck = MAX_RESERVABLE_BWk - sum [RESERVED_BWck (0 <= c <= MaxCT-1)].

BCck: bandwidth constraint for CTc on link k = allocated (minimum guaranteed) bandwidth for CTc on link k (see Section 4).

RBW_THRESk: reservation bandwidth threshold for link k (see Section 4).
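The per-link quantities defined above can be summarized in a small data structure. The following Python sketch is purely illustrative: the class and field names simply mirror the Section 2 definitions and are not part of any protocol specification. It is reused in the other sketches later in this document.

   # Illustrative per-link DSTE state for the MAR sketches in this document
   # (field names mirror the Section 2 definitions; not a normative structure).
   from dataclasses import dataclass, field
   from typing import List

   MAX_CT = 8   # up to 8 class types, CT0..CT7

   @dataclass
   class LinkState:
       max_reservable_bw: float     # MAX_RESERVABLE_BWk
       rbw_thres: float             # RBW_THRESk (see Section 4)
       bc: List[float] = field(default_factory=lambda: [0.0] * MAX_CT)           # BCck
       reserved_bw: List[float] = field(default_factory=lambda: [0.0] * MAX_CT)  # RESERVED_BWck

       def unreserved_bw(self) -> float:
           # UNRESERVED_BW = MAX_RESERVABLE_BW - sum of RESERVED_BWck over all CTs
           return self.max_reservable_bw - sum(self.reserved_bw)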
3. Assumptions & Applicability

In general, DSTE is a bandwidth allocation mechanism for different classes of traffic allocated to various CTs (e.g., voice, normal data, best-effort data). Network operations functions such as capacity design, bandwidth allocation, routing design, and network planning are normally based on measured and forecast traffic load [ASH1]. As such, the following assumptions are made regarding the operation of MAR:

   1. Connection admission control (CAC) allocates bandwidth for network flows/LSPs according to the traffic load assigned to each CT, based on traffic measurement and forecast.
   2. CAC could allocate bandwidth per flow, per LSP, per traffic trunk, or otherwise. That is, no specific assumption is made on a specific CAC method, only that CT bandwidth allocation is related to the measured/forecast traffic load, as per assumption #1.
   3. CT bandwidth allocation is adjusted up or down according to measured/forecast traffic load. No specific time period is assumed for this adjustment; it could be short term (hours), daily, weekly, monthly, or otherwise.
   4. Capacity management and CT bandwidth allocation thresholds (e.g., BCc) are designed according to traffic load, and are based on traffic measurement and forecast. Again, no specific time period is assumed for this adjustment; it could be short term (hours), daily, weekly, monthly, or otherwise.
   5. No assumption is made on the order in which traffic is allocated to various CTs; traffic allocation is assumed to be based only on traffic load as it is measured and/or forecast.
   6. If link bandwidth is exhausted on a given path for a flow/LSP/traffic trunk, alternate paths may be attempted to satisfy CT bandwidth allocation.

Note that the above assumptions are not unique to MAR, but are generic, common assumptions for all BC models.

4. Functional Specification of the MAR Bandwidth Constraint Model

In the MAR bandwidth constraint model, the bandwidth allocation control for each CT is based on estimated bandwidth needs, bandwidth use, and the status of links. The LER makes needed bandwidth allocation changes and uses [RSVP-TE], for example, to determine if link bandwidth can be allocated to a CT. Bandwidth allocated to individual CTs is protected as needed but otherwise shared. Under normal non-congested network conditions, all CTs/services fully share all available bandwidth. When congestion occurs for a particular CTc, bandwidth reservation acts to prohibit traffic from other CTs from seizing the allocated capacity for CTc.

On a given link k, a small amount of bandwidth RBW_THRESk, the reservation bandwidth threshold for link k, is reserved and governs the admission control on link k. Also associated with each CTc on link k are the allocated bandwidth constraints BCck to govern bandwidth allocation and protection. The reservation bandwidth on a link, RBW_THRESk, can be accessed when a given CTc has bandwidth-in-use RESERVED_BWck below its allocated bandwidth constraint BCck. However, if RESERVED_BWck exceeds its allocated bandwidth constraint BCck, then the reservation bandwidth RBW_THRESk cannot be accessed. In this way, bandwidth can be fully shared among CTs if available, but is otherwise protected by bandwidth reservation methods.

Bandwidth can be accessed for a bandwidth request = DBW for CTc on a given link k based on the following rules:

   Table 1: Rules for Admitting LSP Bandwidth Request = DBW on Link k

   For an LSP on a high-priority or normal-priority CTc:
      If RESERVED_BWck <= BCck: admit if DBW <= UNRESERVED_BWk
      If RESERVED_BWck >  BCck: admit if DBW <= UNRESERVED_BWk - RBW_THRESk

   For an LSP on a best-effort priority CTc:
      allocated bandwidth BCck = 0;
      DiffServ queuing admits BE packets only if there is available link bandwidth.

The normal semantics of setup and holding priority are applied in the MAR bandwidth constraint model, and cross-CT preemption is permitted when preemption is enabled. The bandwidth allocation rules defined in Table 1 are illustrated with an example in Section 6 and with simulation analysis in Annex A.
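As an informal illustration only, the Table 1 check for high- and normal-priority CTs can be sketched as follows, reusing the illustrative LinkState structure from Section 2. The function name and its exact form are assumptions of this sketch, not part of the specification; best-effort CTs, with BCck = 0, are handled by DiffServ queuing rather than by this check.

   # Illustrative sketch of the Table 1 admission rules (not normative).
   def admit_request(link, ct, dbw):
       """Return True if a request of DBW for a high/normal-priority CTc may be admitted."""
       unreserved_bw = link.unreserved_bw()            # UNRESERVED_BWk
       if link.reserved_bw[ct] <= link.bc[ct]:
           # CTc is at or below its constraint BCck: it may use any unreserved
           # bandwidth, including the reservation threshold RBW_THRESk.
           return dbw <= unreserved_bw
       # CTc already exceeds BCck: it must leave RBW_THRESk untouched.
       return dbw <= unreserved_bw - link.rbw_thres

When preemption is enabled, the normal setup/holding priority semantics would be applied on top of this check, as noted above.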
5. Setting Bandwidth Constraints

For a normal priority CTc, the bandwidth constraints BCck on link k are set by allocating the maximum reservable bandwidth (MAX_RESERVABLE_BWk) in proportion to the forecast or measured traffic load bandwidth TRAF_LOAD_BWck for CTc on link k. That is:

   PROPORTIONAL_BWck = TRAF_LOAD_BWck / [sum {TRAF_LOAD_BWck, c = 0, ..., MaxCT-1}] X MAX_RESERVABLE_BWk

   For normal priority CTc:      BCck = PROPORTIONAL_BWck

For a high priority CTc, the bandwidth constraint BCck is set to a multiple of the proportional bandwidth. That is:

   For high priority CTc:        BCck = FACTOR X PROPORTIONAL_BWck

where FACTOR is a constant multiplier (e.g., FACTOR = 2 or 3 is typical). This results in some 'over-allocation' of the maximum reservable bandwidth and gives priority to the high priority CTs. Normally the bandwidth allocated to high priority CTs should be a relatively small fraction of the total link bandwidth, a maximum of 10-15 percent being a reasonable guideline.

As stated in Section 4, the bandwidth allocated to a best-effort priority CTc should be set to zero. That is:

   For best-effort priority CTc: BCck = 0
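A minimal sketch of this allocation rule follows, assuming per-CT traffic load estimates (TRAF_LOAD_BWck) are available for the link. The function name, the priority labels, and the default FACTOR value are illustrative assumptions, not part of the specification.

   # Illustrative sketch of Section 5: setting BCck from measured/forecast
   # per-CT traffic loads on a link (not normative).
   def set_bandwidth_constraints(traf_load_bw, priorities, max_reservable_bw, factor=2.0):
       """traf_load_bw[c] = TRAF_LOAD_BWck; priorities[c] in {'high', 'normal', 'best-effort'}."""
       total_load = sum(traf_load_bw) or 1.0            # guard against an all-zero forecast
       bc = []
       for load, priority in zip(traf_load_bw, priorities):
           proportional_bw = (load / total_load) * max_reservable_bw   # PROPORTIONAL_BWck
           if priority == 'best-effort':
               bc.append(0.0)                            # BCck = 0
           elif priority == 'high':
               bc.append(factor * proportional_bw)       # BCck = FACTOR X PROPORTIONAL_BWck
           else:
               bc.append(proportional_bw)                # BCck = PROPORTIONAL_BWck
       return bc

As noted above, the over-allocation for high-priority CTs should remain a small fraction of the link bandwidth (a maximum of 10-15 percent of the total is the guideline given here).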
6. Example of MAR Operation

In the example, assume there are three class-types: CT0, CT1, CT2. We consider a particular link with

   MAX_RESERVABLE_BW = 100

and with the allocated bandwidth constraints set as follows:

   BC0 = 30
   BC1 = 20
   BC2 = 20

These bandwidth constraints are based on the normal traffic loads, as discussed in Section 5. With MAR, any of the CTs is allowed to exceed its bandwidth constraint BCc as long as there are at least RBW_THRES (the reservation bandwidth threshold on the link) units of spare bandwidth remaining. Let's assume

   RBW_THRES = 10

So under overload, if

   RESERVED_BW0 = 50
   RESERVED_BW1 = 30
   RESERVED_BW2 = 10

then for this loading

   UNRESERVED_BW = 100 - 50 - 30 - 10 = 10

CT0 and CT1 can no longer increase their bandwidth on the link, since they are above their BC values and there are only RBW_THRES = 10 units of spare bandwidth left on the link. But CT2 can take the additional bandwidth (up to 10 units) if the demand arrives, since it is below its BC value. As also discussed in Section 4, if best-effort traffic is present, it can always seize whatever spare bandwidth is available on the link at the moment, but is subject to being lost at the queues in favor of the higher priority traffic.

Let's say an LSP arrives for CT0 needing 5 units of bandwidth (i.e., DBW = 5). We need to decide based on Table 1 whether to admit this LSP or not. Since for CT0

   RESERVED_BW0 > BC0 (50 > 30), and
   DBW > UNRESERVED_BW - RBW_THRES (i.e., 5 > 10 - 10 = 0),

Table 1 says the LSP is rejected/blocked.

Now let's say an LSP arrives for CT2 needing 5 units of bandwidth (i.e., DBW = 5). We need to decide based on Table 1 whether to admit this LSP or not. Since for CT2

   RESERVED_BW2 < BC2 (10 < 20), and
   DBW <= UNRESERVED_BW (i.e., 5 <= 10),

Table 1 says to admit the LSP.

Hence, in the current state of the link and the current CT loading, CT0 and CT1 can no longer increase their bandwidth on the link, since they are above their BCc values and there are only RBW_THRES = 10 units of spare bandwidth left on the link. But CT2 can take the additional bandwidth (up to 10 units) if the demand arrives, since it is below its BCc value.
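For illustration only, the same decisions can be reproduced with the illustrative LinkState and admit_request sketches given earlier (none of this is normative; the numbers are exactly those of the example above):

   # Reproducing the Section 6 example with the illustrative sketches above.
   link = LinkState(max_reservable_bw=100.0, rbw_thres=10.0)
   link.bc[0:3] = [30.0, 20.0, 20.0]              # BC0, BC1, BC2
   link.reserved_bw[0:3] = [50.0, 30.0, 10.0]     # RESERVED_BW0..2 under overload

   print(link.unreserved_bw())                    # 10.0
   print(admit_request(link, ct=0, dbw=5.0))      # False: CT0 exceeds BC0 and 5 > 10 - 10
   print(admit_request(link, ct=2, dbw=5.0))      # True:  CT2 is below BC2 and 5 <= 10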
7. Summary

The proposed MAR bandwidth constraint model includes the following:

   a) allocate bandwidth to individual CTs,
   b) protect allocated bandwidth by bandwidth reservation methods, as needed, but otherwise fully share bandwidth,
   c) differentiate high-priority, normal-priority, and best-effort priority services, and
   d) provide admission control to reject connection requests when needed to meet performance objectives.

Modeling results presented in Annex A show that MAR bandwidth allocation a) achieves greater efficiency in bandwidth sharing while still providing bandwidth isolation and protection against QoS degradation, and b) achieves service differentiation for high-priority, normal-priority, and best-effort priority services.

8. Security Considerations

No new security considerations are raised by this document; they are the same as those of the DSTE requirements document [DSTE-REQ].

9. Acknowledgements

DSTE and bandwidth constraint models have been an active area of discussion in the TEWG. I would like to thank Wai Sum Lai for his support and review of this draft. I also appreciate helpful discussions with Francois Le Faucheur.

10. References

   [AKI]        Akinpelu, J. M., "The Overload Performance of Engineered Networks with Nonhierarchical & Hierarchical Routing", BSTJ, Vol. 63, 1984.
   [ASH1]       Ash, G. R., "Dynamic Routing in Telecommunications Networks", McGraw-Hill, 1998.
   [ASH2]       Ash, G. R., et al., "Routing Evolution in Multiservice Integrated Voice/Data Networks", Proceedings of ITC-16, Edinburgh, June 1999.
   [ASH3]       Ash, G. R., "Traffic Engineering & QoS Methods for IP-, ATM-, & TDM-Based Multiservice Networks", work in progress.
   [BUR]        Burke, P. J., "Blocking Probabilities Associated with Directional Reservation", unpublished memorandum, 1961.
   [DIFF-MPLS]  Le Faucheur, F., et al., "Multi-Protocol Label Switching (MPLS) Support of Differentiated Services", RFC 3270, May 2002.
   [DSTE-REQ]   Le Faucheur, F., et al., "Requirements for Support of Diff-Serv-aware MPLS Traffic Engineering", work in progress.
   [DSTE-PROTO] Le Faucheur, F., et al., "Protocol Extensions for Support of Diff-Serv-aware MPLS Traffic Engineering", work in progress.
   [DIFFSERV]   Blake, S., et al., "An Architecture for Differentiated Services", RFC 2475, December 1998.
   [E.360]      ITU-T Recommendations E.360.1 through E.360.7, "QoS Routing & Related Traffic Engineering Methods for Multiservice TDM-, ATM-, & IP-Based Networks".
   [KATZ-YEUNG] Katz, D., Yeung, D., and Kompella, K., "Traffic Engineering Extensions to OSPF Version 2", work in progress.
   [KEY]        Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
   [KRU]        Krupp, R. S., "Stabilization of Alternate Routing Networks", Proceedings of ICC, Philadelphia, 1982.
   [LAI]        Lai, W., "Traffic Engineering for MPLS", Internet Performance and Control of Network Systems III Conference, SPIE Proceedings Vol. 4865, pp. 256-267, Boston, Massachusetts, USA, 29 July-1 August 2002 (http://www.columbia.edu/~ffl5/waisum/bcmodel.pdf).
   [MAM1]       Lai, W., "Maximum Allocation Bandwidth Constraints Model for Diffserv-TE & Performance Comparisons", work in progress.
   [MAM2]       Lai, W. and Le Faucheur, F., "Maximum Allocation Bandwidth Constraints Model for Diff-Serv-aware MPLS Traffic Engineering", work in progress.
   [MUM]        Mummert, V. S., "Network Management and Its Implementation on the No. 4ESS", International Switching Symposium, Japan, 1976.
   [NAK]        Nakagome, Y. and Mori, H., "Flexible Routing in the Global Communication Network", Proceedings of ITC-7, Stockholm, 1973.
   [MPLS-ARCH]  Rosen, E., et al., "Multiprotocol Label Switching Architecture", RFC 3031, January 2001.
   [RDM]        Le Faucheur, F., "Russian Dolls Bandwidth Constraints Model for Diff-Serv-aware MPLS Traffic Engineering", work in progress.
   [RFC2026]    Bradner, S., "The Internet Standards Process -- Revision 3", BCP 9, RFC 2026, October 1996.
   [RSVP-TE]    Awduche, D., et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels", RFC 3209, December 2001.

11. Authors' Addresses

   Jerry Ash
   AT&T
   Room MT D5-2A01
   200 Laurel Avenue
   Middletown, NJ 07748, USA
   Phone: +1 732-420-4578
   Email: gash@att.com

ANNEX A - MAR Operation & Performance Analysis

A.1 MAR Operation

In the MAR bandwidth constraint model, the bandwidth allocation control for each CT is based on estimated bandwidth needs, bandwidth use, and the status of links. The LER makes needed bandwidth allocation changes and uses [RSVP-TE], for example, to determine if link bandwidth can be allocated to a CT. Bandwidth allocated to individual CTs is protected as needed but otherwise shared. Under normal non-congested network conditions, all CTs/services fully share all available bandwidth. When congestion occurs for a particular CTc, bandwidth reservation acts to prohibit traffic from other CTs from seizing the allocated capacity for CTc. Associated with each CT is the allocated bandwidth constraint (BCc) to govern bandwidth allocation and protection; these parameters are illustrated with examples in this Annex.

In performing MAR bandwidth allocation for a given flow/LSP, the LER first determines the egress LSR address, service identity, and CT. The connection request is allocated an equivalent bandwidth to be routed on a particular CT. The LER then accesses the CT priority, QoS/traffic parameters, and routing table between the LER and egress LSR, and sets up the connection request using the MAR bandwidth allocation rules. The LER selects a first-choice path and determines if bandwidth can be allocated on the path based on the MAR bandwidth allocation rules given in Section 4. If the first-choice path has insufficient bandwidth, the LER may then try alternate paths, again applying the MAR bandwidth allocation rules.

MAR bandwidth allocation is done on a per-CT basis, in which aggregated CT bandwidth is managed to meet the overall bandwidth requirements of CT service needs. Individual flows/LSPs are allocated bandwidth in the corresponding CT according to CT bandwidth availability. A fundamental principle applied in MAR bandwidth allocation methods is the use of bandwidth reservation techniques. Bandwidth reservation gives preference to the preferred traffic by allowing it to seize any idle bandwidth on a link, while allowing the non-preferred traffic to seize bandwidth only if a minimum level of idle bandwidth, the reservation bandwidth threshold RBW_THRES, remains available. Burke [BUR] first analyzed bandwidth reservation behavior from the solution of the birth-death equations for the bandwidth reservation model.
Burke's model showed the relative lost-traffic level for preferred traffic, which is not subject to bandwidth reservation restrictions, as compared to non-preferred traffic, which is subject to the restrictions. Bandwidth reservation protection is robust to traffic variations and provides significant dynamic protection of particular streams of traffic. It is widely used in large-scale network applications [ASH1, MUM, AKI, KRU, NAK].

Bandwidth reservation is used in MAR bandwidth allocation to control the sharing of link bandwidth across different CTs. On a given link, a small amount of bandwidth RBW_THRES is reserved (say 1% of the total link bandwidth), and the reservation bandwidth can be accessed when a given CT has reserved bandwidth-in-progress RESERVED_BW below its allocated bandwidth BC. That is, if the available link bandwidth (unreserved idle link bandwidth UNRESERVED_BW) exceeds RBW_THRES, then any CT is free to access the available bandwidth on the link. However, if UNRESERVED_BW is less than RBW_THRES, then a CT can utilize the available bandwidth only if its current bandwidth usage is below the allocated amount BC. In this way, bandwidth can be fully shared among CTs if available, but is protected by bandwidth reservation if below the reservation level. Through the bandwidth reservation mechanism, MAR bandwidth allocation also gives preference to high-priority CTs, in comparison to normal-priority and best-effort priority CTs. Hence, bandwidth allocated to each CT is protected by bandwidth reservation methods, as needed, but otherwise shared.

Each LER monitors bandwidth use on each CT and determines if connection requests can be allocated to the CT bandwidth. For example, for a bandwidth request of DBW on a given flow/LSP, the LER determines the CT priority (high, normal, or best-effort), CT bandwidth-in-use, and CT bandwidth allocation thresholds, and uses these parameters to determine the allowed load state threshold to which capacity can be allocated. In allocating bandwidth DBW to a CT on a given LSP, say A-B-E, each link in the path is checked for available bandwidth in comparison to the allowed load state. If bandwidth is unavailable on any link in path A-B-E, another LSP could be tried, such as A-C-D-E. Hence, determination of the link load state is necessary for MAR bandwidth allocation, and two link load states are distinguished: available (non-reserved) bandwidth (ABW_STATE) and reserved-bandwidth (RBW_STATE). Management of CT capacity uses the link state and the allowed load state threshold to determine if a bandwidth allocation request can be accepted on a given CT.
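The two link load states and the per-path check described above can be sketched as follows, again reusing the illustrative LinkState and admit_request sketches. The state names follow the text; the path representation and function names are assumptions of this sketch only.

   # Illustrative sketch of the A.1 link load states and path-level check (not normative).
   ABW_STATE = "available-bandwidth"    # UNRESERVED_BW > RBW_THRES on the link
   RBW_STATE = "reserved-bandwidth"     # UNRESERVED_BW <= RBW_THRES on the link

   def link_load_state(link):
       return ABW_STATE if link.unreserved_bw() > link.rbw_thres else RBW_STATE

   def admit_on_path(path_links, ct, dbw):
       # Admit DBW for CTc only if every link on the path admits it (Table 1 rules).
       return all(admit_request(link, ct, dbw) for link in path_links)

   def route_request(candidate_paths, ct, dbw):
       # Try the first-choice path (e.g., A-B-E); if it lacks bandwidth,
       # try alternate paths (e.g., A-C-D-E) under the same rules.
       for path in candidate_paths:
           if admit_on_path(path, ct, dbw):
               return path
       return None   # the request is blocked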
A.2 Analysis of MAR Performance

In this Annex, modeling analysis is presented in which MAR bandwidth allocation is shown to provide good network performance relative to full-sharing models, under normal and abnormal operating conditions. A large-scale MPLS/DiffServ TE simulation model is used, in which several CTs with different priority classes share the pool of bandwidth on a multiservice, integrated voice/data network. MAR methods have also been analyzed in practice for TDM-based networks [ASH1] and in modeling studies for IP-based networks [ASH2, ASH3, E.360].

All bandwidth constraint models should meet these objectives:

   1. applies equally when preemption is either enabled or disabled (when preemption is disabled, the model still works 'reasonably' well),
   2. bandwidth efficiency, i.e., good bandwidth sharing among CTs under both normal and overload conditions,
   3. bandwidth isolation, i.e., a CT cannot hog the bandwidth of another CT under overload conditions,
   4. protection against QoS degradation, at least of the high-priority CTs (e.g., high-priority voice, high-priority data, etc.), and
   5. reasonably simple, i.e., does not require additional IGP extensions and minimizes signaling load processing requirements.

The use of any given bandwidth constraint model has significant impacts on the performance of a network, as explained later. Therefore, the criteria used to select a model must enable us to evaluate how a particular model delivers its performance, relative to other models. Lai [LAI, MAM1] has analyzed the MAM and RDM models and provided valuable insights into the relative performance of these models under various network conditions. In environments where preemption is not used, MAM is attractive because a) it is good at achieving isolation, and b) it achieves reasonable bandwidth efficiency with some QoS degradation of lower classes. When preemption is used, RDM is attractive because it can achieve bandwidth efficiency under normal load. However, RDM cannot provide service isolation under high load or when preemption is not used.

Our performance analysis of MAR bandwidth allocation methods is based on a full-scale, 135-node simulation model of a national network together with a multiservice traffic demand model to study various scenarios and tradeoffs [ASH3]. Three levels of traffic priority (high, normal, and best-effort) are given across 5 CTs: normal priority voice, high priority voice, normal priority data, high priority data, and best effort data. The performance analyses for overloads and failures include a) the MAR bandwidth constraint model, as specified in Section 4, b) the MAM bandwidth constraint model, and c) the No-DSTE bandwidth constraint model.

The allocated bandwidth constraints for MAR are as described in Section 5:

   Normal priority CTs:      BCck = PROPORTIONAL_BWck
   High priority CTs:        BCck = FACTOR X PROPORTIONAL_BWck
   Best-effort priority CTs: BCck = 0

In the MAM bandwidth constraint model, the bandwidth constraints for each CT are set to a multiple of the proportional bandwidth allocation:

   Normal priority CTs:      BCck = FACTOR1 X PROPORTIONAL_BWck
   High priority CTs:        BCck = FACTOR2 X PROPORTIONAL_BWck
   Best-effort priority CTs: BCck = 0

Simulations show that for MAM, the sum of the BCck should exceed MAX_RESERVABLE_BWk for better efficiency, as follows:

   1. For the normal priority CTs, the BCck values need to be over-allocated to get reasonable performance. It was found that over-allocating by 100%, that is, setting FACTOR1 = 2, gave reasonable performance.
   2. The high priority CTs can be over-allocated by a larger multiple FACTOR2 in MAM, and this gives better performance.

The rather large amount of over-allocation improves efficiency but somewhat defeats the 'bandwidth protection/isolation' needed with a BC model, since one CT can now invade the bandwidth allocated to another CT. Each CT is restricted to its allocated bandwidth constraint BCck, which is the maximum level of bandwidth allocated to each CT on each link, as in normal operation of MAM.

In the No-DSTE bandwidth constraint model, no reservation or protection of CT bandwidth is applied, and bandwidth allocation requests are admitted if bandwidth is available. Furthermore, no queueing priority is applied to any of the CTs in the No-DSTE bandwidth constraint model.
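For comparison with the MAR admission sketch given earlier, a simplified reading of a MAM-style admission check might look as follows. This is an illustrative assumption of this document only; MAM itself is specified in [MAM1, MAM2]. Each CT is simply capped at its BCck, and no reservation threshold is applied.

   # Simplified, illustrative MAM-style admission check for comparison (not normative).
   def admit_request_mam(link, ct, dbw):
       within_ct_cap = link.reserved_bw[ct] + dbw <= link.bc[ct]   # CTc may not exceed BCck
       within_link = dbw <= link.unreserved_bw()                   # stay within MAX_RESERVABLE_BW
       return within_ct_cap and within_link

With FACTOR1 = 2, the sum of the BCck deliberately exceeds MAX_RESERVABLE_BWk, which is what allows one CT to invade bandwidth nominally allocated to another, as noted above.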
Table 2 gives performance results for a six-times overload on a single network node at Oakbrook, IL. The numbers given in the table are the total network percent lost (blocked) or delayed traffic. Note that in the focused overload scenario studied here, the percent lost/delayed traffic on the Oakbrook node is much higher than the network-wide average values given.

   Table 2
   Performance Comparison for MAR, MAM, & No-DSTE Bandwidth Constraint (BC) Models
   6X Focused Overload on Oakbrook
   (Total Network % Lost/Delayed Traffic)

   Class Type                   MAR BC    MAM BC    No-DSTE BC
                                Model     Model     Model
   NORMAL PRIORITY VOICE         0.00      1.97     10.30
   HIGH PRIORITY VOICE           0.00      0.00      7.05
   NORMAL PRIORITY DATA          0.00      6.63     13.30
   HIGH PRIORITY DATA            0.00      0.00      7.05
   BEST EFFORT PRIORITY DATA    12.33     11.92      9.65

Clearly the performance is better with MAR bandwidth allocation, and the results show that performance improves when bandwidth reservation is used. The poor performance of the No-DSTE model, which lacks bandwidth reservation, is due to the lack of protection of allocated bandwidth. If we add the bandwidth reservation mechanism, then performance of the network is greatly improved.

The simulations showed that the performance of MAM is quite sensitive to the over-allocation factors discussed above. For example, if the BCc values are proportionally allocated with FACTOR1 = 1, then the results are much worse, as shown in Table 3:

   Table 3
   Performance Comparison for MAM Bandwidth Constraint Model with Different Over-allocation Factors
   6X Focused Overload on Oakbrook
   (Total Network % Lost/Delayed Traffic)

   Class Type                   (FACTOR1 = 1)   (FACTOR1 = 2)
   NORMAL PRIORITY VOICE            31.69            1.97
   HIGH PRIORITY VOICE               0.00            0.00
   NORMAL PRIORITY DATA             31.22            6.63
   HIGH PRIORITY DATA                0.00            0.00
   BEST EFFORT PRIORITY DATA         8.76           11.92

Table 4 illustrates the performance of the MAR, MAM, and No-DSTE bandwidth constraint models for a high-day network load pattern with a 50% general overload. The numbers given in the table are the total network percent lost (blocked) or delayed traffic.

   Table 4
   Performance Comparison for MAR, MAM, & No-DSTE Bandwidth Constraint (BC) Models
   50% General Overload
   (Total Network % Lost/Delayed Traffic)

   Class Type                   MAR BC    MAM BC    No-DSTE BC
                                Model     Model     Model
   NORMAL PRIORITY VOICE         0.02      0.13      7.98
   HIGH PRIORITY VOICE           0.00      0.00      8.94
   NORMAL PRIORITY DATA          0.00      0.26      6.93
   HIGH PRIORITY DATA            0.00      0.00      8.94
   BEST EFFORT PRIORITY DATA    10.41     10.39      8.40

Again, we can see that the performance is always better when MAR bandwidth allocation and reservation is used.

Table 5 illustrates the performance of the MAR, MAM, and No-DSTE bandwidth constraint models for a single link failure scenario (3 OC-48). The numbers given in the table are the total network percent lost (blocked) or delayed traffic.

   Table 5
   Performance Comparison for MAR, MAM, & No-DSTE Bandwidth Constraint (BC) Models
   Single Link Failure (3 OC-48s)
   (Total Network % Lost/Delayed Traffic)

   Class Type                   MAR BC    MAM BC    No-DSTE BC
                                Model     Model     Model
   NORMAL PRIORITY VOICE         0.00      0.62      0.58
   HIGH PRIORITY VOICE           0.00      0.31      0.29
   NORMAL PRIORITY DATA          0.00      0.48      0.46
   HIGH PRIORITY DATA            0.00      0.31      0.29
   BEST EFFORT PRIORITY DATA     0.12      0.72      0.66

Again, we can see that the performance is always better when MAR bandwidth allocation and reservation is used.

Table 6 illustrates the performance of the MAR, MAM, and No-DSTE bandwidth constraint models for a multiple link failure scenario (3 links with 3 OC-48, 3 OC-3, and 4 OC-3 capacity, respectively). The numbers given in the table are the total network percent lost (blocked) or delayed traffic.
   Table 6
   Performance Comparison for MAR, MAM, & No-DSTE Bandwidth Constraint (BC) Models
   Multiple Link Failure (3 Links with 3 OC-48, 3 OC-3, 4 OC-3, Respectively)
   (Total Network % Lost/Delayed Traffic)

   Class Type                   MAR BC    MAM BC    No-DSTE BC
                                Model     Model     Model
   NORMAL PRIORITY VOICE         0.00      0.91      0.86
   HIGH PRIORITY VOICE           0.00      0.44      0.42
   NORMAL PRIORITY DATA          0.00      0.70      0.64
   HIGH PRIORITY DATA            0.00      0.44      0.42
   BEST EFFORT PRIORITY DATA     0.14      1.03      0.98

Again, we can see that the performance is always better when MAR bandwidth allocation and reservation is used.

Lai's results [LAI, MAM1] show the trade-off between bandwidth sharing and service protection/isolation, using an analytic model of a single link. He shows that RDM has a higher degree of sharing than MAM. Furthermore, for a single link, the overall loss probability is the smallest under full sharing and largest under MAM, with RDM being intermediate. Hence, on a single link, Lai shows that the full sharing model yields the highest link efficiency and MAM the lowest, and that full sharing has the poorest service protection capability.

The results of the present study show that when considering a network context, in which there are many links and multiple-link routing paths are used, full sharing does not necessarily lead to maximum network-wide bandwidth efficiency. In fact, the results in Table 4 show that the No-DSTE model not only degrades total network throughput, but also degrades the performance of every CT that should be protected. Allowing more bandwidth sharing may improve performance up to a point, but can severely degrade performance if care is not taken to protect allocated bandwidth under congestion.

Both Lai's study and this study show that increasing the degree of bandwidth sharing among the different CTs leads to a tighter coupling between CTs. Under normal loading conditions, there is adequate capacity for each CT, which minimizes the effect of such coupling. Under overload conditions, when there is a scarcity of capacity, such coupling can cause severe degradation of service, especially for the lower priority CTs. Thus, the objective of maximizing efficient bandwidth usage, as stated in the bandwidth constraint model objectives, must be exercised with care. Due consideration also needs to be given to achieving bandwidth isolation under overload, in order to minimize the effect of interactions among the different CTs. The proper tradeoff between bandwidth sharing and bandwidth isolation needs to be achieved in the selection of a bandwidth constraint model. Bandwidth reservation supports greater efficiency in bandwidth sharing while still providing bandwidth isolation and protection against QoS degradation.

In summary, the proposed MAR bandwidth constraint model includes the following:

   a) allocate bandwidth to individual CTs,
   b) protect allocated bandwidth by bandwidth reservation methods, as needed, but otherwise fully share bandwidth,
   c) differentiate high-priority, normal-priority, and best-effort priority services, and
   d) provide admission control to reject connection requests when needed to meet performance objectives.

In the modeling results, the MAR bandwidth constraint model compares favorably with methods that do not use bandwidth reservation.
In particular, some of the conclusions from the modeling are as follows:

   o  MAR bandwidth allocation is effective in improving performance over methods that lack bandwidth reservation and that allow more bandwidth sharing under congestion,
   o  MAR achieves service differentiation for high-priority, normal-priority, and best-effort priority services,
   o  bandwidth reservation supports greater efficiency in bandwidth sharing while still providing bandwidth isolation and protection against QoS degradation, and is critical to stable and efficient network performance.

Full Copyright Statement

Copyright (C) The Internet Society (2003). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.