INTERNATIONAL TELECOMMUNICATION UNION

COM 2 – LS 76 – E

TELECOMMUNICATION
STANDARDIZATION SECTOR

STUDY PERIOD 2001-2004

 

Original: English

Question(s): 2/2

Geneva, 26 November - 6 December 2002

Ref: TD 28 (WP1)

Source: ITU-T SG2

Title: Recommendation E.360 – QoS routing and related traffic engineering methods for IP-, ATM- and TDM-based multiservice networks

LIAISON STATEMENT

To: IETF Traffic Engineering Working Group (TEWG), MPLS Working Group, CCAMP Working Group

Approval: Agreed to at 26 November - 6 December 2002 Study Group 2 Meeting

For: Information

Deadline: None

Contact: Gerald Ash
AT&T, USA
Tel: +1 732 420 4578
Fax: +1 732 368 8659
Email: gash@att.com

Q.2/2 would like to inform the IETF TEWG, MPLS and CCAMP Working Groups that it has finalized the work on the E.360 Series of 7 Recommendations on ‘QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks’.  This Liaison provides a summary of the E.360 Recommendations.  A full copy of these Recommendations is available at http://www.research.att.com/~jrex/jerry/.

We would be pleased to receive any comments you have on these documents.

The E.360 Series of Recommendations is relevant to the request from the TEWG for service provider uses, requirements, and desires for traffic engineering best current practices.  In the E.360 Series, analysis models are used to demonstrate that currently operational TE/QoS routing methods and best current practices are extensible to QoS routing and Internet traffic engineering (TE). These QoS routing and TE methods include traffic management through control of routing functions, which include call routing, connection routing, QoS resource management, routing table management, and dynamic transport routing.  The E.360 Recommendations provide a performance analysis of various TE/QoS routing methods which control a network's response to traffic demands and other stimuli, such as link failures or node failures. Essentially all of the methods analyzed are already widely applied in operational networks worldwide, particularly in PSTN networks employing TDM-based technology.  However, the methods are shown to be extensible to packet-based technologies, in particular, to IP-based and ATM-based technologies. Results of performance analysis models are presented which illustrate the tradeoffs between various approaches. Based on the results of these studies as well as established practice and experience, methods for dynamic QoS routing and admission control are proposed for consideration in network evolution to IP-based and ATM-based technologies.

1.0 Introduction

QoS routing and related traffic engineering methods are indispensable network functions which control a network’s response to traffic demands and other stimuli, such as network overloads and failures.  Current and future networks are rapidly evolving to carry a multitude of voice/ISDN services and packet data services on internet protocol (IP)-based and asynchronous transfer mode (ATM)-based networks, driven in part by the extremely rapid growth of packet-based data services.  Various QoS routing methods have evolved within the networks and services supported by packet-based and TDM-based protocols.  These QoS routing mechanisms are reviewed in the E.360 Series of 7 Recommendations “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks” [E.360].  This Liaison summarizes these Recommendations, which include a comparative analysis and performance evaluation of various QoS routing alternatives.

QoS routing functions include a) call routing, which entails number/name translation to a routing address, b) connection or bearer-path routing methods, c) QoS resource management, and d) routing table management. These functions can be a) decentralized and distributed to the network nodes, b) centralized and allocated to a centralized controller such as a QoS-routing processor, or c) performed by a hybrid combination of these approaches.  The scope of the QoS routing methods includes the establishment of connections for narrowband, wideband, and broadband multimedia services within multiservice networks and between multiservice networks.  Here a multiservice network refers to one in which various classes of service share the transmission, switching, management, and other resources of the network.  These classes of service can include constant bit rate (CBR), variable bit rate (VBR), and unspecified bit rate (UBR) traffic classes.  The various classes of service normally are required to meet quantitative performance requirements, such as end-to-end blocking, delay, and/or delay-jitter objectives.  These objectives are achieved through a combination of QoS routing, traffic management, and capacity management.

The E.360 Series of Recommendations [E.360] provides a performance analysis of lost/delayed traffic and control load for various QoS routing methods, which control a network's response to traffic demands and other stimuli, such as traffic overloads, link failures, or node failures. Essentially all of the methods analyzed are already widely applied in operational networks worldwide, particularly in PSTN networks employing TDM-based technology.  Such methods have been analyzed in practice for TDM-based networks [ASH1], and in modeling studies for IP-based and ATM-based networks [ASH2, E.360].  In [E.360] these QoS routing methods are described, and the methods are shown to be extensible to packet-based technologies, in particular, to IP/MPLS-based technology. Results of performance analysis models are presented which illustrate the tradeoffs between various approaches. Based on the results of these studies as well as established practice and experience, methods for dynamic QoS routing and admission control are proposed for consideration in network evolution to IP-based and ATM-based technologies.  In particular, we find that aggregated per-virtual-network bandwidth allocation compares favorably with per-flow allocation.  We also find that event-dependent routing methods for management of label switched paths perform as well as or better than state-dependent routing methods with flooding, which means that event-dependent routing path selection has the potential to significantly enhance network scalability.

Awduche [AWD, RFC3272] gives excellent overviews of traffic engineering approaches for IP-based networks, and also provides traffic engineering requirements [RFC2702].  Crawley [RFC2386] and Xiao [XIAO1] provide good background and context for QoS routing in the Internet.  A few early implementations of off-line, network-management-based traffic engineering approaches have been published, such as in the Global Crossing network [XIAO2], Level3 network [SPR], and Cable & Wireless network [LIL].  Some studies have proposed more elaborate QoS routing and traffic engineering approaches in IP networks [APO, ELW, MA, XIAO3], as well as in ATM networks [AHM].  However, sophisticated, on-line, QoS routing and traffic engineering methods widely deployed in TDM networks [ASH1] have yet to be extended to IP-based and ATM-based traffic engineering.  Also, vendors have yet to announce such traffic engineering capabilities in their products.  Perhaps provider interest is tempered by the current practice of over-provisioning packet-based networks, with concomitant low utilization and efficiency [ODL].  There is therefore an opportunity for increased profitability and performance in such networks through application of methods described in the E.360 Series of Recommendations, as summarized in this Liaison.

In Section 2 we summarize the E.360 Series of Recommendations, and in ANNEX A we provide a brief summary of the analysis of QoS routing methods given in Recommendations E.360.  In particular, ANNEX A discusses the general principles of QoS routing methods, including connection routing methods, QoS resource management, and routing table management.  ANNEX A also includes modeling and analysis results, as well as a summary and conclusions.

2.0 Summary of E.360 Series of Recommendations

A new series of seven Recommendations, which focuses on QoS routing and traffic engineering methods for IP-, ATM-, & TDM-based multiservice networks, has been approved by ITU-T Study Group 2. The methods addressed include call and connection routing, QoS resource management, routing table management, dynamic transport routing, capacity management, and operational requirements.  The Recommendations provide a performance analysis of various QoS routing methods, and based on the results and established practice, methods for dynamic QoS routing and admission control are recommended for consideration in network evolution to ATM- and IP-based technologies.

The following is a brief summary of each of the seven Recommendations (the full text of the Recommendations is available at http://www.research.att.com/~jrex/jerry/).

2.1 Recommendation E.360.1 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – Framework”

The E.360 Series of Recommendations describes, analyzes, and recommends methods which control a network's response to traffic demands and other stimuli, such as link failures or node failures.  The methods addressed in the E.360 series include call and connection routing, QoS resource management, routing table management, dynamic transport routing, capacity management, and operational requirements.  Analysis models are used to demonstrate that currently operational QoS routing methods and best current practices are extensible to IP-based and ATM-based QoS routing.  The Recommendations provide a performance analysis of various QoS routing methods, where essentially all of the methods analyzed are already widely applied in operational networks worldwide, particularly in PSTN networks employing TDM-based technology.  However, the methods are shown to be extensible to packet-based technologies, in particular, to IP-based and ATM-based technologies. Results of performance analysis models are presented which illustrate the tradeoffs between various approaches. Based on the results of these studies as well as established practice and experience, methods for dynamic QoS routing and admission control are recommended for consideration in network evolution to IP-based and ATM-based technologies.


2.2 Recommendation E.360.2 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – Call Routing & Connection Routing Methods”

Call routing involves the translation of a number or name to a routing address.  This Recommendation describes how number (or name) translation should result in E.164 ATM end-system addresses (AESAs), network routing addresses (NRAs), and/or IP addresses.  These addresses are used for routing purposes and therefore must be carried in the connection-setup information element. Connection or bearer-path routing involves the selection of a path from the originating node to the destination node in a network.  This Recommendation discusses bearer-path selection methods, which are categorized into the following four types: fixed routing (FR), time-dependent routing (TDR), state-dependent routing (SDR), and event-dependent routing (EDR).  These methods are associated with routing tables, which consist of a route and rules to select one path from the route for a given connection or bandwidth-allocation request. Recommendations include a) QoS routing methods to be applied, b) sparse-topology multilink-routing networks, c) single-area flat topologies, d) event-dependent-routing (EDR) QoS routing path selection methods, and e) interdomain routing methods which extend the intradomain call routing and connection routing concepts.

2.3 Recommendation E.360.3 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – QoS Resource Management Methods”

QoS resource management functions include class-of-service derivation, policy-based routing table derivation, connection admission, bandwidth allocation, bandwidth protection, bandwidth reservation, priority routing, priority queuing, and other related resource management functions.  Recommendations include a) QoS resource management to achieve connection-level and packet-level grade-of-service objectives, as well as key service, normal service, and best effort service differentiation, b) admission control, c) bandwidth reservation to achieve stable and efficient performance of QoS routing methods and to ensure the proper operation of multiservice bandwidth allocation, protection, and priority treatment, d) per-virtual network (VNET) bandwidth allocation, and e) application of both MPLS bandwidth management and DiffServ priority queuing management.

2.4 Recommendation E.360.4 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – Routing Table Management Methods & Requirements”

Routing table management information, such as topology update, status information, or routing recommendations, is used for purposes of applying the routing table design rules for determining path choices in the routing table.  This information is exchanged between one node and another node, such as between the originating node and destination node, for example, or between a node and a network element such as a bandwidth-broker processor.  This information is used to generate the routing table, and then the routing table is used to determine the path choices used in the selection of a path.  Recommendations include a) per-VNET bandwidth allocation, which is preferred to per-flow allocation because of the much lower routing table management overhead requirements, b) EDR QoS routing methods, which can lead to a large reduction in flooding overhead without loss of network throughput performance, and c) larger administrative areas and lower routing table management overhead requirements.

2.5 Recommendation E.360.5 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – Transport Routing Methods”

Dynamic transport routing combines with dynamic traffic routing to shift transport bandwidth among node pairs and services through use of flexible transport switching technology, such as optical cross-connects (OXCs). Dynamic transport routing offers advantages of simplicity of design and robustness to load variations and network failures, and can provide automatic link provisioning, diverse link routing, and rapid link restoration for improved transport capacity utilization and performance under stress. OXCs can reconfigure logical transport capacity on demand, such as for peak day traffic, weekly redesign of link capacity, or emergency restoration of capacity under node or transport failure.  MPLS control capabilities are proposed for the setup of layer 2 logical links through OXCs.  Recommendations include a) dynamic transport routing, which provides greater network throughput, enhanced revenue, enhanced network performance under failure as well as abnormal and unpredictable traffic load patterns, b) traffic and transport restoration level design, which allows for link diversity to ensure performance under failure, and c) robust routing techniques, which include dynamic traffic routing, multiple ingress/egress routing, and logical link diversity routing; these methods improve response to node or transport failures.

2.6 Recommendation E.360.6 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – Capacity Management Methods”

This Recommendation discusses capacity management principles, which include a) link capacity design models, b) shortest path selection models, c) multihour network design models, d) day-to-day variation design models, and e) forecast uncertainty/reserve capacity design models.  Recommendations include a) discrete event flow optimization design models, which are able to capture very complex routing behavior, b) sparse topology options, which lead to capital cost advantages, operational simplicity, and cost reduction (the capital cost savings are subject to the particular switching and transport cost assumptions), c) voice and data integration, d) multilink routing methods, which exhibit greater design efficiencies in comparison with 2-link routing methods, e) single-area flat topologies, which exhibit greater design efficiencies in termination and transport capacity, f) EDR methods, which exhibit design efficiencies comparable to SDR, and g) dynamic transport routing, which achieves capital savings by concentrating capacity on fewer, high-capacity physical fiber links, and higher network throughput and enhanced revenue through its ability to flexibly allocate bandwidth on the logical links serving the access and inter-node traffic.

2.7 Recommendation E.360.7 “QoS Routing & Related Traffic Engineering Methods for IP-, ATM-, & TDM-Based Multiservice Networks – Operational Requirements”

This Recommendation discusses traffic engineering operational requirements, as follows: a) traffic management requirements for real-time performance monitoring, network control, and work center functions, b) capacity management – forecasting requirements for load forecasting, including configuration database functions, load aggregation, basing, and projection functions, and load adjustment cycle and view of business adjustment cycle, c) capacity management – daily and weekly performance monitoring requirements for daily congestion analysis, study-week congestion analysis, and study-period congestion analysis, and d) capacity management – short-term network adjustment requirements for network design, work center functions, and interfaces to other work centers.

2.8 Analysis of QoS Routing Methods for MPLS-Based Multiservice Networks

ANNEX A provides a summary of QoS routing methods analyzed in the E.360 Series of Recommendations. The ANNEX summarizes the performance analysis of lost/delayed traffic and control load for various QoS routing methods, which control a network's response to traffic demands and other stimuli, such as traffic overloads, link failures, or node failures. Essentially all of the methods analyzed are already widely applied in operational networks worldwide, particularly in PSTN networks employing TDM-based technology.  However, the methods are shown to be extensible to packet-based technologies, in particular, to IP-based and ATM-based technologies. Results of performance analysis models are presented which illustrate the tradeoffs between various approaches. Based on the results of these studies as well as established practice and experience, methods for dynamic QoS routing and admission control are proposed for consideration in network evolution to IP-based and ATM-based technologies.  In particular, we find that aggregated per-virtual-network bandwidth allocation compares favorably with per-flow allocation.  We also find that event-dependent routing methods for management of label switched paths perform as well as or better than state-dependent routing methods with flooding, which means that event-dependent routing path selection has the potential to significantly enhance network scalability.

3.0 References

[AHM] Ahmadi, H., et al., Dynamic Routing and Call Control in High-Speed Integrated Networks, Proceedings of ITC-13, Copenhagen, 1992.

[AKI] Akinpelu, J. M., The Overload Performance of Engineered Networks with Nonhierarchical & Hierarchical Routing, BSTJ, Vol. 63, 1984.

[APO] Apostolopoulos, G., Intra-Domain QoS Routing in IP Networks: A Feasibility and Cost/Benefit Analysis, IEEE Network, September 1999.

[ASH1] Ash, G. R., Dynamic Routing in Telecommunications Networks, McGraw-Hill, 1998.

[ASH2] Ash, G. R., et al., Routing Evolution in Multiservice Integrated Voice/Data Networks, Proceedings of ITC-16, Edinburgh, June 1999.

[AWD] Awduche, D., MPLS and Traffic Engineering in IP Networks, IEEE Communications Magazine, December 1999.

[BUR] Burke, P. J., Blocking Probabilities Associated with Directional Reservation, unpublished memorandum, 1961.

[E.360] ITU-T Recommendations, QoS Routing & Related Traffic Engineering Methods for Multiservice TDM-, ATM-, & IP-Based Networks.

[ELW] Elwalid, A., et al., MATE: MPLS Adaptive Traffic Engineering, Proceedings of INFOCOM'01, April 2001.

[KRU] Krupp, R. S., Stabilization of Alternate Routing Networks, Proceedings of ICC, Philadelphia, 1982.

[LIL] Liljenstolpe, C., An Approach to IP Network Traffic Engineering (Cable & Wireless), work in progress.

[MA] Ma, Q., Quality of Service Routing in Integrated Services Networks, Ph.D. Thesis, Carnegie Mellon University, 1998.

[MUM] Mummert, V. S., Network Management and Its Implementation on the No. 4ESS, International Switching Symposium, Japan, 1976.

[NAK] Nakagome, Y., Mori, H., Flexible Routing in the Global Communication Network, Proceedings of ITC-7, Stockholm, 1973.

[ODL] Odlyzko, A., The economics of the Internet: Utility, utilization, pricing, and Quality of Service, http://www.dtc.umn.edu/~odlyzko/doc/networks.html.

[PNNI] ATM Forum Technical Committee, Private Network-Network Interface Specification Version 1.0 (PNNI 1.0), af-pnni-0055.000.

[RFC1247] Moy, J., OSPF Version 2, July 1991.

[RFC2328] Moy, J., OSPF Version 2, April 1998.

[RFC2386] Crawley, E., et al., A Framework for QoS-based Routing in the Internet.

[RFC2475] Blake, S., et al., An Architecture for Differentiated Services.

[RFC2702] Awduche, D., et al., Requirements for Traffic Engineering over MPLS.

[RFC3031] Rosen, E., et al., Multiprotocol Label Switching Architecture.

[RFC3209] Awduche, D., et al., RSVP-TE: Extensions to RSVP for LSP Tunnels.

[RFC3212] Jamoussi, B., et al., Constraint-Based LSP Setup using LDP.

[RFC3270] Le Faucheur, F., et al., MPLS Support of Differentiated Services.

[RFC3272] Awduche, D., et al., Overview & Principles of Internet Traffic Engineering.

[SPR] Springer, V., et al., Level3 MPLS Protocol Architecture, work in progress.

[VIL] Villamizar, C., MPLS Optimized Multipath, work in progress.

[XIAO1] Xiao, X., et al., Internet QoS: A Big Picture, IEEE Network, March/April 1999.

[XIAO2] Xiao, X., et al., Traffic Engineering with MPLS in the Internet, IEEE Network, March 2000.

[XIAO3] Xiao, X., Providing Quality of Service in the Internet, Ph.D. Thesis, Michigan State University, 2000.


ANNEX A – Analysis of QoS Routing Methods [E.360]

 

A.1. QoS Routing Methods

In this ANNEX we summarize the QoS routing methods discussed and analyzed in the E.360 Series of Recommendations [E.360], including a) connection or bearer-path routing methods, b) QoS resource management, and c) routing table management.

A.1.1 Connection Routing

Connection routing methods are used for establishment of a bearer path for a given service request or session flow, and include fixed routing, time-dependent routing, state-dependent routing, and event-dependent routing methods.

Hierarchical fixed routing (FR) is an important routing method employed in all types of networks, including packet- and TDM-based networks.  In IP-based and ATM-based networks, there is often a hierarchical relationship among different “areas”, “peer-groups,” or sub-networks. Hierarchical multi-domain (or multi-area, multi-peer-group, or multi-autonomous-system) topologies are normally used with IP routing protocols (OSPF, BGP) and ATM routing protocols (PNNI), as well as within almost all TDM-based network routing topologies.

Time-dependent routing (TDR) methods are a type of dynamic routing in which the routing tables are altered at fixed points in time during the day or week.  TDR routing tables are determined on an off-line, preplanned basis and are implemented consistently over a time period.  The TDR routing tables are determined considering the time variation of traffic load in the network, for example based on measured hourly load patterns. Several TDR time periods are used to divide the hours of an average business day and weekend into contiguous routing intervals sometimes called load set periods.  Typically, the TDR routing tables used in the network are coordinated by taking advantage of noncoincidence of busy hours among the traffic loads.
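As a rough illustration only (this Python sketch is not part of the Recommendations; the load set periods and path lists are hypothetical), a TDR implementation simply selects a preplanned routing table according to the load set period covering the current hour:

    # Hypothetical load set periods dividing the day into contiguous
    # routing intervals; each interval carries its own preplanned path list.
    LOAD_SET_PERIODS = [
        # (start_hour, end_hour, ordered path choices from node A to node E)
        (0, 8, [("A", "E"), ("A", "B", "E")]),             # night
        (8, 17, [("A", "B", "E"), ("A", "C", "D", "E")]),  # business day
        (17, 24, [("A", "E"), ("A", "C", "D", "E")]),      # evening
    ]

    def tdr_routing_table(hour):
        """Return the preplanned path list for the load set period covering 'hour'."""
        for start, end, table in LOAD_SET_PERIODS:
            if start <= hour < end:
                return table
        raise ValueError("hour out of range")

    print(tdr_routing_table(10))  # -> [('A', 'B', 'E'), ('A', 'C', 'D', 'E')]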

In state-dependent routing (SDR), illustrated in Figure 1, the routing tables are altered automatically according to the state of the network.  For a given SDR method, the routing table rules are implemented to determine the path choices in response to changing network status, and are used over a relatively short time period.  Information on network status may be collected at a central QoS-routing processor or distributed to nodes in the network.  The information exchange may be performed on a periodic or on-demand basis.  SDR methods use the principle of routing connections on the best available path based on network state information.  For example, in the least-loaded routing method, the residual capacity of candidate paths is calculated, and the path having the largest residual capacity is selected for the connection.  Various relative levels of link occupancy can be used to define link load states, such as lightly-loaded, heavily-loaded, or bandwidth-not-available states. In general, SDR methods calculate a path cost for each connection request based on various factors such as the load state or congestion state of the links in the network.
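As an illustrative sketch (not part of the Recommendations; the topology, residual bandwidths, and demand are hypothetical), least-loaded selection can be expressed in a few lines of Python, with the residual capacity of a path taken as the smallest residual bandwidth over its links:

    def path_residual(path, residual_bw):
        """Residual capacity of a path = residual bandwidth of its most loaded link."""
        return min(residual_bw[link] for link in zip(path, path[1:]))

    def least_loaded_path(candidate_paths, residual_bw, demand):
        """Pick the candidate with the largest residual capacity that still fits
        'demand'; return None if no candidate qualifies."""
        feasible = [p for p in candidate_paths
                    if path_residual(p, residual_bw) >= demand]
        return max(feasible, key=lambda p: path_residual(p, residual_bw), default=None)

    residual_bw = {("A", "B"): 30, ("B", "E"): 10, ("A", "C"): 50,
                   ("C", "D"): 40, ("D", "E"): 60}
    paths = [("A", "B", "E"), ("A", "C", "D", "E")]
    print(least_loaded_path(paths, residual_bw, demand=5))  # -> ('A', 'C', 'D', 'E')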

In SDR, the routing tables are designed on-line, in real time, by the originating node (ON) or a central QoS-routing processor through the use of network status and topology information obtained through information exchange with other nodes and/or a centralized QoS-routing processor.  There are various implementations of SDR distinguished by

-     whether the computation of the routing tables is distributed among the network nodes or centralized and done in a centralized QoS-routing processor, and

-     whether the computation of the routing tables is done periodically or connection by connection.

This leads to three different implementations of SDR (see Figure 1):

-     centralized periodic SDR (CP-SDR) -- here the centralized QoS-routing processor obtains link status and traffic status information from the various nodes on a periodic basis (e.g., every 10 seconds) and performs a computation of the optimal routing table on a periodic basis.  To determine the optimal routing table, the QoS-routing processor executes a particular routing table optimization procedure such as least-loaded routing and transmits the routing tables to the network nodes on a periodic basis (e.g., every 10 seconds).  Typically, if the shortest path is busy (e.g., bandwidth is unavailable on one or more links), the second path is selected from the list of feasible paths on the basis of having the greatest level of idle bandwidth at the time.

-     distributed periodic SDR (DP-SDR) -- here each node in the SDR network obtains link status and traffic status information from all the other nodes on a periodic basis (e.g., every 5 minutes) and performs a computation of the optimal routing table on a periodic basis (e.g., every 5 minutes).  Flooding is a common technique for distributing the status and traffic data, however other techniques with less overhead are also available, such as a query-for-status method.  To determine the optimal routing table, the ON executes a particular routing table optimization procedure such as least-loaded routing.

-     distributed connection-by-connection SDR (DC-SDR) -- here an ON in the SDR network obtains link status and traffic status information from the destination node (DN), and perhaps from selected via nodes (VNs), on a connection by connection basis and performs a computation of the optimal routing table for each connection. Typically, the ON first tries the primary path and if it is not available finds an optimal alternate path by querying the DN and perhaps several VNs through query-for-status network signaling for the busy-idle load status of all links connected on the alternate paths to the DN.  To determine the optimal routing table, the ON executes a particular routing table optimization procedure such as least-loaded routing.

In event-dependent routing (EDR), the routing tables are updated locally on the basis of whether connections succeed or fail on a given path choice.  In the EDR learning approaches, the last successful path is tried again until it is blocked, at which time another path is selected at random and tried for the next connection request. EDR path choices can also be changed with time in accordance with changes in traffic load patterns.  Success-to-the-top (STT) EDR path selection, illustrated in Figure 1, is a decentralized, on-line path selection method with update based on random routing.  STT-EDR uses a simplified decentralized learning method to achieve flexible adaptive routing. The primary path, path-p, is used first if sufficient resources are available; if not, a currently successful alternate path, path-s, is used until it is blocked (i.e., sufficient resources are not available, such as bandwidth not being available on one or more links). In the case that path-s is blocked, a new alternate path, path-n, is selected at random as the alternate path choice for the next connection request that overflows from the primary path.  In the EDR learning approaches, the current alternate path choice can be updated randomly, cyclically (round-robin), or by some other means, and may be maintained as long as a connection can be established successfully on the path. Hence the routing table is constructed with the information determined during connection setup, and no additional information is required by the ON.
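The STT-EDR learning rule can be sketched as follows (illustrative Python only, not part of the Recommendations; 'try_path' stands in for the actual connection-setup attempt and is an assumption of this sketch):

    import random

    class SttEdrRouter:
        def __init__(self, primary, alternates):
            self.primary = primary
            self.alternates = list(alternates)
            self.current_alt = random.choice(self.alternates)  # path-s

        def route(self, try_path):
            # 1. The primary path (path-p) is used first if resources are available.
            if try_path(self.primary):
                return self.primary
            # 2. Otherwise the currently successful alternate (path-s) is used
            #    until it is blocked.
            if try_path(self.current_alt):
                return self.current_alt
            # 3. On blocking, a new alternate (path-n) is selected at random
            #    for the next overflow from the primary path.
            self.current_alt = random.choice(self.alternates)
            return None  # this connection request is blocked

Note that the router keeps no global state information: the only feedback it uses is the success or failure of its own setup attempts.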

Several features are commonly applied in all the connection routing methods.  With TDR, SDR, and EDR, dynamically activated bandwidth reservation is typically used under congestion conditions to protect traffic on the primary path, as discussed in Section A.1.2.2.  Crankback may be used when an alternate path is blocked at a VN, and the connection request advances to a new path choice.  Many path choices can be tried by a given connection request before the request is blocked.  Paths in the routing table may consist of the direct link, a 2-link path through a single VN, or a multiple-link path through multiple VNs.  Paths in the routing table are subject to allowed load state restrictions on each link.  For either SDR or EDR, as in TDR, the alternate path choices for a connection request may be changed in a time-dependent manner considering the time variation of the traffic load.

A.1.2 QoS Resource Management

QoS resource management functions include class-of-service identification, routing table derivation, connection admission, bandwidth allocation, bandwidth protection, bandwidth reservation, priority routing, and packet-level control (e.g., priority queuing) functions.  QoS resource management methods have been applied successfully in TDM-based networks [ASH1], and are being extended to IP-based and ATM-based networks.  In an illustrative QoS resource management method, bandwidth is allocated to each of several virtual networks (VNETs), which are each assigned a priority corresponding to high-priority, normal-priority, or best-effort priority services.  Examples of services within these VNET categories include

-     high-priority services such as emergency telecommunication service,

-     normal-priority services such as constant rate voice, variable rate IP-telephony, and WWW file transfer, and

-     low-priority best-effort services such as voice mail, email, and file transfer.

Changes in VNET bandwidth capacity can be determined by edge nodes on a per-flow (per-connection) basis, or based on an overall aggregated bandwidth demand, or “bandwidth pipe” concept, for VNET capacity (not on a per-connection demand basis).  In the latter case of per-VNET bandwidth allocation, based on the aggregated bandwidth demand, edge nodes make periodic discrete changes in bandwidth allocation, that is, they either increase or decrease the bandwidth, for example on the multiprotocol label switching (MPLS) [RFC3031] constraint-based routing label switched paths (CRLSPs) constituting the VNET bandwidth capacity.

In the illustrative QoS resource management method for per-VNET bandwidth allocation, which we assume is MPLS-based, the bandwidth allocation control for each VNET CRLSP is based on estimated bandwidth needs, bandwidth use, and status of links in the CRLSP. The edge node, or ON, determines when VNET bandwidth needs to be increased or decreased on a CRLSP, and uses an MPLS CRLSP bandwidth modification procedure to execute needed bandwidth allocation changes on VNET CRLSPs.  In the bandwidth allocation procedure the constraint-based routing label distribution protocol [RFC3212] or the resource reservation protocol [RFC3209] could be used, for example, to specify appropriate parameters in the label request message a) to request bandwidth allocation changes on each link in the CRLSP, and b) to determine if link bandwidth can be allocated on each link in the CRLSP.  If a link bandwidth allocation is not allowed, a notification message with a crankback parameter allows the ON to search out possible bandwidth allocation on another CRLSP.  We illustrate an allowed load state (ALS) parameter in the label request message to control the bandwidth allocation on individual links in a CRLSP.  In addition, we illustrate a modify parameter in the label request message to allow dynamic modification of the assigned traffic parameters (such as peak data rate, committed data rate, etc.) of an already existing CRLSP.
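The crankback-driven search can be sketched as follows (illustrative Python only; the message flow is abstracted into function calls, 'link_admits' stands in for the per-link ALS check described below, and none of this reflects the actual CR-LDP or RSVP-TE encodings):

    def send_label_request(explicit_route, dbw, als, link_admits):
        """Walk the explicit route node by node; return (True, None) on success,
        or (False, blocked_link) to model a notification message with crankback."""
        for link in zip(explicit_route, explicit_route[1:]):
            if not link_admits(link, dbw, als):
                return False, link   # notification message, crankback to the ON
        return True, None            # bandwidth allocated on every link

    def modify_vnet_bandwidth(candidate_crlsps, dbw, als, link_admits):
        """The ON tries each candidate CRLSP in turn until one admits the change."""
        for route in candidate_crlsps:
            ok, _blocked = send_label_request(route, dbw, als, link_admits)
            if ok:
                return route
        return None   # all candidate CRLSPs exhausted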

In addition to controlling bandwidth allocation, the QoS resource management procedures can check end-to-end transfer delay, delay variation, and transmission quality considerations such as loss, echo, and noise.  QoS resource management provides integration of services on a shared network, for many classes-of-service such as:

-     CBR services including voice, 64- and 384-kbps ISDN switched digital data, virtual private network, 800/free-phone, and other services.

-     Real-time VBR services including IP-telephony, compressed video, and other services.

-     Non-real-time VBR services including WWW file transfer, credit card check, and other services.

-     UBR services including voice mail, email, file transfer, and other services.

A.1.2.1 Class-of-Service Identification & QoS Resource Management Steps

QoS resource management entails identifying class-of-service and class-of-service parameters, which may include, for example:

-     service-identity,

-     virtual network (VNET) (with associated priority, QoS, & traffic parameters), and

-     link-capability.

The service-identity describes the actual service associated with the connection or flow.  The VNET describes the bandwidth allocation and routing table parameters to be used by the connection.  The link-capability describes the link hardware capabilities such as fiber, radio, satellite, and digital circuit multiplexing equipment, that the connection may require, prefer, or avoid. The combination of service-identity, VNET, and link-capability constitute the class-of-service, which together with the network node number is used to access routing table data.

Determination of class-of-service begins with translation at the ON of the number or name identifying the destination end-user, to determine the routing address of the DN.  If multiple ingress/egress routing is used, multiple possible DN addresses are derived for the connection.  Class-of-service parameters are derived through application of policy-based routing, which involves the application of rules against the input parameters to derive a routing table and its associated parameters.  Input parameters for applying policy-based rules to derive service-identity, VNET, and link-capability could include numbering plan, type of origination/destination network, and type of service.  Policy-based routing rules may then be applied to the derived service-identity, VNET, and link-capability to derive the routing table and associated parameters.
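A minimal sketch of such policy-based derivation follows (illustrative Python only; the rule entries and parameter values are hypothetical examples, not E.360 content):

    # Policy rules keyed on (numbering plan, origination/destination network
    # type, type of service), yielding (service-identity, VNET, link-capability).
    POLICY_RULES = {
        ("E.164", "PSTN", "voice"):     ("voice", "normal-priority", "avoid-satellite"),
        ("E.164", "PSTN", "emergency"): ("ets",   "high-priority",   "any"),
        ("IP",    "ISP",  "web"):       ("www",   "best-effort",     "any"),
    }

    def derive_class_of_service(numbering_plan, network_type, service_type):
        return POLICY_RULES[(numbering_plan, network_type, service_type)]

    # The class-of-service triple, together with the destination node number,
    # then keys the routing table data:
    print(derive_class_of_service("E.164", "PSTN", "voice"))
    # -> ('voice', 'normal-priority', 'avoid-satellite')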

The illustrative QoS resource management method consists of the following steps:

-            The ON determines the DN address, service-identity, VNET, and link capability through the number/name translation and other service information available at the ON.

-            The ON accesses the VNET priority, QoS/traffic parameters, and routing table between the ON and DN.

-            The ON sets up the connection request over the first available path in the routing table based on the QoS resource management rules.

In the first step, the connection request for an individual service is allocated an equivalent bandwidth EQBW to be routed on a particular VNET.  For CBR services the equivalent bandwidth EQBW is equal to the average or sustained bit rate.  For VBR services the equivalent bandwidth EQBW is a function of the sustained bit rate, peak bit rate, and perhaps other parameters.  For example, EQBW equals 64 kbps of bandwidth for CBR voice connections, 64 kbps of bandwidth for CBR ISDN switched digital 64-kbps connections, and 384 kbps of bandwidth for CBR ISDN switched digital 384-kbps connections.
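A simple sketch of the EQBW assignment (illustrative Python only; the VBR expression shown is a placeholder weighting of sustained and peak rates, not the E.360 formula):

    def equivalent_bandwidth(service_class, sustained_kbps, peak_kbps=None):
        if service_class == "CBR":
            # e.g., 64 for voice and 64-kbps switched digital, 384 for 384-kbps ISDN
            return sustained_kbps
        if service_class == "VBR":
            # Placeholder only: weight the sustained rate toward the peak rate.
            return 0.5 * (sustained_kbps + peak_kbps)
        raise ValueError("unknown service class")

    print(equivalent_bandwidth("CBR", 64))       # -> 64
    print(equivalent_bandwidth("VBR", 64, 256))  # -> 160.0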

In the second step, the service-identity value is used to derive the VNET.  Bandwidth is allocated to the individual VNETs; this bandwidth is protected as needed but otherwise shared.  Under normal non-blocking/delay network conditions, all services fully share all available bandwidth.  When blocking/delay occurs for a particular VNET-i, bandwidth reservation acts to prohibit alternate-routed traffic and traffic from other VNETs from seizing the allocated capacity for VNET-i.  Associated with each VNET are average bandwidth (BWavg) and maximum bandwidth (BWmax) parameters to govern bandwidth allocation and protection, which are discussed further in the next Section.  Link-capability selection allows connection requests to be routed on specific transmission links that have the particular characteristics required by a connection request.

In the third step, the VNET routing table determines which network capacity is allowed to be selected for each connection request.  In using the VNET routing table, for example, the ON selects a first choice path based on the routing table selection rules.  Whether or not bandwidth can be allocated to the connection request on the first choice path is determined by the QoS resource management rules given in the next Section.  If a first choice path cannot be accessed, the ON may then try alternate paths determined by FR, TDR, SDR, or EDR path selection rules, and again applies the QoS resource management rules now described. 

A.1.2.2 Dynamic Bandwidth Allocation, Protection, & Reservation

Through the use of bandwidth allocation, protection, and reservation mechanisms, QoS resource management can provide good network performance under normal and abnormal operating conditions for all services sharing the integrated network.  Such methods have been analyzed in practice for TDM-based networks [ASH1], and in modeling studies for IP-based and ATM-based networks [ASH2, E.360].  In this Section we discuss these mechanisms.

Two approaches to bandwidth allocation are considered in [E.360]: per-VNET bandwidth allocation and per-flow bandwidth allocation.  In the per-VNET method, aggregated MPLS CRLSP bandwidth is managed to meet the overall bandwidth requirements of VNET service needs.  Individual flows are allocated bandwidth within the CRLSPs accordingly, as CRLSP bandwidth is available.  In the per-flow method, bandwidth is allocated to each individual flow from the overall pool of bandwidth, as the total pool bandwidth is available.  A fundamental principle applied in these bandwidth allocation methods is the use of bandwidth reservation techniques.  We first review bandwidth reservation principles and then discuss per-VNET and per-flow bandwidth allocation and protection.

Bandwidth reservation (the TDM-network terminology is “trunk reservation”) gives preference to the preferred traffic by allowing it to seize any idle bandwidth on a link, while allowing the non-preferred traffic to seize bandwidth only if a minimum level of idle bandwidth is available, where the minimum-bandwidth threshold is called the reservation level.  P. J. Burke [BUR] first analyzed bandwidth reservation behavior from the solution of the birth-death equations for the bandwidth reservation model.  Burke’s model showed the relative lost-traffic level for preferred traffic, which is not subject to bandwidth reservation restrictions, as compared to non-preferred traffic, which is subject to the restrictions. Bandwidth reservation protection is robust to traffic variations and provides significant dynamic protection of particular streams of traffic.
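The reservation rule itself is a one-line admission test, sketched here in Python for concreteness (illustrative only; the numbers are hypothetical):

    def admit(idle_bw, demand, reservation_level, preferred):
        """Preferred traffic may seize any idle bandwidth; non-preferred traffic
        is admitted only if idle bandwidth net of the reservation level suffices."""
        if preferred:
            return idle_bw >= demand
        return idle_bw - reservation_level >= demand

    print(admit(idle_bw=8, demand=5, reservation_level=10, preferred=True))   # True
    print(admit(idle_bw=8, demand=5, reservation_level=10, preferred=False))  # False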

Bandwidth reservation is a crucial technique used in nonhierarchical networks to prevent "instability," which can severely reduce throughput in periods of congestion, perhaps by as much as 50 percent of the traffic-carrying capacity of a network. Bandwidth reservation is used to prevent this unstable behavior by having the preferred traffic on a link be the traffic on the primary, shortest path, and the non-preferred traffic, subjected to bandwidth reservation restrictions as described above, be the alternate-routed traffic on longer paths. In this way the alternate-routed traffic is inhibited from selecting longer alternate paths when sufficient idle trunk capacity is not available on all links of an alternate-routed connection, which is the likely condition under network and link congestion. Mathematically, the studies of bistable network behavior have shown that bandwidth reservation used in this manner to favor primary shortest connections eliminates the bistability problem in nonhierarchical networks and allows such networks to maintain efficient utilization under congestion by favoring connections completed on the shortest path [AKI, KRU, NAK].  For this reason, dynamic bandwidth reservation is universally applied in nonhierarchical TDM-based networks, and often in hierarchical networks [MUM].

It is beneficial for bandwidth reservation techniques to be included in IP-based and ATM-based routing methods, in order to ensure the efficient use of network resources especially under congestion conditions.  Currently proposed path-selection methods, such as methods for optimized multipath in IP-based MPLS networks [VIL], or path selection in ATM-based PNNI networks [PNNI], give no guidance on the necessity for using bandwidth-reservation techniques.  Such guidance is essential for acceptable network performance.

Figure 2 illustrates multi-service QoS resource management, in which bandwidth is allocated on an aggregated basis to the individual VNETs (high-priority, normal-priority, and best-effort priority services VNETs).  This allocated bandwidth is protected by bandwidth reservation methods, as needed, but otherwise shared.  Each ON monitors VNET bandwidth use on each VNET CRLSP, and determines when VNET CRLSP bandwidth needs to be increased or decreased.  In Figure 2, changes in VNET bandwidth capacity are determined by ONs based on an overall aggregated bandwidth demand for VNET capacity (not on a per-connection demand basis).  Based on the aggregated bandwidth demand, ONs make periodic discrete changes in bandwidth allocation, that is, they either increase or decrease bandwidth on the CRLSPs constituting the VNET bandwidth capacity. For example, if connection requests are made for VNET CRLSP bandwidth that exceeds the current CRLSP bandwidth allocation, the ON initiates a bandwidth modification request on the appropriate CRLSP(s).  This bandwidth modification request may entail increasing the current CRLSP bandwidth allocation by a discrete increment of bandwidth denoted here as delta-bandwidth (DBW).  DBW, for example, could be the additional amount needed by the current connection request.  In any case, DBW is a large enough bandwidth change that modification requests are made relatively infrequently.  Also, the ON periodically monitors CRLSP bandwidth use, such as once each minute, and if bandwidth use falls below the current CRLSP allocation the ON initiates a bandwidth modification request to decrease the CRLSP bandwidth allocation, for example, down to the current level of bandwidth utilization.
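The ON behavior just described can be sketched as follows (illustrative Python only; 'request_modification' stands in for the CRLSP bandwidth modification procedure and is an assumption of this sketch):

    class VnetCrlsp:
        def __init__(self, allocated_bw):
            self.allocated_bw = allocated_bw   # current CRLSP bandwidth allocation
            self.bw_in_use = 0.0               # bandwidth occupied by admitted flows

        def on_connection_request(self, eqbw, request_modification):
            """Admit a flow of equivalent bandwidth 'eqbw', increasing the CRLSP
            allocation by DBW (here, the shortfall) when necessary."""
            if self.bw_in_use + eqbw > self.allocated_bw:
                dbw = self.bw_in_use + eqbw - self.allocated_bw
                if not request_modification(dbw):   # increase refused on this CRLSP
                    return False
                self.allocated_bw += dbw
            self.bw_in_use += eqbw
            return True

        def periodic_check(self, request_modification):
            """E.g., once each minute: shrink the allocation toward actual use."""
            if self.bw_in_use < self.allocated_bw:
                if request_modification(self.bw_in_use - self.allocated_bw):
                    self.allocated_bw = self.bw_in_use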

In making a VNET bandwidth allocation modification, the ON determines the VNET priority (high, normal, or best-effort), VNET bandwidth-in-use, VNET bandwidth allocation thresholds, and whether the CRLSP is a first choice CRLSP or alternate CRLSP.  These parameters are used to access a VNET table (illustrated below in Table 1) to determine the allowed load state threshold (ALSi) to which network capacity can be allocated for the VNET bandwidth modification request. In using the ALS threshold to allocate VNET bandwidth capacity, the ON selects a first choice CRLSP based on the routing table selection rules, or alternate paths if the first choice path is not available.

Path selection may use open shortest path first (OSPF) [RFC1247] for intra-domain routing, which provides at each node a topology database that may also include, for example, available bandwidth on each link.  From the topology database, ON A in Figure 3 could determine a list of shortest paths by using, for example, Dijkstra’s algorithm.  This path list could be determined based on administrative weights of each link, which are communicated to all nodes within the routing domain.  These administrative weights may be set, for example, to [1 + epsilon x distance], where epsilon is a factor giving a relatively smaller weight to the distance in comparison to the hop count.   The ON selects a path from the list based on, for example, FR, TDR, SDR, or EDR path selection.
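The administrative-weight computation can be sketched as follows (illustrative Python only; the topology, distances, and epsilon value are hypothetical):

    import heapq

    EPSILON = 0.001   # gives distance a small weight relative to hop count

    def dijkstra_path(adj, src, dst):
        """adj maps a node to a list of (neighbor, distance) pairs; each link's
        administrative weight is 1 + EPSILON x distance."""
        heap = [(0.0, src, [src])]
        seen = set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, dist in adj.get(node, []):
                if nbr not in seen:
                    heapq.heappush(heap, (cost + 1 + EPSILON * dist, nbr, path + [nbr]))
        return float("inf"), None

    adj = {"A": [("B", 100), ("C", 300)], "B": [("E", 200)],
           "C": [("D", 100)], "D": [("E", 100)]}
    print(dijkstra_path(adj, "A", "E"))  # -> (~2.3, ['A', 'B', 'E'])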

For example, in using the first CRLSP A-B-E in Figure 3, ON A sends an MPLS label request message to VN B, which in turn forwards the label request message to DN E.  VN B and DN E are passed in the explicit-routing parameter contained in the label request message.  Each node in the CRLSP reads the explicit-routing information, and passes the label request message to the next node listed in the explicit-routing parameter. The connection admission control for each link in the path is performed based on the status of the link. The ON may select any path for which the first link is allowed according to QoS resource management criteria.  If the first path is blocked at any of the links in the path, an MPLS notification message with a crankback parameter is returned to ON A, which can then attempt the next path.  If FR is used, then this path is the next path in the shortest path list, for example path A-C-D-E.  If TDR is used, then the next path is the next path in the routing table for the current time period.  If SDR is used, OSPF implements a distributed method of flooding link status information, which is triggered periodically and/or by crossing load state threshold values.  This method of distributing link status information can be resource intensive and may not be any more efficient than simpler path selection methods such as EDR.  If EDR is used, then the next path is the last successful path, and if that path is unsuccessful another alternate path is searched out according to the EDR path selection method.  EDR path selection, which entails the use of the release with crankback mechanism to search for an available path, is an alternative to SDR path selection, which may entail flooding of frequently changing link state parameters such as available-link-bandwidth.  With EDR path selection, the reduction in the frequency of such link-state parameter flooding allows for increased scalability.  This is because link-state flooding can consume substantial processor and link resources, in terms of message processing by the processors and link bandwidth consumed by messages on the links.

Hence in using the selected CRLSP, the ON sends the explicit route, the requested traffic parameters (peak data rate, committed data rate, etc.), an ALS threshold, and a modify-parameter in the MPLS label request message to each VN and the DN in the selected CRLSP.  Whether or not bandwidth can be allocated to the bandwidth modification request on the first choice CRLSP is determined by each VN applying the QoS resource management rules.  That is, the VN determines the CRLSP link states, based on bandwidth use, and compares the link load state to the ALS threshold ALSi sent in the MPLS signaling parameters, as further explained below.  If the first choice CRLSP cannot admit the bandwidth change, a VN or DN returns control to the ON through the use of the crankback-parameter in the MPLS notification message.  At that point the ON may then try an alternate CRLSP.  Whether or not bandwidth can be allocated to the bandwidth modification request on the alternate path is again determined by comparing the ALS threshold to the CRLSP link load state at each VN.  Priority queuing is used during the time the CRLSP is established, and at each link the queuing discipline is maintained such that packets are given priority according to the VNET traffic priority, which is discussed in Section A.1.2.3.

Hence determination of the CRLSP link load states is necessary for QoS resource management to select network capacity on either the first choice CRLSP or alternate CRLSPs.  Three link load states are distinguished: available (non-reserved) bandwidth (ABW), reserved-bandwidth (RBW), and bandwidth-not-available (BNA).  Management of CRLSP capacity uses the link state model and the ALS threshold to determine if a bandwidth modification request can be accepted on a given CRLSP.  The allowed load state threshold ALSi determines if a bandwidth modification request can be accepted on a given link to an available bandwidth “depth.”  In setting up the bandwidth modification request, the ON encodes the ALS threshold allowed on each link in the ALS-parameter, which is carried in the MPLS label request.  If a CRLSP link is encountered at a VN in which the idle link bandwidth and link load state are below the allowed load state threshold ALSi, then the VN sends an MPLS notification message with the crankback-parameter to the ON, which can then route the bandwidth modification request to an alternate CRLSP choice.  For example, in Figure 3, CRLSP A-B-E may be the first path tried, where link A-B is in the ABW state and link B-E is in the RBW state.  If the allowed load state is ALSi=ABW, then the CRLSP bandwidth modification request in the MPLS label request message is routed on link A-B but will not be admitted on link B-E, and the CRLSP bandwidth modification request will be cranked back in the MPLS notification message to ON A to try alternate CRLSP A-C-D-E.  Here the CRLSP bandwidth modification request again does not succeed, since link C-D is in the RBW state.  At this point node A can search for a new successful CRLSP-n among the candidate choices.

Here we discuss a sparse network example of per-VNET bandwidth allocation/reservation.  Methods are similar for meshed-network and per-flow bandwidth allocation, the differences being that a) bandwidth reservation is triggered rather than fixed in meshed networks, so as not to over-reserve bandwidth, since there is a large number of links on which to reserve bandwidth, and b) bandwidth allocation is triggered on a per-flow rather than per-VNET basis in per-flow bandwidth allocation.  For the sparse network case of bandwidth reservation, a simpler method is illustrated which takes advantage of the concentration of traffic onto fewer, higher capacity backbone links.  That is, a small, fixed level of bandwidth reservation is used and permanently enabled on each link, and the ALS threshold is a simple function of bandwidth-in-progress, VNET priority, and bandwidth allocation thresholds, as follows:

 

Table 1
Determination of Allowed Load State (ALS) Threshold
(Per-VNET Bandwidth Allocation, Sparse Network)

    Allowed        High-Priority            Normal-Priority VNET                        Best-Effort
    Load State-i   VNET                     First Choice CRLSP    Alternate CRLSP       Priority VNET
    ------------   ----------------------   -------------------   -------------------   -------------
    RBW            If BWIPi <= 2 x BWmaxi   If BWIPi <= BWavgi    Not Allowed           Note 1
    ABW            If 2 x BWmaxi < BWIPi    If BWavgi < BWIPi     If BWavgi < BWIPi     Note 1

where

    BWIPi    =   bandwidth-in-progress on VNET-i
    BWavgi   =   minimum guaranteed bandwidth required for VNET-i to carry the average offered bandwidth load
    BWmaxi   =   the bandwidth required for VNET-i to meet the blocking/delay probability grade-of-service objective for CRLSP bandwidth allocation requests = 1.1 x BWavgi
    Note 1   =   CRLSPs for the best-effort priority VNET are allocated zero bandwidth; DiffServ queuing admits best-effort packets only if there is available bandwidth on a link

The corresponding load state table for the sparse network case is as follows:

 


Table 2
Determination of Link Load State (Sparse Network)

    Link Load State                      Condition
    ---------------------------------   --------------------------
    Bandwidth-Not-Available (BNA)       ILBWk < DBW
    Reserved-Bandwidth (RBW)            ILBWk - RBWrk < DBW
    Available-Bandwidth (ABW)           DBW <= ILBWk - RBWrk

where

    ILBWk    =   idle link bandwidth on link k
    DBW      =   delta bandwidth requirement for a bandwidth allocation request
    RBWrk    =   reserved bandwidth for link k = 0.01 x TLBWk
    TLBWk    =   the total link bandwidth on link k
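The two tables can be rendered directly as code (illustrative Python only; the handling of the table's boundary cases, e.g., a normal-priority alternate CRLSP with BWIP below BWavg, is this sketch's interpretation):

    def link_load_state(ilbw, dbw, tlbw):
        """Table 2: classify link k given idle bandwidth ILBWk, request size DBW,
        and total bandwidth TLBWk, with reservation RBWrk = 0.01 x TLBWk."""
        rbwr = 0.01 * tlbw
        if ilbw < dbw:
            return "BNA"   # bandwidth not available
        if ilbw - rbwr < dbw:
            return "RBW"   # only reserved bandwidth remains
        return "ABW"       # available (non-reserved) bandwidth

    def allowed_load_state(vnet_priority, first_choice, bwip, bwavg):
        """Table 1: the deepest load state into which this request may allocate."""
        bwmax = 1.1 * bwavg
        if vnet_priority == "high":
            return "RBW" if bwip <= 2 * bwmax else "ABW"
        if vnet_priority == "normal":
            if first_choice:
                return "RBW" if bwip <= bwavg else "ABW"
            return "ABW"   # alternate CRLSPs may never dip into reserved bandwidth
        return None        # best-effort: zero CRLSP bandwidth (Note 1)

    def link_admits(ilbw, dbw, tlbw, als):
        """Admit the request if the link's state is within the allowed depth."""
        depth = {"ABW": 0, "RBW": 1}
        state = link_load_state(ilbw, dbw, tlbw)
        return als is not None and state != "BNA" and depth[state] <= depth[als]

For example, with als = "ABW" a link found in the RBW state rejects the request, triggering the crankback behavior described above.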

 

Figure 3 summarizes the operation of STT-EDR path selection and admission control combined with per-VNET bandwidth allocation.  ON A monitors VNET bandwidth use on each VNET CRLSP, and determines when VNET CRLSP bandwidth needs to be increased or decreased.  Based on the aggregated bandwidth demand, ON A makes periodic discrete changes in bandwidth allocation, that is, it either increases or decreases bandwidth on the CRLSPs constituting the VNET bandwidth capacity. If connection requests are made for VNET CRLSP bandwidth that exceeds the current CRLSP bandwidth allocation, ON A initiates a bandwidth modification request on the appropriate CRLSP(s).  The STT-EDR QoS routing algorithm used is adaptive and distributed in nature and uses learning models to find good paths.  For example, in Figure 3, if the LSR-A to LSR-E bandwidth needs to be modified, say increased by DBW, the primary CRLSP-p (A-B-E) is tried first.  If DBW is not available on one or more links of CRLSP-p, then the currently successful CRLSP-s (A-C-D-E) is tried next.  If DBW is not available on one or more links of CRLSP-s, then a new CRLSP is searched for by trying additional candidate paths (not shown) until a new successful CRLSP-n is found or the candidate paths are exhausted.  CRLSP-n is then marked as the currently successful path for the next time bandwidth needs to be modified.  DBW, for example, can be set to the additional amount of bandwidth required by the connection request.  Also, ON A periodically monitors CRLSP bandwidth use, such as once each minute, and if bandwidth use falls below the current CRLSP allocation the ON initiates a bandwidth modification request to decrease the CRLSP bandwidth allocation down to the currently used bandwidth level.  In the models discussed in Section 3, the per-VNET bandwidth allocation and admission control method compares favorably with the per-flow method, and the STT-EDR path selection method compares favorably with the SDR method.

A.1.2.3 Packet-Level Control

Packet level traffic control encompasses the procedures which allow packet level grade-of-service objectives to be met.  Once a flow is admitted through the connection admission control functions, packet level control a) ensures through traffic shaping that the traffic conforms to the declared traffic parameters, and b) ensures through packet priority and queue management that the network provides the requested quality of service in conformity with the declared traffic and allocated resources.

Traffic controls may be distinguished according to whether their function is to enable quality of service guarantees at the connection level (e.g. connection blocking probability) or at the packet level (e.g. packet loss ratio).  As discussed in Section A.1.2.2, connection admission control (CAC) determines if a link or path is capable of handling the requested connection with its associated traffic and QoS requirements.  When CAC is applied, the network decides if it has sufficient resources to accept the connection without infringing packet level grade-of-service requirements for all established connections as well as the new connection.  This decision is made by allocating resources to specific connections and refusing new requests when insufficient resources are available, where the resources in question are typically bandwidth and buffer space.  A connection request is specified by traffic and QoS requirements, where end-to-end performance objectives relevant to QoS routing include a) maximum end-to-end queuing delay, b) delay variation, and c) packet loss ratio.  These performance objectives must be apportioned to the various network elements contributing to the performance degradation of a given connection so that the end-to-end QoS criteria are satisfied.

If the connection is accepted, there is a traffic contract whereby the network provides the requested quality of service on condition that the traffic conforms to the declared traffic descriptor.  This has led to a definition of traffic parameters (peak packet rate, sustainable packet rate, and intrinsic burst tolerance) allowing traffic conformance to be determined by the generic packet rate algorithm.  In this method, supplementary packet delays may be introduced to shape the characteristics of a given flow.  Various scheduling mechanisms, such as priority queuing, may be used.  The priority-of-service parameter may be included in the differentiated services (DiffServ) [RFC2475] parameter in the IP packet header or MPLS header [RFC3270].  DiffServ does not require that a particular queuing mechanism be used to achieve the needed QoS behavior.  Therefore the queuing implementation used for DiffServ could be weighted fair queuing, priority queuing, or another queuing mechanism, depending on the implementation choice.  In the analysis, priority queuing is used for illustration; however, the same or comparable results would be obtained with weighted fair queuing or other queuing mechanisms.  These scheduling and shaping mechanisms complement the connection admission mechanisms described in the previous section to appropriately allocate bandwidth on links in the network.
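
The following Python sketch shows, under stated assumptions, a virtual-scheduling conformance check in the spirit of the generic packet rate algorithm: a packet conforms if it does not arrive earlier than its theoretical arrival time minus the burst tolerance.  The parameter names and values are hypothetical; the normative conformance definitions are those in the Recommendations.

    # Hedged sketch of a generic-rate conformance check (virtual
    # scheduling form); parameter values are hypothetical.

    def make_conformance_checker(rate_pps, burst_tolerance_s):
        increment = 1.0 / rate_pps   # nominal inter-packet spacing
        state = {"tat": 0.0}         # theoretical arrival time

        def conforms(arrival_s):
            if arrival_s < state["tat"] - burst_tolerance_s:
                return False         # too early: non-conforming, no update
            state["tat"] = max(arrival_s, state["tat"]) + increment
            return True
        return conforms

    check = make_conformance_checker(rate_pps=10, burst_tolerance_s=0.05)
    print([check(t) for t in [0.0, 0.01, 0.02, 0.30]])
    # [True, False, False, True]: the two early packets exceed the
    # tolerated burst and would be shaped (delayed) or marked.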

A.1.3 Routing Table Management

Routing table management information, such as topology update, status information, or routing recommendations, is used for purposes of applying the routing table design rules for determining path choices in the routing table.  This information is exchanged, for example, between one node and another node, such as between the ON and DN, or between a node and a network element such as a QoS-routing processor.  This information is used to generate the routing table, and then the routing table is used to determine the path choices used in the selection of a path.

IP networks typically run the OSPF protocol for intra-domain routing [RFC2328], which provides each node with a link-state topology exchange mechanism from which it constructs its topology database and, from that, its shortest-path routing tables.  OSPF provides for a) exchange of node information, link state information, and reachable address information, b) automatic update and synchronization of topology databases, and c) fixed and/or dynamic route selection based on topology and status information.  The link state advertisement (LSA) is used to automatically provision nodes, links, and reachable addresses in the topology database.  For topology database synchronization, each node exchanges status information with its immediate neighbors and bundles its state information in LSAs, which are reliably flooded throughout the routing domain.
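
As an illustrative sketch (not the OSPF specification itself), the Python fragment below computes shortest-path distances from a link-state database of the kind a node assembles after LSA flooding; the topology and link costs are hypothetical.

    # Hedged sketch: Dijkstra shortest paths over a link-state
    # database {node: {neighbor: cost}}; topology is hypothetical.
    import heapq

    def shortest_paths(lsdb, source):
        dist, prev = {source: 0}, {}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                   # stale heap entry
            for v, cost in lsdb.get(u, {}).items():
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return dist, prev

    lsdb = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2, "E": 5},
            "C": {"A": 4, "B": 2, "E": 1}, "E": {"B": 5, "C": 1}}
    dist, prev = shortest_paths(lsdb, "A")
    print(dist["E"])   # 4, reached via A-B-C-E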

Some of the topology state information is static and some is dynamic, and for network scalability it is important to minimize the amount of dynamic topology state information flooding, such as available link bandwidth.  Query for status methods allow efficient determination of status information, as compared to flooding mechanisms, and are provided in TDM-based networks [ASH1].  Routing recommendation methods provide for a QoS-routing processor, for example, to advertise recommended paths to network nodes based on status information available in the database.  Such routing recommendation methods are provided in TDM-based networks [ASH1].

Different routing table management techniques are employed for a) per-VNET versus per-flow bandwidth allocation and b) EDR versus SDR QoS routing methods.  These alternatives have significantly different routing table management overhead requirements, which are investigated in Section 3.  EDR QoS routing methods are distinct from SDR QoS routing methods in how the paths are selected.  In the SDR case, the available link bandwidth (based on LSA flooding of available-link-bandwidth information) is typically used to compute the path.  In the EDR case, the available-link-bandwidth information is not needed to compute the path; therefore the available-link-bandwidth flooding does not need to take place, reducing the overhead.  As discussed in Section 2.1, EDR QoS routing algorithms are adaptive and distributed in nature and typically use learning models to find good paths for QoS routing in a network, such as in the STT method.  Available-link-bandwidth flooding can be very resource intensive, since it consumes link bandwidth to carry LSAs and processor capacity to process them, and the overhead can impact network scalability and stability.  Modeling results in Section 3 show that EDR QoS routing methods can lead to a large reduction in available-link-bandwidth flooding overhead without loss of network throughput performance.

A.2. QoS Routing Modeling & Analysis

We now provide a performance analysis of lost/delayed traffic and control load for various QoS routing methods developed in [E.360].  A full-scale model of a national network is used together with a multiservice traffic demand model to study various QoS routing scenarios and tradeoffs.  The 135-node model is illustrated in Figure 4.

Typical voice/ISDN traffic loads are used to model the various network alternatives, based on 72 hours of actual traffic loads on the national network used for the model.  Table 3 summarizes the multiservice traffic model used for the QoS routing studies.  Three levels of traffic priority (high, normal, and best-effort) are assigned to the various class-of-service categories, or VNETs, illustrated in Table 3.  The voice/ISDN loads are further segmented in the model into eight CBR VNETs, including business voice, consumer voice, international voice inbound and outbound, high-priority voice, normal- and high-priority 64-kbps ISDN data, and 384-kbps ISDN data.  For the CBR voice services, the mean data rate is assumed to be 64 kbps for all VNETs except the 384-kbps ISDN data VNET-8, for which the mean data rate is 384 kbps.

 

Table 3

Virtual Network (VNET) Traffic Model used for QoS routing Studies

 

Virtual Network Index | Virtual Network Name | Service Identity Examples | Virtual Network Traffic Priority & Traffic Characteristics
VNET-1 (CBR) | BUSINESS VOICE | VIRTUAL PRIVATE NETWORK (VPN), DIRECT CONNECT 800, 800 SERVICE, 900 SERVICE | NORMAL-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-2 (CBR) | CONSUMER VOICE | LONG DISTANCE SERVICE (LDS) | NORMAL-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-3 (CBR) | INTL VOICE OUTBOUND | INTL LDS OUTBOUND, INTL 800 OUTBOUND, GLOBAL VPN OUTBOUND, INTL TRANSIT | NORMAL-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-4 (CBR) | INTL VOICE INBOUND (HIGH-PRIORITY) | INTL LDS INBOUND, INTL 800 INBOUND, GLOBAL VPN INBOUND, INTL TRANSIT INBOUND | HIGH-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-5 (CBR) | 800-GOLD (HIGH-PRIORITY) | DIRECT CONNECT 800 GOLD, VPN-HIGH-PRIORITY | HIGH-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-6 (CBR) | 64 KBPS ISDN | 64 KBPS SWITCHED DIGITAL SERVICE (SDS), 64 KBPS SWITCHED DIGITAL INTL (SDI) | NORMAL-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-7 (CBR) | 64 KBPS ISDN (HIGH-PRIORITY) | 64 KBPS SDS & SDI (HIGH-PRIORITY) | HIGH-PRIORITY; 64 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-8 (CBR) | 384 KBPS ISDN | 384 KBPS SDS, 384 KBPS SDI | NORMAL-PRIORITY; 384 KBPS CBR; 72 HOURS TRAFFIC LOAD DATA (SATURDAY, SUNDAY, MONDAY)
VNET-9 (VBR-RT) | IP TELEPHONY (VARIABLE RATE, EQUIV-BW ALLOCATION, INTERACTIVE & DELAY SENSITIVE) | IP TELEPHONY, COMPRESSED VOICE | NORMAL-PRIORITY; VARIABLE RATE, EQUIV-BW ALLOCATION, INTERACTIVE & DELAY SENSITIVE; VBR-RT: 10% OF VNET1+VNET2+VNET3+VNET4+VNET5 TRAFFIC LOAD, CALL DATA RATE VARIES FROM 6.4 KBPS TO 51.2 KBPS (25.6 KBPS MEAN)
VNET-10 (VBR-NRT) | IP MULTIMEDIA (VARIABLE RATE, EQUIV-BW ALLOCATION, NON-INTERACTIVE & NOT DELAY SENSITIVE) | IP MULTIMEDIA, WWW, CREDIT CARD CHECK | NORMAL-PRIORITY; VARIABLE RATE, EQUIV-BW ALLOCATION, NON-INTERACTIVE & NOT DELAY SENSITIVE; VBR-NRT: 30% OF VNET2 TRAFFIC LOAD, CALL DATA RATE VARIES FROM 38.4 KBPS TO 64 KBPS (51.2 KBPS MEAN)
VNET-11 (UBR) | UBR BEST EFFORT (VARIABLE RATE, NO BW ALLOCATION, NON-INTERACTIVE & NOT DELAY SENSITIVE) | VOICE MAIL, EMAIL, FILE TRANSFER | BEST-EFFORT PRIORITY; VARIABLE RATE, NO BW ALLOCATION, NON-INTERACTIVE & NOT DELAY SENSITIVE; UBR: 30% OF VNET1 TRAFFIC LOAD, CALL DATA RATE VARIES FROM 6.4 KBPS TO 3072 KBPS (1536 KBPS MEAN)

 

The data services traffic model incorporates typical traffic load patterns and comprises three additional VNET load patterns.  These data services VNETs include

-            variable bit rate real-time (VBR-RT) VNET-9, representing services such as IP-telephony and compressed voice,

-            variable bit rate non-real-time (VBR-NRT) VNET-10, representing services such as WWW multimedia and credit card check, and

-            unspecified bit rate (UBR) VNET-11, representing services such as email, voice mail, and file transfer multimedia applications.

For the VBR-RT connections, the data rate varies from 6.4 to 51.2 kbps with a mean of 25.6 kbps. The VBR-RT connections are assumed to be interactive and delay sensitive.  For the VBR-NRT connections, the data rate varies from 38.4 to 64 kbps with a mean of 51.2 kbps, and the VBR-NRT flows are assumed to be non-delay sensitive.  For the UBR connections, the data rate varies from 6.4 to 3072 kbps with a mean of 1536 kbps. The UBR flows are assumed to be best-effort priority and non-delay sensitive.  For modeling purposes, the service and link bandwidth is segmented into 6.4 kbps slots, that is, 10 slots per 64 kbps channel.
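
A minimal Python sketch of this slot accounting, assuming only the 6.4-kbps slot size stated above (the rates are those of the traffic model):

    # Hedged sketch: equivalent bandwidth expressed in 6.4-kbps slots
    # (10 slots per 64-kbps channel), as in the model above.
    import math

    SLOT_KBPS = 6.4

    def slots_needed(rate_kbps):
        return math.ceil(rate_kbps / SLOT_KBPS)

    for rate in (64, 384, 25.6, 51.2, 1536):   # CBR, ISDN, VBR means, UBR mean
        print(rate, "kbps ->", slots_needed(rate), "slots")
    # 64 -> 10, 384 -> 60, 25.6 -> 4, 51.2 -> 8, 1536 -> 240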

In addition to the QoS bandwidth management procedure for bandwidth allocation requests, a QoS priority of service queuing capability is used during the time connections are established on each of the VNETs.  At each link, a queuing discipline is maintained such that the packets being served are given priority in the following order: high-priority, normal-priority, and best-effort priority VNET services.  This queuing model quantifies the level of delayed traffic for each VNET.
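
The Python sketch below illustrates the three-level strict-priority service order just described; the class structure and packet labels are hypothetical.

    # Hedged sketch of three-level strict-priority queuing: high-priority
    # packets are always served before normal-priority, and normal before
    # best-effort; FIFO order is kept within each level.
    import heapq
    from itertools import count

    HIGH, NORMAL, BEST_EFFORT = 0, 1, 2    # lower value = served first

    class PriorityLink:
        def __init__(self):
            self._heap, self._seq = [], count()

        def enqueue(self, priority, packet):
            heapq.heappush(self._heap, (priority, next(self._seq), packet))

        def serve(self):
            return heapq.heappop(self._heap)[2] if self._heap else None

    link = PriorityLink()
    link.enqueue(BEST_EFFORT, "email packet")
    link.enqueue(NORMAL, "consumer-voice packet")
    link.enqueue(HIGH, "800-gold packet")
    print([link.serve() for _ in range(3)])
    # ['800-gold packet', 'consumer-voice packet', 'email packet']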

The cost model represents typical switching and transport costs and illustrates the economies of scale projected for high-capacity network elements in the future.  Table 4 gives the model used for average switching and transport costs allocated per 64 kbps unit of bandwidth, as follows (an illustrative cost function is sketched after the table):

 

Table 4

Cost Assumptions (average cost per equivalent 64 kbps bandwidth, dollars)

 

Data Rate | Average Transport Cost | Average Switching/Cross-Connect Cost
DS3  | 0.19 x miles + 8.81 | 26.12
OC3  | 0.17 x miles + 9.76 | 19.28
OC12 | 0.15 x miles + 7.03 | 9.64
OC48 | 0.05 x miles + 2.77 | 3.92

 

A discrete event network design model is used in the design and analysis of the QoS routing methods: 2-link STT-EDR path routing in a meshed logical network, 2-link DC-SDR routing in a meshed logical network, and multilink STT-EDR, DC-SDR, and DP-SDR routing in a sparse logical network.  We also model the case where no QoS routing methods are applied.

A.2.1 Performance Comparisons of QoS Routing Methods

The network models for the 2-link and multilink networks are now described.  Links in the 2-link models are assumed to have fine-grained (1.536 Mbps, T1-level) logical transport link bandwidth allocation, and a meshed network topology design results in which links exist between most (90 percent or more) of the nodes.  In the 2-link models, 1- and 2-link routing with crankback is used with both EDR and SDR path selection.  In routing a connection with 2-link STT-EDR routing, the ON checks the equivalent bandwidth and ALS threshold first on the direct path, then on the current successful 2-link via path, and then sequentially on all candidate 2-link paths.  In routing a connection with 2-link DC-SDR, the ON checks the equivalent bandwidth and ALS threshold first on the direct path, and then on the least-loaded path that meets the equivalent bandwidth and ALS requirements.  Each VN checks the equivalent bandwidth and ALS threshold provided in the setup message, and uses crankback to the ON if the equivalent bandwidth or ALS threshold is not met.

In the multilink model, high-rate OC3/OC12/OC48 links provide highly aggregated link bandwidth allocation, and a sparse network topology design results; that is, high-rate links exist between relatively few (10 to 20 percent) of the nodes.  The multilink path selection methods modeled include STT-EDR, DC-SDR, and DP-SDR path selection, each with crankback.  With STT-EDR, the primary CRLSP-p is tried first, and if bandwidth resources are not available on one or more links of CRLSP-p, then the currently successful CRLSP-s is tried next.  If bandwidth is not available on one or more links of CRLSP-s, then a new CRLSP is searched for by trying additional candidate paths until a new successful CRLSP-n is found or the candidate paths are exhausted.  CRLSP-n is then marked as the currently successful path for the next time bandwidth needs to be modified.  In the model of DP-SDR, status updates with link-state flooding occur every 10 seconds.  Note that the multilink DP-SDR performance results should also be comparable to the performance of multilink CP-SDR, in which status updates are sent to, and path selection updates received from, a QoS-routing processor every 10 seconds.  With the SDR methods, the available-link-bandwidth information in the topology database is used to generate the shortest least-congested paths.  In routing a connection, the ON checks the equivalent bandwidth and ALS threshold first on the first-choice path, then on the currently successful alternate path (EDR) or least-loaded shortest path (SDR), and then sequentially on all other candidate alternate paths.  Each VN checks the equivalent bandwidth and ALS threshold provided in the setup message, and uses crankback to the ON if the equivalent bandwidth or ALS threshold is not met.
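
The Python sketch below illustrates the least-loaded choice underlying the SDR options: among candidate paths that meet the bandwidth requirement, pick the one whose tightest link has the most spare bandwidth.  The topology, figures, and function names are hypothetical.

    # Hedged sketch of least-loaded (widest-bottleneck) path selection,
    # in the spirit of DC-SDR; all names and figures are hypothetical.

    def bottleneck(available, path):
        """Spare bandwidth on the path's most loaded link."""
        return min(available[link] for link in zip(path, path[1:]))

    def least_loaded_path(available, candidates, required_bw):
        feasible = [p for p in candidates
                    if bottleneck(available, p) >= required_bw]
        return max(feasible, default=None,
                   key=lambda p: bottleneck(available, p))

    available = {("A", "B"): 3, ("B", "E"): 3,
                 ("A", "C"): 9, ("C", "E"): 6}
    paths = [["A", "B", "E"], ["A", "C", "E"]]
    print(least_loaded_path(available, paths, required_bw=2))
    # ['A', 'C', 'E']: its bottleneck (6) beats A-B-E's bottleneck (3).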

In the models the logical network design is optimized for each routing alternative, while the physical transport links and node locations are held fixed.  We examine the performance and network design tradeoffs of

-            logical topology (sparse, mesh),

-            connection routing method (2-link, multilink, SDR, EDR, etc.), and

-            bandwidth allocation method (per-VNET, per-flow).

Generally the meshed logical topologies are optimized by 1- and 2-link routing, while the sparse logical topologies are optimized by multilink shortest path routing.  Modeling results include

-         designs for SDR connection routing and EDR connection routing,

-         designs for sparse topology with multilink routing and mesh topology with 2-link routing,

-         designs for separate voice (VNETs 1-8) & data (VNETs 9-11) and integrated voice/data (VNETs 1-11), and

-         designs for per-VNET bandwidth allocation and per-flow bandwidth allocation.

Table 5 gives a summary of the design comparisons for the above tradeoff categories. 

 

Table 5

Network Design Comparisons (135-Node Model)

 

EDR vs. SDR connection routing (mesh with 2-link EDR or 2-link SDR routing):
  Termination capacity (equivalent 64-kbps, millions): EDR 25.6, SDR 25.7 (ratio 0.996)
  Transport capacity (equivalent 64-kbps-miles, millions): EDR 11,630.6, SDR 11,629.8 (ratio 1.000)
  Total cost ($ millions): EDR 1238.4, SDR 1238.5 (ratio 1.000)

Integrated vs. separate voice/data (sparse with multilink EDR routing):
  Termination capacity (equivalent 64-kbps, millions): integrated 16.4, separate 17.5 (ratio 0.937)
  Transport capacity (equivalent 64-kbps-miles, millions): integrated 9285.3, separate 9641.4 (ratio 0.963)
  Total cost ($ millions): integrated 1267.2, separate 1338.5 (ratio 0.946)

Per-flow vs. per-VNET bandwidth allocation (sparse with multilink EDR routing):
  Termination capacity (equivalent 64-kbps, millions): per-flow 16.4, per-VNET 16.5 (ratio 0.994)
  Transport capacity (equivalent 64-kbps-miles, millions): per-flow 137.7, per-VNET 148.1 (ratio 0.930)
  Total cost ($ millions): per-flow 1267.2, per-VNET 1306.2 (ratio 0.970)

 

Some of the conclusions from the network design comparisons are as follows:

-            EDR connection routing methods exhibit comparable design efficiencies to SDR routing methods.

-            Sparse topology designs with multilink routing provide switching and transport design efficiencies in comparison to mesh designs with 2-link routing (however, overall capital costs are comparable).

-            Voice and data integration provides some capital (and operational) cost reduction in comparison to separate voice and data design.

-            Per-VNET bandwidth allocation exhibits comparable design efficiencies to per-flow bandwidth allocation.

The performance analyses for overloads and failures include connection admission control (CAC) with QoS resource management.  Performance comparisons are presented in Table 6 for the various QoS routing methods, including 2-link and multilink EDR and SDR approaches, and a baseline case in which no QoS routing methods are applied.  Table 6 gives performance results for a six-times overload focused on a single network node at Oakbrook, IL.

 


Table 6

Performance Comparison for Various QoS routing Methods & No QoS routing Methods

6X Focused Overload on Oakbrook (% Lost/Delayed Traffic)

 

Virtual Network | 2-Link STT-EDR | 2-Link DC-SDR | Multilink STT-EDR | Multilink DC-SDR | Multilink DP-SDR | No QoS Routing Methods Applied
BUSINESS-VOICE | 5.27 | 2.28 | 0.00 | 0.06 | 0.08 | 9.42
CONSUMER-VOICE | 7.29 | 3.50 | 0.00 | 0.20 | 0.23 | 13.21
INTL-OUT | 3.43 | 3.36 | 0.00 | 0.00 | 0.04 | 6.03
INTL-IN (HIGH-PRIORITY) | 2.19 | 4.21 | 0.00 | 0.00 | 0.00 | 6.55
HIGH-PRIORITY VOICE | 0.81 | 1.77 | 0.00 | 0.00 | 0.00 | 8.47
64-KBPS ISDN DATA | 0.84 | 0.33 | 0.00 | 0.00 | 0.00 | 2.33
64-KBPS ISDN DATA (HIGH-PRIORITY) | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.46
384-KBPS ISDN DATA | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
VBR-RT VOICE | 5.42 | 2.59 | 0.00 | 0.39 | 0.49 | 9.87
VBR-NRT MULTIMEDIA | 7.12 | 3.49 | 0.00 | 2.75 | 3.18 | 12.88
UBR BEST EFFORT | 14.07 | 14.68 | 12.46 | 12.39 | 12.32 | 9.75

 

In all cases where QoS routing methods are applied, the performance is always better, and usually substantially better, than when no QoS routing methods are applied.  The performance analysis results show that the multilink options (in sparse topologies) perform somewhat better under overloads than the 2-link options (in meshed topologies), because of greater sharing of network capacity.  Under failure, the 2-link options perform better for many of the VNET categories than the multilink options, because they have a richer choice of alternate routing paths and are much more highly connected than the multilink networks.  Loss of a link in a sparsely connected multilink network can have more serious consequences than in more highly connected logical networks.  The performance results illustrate that capacity sharing of CBR, VBR, and UBR traffic classes, when combined with QoS resource management and priority queuing, leads to efficient use of bandwidth with minimal traffic delay and loss impact, even under overload and failure scenarios.

The EDR and SDR path selection methods are quite comparable for the 2-link, meshed-topology network scenarios.  However, the EDR path selection method performs somewhat better than the SDR options in the multilink, sparse-topology case.  In addition, the DC-SDR path selection option performs somewhat better than the DP-SDR option in the multilink case, because the 10-second-old status information causes misdirected paths in some cases.  Hence, it can be concluded that frequently updated available-link-bandwidth state information does not necessarily improve performance in all cases, and that if available-link-bandwidth state information is used, it is better that it be very recent.

Some of the conclusions from the performance comparisons are as follows:

-            QoS routing methods result in network performance that is always better and usually substantially better than when no QoS routing methods are applied.

-            Sparse-topology multilink-routing networks provide better overall performance under overload than meshed-topology networks, but performance under failure may favor the 2-link meshed-topology options with more alternate routing choices.

-            EDR QoS routing methods exhibit comparable or better network performance compared to SDR methods.

-            State information as used by the SDR options (such as with link-state flooding) provides essentially equivalent performance to the EDR options, which typically use distributed routing with crankback and no flooding.

-            Single-area flat topologies exhibit better network performance in comparison with multi-area hierarchical topologies.

-            Various path selection methods can interwork with each other in the same network, as required for multi-vendor network operation. 

A.2.2 Performance Comparisons of Bandwidth Allocation, Protection & Reservation Methods

As discussed in Section 2.2.2, dynamic bandwidth reservation can be used to favor one category of traffic over another category of traffic.  A simple example of the use of this method is to reserve bandwidth in order to prefer traffic on the shorter primary paths over traffic using longer alternate paths.  We now give illustrations of this method, and compare the performance of a network in which bandwidth reservation is used under congestion to the case when bandwidth reservation is not used.  In the example, traffic is first routed on the shortest path, and then allowed to alternate route on longer paths if the primary path is not available.  In the case where bandwidth reservation is used, five percent of the link bandwidth is reserved for traffic on the primary path when congestion is present on the link.
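
A minimal Python sketch of this reservation rule, assuming the five-percent figure above and hypothetical link numbers:

    # Hedged sketch of dynamic bandwidth reservation: under congestion,
    # the last 5% of link capacity is visible only to primary-path
    # traffic; the link figures below are hypothetical.

    RESERVATION = 0.05

    def admit_on_link(capacity, in_use, required_bw, on_primary, congested):
        free = capacity - in_use
        if congested and not on_primary:
            free -= RESERVATION * capacity   # alternate paths see less room
        return free >= required_bw

    # Congested 1000-unit link with 960 units already in use:
    print(admit_on_link(1000, 960, 30, on_primary=True, congested=True))    # True
    print(admit_on_link(1000, 960, 30, on_primary=False, congested=True))   # False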

Table 7 illustrates the performance of bandwidth reservation methods for a high-day network load pattern.  We can see from the results that performance improves when bandwidth reservation is used.  The poor performance without bandwidth reservation is due to the lack of reserved capacity to favor traffic routed on the more direct primary paths under network congestion conditions.  Without bandwidth reservation, nonhierarchical networks can exhibit unstable behavior in which essentially all connections are established on longer alternate paths as opposed to shorter primary paths, which greatly reduces network throughput and increases network congestion [AKI, KRU, NAK].  If the bandwidth reservation mechanism is added, the performance of the network is greatly improved.  Clearly the use of bandwidth reservation protects the performance of each VNET class-of-service category.

 

Table 7

Performance of Dynamic Bandwidth Reservation Methods

Percent Lost/Delayed Traffic under 50% General Overload (Multilink STT-EDR)

 

Virtual Network | Without Bandwidth Reservation | With Bandwidth Reservation
BUSINESS-VOICE | 2.42 | 0.00
CONSUMER-VOICE | 2.33 | 0.02
INTL-OUT | 2.46 | 1.33
INTL-IN (HIGH-PRIORITY) | 2.56 | 0.00
HIGH-PRIORITY VOICE | 2.41 | 0.00
64-KBPS ISDN DATA | 2.37 | 0.10
64-KBPS ISDN DATA (HIGH-PRIORITY) | 2.04 | 0.00
384-KBPS ISDN DATA | 12.87 | 0.00
VBR-RT VOICE | 1.25 | 0.07
VBR-NRT MULTIMEDIA | 1.90 | 0.01
UBR BEST EFFORT | 24.95 | 11.15

 

We use the 135-node model to compare the per-virtual-network methods of QoS resource management with the per-flow methods, as described in Section 2.  We look at these two cases in Figure 5, which illustrates the case of per-virtual-network CRLSP bandwidth allocation and the case of per-flow CRLSP bandwidth allocation.  The two cases are compared in terms of lost or delayed traffic under a focused overload scenario on the Oakbrook, IL node (such a scenario might occur, for example, with a radio call-in give-away offer).  The size of the focused overload is varied from the normal load (1X case) to a ten-times overload of the traffic to Oakbrook (10X case).  The results show that the per-flow and per-virtual-network bandwidth allocation performance is similar; however, the improved performance of the high-priority and normal-priority traffic in relation to the best-effort priority traffic is clearly evident.

We illustrate the operation of MPLS and DiffServ in multiservice network bandwidth allocation with some examples.  First suppose there is 10 Mbps of normal-priority traffic and 10 Mbps of best-effort priority traffic being carried in the network between nodes A and B.  Best-effort traffic is treated in the model as UBR traffic and is not allocated any bandwidth.  Hence, while the best-effort traffic does not get any CRLSP bandwidth allocation, it is always ‘admitted’ by the CAC and must contend at the lowest priority in the queues.  Best-effort traffic therefore cannot be denied ‘admission’ as a means of throttling such traffic back at the edge router, which can be done with the normal-priority and high-priority traffic (i.e., normal- and high-priority traffic could be denied bandwidth allocation through connection admission control).  The only way that best-effort traffic gets dropped/lost is to drop it at the queues; it is therefore essential that the traffic that is allocated bandwidth on the CRLSPs have higher priority at the queues than the best-effort traffic.  Accordingly, in the model the three classes of traffic get these DiffServ markings: best-effort traffic gets no marking, which corresponds to best-effort priority queuing treatment; normal-priority traffic gets a middle priority level of queuing treatment; and high-priority and delay-sensitive traffic gets the highest priority queuing level.

Now suppose that there is 30 Mbps of bandwidth available between A and B and that both the normal-priority and best-effort traffic increase to 20 Mbps.  The normal-priority traffic requests and gets a CRLSP bandwidth allocation increase to 20 Mbps on the A to B CRLSP.  However, the best-effort traffic, since it has no CRLSP assigned and therefore no bandwidth allocation, is just sent into the network at 20 Mbps.  Since there is only 30 Mbps of bandwidth available from A to B, the network must drop 10 Mbps of best-effort traffic in order to leave room for the 20 Mbps of normal-priority traffic.  This is done in the model through the queuing mechanisms governed by the DiffServ priority settings on each category of traffic.  Through the DiffServ marking, the queuing mechanisms in the model discard 10 Mbps of the best-effort traffic at the priority queues.  If the DiffServ markings were not used, then the normal-priority and best-effort traffic would compete equally in the queues, and perhaps 15 Mbps of each would get through, which is not the desired situation.

Taking this example further, if the normal-priority and best-effort traffic both increase to 40 Mbps, then the normal-priority traffic tries to get a CRLSP bandwidth allocation increase to 40 Mbps.  However, the most it can get is 30 Mbps, so 10 Mbps is denied to the normal-priority traffic in the MPLS constraint-based routing procedure.  With the DiffServ markings on the normal-priority traffic and none on the best-effort traffic, essentially all the best-effort traffic is dropped at the queues, since the normal-priority traffic is allocated and gets the full 30 Mbps of A to B bandwidth.  If there are no DiffServ markings, then again perhaps 15 Mbps each of normal-priority and best-effort traffic get through.  Or, in this case, perhaps a greater amount of best-effort traffic is carried than normal-priority traffic, since 40 Mbps of best-effort traffic is sent into the network versus only 30 Mbps of normal-priority traffic, so the queues receive more best-effort pressure than normal-priority pressure.
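
The arithmetic of these examples can be condensed into a short Python sketch; the 30-Mbps link capacity and the offered loads are those of the examples above, while the function itself is a hypothetical simplification of the CAC and queuing behavior.

    # Hedged sketch of the worked examples: MPLS CAC caps the
    # normal-priority allocation at the link capacity, and the priority
    # queues give best-effort traffic only the leftover bandwidth.

    LINK_MBPS = 30.0

    def carried(normal_offered, best_effort_offered):
        normal = min(normal_offered, LINK_MBPS)          # CRLSP allocation
        best_effort = min(best_effort_offered,
                          LINK_MBPS - normal)            # queue leftovers
        return normal, best_effort

    print(carried(20, 20))   # normal 20 carried, best-effort 10 (10 dropped)
    print(carried(40, 40))   # normal 30 carried, best-effort 0 (pushed out)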

In a multiservice network where the normal-priority and high-priority traffic use CAC with MPLS to receive bandwidth allocation and there is no best-effort priority traffic, the DiffServ/priority queuing becomes less important.  This is because the MPLS bandwidth allocation more or less assures that the queues will not overflow, so DiffServ would be needed less.  As bandwidth becomes more plentiful and lower cost and the network is more ‘over-engineered’, the overload level at which the MPLS and DiffServ mechanisms have a significant effect rises higher and higher.  For example, the models show that the overload factor at which congestion occurs gets larger as the bandwidth modules get larger (i.e., OC3 to OC12 to OC48 to OC192, etc.).  However, the congestion point will always be reached with failures and/or large-enough overloads, necessitating the MPLS/DiffServ mechanisms.

Some of the conclusions from the modeling of bandwidth allocation and reservation are as follows:

-            QoS resource management is shown to be effective in achieving connection-level and packet-level grade-of-service objectives, as well as high-priority, normal-priority, and best-effort priority service differentiation.

-            Bandwidth reservation is critical to the stable and efficient performance of QoS routing methods in a network, and to ensure the proper operation of multiservice bandwidth allocation, protection, and priority treatment.

-            Per-VNET bandwidth allocation is essentially equivalent to per-flow bandwidth allocation in network performance and efficiency.  Because of the much lower routing table management overhead, per-VNET bandwidth allocation is preferred to per-flow allocation.

-            Both CAC with MPLS bandwidth management and DiffServ priority queuing management are important for ensuring that multiservice network performance objectives are met under a range of network conditions.  Both mechanisms operate together to ensure QoS resource allocation mechanisms (bandwidth allocation, protection, and priority queuing) are achieved.

-            In a multiservice network environment where high-priority, normal-priority, and best-effort traffic share the same network, under congestion (e.g., from overloads or failures), the DiffServ/priority-queuing mechanisms push out the best-effort priority traffic at the queues so that the normal-priority and high-priority traffic can get through on the MPLS-allocated CRLSP bandwidth. 

A.2.4 Performance Comparisons of Routing Table Management Methods

Table 8 gives a comparison of the control overhead performance of a) DP-SDR with LSA flooding and per-flow bandwidth allocation, b) STT-EDR with per-flow bandwidth allocation, and c) STT-EDR with per-VNET bandwidth allocation.  The numbers in the table give the total messages of each type needed to perform the indicated QoS routing functions, including flow setup, bandwidth allocation, crankback, and LSA flooding to update the topology database.  The DP-SDR method performs available-link-bandwidth flooding to update the topology database while the EDR methods do not.  In the simulation there is a six-times focused overload on the Oakbrook node.  Clearly the DP-SDR/flooding method consumes more message resources, particularly LSA flooding messages, than the STT-EDR methods.  Also, per-flow bandwidth allocation consumes far more CRLSP bandwidth allocation messages than per-VNET bandwidth allocation, while the traffic lost/delayed performance of the three methods is comparable.

 

Table 8

Routing Table Management Overhead

SDR/Flooding/Per-Flow, EDR/Per-Flow, EDR/Per-VNET (6X Focused Overload on Oakbrook)

 

QoS Routing Function | Message Type | DP-SDR/Flooding (per-flow bandwidth allocation) | STT-EDR (per-flow bandwidth allocation) | STT-EDR (per-VNET bandwidth allocation)
Flow Routing | Flow Setup | 18,758,992 | 18,758,992 | 18,758,992
QoS Resource Management (CRLSP Rtg., BW Alloc., Queue Mgmt.) | CRLSP Bandwidth Allocation | 18,469,477 | 18,839,216 | 2,889,488
QoS Resource Management (CRLSP Rtg., BW Alloc., Queue Mgmt.) | Crankback | 30,459 | 12,850 | 14,867
Topology Database Update | LSA | 14,405,040 | - | -

Some of the conclusions from the comparisons of routing table management overhead are as follows:

-            Per-VNET bandwidth allocation is preferred to per-flow allocation because of the much lower routing table management overhead.  Per-VNET bandwidth allocation is essentially equivalent to per-flow bandwidth allocation in network performance and efficiency.

-            EDR methods provide a large reduction in flooding overhead without loss of network throughput performance. Flooding is very resource intensive since it requires link bandwidth to carry LSAs, processor capacity to process LSAs, and the overhead limits autonomous system size.  EDR methods therefore can help to increase network scalability.

A.3. Summary & Conclusions

In summary, QoS routing methods are proposed in [E.360] for consideration in network evolution.  These proposals are based on the results of analysis models, which illustrate the tradeoffs between various QoS routing approaches, and on established best current practices and experience.  These QoS routing methods will ensure stable and efficient network performance and help manage resources for, and differentiate among, high-priority, normal-priority, and best-effort priority services.  Figures 2 and 3 illustrate the proposed QoS routing methods, which a) allocate bandwidth to individual VNETs, b) protect allocated bandwidth by bandwidth reservation methods, as needed, but otherwise fully share bandwidth, c) differentiate high-priority, normal-priority, and best-effort priority services, d) monitor VNET bandwidth use and determine when bandwidth needs to be increased or decreased, e) change VNET bandwidth allocation based on aggregated bandwidth demand, and f) provide QoS routing admission control to reject connection requests when needed to meet performance objectives.  In the modeling results, the per-VNET bandwidth allocation method compares favorably with the per-flow method.  Furthermore, we find that the fully distributed STT-EDR method of CRLSP management performs as well as or better than the SDR methods with flooding, which means that STT-EDR path selection has the potential to significantly enhance network scalability.


Figures

[Figures referenced in the text are not reproduced in this version.]

_________________