Networking Working Group                            J. de Oliveira, Ed.
Internet-Draft                                         Drexel University
Intended status: Informational                          JP. Vasseur, Ed.
Expires: May 18, 2007                                Cisco Systems, Inc.
                                                                 L. Chen
                                                    Verizon Laboratories
                                                              C. Scoglio
                                                 Kansas State University
                                                       November 14, 2006

          LSP Preemption Policies for MPLS Traffic Engineering
                 draft-deoliveira-diff-te-preemption-06

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on May 18, 2007.

Copyright Notice

   Copyright (C) The Internet Society (2006).

Abstract

   When the establishment of a higher priority Traffic Engineering
   Label Switched Path (TE LSP) requires the preemption of a set of
   lower priority TE LSPs, a node has to make a local decision to
   select which TE LSPs will be preempted.  The preempted LSPs are then
   rerouted by their respective Head-end Label Switch Routers (LSRs).
   This document presents a flexible policy that can be used to achieve
   different objectives: preempt the lowest priority LSPs; preempt the
   minimum number of LSPs; preempt the set of TE LSPs that provide the
   closest amount of bandwidth to that required by the preempting TE
   LSP (to minimize bandwidth wastage); or preempt the LSPs that have
   the maximum chance of being rerouted.  Simulation results are given,
   and a comparison among several different policies, with respect to
   preemption cascading, number of preempted LSPs, priority, wasted
   bandwidth, and blocking probability, is also included.

Table of Contents

   1     Motivation
   2     Introduction
   3     LSP Setup Procedure and Preemption
   4     Preemption Cascading
   5     Preemption Heuristic
   5.1   Preempting Resources on a Path
   5.2   Preemption Heuristic Algorithm
   6     Examples
   6.1   Simple Case: Single Link
   6.2   Network Case
   7     Security Considerations
   8     Acknowledgements
   9     References
   9.1   Normative References
   9.2   Informative References
   Authors' Addresses
   Intellectual Property and Copyright Statements
1  Motivation

   The IETF Traffic Engineering Working Group has defined the
   requirements and protocol extensions for DiffServ-aware MPLS
   Traffic Engineering (DS-TE) [RFC3564] [RFC4124].  Several Bandwidth
   Constraint models for use with DS-TE have been proposed [RFC4127]
   [RFC4128] [RFC4126], and their performance has been analyzed with
   respect to the use of preemption.

   Preemption can be used as a tool to help ensure that high priority
   LSPs can always be routed through relatively favorable paths.
   Preemption can also be used to implement various prioritized access
   policies, as well as restoration policies following fault events
   [RFC2702].

   Although not a mandatory attribute in the traditional IP world,
   preemption becomes important in networks using on-line, distributed
   CSPF strategies for their TE LSP path computation, in order to
   limit the impact of bandwidth fragmentation.  Moreover, preemption
   is an attractive strategy in an MPLS network in which traffic is
   treated in a differentiated manner and high importance traffic may
   be given special treatment over lower importance traffic
   [DEC-PREP, ATM-PREP].  Nevertheless, in the DS-TE approach, whose
   issues and requirements are discussed in [RFC3564], the preemption
   policy is considered an important piece of the bandwidth
   reservation and management puzzle, but no preemption strategy is
   defined.  Note that preemption also plays an important role in
   regular MPLS Traffic Engineering environments (with a single pool
   of bandwidth).

   This document proposes a flexible preemption policy that can be
   adjusted in order to give different weights to various preemption
   criteria: the priority of the LSPs to be preempted, the number of
   LSPs to be preempted, the amount of bandwidth preempted, and the
   blocking probability.  The implications (cascading effect,
   bandwidth wastage, priority of preempted LSPs) of selecting a
   certain order of importance for the criteria are discussed for the
   examples given.

2  Introduction

   In [RFC2702], issues and requirements for Traffic Engineering in an
   MPLS network are highlighted.  In order to address both traffic-
   oriented and resource-oriented performance objectives, the authors
   point out the need for priority and preemption parameters as
   Traffic Engineering attributes of traffic trunks.  The notion of
   preemption and preemption priority is defined in [RFC3272], and
   preemption attributes are defined in [RFC2702] and in [RFC3209].

   A traffic trunk is defined as an aggregate of traffic flows
   belonging to the same class that are placed inside an LSP
   [RFC3564].  In this context, preemption is the act of selecting an
   LSP that will be removed from a given path in order to give room to
   another LSP with a higher priority (lower preemption number).  More
   specifically, the preemption attributes determine whether an LSP
   with a certain setup preemption priority can preempt another LSP
   with a lower holding preemption priority from a given path, when
   there is a competition for available resources.  Note that
   competing for resources is one situation in which preemption can be
   triggered, but other situations may exist, themselves controlled by
   a policy.

   For readability, a number of definitions from [RFC3564] are
   repeated here:

   Class-Type (CT):  The set of Traffic Trunks crossing a link that is
      governed by a specific set of Bandwidth Constraints.
      CT is used for the purposes of link bandwidth allocation,
      constraint-based routing, and admission control.  A given
      Traffic Trunk belongs to the same CT on all links.

   TE-Class:  A pair of:

      i.   a Class-Type.

      ii.  a preemption priority allowed for that Class-Type.  This
           means that an LSP transporting a Traffic Trunk from that
           Class-Type can use that preemption priority as the set-up
           priority, as the holding priority, or both.

   By definition, there may be more than one TE-Class using the same
   CT, as long as each TE-Class uses a different preemption priority.
   Also, there may be more than one TE-Class with the same preemption
   priority, provided that each TE-Class uses a different CT.  The
   network administrator may define the TE-Classes in order to support
   preemption across CTs, to avoid preemption within a certain CT, or
   to avoid preemption completely, when so desired.  To ensure
   coherent operation, the same TE-Classes must be configured in every
   Label Switched Router (LSR) in the DS-TE domain.

   As a consequence of the per-TE-Class treatment, the Interior
   Gateway Protocol (IGP) needs to advertise separate Traffic
   Engineering information for each TE-Class, which consists of the
   Unreserved Bandwidth (UB) information [RFC4124].  The UB
   information will be used by the routers, checking against the
   bandwidth constraint model parameters, to decide whether preemption
   is needed.  Details on how to calculate the UB are given in
   [RFC4124].

3  LSP Setup Procedure and Preemption

   A new LSP setup request has two important parameters: bandwidth and
   preemption priority.  The set of LSPs to be preempted can be
   selected by optimizing an objective function that represents these
   two parameters and the number of LSPs to be preempted.  More
   specifically, the objective function could be any one, or a
   combination, of the following [DEC-PREP, ATM-PREP]:

   *  Preempt the LSPs that have the lowest priority (preemption
      priority).  The QoS of high priority traffic would be better
      satisfied, and the cascading effect described below can be
      limited.

   *  Preempt the smallest number of LSPs.  The number of LSPs that
      need to be rerouted would be lower.

   *  Preempt the least amount of bandwidth that still satisfies the
      request.  Resource utilization could be improved.  The
      preemption of larger TE LSPs (more than requested) by the newly
      signaled TE LSP implies a larger amount of bandwidth having to
      be rerouted, which is likely to increase the probability of
      blocking (the inability to find a path for some TE LSPs).

   *  Preempt LSPs that minimize the blocking probability (the risk
      that a preempted TE LSP cannot be rerouted).

   After the preemption selection phase is finished, the selected LSPs
   are signaled as preempted and the new LSP is established (if a new
   path satisfying the constraints can be found).  The UB information
   is then updated via flooding of an IGP-TE update and/or simply
   pruning the link where preemption occurred.  Figure 1 shows a
   flowchart that summarizes how each LSP setup request is treated in
   a preemption-enabled scenario.

               LSP Setup Request
               (TE-Class i, bw=r)
                       |
                       |
                       v               NO
          UB[TE-Class i] >= r ? ------------> Reject LSP Setup and
                       |                      flood an updated IGP-TE
                       | YES                  LSA/LSP
                       v               NO
           Preemption Needed ? -------------> Setup LSP/Update UB if a
                       |                      threshold is crossed
                       | YES
                       v
             Preemption  ------> Setup LSP/Reroute Preempted LSPs
             Algorithm           Update UB

             Figure 1: Flowchart for the LSP setup procedure.
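   In code form, the per-link decision of Figure 1 might be sketched
   as follows.  This is an illustrative sketch only: the helper names
   (flood_igp_te_update, run_preemption_algorithm, setup_lsp,
   signal_preemption, threshold_crossed) are hypothetical placeholders
   for the IGP flooding, preemption, and signaling machinery described
   above, not functions defined by this document.

      def handle_lsp_setup(link, te_class, r):
          # Reject if the advertised Unreserved Bandwidth for this
          # TE-Class cannot accommodate the request, and re-advertise
          # the link state.
          if link.unreserved_bw[te_class] < r:
              flood_igp_te_update(link)
              return "rejected"
          # Enough UB for the TE-Class; is preemption needed on this
          # link?  link.abw is the bandwidth not reserved by any LSP.
          if link.abw >= r:
              setup_lsp(link, r)
              if threshold_crossed(link):
                  flood_igp_te_update(link)
              return "established"
          # Preemption needed: free r - Abw(l) by preempting LSPs with
          # a lower holding priority, then set up the new LSP.
          for lsp in run_preemption_algorithm(link, te_class,
                                              r - link.abw):
              signal_preemption(lsp)  # head-end LSRs reroute these
          setup_lsp(link, r)
          flood_igp_te_update(link)
          return "established with preemption"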
   In [DEC-PREP], the authors propose connection preemption policies
   that optimize the discussed criteria in a given order of
   importance: number of LSPs, bandwidth, and priority; or bandwidth,
   priority, and number of LSPs.  The novelty in our approach is the
   use of an objective function that can be adjusted by the service
   provider in order to stress the desired criteria.  No particular
   order of criteria is enforced.  Moreover, a new criterion is added
   to the objective function: minimize the blocking probability (the
   risk that a preempted LSP will not find a new path on which it can
   be rerouted).

4  Preemption Cascading

   The decision to preempt an LSP may cause other preemptions in the
   network.  This is called the preemption cascading effect, and
   different cascading levels may be reached by the preemption of a
   single LSP.  The cascading levels are defined in the following
   manner: when an LSP is preempted and rerouted without causing any
   further preemption, the cascading is said to be of level 0.
   However, when a preempted LSP is rerouted and, in order to be
   established on the new route, it also causes the preemption of
   other LSPs, the cascading is said to be of level 1, and so on.

   Preemption cascading is not desirable, since it can typically
   result in severe network instabilities; therefore, policies that
   minimize it are of interest.  In the following, a new versatile
   preemption heuristic is presented.  In the next section, preemption
   simulation results are discussed and the cascading effect is
   analyzed.

5  Preemption Heuristic

5.1  Preempting Resources on a Path

   Once a request for an LSP setup arrives, each LSR along the TE LSP
   path checks the available bandwidth on its outgoing link.  For the
   links on which the available bandwidth is not enough, the
   preemption policy needs to be activated in order to guarantee the
   end-to-end bandwidth reservation for the new LSP.  This is a
   distributed approach, in which every node on the path is
   responsible for running the preemption algorithm and determining
   which LSPs would be preempted in order to fit the new request.  A
   distributed approach may sometimes not lead to an optimal solution.

   Alternatively, in a centralized approach, a manager entity runs the
   preemption policy and determines the best LSPs to be preempted in
   order to free the required bandwidth on all the links that compose
   the path.  The preemption policy would try to select LSPs that
   overlap with the path being considered (preempt a single LSP that
   overlaps with the route, rather than a different LSP on every link
   that belongs to the route).

   Both centralized and distributed approaches have advantages and
   drawbacks.  A centralized approach would be more precise, but
   requires that the whole network state be stored and updated
   accordingly, which raises scalability issues.  In a network where
   LSPs are mostly static, an off-line decision can be made to reroute
   LSPs, and the centralized approach could be appropriate.
   However, in a dynamic network in which LSPs are set up and torn
   down frequently (because of new TE LSPs, bandwidth increases,
   reroutes due to failure, etc.), the correctness of the stored
   network state could be questionable.  Moreover, the setup time is
   generally longer than with a distributed solution.  In this
   scenario, the distributed approach would bring more benefits, even
   when resulting in a non-optimal solution (the gain in optimality of
   a centralized approach over a distributed approach depends on many
   factors: network topology, traffic matrix, TE strategy, etc.).  A
   distributed approach is also easier to implement, given the
   distributed nature of the current Internet protocols.

   Since the current Internet routing protocols are essentially
   distributed, a decentralized approach was selected for the LSP
   preemption policy.  The parameters required by the new preemption
   policies are currently available in OSPF and IS-IS.

5.2  Preemption Heuristic Algorithm

   Consider a request for a new LSP setup with bandwidth b and setup
   preemption priority p.  When preemption is needed, due to lack of
   available resources, the preemptable LSPs will be chosen among the
   ones with lower holding preemption priority (higher numerical
   value) in order to fit r = b - Abw(l).  The variable r represents
   the actual bandwidth that needs to be preempted: the requested
   bandwidth, b, minus the available bandwidth on link l, Abw(l).

   L is the set of active LSPs having a holding preemption priority
   lower (numerically higher) than p; that is, L is the set of
   candidates for preemption.  b(l) is the bandwidth reserved by LSP l
   in L, expressed in bandwidth units, and p(l) is the holding
   preemption priority of LSP l.

   In order to represent a cost for each preemption priority, an
   associated cost y(l), inversely related to the holding preemption
   priority p(l), is defined.  For simplicity, a linear relation
   y(l) = 8 - p(l) is chosen.  y is a cost vector with one component
   y(l) per LSP l in L, and b is a reserved bandwidth vector with
   components b(l).

   Concerning the objective function, four main objectives can be
   pursued in the selection of preempted LSPs:

   *  minimize the priority of preempted LSPs,
   *  minimize the number of preempted LSPs,
   *  minimize the preempted bandwidth,
   *  minimize the blocking probability.

   To give the service provider the widest choice on the overall
   objective to achieve, the following equation was defined (for
   simplicity, chosen as a weighted sum of the above mentioned
   criteria):

      H(l) = alpha y(l) + beta 1/b(l) + gamma (b(l)-r)^2 + theta b(l)

   In this equation:

   -  alpha y(l) captures the cost of preempting high priority LSPs;

   -  beta 1/b(l) penalizes the preemption of low bandwidth LSPs,
      capturing the cost of preempting a large number of LSPs;

   -  gamma (b(l)-r)^2 captures the cost of preempting LSPs that are
      much larger or much smaller than r;

   -  theta b(l) captures the cost of preempting large LSPs.

   Coefficients alpha, beta, gamma, and theta can be chosen to
   emphasize one or more components of H.

   The coefficient theta is defined such that theta = 0 if gamma > 0.
   The reason is that, when trying to minimize the blocking
   probability of preempted LSPs, the heuristic gives preference to
   preempting several small LSPs (therefore gamma, the weight that
   enforces the selection of LSPs with an amount of bandwidth similar
   to the requested one, needs to be set to zero).  The selection of
   several small LSPs in a normally loaded portion of the network will
   increase the chance that such LSPs are successfully rerouted.
   Moreover, the selection of several small LSPs may not imply
   preempting much more than the required bandwidth (resulting in low
   bandwidth wastage), as will be seen in the examples discussed
   below.  When preemption is to happen in a heavily loaded portion of
   the network, to minimize the blocking probability the heuristic
   will instead select fewer LSPs for preemption, in order to increase
   the chance of rerouting.

   H is calculated for each LSP in L.  The LSPs to be preempted are
   chosen as the ones with the smallest H that together add enough
   bandwidth to accommodate r.  When sorting LSPs by H, LSPs with the
   same value for H are ordered by bandwidth b, in increasing order.
   For each LSP with a repeated H, the algorithm checks whether the
   bandwidth b assigned to that LSP alone is enough to satisfy r.  If
   there is no such LSP, it checks whether the bandwidth of each of
   those LSPs, added to the bandwidth of the previously preempted
   LSPs, is enough to satisfy r.  If that is not true for any LSP in
   that repeated-H sequence, the algorithm preempts the LSP that has
   the largest amount of bandwidth in the sequence, and keeps
   preempting in decreasing order of b until r is satisfied or the
   sequence is finished.  If the sequence is finished and r is not
   satisfied, the algorithm continues selecting LSPs to be preempted
   in increasing order of H.  More details on the algorithm are given
   in [PREEMPTION].

   When the objective is to minimize blocking, the heuristic follows
   two options on how to calculate H:

   *  If the link on which preemption is to happen is normally loaded,
      several small LSPs are selected for preemption, using
      H(l) = alpha y(l) + theta b(l).

   *  If the link is overloaded, few LSPs are selected, using
      H(l) = alpha y(l) + beta 1/b(l).
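   The selection procedure above can be implemented compactly.  The
   following is a sketch under the definitions of this section
   (y(l) = 8 - p(l), r = b - Abw(l)); the function name and data
   layout are ours, for illustration, and are not part of any
   specification:

      from itertools import groupby

      def select_lsps(candidates, r, alpha=0.0, beta=0.0, gamma=0.0,
                      theta=0.0):
          # candidates: (name, bandwidth b(l), holding priority p(l))
          # for every LSP in L, i.e., every LSP whose holding priority
          # is numerically higher than the new LSP's setup priority.
          # r is the bandwidth that must be freed.
          def H(b, p):
              y = 8 - p                       # priority cost y(l)
              return (alpha * y + (beta / b if beta else 0.0)
                      + gamma * (b - r) ** 2 + theta * b)

          # Sort by H; ties are ordered by increasing bandwidth b.
          scored = sorted((H(b, p), b, name)
                          for name, b, p in candidates)
          chosen, freed = [], 0.0
          for _, group in groupby(scored, key=lambda t: t[0]):
              if freed >= r:
                  break
              group = list(group)             # equal H, increasing b
              # Prefer a single LSP in the group whose bandwidth
              # alone, or added to the bandwidth already preempted,
              # satisfies r.
              one = (next((g for g in group if g[1] >= r), None)
                     or next((g for g in group if freed + g[1] >= r),
                             None))
              if one is not None:
                  chosen.append(one[2])
                  freed += one[1]
                  break
              # Otherwise preempt in decreasing order of b until r is
              # satisfied or the group is exhausted, then move on to
              # the next H value.
              for _, b, name in reversed(group):
                  chosen.append(name)
                  freed += b
                  if freed >= r:
                      break
          return chosen, freed

   A caller minimizing the blocking probability would pick the
   coefficients according to the link load, as described above: for
   example, alpha=1 and theta=0.01 on a normally loaded link, and
   alpha=1 and beta=10 on an overloaded one (the values used in
   Section 6.2).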
6  Examples

6.1  Simple Case: Single Link

   We first consider a very simple case, in which the path considered
   for preemption is composed of a single hop.  The objective of this
   example is to illustrate how the heuristic works.  In the next
   section, we study a more complex case in which the preemption
   policies are tested on a network.

   Consider a link with 16 LSPs with reserved bandwidth b in Mbps,
   preemption holding priority p, and cost y, as shown in Table 1.  In
   this example, 8 TE-Classes are active.  The preemption here is
   performed on a single link as an illustrative example.

   ------------------------------------------------------------------
   LSP            L1    L2    L3    L4    L5    L6    L7    L8
   ------------------------------------------------------------------
   Bandwidth (b)  20    10    60    25    20     1    75    45
   Priority (p)    1     2     3     4     5     6     7     5
   Cost (y)        7     6     5     4     3     2     1     3
   ------------------------------------------------------------------
   LSP            L9    L10   L11   L12   L13   L14   L15   L16
   ------------------------------------------------------------------
   Bandwidth (b)  100    5    40    85    50    20    70    25
   Priority (p)    3     6     4     5     2     3     4     7
   Cost (y)        5     2     4     3     6     5     4     1
   ------------------------------------------------------------------

                Table 1: LSPs in the considered link.

   A request for an LSP establishment arrives with r=175 Mbps and p=0
   (the highest possible priority, which implies that all LSPs with
   p>0 in Table 1 will be considered when running the algorithm).
   Assume Abw(l)=0.

   If priority is the only important criterion, the network operator
   configures alpha=1, beta=gamma=theta=0.  In this case, LSPs L6, L7,
   L10, L12, and L16 are selected for preemption, freeing 191
   bandwidth units to establish the high priority LSP.  Note that 5
   LSPs were preempted, but all with a priority level between 5 and 7.

   In a network in which rerouting is an expensive task to perform
   (and the number of rerouted TE LSPs should be as small as
   possible), one might prefer to set beta=1 and alpha=gamma=theta=0.
   LSPs L9 and L12 would then be selected for preemption, adding up to
   185 bandwidth units (less wastage than in the previous case).  The
   priorities of the selected LSPs are 3 and 5 (which means that they
   might themselves preempt some other LSPs when rerouted).

   Suppose the network operator decides that it is more appropriate to
   configure alpha=1, beta=10, gamma=0, theta=0 (the parameters were
   set to values that balance the weight of each component, namely
   priority and number, in the cost function), because in this network
   rerouting is very expensive, LSP priority is important, but
   bandwidth is not a critical issue.  In this case, LSPs L7, L12, and
   L16 are selected for preemption.  This configuration results in a
   smaller number of preempted LSPs than in the first case, and the
   priority levels are still kept between 5 and 7.

   To take into account the number of LSPs preempted, the preemption
   priority, and the amount of bandwidth preempted, the network
   operator may set alpha > 0, beta > 0, and gamma > 0.  To achieve a
   balance among the three components, the parameters need to be
   normalized.  Aiming for such a balance, the parameters could be set
   as alpha=1, beta=10 (bringing the term 1/b(l) closer to the other
   parameters), and gamma=0.001 (bringing the value of the term
   (b(l)-r)^2 closer to the other parameters).  LSPs L7 and L9 are
   then selected for preemption, freeing exactly 175 bandwidth units,
   with priorities 3 and 7 (note that fewer LSPs are preempted, but
   they have a higher priority, which may result in a cascading
   effect).

   If the minimization of the blocking probability is the criterion of
   most interest, the cost function could be configured with theta=1,
   alpha=beta=gamma=0.  In that case, several small LSPs are selected
   for preemption: LSPs L2, L4, L5, L6, L7, L10, L14, and L16.  Their
   preemption will free 181 Mbps on this link, and because the
   selected LSPs have small bandwidth requirements, there is a good
   chance that each of them will find a new route in the network.
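   Assuming the select_lsps sketch from Section 5.2, the first two
   scenarios above can be reproduced directly from the data in
   Table 1:

      # (name, bandwidth in Mbps, holding priority p) from Table 1.
      link_lsps = [
          ("L1", 20, 1), ("L2", 10, 2), ("L3", 60, 3), ("L4", 25, 4),
          ("L5", 20, 5), ("L6", 1, 6), ("L7", 75, 7), ("L8", 45, 5),
          ("L9", 100, 3), ("L10", 5, 6), ("L11", 40, 4),
          ("L12", 85, 5), ("L13", 50, 2), ("L14", 20, 3),
          ("L15", 70, 4), ("L16", 25, 7),
      ]

      # Priority only (alpha=1): preempts L6, L7, L10, L12, and L16,
      # freeing 191 Mbps.
      print(select_lsps(link_lsps, r=175, alpha=1))

      # Fewest LSPs (beta=1): preempts L9 and L12, freeing 185 Mbps.
      print(select_lsps(link_lsps, r=175, beta=1))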
   From the above example, it can be observed that when priority was
   the highest concern and the number of preempted LSPs was not an
   issue, 5 LSPs with the lowest priority were selected for
   preemption.  When only the number of LSPs was an issue, the minimum
   number of LSPs was selected for preemption (two), but their
   priority was higher than in the previous case.  When priority and
   number were both important factors and a possible waste of
   bandwidth was not an issue, 3 LSPs were selected, adding more
   bandwidth than requested but still with low preemption priority.
   When considering all the parameters but the blocking probability,
   the smallest set of LSPs was selected (two), adding just enough
   bandwidth (175 Mbps), with priority levels 3 and 7.

   When the blocking probability was the criterion of interest,
   several (8) small LSPs were preempted.  The bandwidth wastage is
   low, but the number of rerouting events increases.  Given the
   bandwidth requirements of the preempted LSPs, it is expected that
   the chances of finding a new route for each LSP will be high.

6.2  Network Case

   For these experiments, we consider a 150-node topology with an
   average connectivity degree of 3; 10% of the nodes in the topology
   have a connectivity degree of 6.  10% of the links are OC3, 70% are
   OC48, and 20% are OC192.

   Two classes of TE LSPs are in use: Voice LSPs and Data
   (Internet/VPN) LSPs.  For each class of TE LSP, the set of
   preemption priorities (and the proportion of LSPs for each
   priority) and the size distributions are as follows (a total of T
   LSPs is considered):

   T: total number of TE LSPs in the network (T = 18,306 LSPs)

   Voice:

      Number: 20% of T
      Preemption: 0, 1, and 2
      Size: uniform distribution between 30M and 50M

   Internet/VPN TE:

      Number: 4% of T
      Preemption: 3
      Size: uniform distribution between 20M and 50M

      Number: 8% of T
      Preemption: 4
      Size: uniform distribution between 15M and 40M

      Number: 8% of T
      Preemption: 5
      Size: uniform distribution between 10M and 20M

      Number: 20% of T
      Preemption: 6
      Size: uniform distribution between 1M and 20M

      Number: 40% of T
      Preemption: 7
      Size: uniform distribution between 1K and 1M
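   For concreteness, such an LSP population could be generated along
   the following lines.  This sketch is ours; it additionally assumes
   (which the text above does not state) that an LSP's preemption
   priority is drawn uniformly from the set allowed for its class:

      import random

      T = 18306   # total number of TE LSPs in the network

      # (fraction of T, allowed preemption priorities, size range)
      LSP_MIX = [
          (0.20, (0, 1, 2), (30e6, 50e6)),   # Voice
          (0.04, (3,),      (20e6, 50e6)),   # Internet/VPN TE
          (0.08, (4,),      (15e6, 40e6)),
          (0.08, (5,),      (10e6, 20e6)),
          (0.20, (6,),      (1e6, 20e6)),
          (0.40, (7,),      (1e3, 1e6)),
      ]

      def generate_lsps():
          lsps = []
          for fraction, priorities, (low, high) in LSP_MIX:
              for _ in range(round(fraction * T)):
                  lsps.append((random.choice(priorities),
                               random.uniform(low, high)))
          return lsps   # list of (priority, size in bps) pairs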
   LSPs are set up mainly due to network failures: a link or a node
   fails, and the affected LSPs are rerouted.  The network failure
   events were simulated with two functions:

   -  Constant: 1 failure, chosen randomly among the set of links,
      every hour.

   -  Poisson process with an interarrival average of 1 hour.

   Table 2 shows the results for the simulations with constant
   failures.  The simulations were run with the preemption heuristic
   configured to balance different criteria (left side of the table),
   and then with different preemption policies that consider the
   criteria in a given order of importance rather than balancing them
   (right side of the table).

   The proposed heuristic was configured to balance the following
   criteria:

   HPB:  The heuristic with priority and bandwidth wastage as the most
      important criteria (alpha=10, beta=0, gamma=0.001, theta=0).

   HBlock:  The heuristic considering the minimization of the blocking
      probability (normally loaded links: alpha=1, beta=0, gamma=0,
      theta=0.01; heavily loaded links: alpha=1, beta=10).

   HNB:  The heuristic with the number of preemptions and the wasted
      bandwidth taken into consideration (alpha=0, beta=10,
      gamma=0.001, theta=0).

   The other algorithms consider the criteria in a given order of
   importance:

   P:  Sorts candidate LSPs by priority only.

   PN:  Sorts the LSPs by priority and, for cases in which the
      priority is the same, orders those LSPs by decreasing bandwidth
      (selects larger LSPs for preemption in order to minimize the
      number of preempted LSPs).

   PB:  Sorts the LSPs by priority and, for LSPs with the same
      priority, orders those by increasing bandwidth (selects smaller
      LSPs in order to reduce bandwidth wastage).

                   -------------------------------------------------
                   |       Heuristic       |   Other algorithms    |
   -----------------------------------------------------------------
                   |  HPB  | HBlock|  HNB  |   P   |  PN   |  PB   |
   -----------------------------------------------------------------
   Need to be      |  532  |  532  |  532  |  532  |  532  |  532  |
   Rerouted        |       |       |       |       |       |       |
   -----------------------------------------------------------------
   Preempted       |  612  |  483  |  619  |  504  |  477  |  598  |
   -----------------------------------------------------------------
   Rerouted        |467|76%|341|73%|475|77%|347|69%|335|70%|436|73%|
   Blocked         |145|24%|130|27%|144|23%|157|31%|142|30%|162|27%|
   -----------------------------------------------------------------
   Max Cascading   |  4.5  |   2   |   5   | 2.75  |   2   | 2.75  |
   -----------------------------------------------------------------
   Wasted Bandwidth|       |       |       |       |       |       |
     Avg (Mbps)    |  6638 |  532  |  6479 |  8247 |  8955 |  6832 |
     Worst (Mbps)  | 35321 | 26010 | 36809 | 28501 | 31406 | 23449 |
   -----------------------------------------------------------------
   Priority        |       |       |       |       |       |       |
     Average       |   6   |  6.5  |  5.8  |  6.6  |  6.6  |  6.6  |
     Worst Case    |  1.5  |  3.8  |  1.2  |  3.8  |  3.8  |  3.8  |
   -----------------------------------------------------------------
   Extra Hops      |       |       |       |       |       |       |
     Average       |  0.23 |  0.25 |  0.22 |  0.25 |  0.25 |  0.23 |
     Worst Case    |  3.25 |   3   |  3.25 |   3   |   3   |  2.75 |
   -----------------------------------------------------------------

      Table 2: Simulation results for constant network failures:
                     1 random failure every hour.

   From Table 2, we can conclude that, among the heuristic
   configurations (HPB, HBlock, HNB), HBlock resulted in the smallest
   number of LSPs being preempted.  More importantly, it also resulted
   in the lowest overall rejection rate and the lowest average wasted
   bandwidth (and the second-lowest overall worst-case wasted
   bandwidth).

   Although HBlock does not try to minimize the number of preempted
   LSPs, it ends up doing so, because it mostly preempts LSPs with low
   priority and therefore does not propagate cascading much further.
   Cascading was the overall lowest (preemption caused at most two
   levels of preemption, which was also the case for policy PN).  The
   average and worst-case preemption priorities were very satisfactory
   (mostly the lowest priority LSPs were preempted, as with the other
   algorithms P, PN, and PB).

   When HPB was in use, more LSPs were preempted as a consequence of
   the higher cascading effect.
   That is due to the heuristic's choice of preempting LSPs that are
   very similar in bandwidth to the preemptor LSP (which can result in
   preempting a higher priority LSP and therefore cause cascading).
   The wasted bandwidth was, however, reduced when compared to the
   other algorithms (P, PN, PB).

   When HNB was used, cascading was higher than in the other cases,
   because LSPs with higher priority could be preempted.  When
   compared to P, PN, or PB, the heuristic HNB preempted more LSPs (in
   fact, it preempted the largest number of LSPs overall, clearly
   showing the cascading effect), but the average wasted bandwidth was
   smaller, although not as small as HBlock's.  (The HNB heuristic
   tries to preempt a single LSP, meaning it will preempt LSPs that
   have a reserved bandwidth similar to the actual bandwidth needed.
   The algorithm is not always successful, because such a match may
   not exist, and in that case the wasted bandwidth could be high.)
   The preempted priority was the highest on average and in the worst
   case, which also explains why the cascading level was the highest
   (the heuristic selects LSPs for preemption without looking at their
   priority levels).  In summary, this policy resulted in a poor
   performance.

   Policy PN resulted in the smallest number of preempted LSPs overall
   and a small number of LSPs that were not successfully rerouted.
   Cascading is low, but bandwidth wastage is very high (the overall
   highest).  Moreover, in several cases in which rerouting happened
   on underloaded portions of the network, the heuristic HBlock
   preempted a smaller number of LSPs than PN.

   Policy P selects a larger number of LSPs (when compared to PN) with
   low priority for preemption, and it is therefore able to
   successfully reroute fewer LSPs than HBlock, HPB, HNB, or PN.  The
   bandwidth wastage is also higher than with any of the heuristic
   configurations or with PB, and it could be worse if the network had
   LSPs with low priority and large bandwidth, which is not the case.

   Policy PB, when compared to PN, resulted in a larger number of
   preempted LSPs and an overall larger number of LSPs blocked (not
   rerouted) due to preemption.  Cascading was slightly higher.  Since
   the selected LSPs have low priority, they are not able to preempt
   much further and are blocked when the links are congested.
   Bandwidth wastage was smaller, since the policy tries to minimize
   wastage, and the preempted priority was about the same on average
   and in the worst case.

   The simulation results show that when preemption is based on
   priority, cascading is not critical, since the preempted LSPs will
   not be able to propagate preemption much further.  When the number
   of LSPs is considered, fewer LSPs are preempted and the chances of
   rerouting increase.  When bandwidth wastage is considered, smaller
   LSPs are preempted on each link and the wasted bandwidth is low.
   The heuristic seems to combine these features, yielding the best
   results, especially with respect to blocking probability.  When the
   heuristic was configured to minimize the blocking probability
   (HBlock), small LSPs with low priority were selected for preemption
   on normally loaded links, and fewer (larger) LSPs with low priority
   were selected on congested links.  Due to their low priority,
   cascading was not an issue.
   Several LSPs were selected for preemption, but the rate of LSPs
   that were not successfully rerouted was the lowest.  Since the
   selected LSPs are small, it is easier to find new routes for them
   in the network.  When selecting LSPs on a congested link, fewer
   (larger) LSPs are selected, improving load balance.  Moreover, the
   bandwidth wastage was the overall lowest.  In summary, the
   heuristic is very flexible and can be configured according to the
   network provider's best interest regarding the considered criteria.

   In several cases, the failure of a link resulted in no preemption
   at all (all LSPs were able to find an alternate path in the
   network), or resulted in the preemption of very few LSPs and their
   subsequent successful rerouting with no cascading effect.

   It is also important to note that, for all the policies in use, the
   number of extra hops when LSPs were rerouted was not critical,
   showing that preempted LSPs can be rerouted on a path of the same
   length or on a path that is slightly longer in number of hops.

7  Security Considerations

   The practice described in this document does not raise specific
   security issues beyond those of existing TE.

8  Acknowledgements

   We would like to acknowledge the input and helpful comments from
   Francois Le Faucheur (Cisco Systems) and George Uhl (Swales
   Aerospace).

9  References

9.1  Normative References

   [RFC2119]    Bradner, S., "Key words for use in RFCs to Indicate
                Requirement Levels", BCP 14, RFC 2119, March 1997.

9.2  Informative References

   [ATM-PREP]   Poretsky, S. and T. Gannon, "An Algorithm for
                Connection Precedence and Preemption in Asynchronous
                Transfer Mode (ATM) Networks", Proceedings of IEEE
                ICC, 1998.

   [DEC-PREP]   Peyravian, M. and A. D. Kshemkalyani, "Decentralized
                Network Connection Preemption Algorithms", Computer
                Networks and ISDN Systems, vol. 30 (11),
                pp. 1029-1043, June 1998.

   [PREEMPTION] de Oliveira, J. C., et al., "A New Preemption Policy
                for DiffServ-Aware Traffic Engineering to Minimize
                Rerouting", Proceedings of IEEE INFOCOM, 2002.

   [RFC2702]    Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and
                J. McManus, "Requirements for Traffic Engineering Over
                MPLS", RFC 2702, September 1999.

   [RFC3209]    Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan,
                V., and G. Swallow, "RSVP-TE: Extensions to RSVP for
                LSP Tunnels", RFC 3209, December 2001.

   [RFC3272]    Awduche, D., Chiu, A., Elwalid, A., Widjaja, I., and
                X. Xiao, "Overview and Principles of Internet Traffic
                Engineering", RFC 3272, May 2002.

   [RFC3564]    Le Faucheur, F. and W. Lai, "Requirements for Support
                of Differentiated Services-aware MPLS Traffic
                Engineering", RFC 3564, July 2003.

   [RFC4124]    Le Faucheur, F., "Protocol Extensions for Support of
                Diffserv-aware MPLS Traffic Engineering", RFC 4124,
                June 2005.

   [RFC4126]    Ash, J., "Max Allocation with Reservation Bandwidth
                Constraints Model for Diffserv-aware MPLS Traffic
                Engineering & Performance Comparisons", RFC 4126,
                June 2005.

   [RFC4127]    Le Faucheur, F., "Russian Dolls Bandwidth Constraints
                Model for Diffserv-aware MPLS Traffic Engineering",
                RFC 4127, June 2005.

   [RFC4128]    Lai, W., "Bandwidth Constraints Models for
                Differentiated Services (Diffserv)-aware MPLS Traffic
                Engineering: Performance Evaluation", RFC 4128,
                June 2005.

Authors' Addresses
   Jaudelice C. de Oliveira (editor)
   Drexel University
   3141 Chestnut Street (ECE Dept.)
   Philadelphia, PA  19104
   USA

   Email: jau@ece.drexel.edu

   JP Vasseur (editor)
   Cisco Systems, Inc.
   1414 Massachusetts Avenue
   Boxborough, MA  01719
   USA

   Email: jpv@cisco.com

   Leonardo Chen
   Verizon Laboratories
   40 Sylvan Rd. LA0MS55
   Waltham, MA  02451
   USA

   Email: leonardo.c.chen@verizon.com

   Caterina Scoglio
   Kansas State University
   2061 Rathbone Hall
   Manhattan, Kansas  66506-5204
   USA

   Email: caterina@eece.ksu.edu

Full Copyright Statement

   Copyright (C) The Internet Society (2006).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND
   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES,
   EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT
   THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR
   ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A
   PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgment

   Funding for the RFC Editor function is provided by the IETF
   Administrative Support Activity (IASA).