2 TEAS Working Group Y. Zhuang, Ed. 3 Internet-Draft Q. Wu 4 Intended status: Standards Track H. Chen 5 Expires: May 17, 2017 Huawei 6 A. Farrel 7 Juniper Networks 8 November 13, 2016 10 Architecture for Scheduled Use of Resources 11 draft-ietf-teas-scheduled-resources-00 13 Abstract 15 Time-Scheduled reservation of traffic engineering (TE) resources can 16 be used to provide resource booking for TE Label Switched Paths so as 17 to better guarantee services for customers and to improve the 18 efficiency of network resource usage into the future. This document 19 provides a framework that describes and discusses the architecture 20 for the scheduled reservation of TE resources. This document does 21 not describe specific protocols or protocol extensions needed to 22 realize this service. 
24 Status of This Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on May 17, 2017. 41 Copyright Notice 43 Copyright (c) 2016 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 59 2. Problem statement . . . . . . . . . . . . . . . . . . . . . . 3 60 2.1. Provisioning TE-LSPs and TE Resources . . . . . . . . . . 3 61 2.2. Selecting the Path of an LSP . . . . . . . . . . . . . . 4 62 2.3. Planning Future LSPs . . . . . . . . . . . . . . . . . . 4 63 2.4. Looking at Future Demands on TE Resources . . . . . . . . 5 64 2.5. Requisite State Information . . . . . . . . . . . . . . . 5 65 3. Architectural Concepts . . . . . . . . . . . . . . . . . . . 6 66 3.1. 
Where is Scheduling State Held? . . . . . . . . . . . . . 6 67 3.2. What State is Held? . . . . . . . . . . . . . . . . . . . 8 68 4. Architecture Overview . . . . . . . . . . . . . . . . . . . . 10 69 4.1. Service Request . . . . . . . . . . . . . . . . . . . . . 10 70 4.2. Initialization and Recovery . . . . . . . . . . . . . . . 11 71 4.3. Synchronization Between PCEs . . . . . . . . . . . . . . 12 72 5. Security Consideration . . . . . . . . . . . . . . . . . . . 12 73 6. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 13 74 7. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 13 75 8. Informative References . . . . . . . . . . . . . . . . . . . 13 76 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 14 78 1. Introduction 80 Traffic Engineering Label Switched Paths (TE-LSPs) are connection 81 oriented tunnels in packet and non-packet networks [RFC3209], 82 [RFC3945]. TE-LSPs may reserve network resources for use by the 83 traffic they carry, thus providing some guarantees of service 84 delivery and allowing a network operator to plan the use of the 85 resources across the whole network. 87 In some technologies (such as wavelength switched optical networks) 88 the resource is synonymous with the label that is switched on the 89 path of the LSP so that it is not possible to establish an LSP that 90 can carry traffic without assigning a concrete resource to the LSP. 91 In other technologies (such as packet switched networks) the 92 resources assigned to an LSP are a measure of the capacity of a link 93 that is dedicated for use by the traffic on the LSP. 
In all cases, 94 network planning consists of selecting paths for LSPs through the 95 network so that there will be no contention for resources; LSP 96 establishment is the act of setting up an LSP and reserving resources 97 within the network; and network optimization or re-optimization is 98 the process of re-positioning LSPs in the network to make the 99 unreserved network resources more useful for potential future LSPs 100 while ensuring that the established LSPs continue to fulfill their 101 objectives. 103 It is often the case that it is known that an LSP will be needed at 104 some time in the future. While a path for that LSP could be computed 105 using knowledge of the currently established LSPs and the currently 106 available resources, this does not give any degree of certainty that 107 the necessary resources will be available when it is time to set up 108 the new LSP. Yet setting up the LSP ahead of the time when it is 109 needed (which would guarantee the availability of the resources) is 110 wasteful since the network resources could be used for some other 111 purpose in the meantime. 113 Similarly, it may be known that an LSP will no longer be needed after 114 some future time and that it will be torn down releasing the network 115 resources that were assigned to it. This information can be helpful 116 in planning how a future LSP is placed in the network. 118 Time-Scheduled (TS) reservation of TE resources can be used to 119 provide resource booking for TE-LSPs so as to better guarantee 120 services for customers and to improve the efficiency of network 121 resource usage into the future. This document provides a framework 122 that describes and discusses the architecture for the scheduled 123 reservation of TE resources. This document does not describe 124 specific protocols or protocol extensions needed to realize this 125 service. 127 2. Problem statement 129 2.1. 
Provisioning TE-LSPs and TE Resources 131 TE-LSPs in existing networks are provisioned using RSVP-TE as a 132 signaling protocol [RFC3209] [RFC3473], by direct control of network 133 elements such as in the Software Defined Networking (SDN) paradigm, 134 and using the PCE Communication Protocol (PCEP) [RFC5440] as a 135 control protocol. 137 TE resources are reserved at the point of use. That is, the 138 resources (wavelengths, timeslots, bandwidth, etc.) are reserved for 139 use on a specific link and are tracked by the Label Switching Routers 140 (LSRs) at the end points of the link. Those LSRs learn which 141 resources to reserve during the LSP setup process. 143 The use of TE resources can be varied by changing the parameters of 144 the LSP that uses them, and the resources can be released by tearing 145 down the LSP. 147 2.2. Selecting the Path of an LSP 149 Although TE-LSPs can determine their paths hop-by-hop using the 150 shortest path toward the destination to route the signaling protocol 151 messages [RFC3209], in practice this option is not applied because it 152 does not look far enough ahead into the network to verify that the 153 desired resources are available. Instead, the full length of the 154 path of an LSP is computed ahead of time either by the head-end LSR 155 of a signaled LSP, or by Path Computation Element (PCE) functionality 156 in a dedicated server or built into network management software 157 [RFC4655]. 159 Such full-path computation is applied in order that an end-to-end 160 view of the available resources in the network can be used to 161 determine the best likelihood of establishing a viable LSP that meets 162 the service requirements. Even in this situation, however, it is 163 possible that two LSPs being set up at the same time will compete for 164 scarce network resources meaning that one or both of them will fail 165 to be established. 
This situation is avoided by using a centralized 166 PCE that is aware of the LSP setup requests that are in progress. 168 2.3. Planning Future LSPs 170 LSPs may be established "on demand" when the requester determines 171 that a new LSP is needed. In this case, the path of the LSP is 172 computed as described in Section 2.2. 174 However, in many situations, the requester knows in advance that an 175 LSP will be needed at a particular time in the future. For example, 176 the requester may be aware of a large traffic flow that will start at 177 a well-known time, perhaps for a database synchronization or for the 178 exchange of content between streaming sites. Furthermore, the 179 requester may also know for how long the LSP is required before it 180 can be torn down. 182 The set of requests for future LSPs could be collected and held in a 183 central database (such as at a Network Management System - NMS): when 184 the time comes for each LSP to be set up the NMS can ask the PCE to 185 compute a path and can then request the LSP to be provisioned. This 186 approach has a number of drawbacks because it is not possible to 187 determine in advance whether it will be possible to deliver the LSP 188 since the resources it needs might be used by other LSPs in the 189 network. Thus, at the time the requester asks for the future LSP, 190 the NMS can only make a best-effort guarantee that the LSP will be 191 set up at the desired time. 193 A better solution, therefore, is for the requests for future LSPs to 194 be serviced as soon as they are made. The paths of the LSPs can be computed ahead of 195 time and converted into reservations of network resources during 196 specific windows in the future. 198 2.4. 
Looking at Future Demands on TE Resources 200 While path computation as described in Section 2.2 takes account of 201 the currently available network resources, and can act to place LSPs 202 in the network so that there is the best possibility of future LSPs 203 being accommodated, it cannot handle all eventualities. It is simple 204 to construct scenarios where LSPs that are placed one at a time lead 205 to future LSPs being blocked, but where foreknowledge of all of the 206 LSPs would have made it possible for them all to be set up. 208 If, therefore, we were able to know in advance what LSPs were going 209 to be requested we could plan for them and ensure resources were 210 available. Furthermore, such an approach enables a commitment to be 211 made to a service user that an LSP will be set up and available at a 212 specific time. 214 This service can be achieved by tracking the current use of network 215 resources and also a future view of the resource usage. We call this 216 time-scheduled TE (TS-TE) resource reservation. 218 2.5. Requisite State Information 220 In order to achieve the TS-TE resource reservation, the use of 221 resources on the path needs to be scheduled. Scheduling state is 222 used to indicate when resources are reserved and when they are 223 available for use. 225 A simple information model for one piece of scheduling state is as 226 follows: 228 { link id; 229 resource id or reserved capacity; 230 reservation start time; 231 reservation end time 232 } 234 The resource that is scheduled can be link capacity, physical 235 resources on a link, CPU utilization, memory, buffers on an 236 interface, etc. The resource might also be the maximal unreserved 237 bandwidth of the link over a time interval. For any one resource 238 there could be multiple pieces of scheduling state, and for any one 239 link, the timing windows might overlap. 241 There are multiple ways to realize this information model and 242 different ways to store the data. 
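As a non-normative illustration, one piece of this scheduling state might be represented as follows; the class and field names are invented for this sketch and are not defined by this document.

```python
from dataclasses import dataclass

@dataclass
class ScheduledReservation:
    """One piece of scheduling state for a TE resource (Section 2.5)."""
    link_id: str     # link id
    resource: str    # resource id or reserved capacity, e.g. "100M"
    start: int       # reservation start time
    end: int         # reservation end time

    def overlaps(self, other: "ScheduledReservation") -> bool:
        """Timing windows for the same link might overlap."""
        return (self.link_id == other.link_id
                and self.start < other.end and other.start < self.end)

# Two pieces of scheduling state for the same link with overlapping windows:
r1 = ScheduledReservation("link-1", "100M", start=100, end=200)
r2 = ScheduledReservation("link-1", "50M", start=150, end=250)
print(r1.overlaps(r2))  # True
```

Whether times are absolute timestamps or a start time plus duration is an encoding choice, as the text below discusses.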
The resource state could be 243 expressed as a start time and an end time as shown above, or could 244 be expressed as a start time and a duration. Multiple periods, 245 possibly of different lengths, may be associated with one reservation 246 request, and a reservation might repeat on a regular cycle. 247 Furthermore, the current state of network reservation could be kept 248 separate from the scheduled usage, or everything could be merged into 249 a single TS database. This document does not spend any more time on 250 discussion of encoding of state information except to discuss the 251 location of storage of the state information and the recovery of the 252 information after failure events. 254 This scheduling state information can be used by applications to book 255 resources for use now or in the future, so as to maximize the chance of services 256 being delivered and to avoid contention between LSPs for 257 resources. 259 Note that it is also necessary to store information about future LSPs. 260 This information is held to allow the LSPs to be instantiated when 261 they are due and using the paths/resources that have been computed 262 for them, but also to provide correlation with the TS-TE resource 263 reservations so that it is clear why resources were reserved, allowing 264 pre-emption and the handling of the release of reserved resources in the event 265 of cancellation of future LSPs. 267 3. Architectural Concepts 269 This section examines several important architectural concepts that 270 lead to design decisions that will influence how networks can achieve 271 TS-TE in a scalable and robust manner. 273 3.1. Where is Scheduling State Held? 275 The scheduling state information described in Section 2.5 has to be 276 held somewhere. There are two places where this makes sense: 278 o In the network nodes where the resources exist; 280 o In a central scheduling controller where decisions about resource 281 allocation are made. 
283 The first of these makes policing of resource allocation easier. It 284 means that many points in the network can request immediate or 285 scheduled LSPs with the associated resource reservation and that all 286 such requests can be correlated at the point where the resources are 287 allocated. However, this approach has some scaling and technical 288 problems: 290 o The most obvious issue is that each network node must retain the 291 full time-based state for all of its resources. In a busy network 292 with a high arrival rate of new LSPs and a low hold time for each 293 LSP, this could be a lot of state. Yet network nodes are normally 294 implemented with minimal spare memory. 296 o In order that path computation can be performed, the computing 297 entity normally known as a Path Computation Element (PCE) 298 [RFC4655] needs access to a database of available links and nodes 299 in the network, and of the TE properties of the links. This 300 database is known as the Traffic Engineering Database (TED) and is 301 usually populated from information advertised in the IGP by each 302 of the network nodes or exported using BGP-LS 303 [I-D.ietf-idr-ls-distribution]. To be able to compute a path for 304 a future LSP the PCE needs to populate the TED with all of the 305 future resource availability: if this information is held on the 306 network nodes it must also be advertised in the IGP. This could 307 be a significant scaling issue for the IGP and the network nodes 308 as all of the advertised information is held at every network node 309 and must be periodically refreshed by the IGP. 311 o When a normal node restarts it can recover resource reservation 312 state from the forwarding hardware, from Non-volatile random- 313 access memory (NVRAM), or from adjacent nodes through the 314 signaling protocol [RFC5063]. If scheduling state is held at the 315 network nodes it must also be recovered after the restart of a 316 network node. 
This cannot be achieved from the forwarding 317 hardware because the reservation will not have been made, could 318 require additional expensive NVRAM, or might require that all 319 adjacent nodes also have the scheduling state in order to 320 reinstall it on the restarting node. This is potentially complex 321 processing with scaling and cost implications. 323 Conversely, if the scheduling state is held centrally it is easily 324 available at the point of use. That is, the PCE can utilize the 325 state to plan future LSPs and can update that stored information with 326 the scheduled reservation of resources for those future LSPs. This 327 approach also has several issues: 329 o If there are multiple controllers then they must synchronize their 330 stored scheduling state as they each plan future LSPs, and must 331 have a mechanism to resolve resource contention. This is 332 relatively simple and is mitigated by the fact that there is ample 333 processing time to replan future LSPs in the case of resource 334 contention. 336 o If other sources of immediate LSPs are allowed (for example, other 337 controllers or autonomous action by head-end LSRs) then the 338 changes in resource availability caused by the setup or teardown 339 of these LSPs must be reflected in the TED (by use of the IGP as 340 currently) and may have an impact on planned future LSPs. This 341 impact can be mitigated by replanning future LSPs or through LSP 342 preemption. 344 o If other sources of planned LSPs are allowed, they can request 345 path computation and resource reservation from the centralized PCE 346 using PCEP [RFC5440]. 348 o If the scheduling state is held centrally at a PCE, the state must 349 be held and restored after a system restart. This is relatively 350 easy to achieve on a central server that can have access to non- 351 volatile storage. The PCE could also synchronize the scheduling 352 state with other PCEs after restart. See Section 4.2 for details. 
354 o Of course, a centralized system must store information about all of 355 the resources in the network. In a busy network with a high 356 arrival rate of new LSPs and a low hold time for each LSP, this 357 could be a lot of state. This is multiplied by the size of the 358 network measured both by the number of links and nodes, and by the 359 number of trackable resources on each link or at each node. The 360 challenge may be mitigated by the centralized server being 361 dedicated hardware, but the problem of collecting the information 362 from the network is only solved if the central server has full 363 control of the booking of resources and the establishment of new 364 LSPs. 366 Thus the architectural conclusion is that scheduling state should be 367 held centrally at the point of use and not in the network devices. 369 3.2. What State is Held? 371 As already described, the PCE needs access to an enhanced, time-based 372 TED. It stores the traffic engineering (TE) information such as 373 bandwidth for every link for a series of time intervals. There are a 374 few ways to store the TE information in the TED. For example, 375 suppose that the amount of the unreserved bandwidth at a priority 376 level for a link is Bj in a time interval from time Tj to Tk (k = 377 j+1), where j = 0, 1, 2, .... 379 Bandwidth 380 ^ 381 | B3 382 | B1 ___________ 383 | __________ 384 |B0 B4 385 |__________ B2 _________ 386 | ________________ 387 | 388 -+-------------------------------------------------------> Time 389 |T0 T1 T2 T3 T4 391 Figure 1: A Plot of Bandwidth Usage against Time 393 The unreserved bandwidth for the link can be represented and stored 394 in the TED as [T0, B0], [T1, B1], [T2, B2], [T3, B3], ... as shown in 395 Figure 1. 397 But it must be noted that service requests for future LSPs are known 398 in terms of the LSPs whose paths are computed and for which resources 399 are scheduled. 
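As a non-normative sketch, the piecewise-constant bandwidth profile of Figure 1 might be stored and manipulated as follows; the class and method names are invented for illustration, and the release operation assumes that the reservation's interval boundaries coincide with breakpoints already in the list, as in the worked example in the text.

```python
class ScheduledLinkBandwidth:
    """Unreserved bandwidth for one link held as [T0, B0], [T1, B1], ...
    where Bj applies from time Tj until the next breakpoint (Figure 1)."""

    def __init__(self, breakpoints):
        self.breakpoints = sorted(breakpoints)  # list of (Tj, Bj)

    def unreserved_at(self, t):
        """Unreserved bandwidth in effect at time t."""
        bw = None
        for tj, bj in self.breakpoints:
            if tj > t:
                break
            bw = bj
        return bw

    def release(self, start, end, amount):
        """Add 'amount' back to every interval starting in [start, end),
        e.g. when a future LSP reserving that bandwidth is cancelled."""
        self.breakpoints = [(tj, bj + amount if start <= tj < end else bj)
                            for tj, bj in self.breakpoints]

link = ScheduledLinkBandwidth([(0, 80), (10, 100), (20, 60), (30, 120)])
# An LSP reserving B=40 from T2=20 to T3=30 is cancelled:
link.release(start=20, end=30, amount=40)
print(link.unreserved_at(25))  # 100, i.e. B2 + B
```

A real TED would also have to split intervals when a reservation's window does not line up with existing breakpoints; that bookkeeping is omitted here.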
For example, if the requester of a future LSP decides 400 to cancel the request or to modify the request, the PCE must be able 401 to map this to the resources that were reserved. When the LSP or the 402 request for the LSP with a number of time intervals is cancelled, the 403 PCE must release the resources that were reserved on each of the 404 links along the path of the LSP in every time interval from the TED. 405 If the bandwidth reserved on a link for the LSP is B from time T2 to 406 T3 and the unreserved bandwidth on the link is B2 from T2 to T3, B is 407 added back to the link for the time interval from T2 to T3 and the 408 unreserved bandwidth on the link from T2 to T3 will be B2 + B. 410 This suggests that the PCE needs an LSP Database (LSP-DB) 411 [I-D.ietf-pce-stateful-pce] that contains information not only about 412 LSPs that are active in the network, but also those that are planned. 413 The information for an LSP stored in the LSP-DB includes for each 414 time interval that applies to the LSP: the time interval, the paths 415 computed for the LSP satisfying the constraints in the time interval, 416 and the resources such as bandwidth reserved for the LSP in the time 417 interval. See also Section 2.3. 419 It is an implementation choice how the TED and LSP-DB are stored both 420 for dynamic use and for recovery after failure or restart, but it may 421 be noted that all of the information in the scheduled TED can be 422 recovered from the active network state and from the scheduled LSP- 423 DB. 425 4. Architecture Overview 427 The architectural considerations and conclusions described in the 428 previous section lead to the architecture described in this section. 
430 ------------------- 431 | Service Requester | 432 ------------------- 433 ^ 434 a| 435 v 436 ------- b -------- 437 | |<--->| LSP-DB | 438 | | -------- 439 | PCE | 440 | | c ----- 441 | |<---->| TED | 442 ------- ----- 443 ^ ^ 444 | | 445 d| |e 446 | | 447 ------+-----+-------------------- 448 | | Network 449 | -------- 450 | | Router | 451 v -------- 452 ----- ----- 453 | LSR |<------>| LSR | 454 ----- f ----- 456 Figure 2: Reference Architecture for Scheduled Use of Resources 458 4.1. Service Request 460 As shown in Figure 2, some component in the network requests a 461 service. This may be an application, an NMS, an LSR, or any 462 component that qualifies as a Path Computation Client (PCC). We show 463 this on the figure as the "Service Requester" and it sends a request 464 to the PCE for an LSP to be set up at some time (either now or in the 465 future). The request, indicated in Figure 2 by the arrow (a), 466 includes all of the parameters of the LSP that the requester wishes 467 to supply such as bandwidth, start time, and end time. Note that the 468 requester in this case may be the same LSR shown in the figure or may 469 be a distinct system. 471 The PCE enters the LSP request in its LSP-DB (b), and uses 472 information from its TED (c) to compute a path that satisfies 473 constraints such as the bandwidth constraint for the LSP in the time 474 interval from a start time to an end time. It updates the future 475 resource availability in the TED so that further path computations 476 can take account of the scheduled resource usage. It stores the path 477 for the LSP into the LSP-DB (b). 479 When the time comes (such as at the start time) for the LSP to be set up, the 480 PCE sends a PCEP Initiate request to the head end LSR (d) providing 481 the path to be signaled as well as other parameters such as the 482 bandwidth of the LSP. 484 As the LSP is signaled between LSRs (f) the use of resources in the 485 network is updated and distributed using the IGP. 
This information 486 is shared with the PCE either through the IGP or using BGP-LS (e), 487 and the PCE updates the information stored in its TED (c). 489 After the LSP is set up, the head end LSR sends a PCEP LSP State 490 Report (PCRpt message) to the PCE (d). The report contains the 491 resources such as bandwidth usage for the LSP. The PCE updates the 492 status of the LSP in the LSP-DB according to the report. 494 When an LSP is no longer required (either because the Service 495 Requester has cancelled the request, or because the LSP's scheduled 496 lifetime has expired) the PCE can remove it. If the LSP is currently 497 active, the PCE instructs the head-end LSR to tear it down (d), and 498 the network resource usage will be updated by the IGP and advertised 499 back to the PCE through the IGP or BGP-LS (e). Once the LSP is no 500 longer active, the PCE can remove it from the LSP-DB (b). 502 4.2. Initialization and Recovery 504 When a PCE in the architecture shown in Figure 2 is initialized, it 505 must learn state from the network, from its stored databases, and 506 potentially from other PCEs in the network. 508 The first step is to get an accurate view of the topology and 509 resource availability in the network. This would normally involve 510 reading the state directly from the network via the IGP or BGP-LS (e), 511 but might include receiving a copy of the TED from another PCE. Note 512 that a TED stored from a previous instantiation of the PCE is 513 unlikely to be valid. 515 Next, the PCE must construct a time-based TED to show scheduled 516 resource usage. How it does this is implementation specific and this 517 document does not dictate any particular mechanism: it may recover a 518 time-based TED previously saved to non-volatile storage, or it may 519 reconstruct the time-based TED from information retrieved from the 520 LSP-DB previously saved to non-volatile storage. 
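The second option, reconstructing the time-based TED by replaying a saved LSP-DB against the TED learned from the network, might be sketched as follows; the data layout is an assumption made for this illustration and is not mandated by this document.

```python
def rebuild_scheduled_ted(base_ted, lsp_db):
    """Replay scheduled LSPs from a saved LSP-DB against a TED freshly
    learned from the network via the IGP or BGP-LS (Section 4.2).

    base_ted: {link_id: {(start, end): unreserved_bandwidth}}
    lsp_db:   iterable of planned LSPs, each a dict with a 'path'
              (list of link ids) and 'reservations' (list of
              ((start, end), bandwidth) pairs)
    """
    # Copy so the freshly learned TED is not mutated
    ted = {link: dict(intervals) for link, intervals in base_ted.items()}
    for lsp in lsp_db:
        for interval, bw in lsp["reservations"]:
            for link in lsp["path"]:
                # Deduct the scheduled reservation from the matching window
                ted[link][interval] = ted[link].get(interval, 0) - bw
    return ted

base = {"link-1": {(0, 10): 100, (10, 20): 100}}
planned = [{"path": ["link-1"], "reservations": [((10, 20), 40)]}]
ted = rebuild_scheduled_ted(base, planned)
print(ted["link-1"][(10, 20)])  # 60
```

This works because, as noted above, everything in the scheduled TED is derivable from the active network state plus the scheduled LSP-DB.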
If there is more 521 than one PCE active in the network, the recovering PCE will need to 522 synchronize the LSP-DB and time-based TED with other PCEs (see 523 Section 4.3). 525 4.3. Synchronization Between PCEs 527 If there is more than one PCE active in the network which supports 528 scheduling, it is important to achieve some consistency of the 529 scheduled TED and scheduled LSP-DB across the PCEs. 531 [RFC7399] answers various questions around synchronization between 532 the PCEs. It should be noted that the time-based "scheduled" 533 information adds another dimension to this. A 534 deployment may use a primary PCE with the other PCEs as backups, 535 where a backup PCE takes over only in the event of a failure of 536 the primary PCE; or the PCEs may share the load at all times. The 537 choice of the synchronization technique is largely dependent on the 538 deployment of PCEs in the network. 540 One option for ensuring that multiple PCEs use the same scheduled 541 information is simply to have the PCEs driven from the same shared 542 database, but this is likely to be inefficient and to make inter-operation 543 between multiple implementations harder. 545 Alternatively, each PCE might be responsible for its own scheduled database and 546 utilize a distributed database synchronization mechanism to achieve a 547 consistent database. Depending on the implementation, this could be 548 efficient, but inter-operation between heterogeneous 549 implementations is still hard. 551 Another approach would be to utilize PCEP messages to synchronize the 552 scheduled state between PCEs. This approach would work well if the 553 number of PCEs which support scheduling is small, but as the number 554 increases, considerable message exchange needs to happen to keep the 555 scheduled databases in sync. Future solutions could also utilize 556 synchronization optimization techniques for efficiency. 
Another 557 variation would be to request information from other PCEs for a 558 particular time slice but this might have impact on the optimization 559 algorithm. 561 5. Security Consideration 563 TBD 565 6. Acknowledgements 567 This work has benefited from the discussions of resource scheduling 568 over the years. In particular, the DRAGON project [DRAGON] and 569 [I-D.yong-ccamp-ason-gmpls-autobw-service] both provide 570 approaches to auto-bandwidth services in GMPLS networks. 572 Mehmet Toy, Lei Liu and Khuzema Pithewan contributed the earlier 573 version of [I-D.chen-teas-frmwk-tts]. We would like to thank the authors 574 of that draft on Temporal Tunnel Services for helping to inspire 575 discussion in the TEAS WG and to make this work solid. 577 Thanks to Michael Scharf and Daniele Ceccarelli for useful comments 578 on this work. 580 7. Contributors 582 The following people contributed to discussions that led to the 583 development of this document: 585 Dhruv Dhody 586 Email: dhruv.dhody@huawei.com 588 8. Informative References 590 [DRAGON] National Science Foundation, "http://www.maxgigapop.net/ 591 wp-content/uploads/The-DRAGON-Project.pdf". 593 [I-D.chen-teas-frmwk-tts] 594 Chen, H., Toy, M., Liu, L., and K. Pithewan, "Framework 595 for Temporal Tunnel Services", draft-chen-teas-frmwk- 596 tts-01 (work in progress), March 2016. 598 [I-D.ietf-idr-ls-distribution] 599 Gredler, H., Medved, J., Previdi, S., Farrel, A., and S. 600 Ray, "North-Bound Distribution of Link-State and TE 601 Information using BGP", draft-ietf-idr-ls-distribution-13 602 (work in progress), October 2015. 604 [I-D.ietf-pce-stateful-pce] 605 Crabbe, E., Minei, I., Medved, J., and R. Varga, "PCEP 606 Extensions for Stateful PCE", draft-ietf-pce-stateful- 607 pce-16 (work in progress), September 2016. 609 [I-D.yong-ccamp-ason-gmpls-autobw-service] 610 Yong, L. and Y. 
Lee, "ASON/GMPLS Extension for Reservation 611 and Time Based Automatic Bandwidth Service", draft-yong- 612 ccamp-ason-gmpls-autobw-service-00 (work in progress), 613 October 2006. 615 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 616 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 617 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 618 <http://www.rfc-editor.org/info/rfc3209>. 620 [RFC3473] Berger, L., Ed., "Generalized Multi-Protocol Label 621 Switching (GMPLS) Signaling Resource ReserVation Protocol- 622 Traffic Engineering (RSVP-TE) Extensions", RFC 3473, 623 DOI 10.17487/RFC3473, January 2003, 624 <http://www.rfc-editor.org/info/rfc3473>. 626 [RFC3945] Mannie, E., Ed., "Generalized Multi-Protocol Label 627 Switching (GMPLS) Architecture", RFC 3945, 628 DOI 10.17487/RFC3945, October 2004, 629 <http://www.rfc-editor.org/info/rfc3945>. 631 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 632 Element (PCE)-Based Architecture", RFC 4655, 633 DOI 10.17487/RFC4655, August 2006, 634 <http://www.rfc-editor.org/info/rfc4655>. 636 [RFC5063] Satyanarayana, A., Ed. and R. Rahman, Ed., "Extensions to 637 GMPLS Resource Reservation Protocol (RSVP) Graceful 638 Restart", RFC 5063, DOI 10.17487/RFC5063, October 2007, 639 <http://www.rfc-editor.org/info/rfc5063>. 641 [RFC5440] Vasseur, JP., Ed. and JL. Le Roux, Ed., "Path Computation 642 Element (PCE) Communication Protocol (PCEP)", RFC 5440, 643 DOI 10.17487/RFC5440, March 2009, 644 <http://www.rfc-editor.org/info/rfc5440>. 646 [RFC7399] Farrel, A. and D. King, "Unanswered Questions in the Path 647 Computation Element Architecture", RFC 7399, 648 DOI 10.17487/RFC7399, October 2014, 649 <http://www.rfc-editor.org/info/rfc7399>. 651 Authors' Addresses 652 Yan Zhuang (editor) 653 Huawei 654 101 Software Avenue, Yuhua District 655 Nanjing, Jiangsu 210012 656 China 658 Email: zhuangyan.zhuang@huawei.com 660 Qin Wu 661 Huawei 662 101 Software Avenue, Yuhua District 663 Nanjing, Jiangsu 210012 664 China 666 Email: bill.wu@huawei.com 668 Huaimo Chen 669 Huawei 670 Boston, MA 671 US 673 Email: huaimo.chen@huawei.com 675 Adrian Farrel 676 Juniper Networks 678 Email: adrian@olddog.co.uk