idnits 2.17.1 draft-dhody-pce-cso-enabled-path-computation-07.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The abstract seems to contain references ([CSO-DATACNTR], [RFC4655]), which it shouldn't. Please replace those with straight textual mentions of the documents in question. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (January 21, 2015) is 3377 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Outdated reference: A later version (-13) exists of draft-ietf-pce-pcep-service-aware-06 == Outdated reference: A later version (-21) exists of draft-ietf-pce-stateful-pce-10 == Outdated reference: A later version (-07) exists of draft-ceccarelli-actn-framework-06 == Outdated reference: A later version (-16) exists of draft-farrkingel-pce-abno-architecture-15 Summary: 1 error (**), 0 flaws (~~), 5 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 PCE Working Group D. Dhody 3 Internet-Draft Y. Lee 4 Intended status: Informational Huawei Technologies 5 Expires: July 25, 2015 LM. Contreras 6 O. Gonzalez de Dios 7 Telefonica I+D 8 N. 
Ciulli 9 Nextworks 10 January 21, 2015 12 Cross Stratum Optimization enabled Path Computation 13 draft-dhody-pce-cso-enabled-path-computation-07 15 Abstract 17 Applications like cloud computing, video gaming, HD Video streaming, 18 Live Concerts, Remote Medical Surgery, etc., are offered by Data 19 Centers. These data centers are geographically distributed and 20 connected via a network. Many decisions are made in the Application 21 space without any awareness of the underlying network. Cross stratum 22 application/network optimization focuses on the challenges and 23 opportunities presented by data center based applications and 24 carrier networks together, as described in the cross stratum optimization data center use cases document. 26 Constraint-based path computation is a fundamental building block for 27 traffic engineering systems such as Multiprotocol Label Switching 28 (MPLS) and Generalized Multiprotocol Label Switching (GMPLS) 29 networks. RFC 4655 explains the architecture for a Path Computation 30 Element (PCE)-based model to address this problem space. 32 This document explains the architecture for CSO enabled Path 33 Computation. 35 Status of This Memo 37 This Internet-Draft is submitted in full conformance with the 38 provisions of BCP 78 and BCP 79. 40 Internet-Drafts are working documents of the Internet Engineering 41 Task Force (IETF). Note that other groups may also distribute 42 working documents as Internet-Drafts. The list of current Internet- 43 Drafts is at http://datatracker.ietf.org/drafts/current/. 45 Internet-Drafts are draft documents valid for a maximum of six months 46 and may be updated, replaced, or obsoleted by other documents at any 47 time. It is inappropriate to use Internet-Drafts as reference 48 material or to cite them other than as "work in progress." 49 This Internet-Draft will expire on July 25, 2015. 51 Copyright Notice 53 Copyright (c) 2015 IETF Trust and the persons identified as the 54 document authors. All rights reserved.
56 This document is subject to BCP 78 and the IETF Trust's Legal 57 Provisions Relating to IETF Documents 58 (http://trustee.ietf.org/license-info) in effect on the date of 59 publication of this document. Please review these documents 60 carefully, as they describe your rights and restrictions with respect 61 to this document. Code Components extracted from this document must 62 include Simplified BSD License text as described in Section 4.e of 63 the Trust Legal Provisions and are provided without warranty as 64 described in the Simplified BSD License. 66 Table of Contents 68 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 69 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 5 70 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 5 71 3. CSO enabled PCE Architecture . . . . . . . . . . . . . . . . 6 72 4. Path Computation and Setup Procedure . . . . . . . . . . . . 10 73 4.1. Path Setup Using NMS . . . . . . . . . . . . . . . . . . 11 74 4.2. Path Setup Using a Network Control Plane . . . . . . . . 12 75 4.3. Path Setup using PCE . . . . . . . . . . . . . . . . . . 13 76 4.4. Path Setup Using a Software Defined Network controller . 14 77 5. Other Consideration . . . . . . . . . . . . . . . . . . . . . 15 78 5.1. Inter-domain . . . . . . . . . . . . . . . . . . . . . . 15 79 5.1.1. One Application Domain with Multiple Network Domains 15 80 5.1.2. Multiple Application Domains with Multiple Network 81 Domains . . . . . . . . . . . . . . . . . . . . . . . 16 82 5.1.2.1. ACG talks to multiple NCGs . . . . . . . . . . . 16 83 5.1.2.2. ACG talks to the primary NCG, which talks to the 84 other NCG of different domains . . . . . . . . . 17 85 5.1.3. Federation of SDN domains . . . . . . . . . . . . . . 18 86 5.1.4. Nesting of multi-layer SDN domains . . . . . . . . . 19 87 5.2. Bottleneck . . . . . . . . . . . . . . . . . . . . . . . 20 88 5.3. Relationship to ABNO . . . . . . . . . . . . . . . . . . 21 89 5.4. 
Relationship to ACTN . . . . . . . . . . . . . . . . . . 21 90 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 21 91 7. Security Considerations . . . . . . . . . . . . . . . . . . . 21 92 8. Manageability Considerations . . . . . . . . . . . . . . . . 21 93 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 21 94 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 21 95 10.1. Normative References . . . . . . . . . . . . . . . . . . 22 96 10.2. Informative References . . . . . . . . . . . . . . . . . 22 98 1. Introduction 100 Many application services offered by Data Center to end-users make 101 significant use of the underlying networks resources in the form of 102 bandwidth consumption used to carry the actual traffic between data 103 centers and/or among data center and end-users. There is a need for 104 cross optimization for both network and application resources. 105 [CSO-PROBLEM] describes the problem space for cross stratum 106 optimization. 108 [NS-QUERY] describes the general problem of network stratum (NS) 109 query in Data Center environments. Network Stratum (NS) query is an 110 ability to query the network from application controller in Data 111 Centers so that decision would be jointly performed based on both the 112 application needs and the network status. Figure 1 shows typical 113 data center architecture. 115 --------------- 116 ---------- | DC 1 | 117 | End-user |. . . . .>| o o o | 118 | | | \|/ | 119 ---------- | O | 120 | ----- --|------ 121 | | 122 | | 123 | -----------------|----------- 124 | / | \ 125 | / ..........O PE1 \ -------------- 126 | | . | | o o o DC 2 | 127 | | PE4 . PE2 | | \|/ | 128 ----|---O.........................O---|---|---O | 129 | . | | | 130 | . 
PE3 | -------------- 131 \ ..........O Carrier / 132 \ | Network / 133 ---------------|------------- 134 | 135 --------|------ 136 | O | 137 | /|\ | 138 | o o o | 139 | DC 3 | 140 --------------- 142 Figure 1: Data Center Architecture 144 Figure 2 shows the context of NS Query within the overarching data 145 center architecture shown in Figure 1. 147 -------------------------------------------- 148 | Application Overlay | 149 | (Data Centers) | 150 | | 151 ---------- | -------------- -------------- | 152 | End-User | | | Application |. . . .| Application | | 153 | |. . . >| | Control | | Processes | | 154 ---------- | | Gateway (ACG)| -------------- | 155 | | | -------------- | 156 | ------------- . . . . | Application | | 157 | /\ | Related Data | | 158 | || -------------- | 159 ----------||-------------------------------- 160 || 161 || Network Stratum Query (First 162 || Stage) 163 || 164 ----------||-------------------------------- 165 | \/ Network Underlay | 166 | | 167 | -------------- ---------------- | 168 | | Network |. . . | Network | | 169 | | Control | | Processes | | 170 | | Gateway (NCG)| ---------------- 171 | | | ---------------- | 172 | ------------- | Network | | 173 | |------------->| Related Data | | 174 | (Second Stage) ---------------- | 175 ------------------------------------------- 177 Figure 2: NS Query Architecture 179 NS Query is a two-stage query that consists of two stages: 181 o A vertical query capability where an external point (i.e., the 182 Application Control Gateway (ACG) in Data Center) will query the 183 network (i.e., the Network Control Gateway (NCG)). The query can 184 be initiated either by ACG to NCG or NCG to ACG depending on the 185 mode of operation. ACG initiated query is an application-centric 186 mode while NCG initiated query is a network-centric mode. 
It is 187 anticipated that either ACG or NCG can be a final decision-making 188 point that chooses the end-to-end resources (i.e., both 189 application IT resources and the network connectivity) depending 190 on the mode of operation. 192 o A horizontal query capability where the NCG gathers the collective 193 information of a variety of horizontal schemes implemented in the 194 network stratum. 196 As an example of the vertical query (1st stage), [ALTO-APPNET] describes 197 an Application Layer Traffic Optimization (ALTO) information model and 198 protocol extensions to support application and network resource 199 information exchange for high bandwidth applications in partially 200 controlled and controlled environments as part of the infrastructure 201 to application information exposure (i2aex) initiative. 203 For the horizontal query (2nd stage), PCE can be an ideal choice. 204 [CSO-PCE-REQT] describes the general requirements a PCE should support 205 in order to accommodate CSO capability. This document is intended to 206 fulfill the general PCE requirements discussed in the aforementioned 207 reference. 209 This document describes how the PCE architecture described in 210 [RFC4655] can help in the second stage of the NS query. 212 1.1. Requirements Language 214 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 215 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 216 document are to be interpreted as described in [RFC2119]. 218 2. Terminology 220 The following terminology is used in this document. 222 ACG: Application Control Gateway. 224 Application Stratum: The application stratum is the functional block 225 which manages and controls application resources and provides 226 application resources to a variety of clients/end-users. 227 Application resources are non-network resources critical to 228 achieving the application service functionality.
Examples 229 include: application specific servers, storage, content, large 230 data sets, and computing power. Data Centers are regarded as 231 tangible realization of the application stratum architecture. 233 ALTO: Application Layer Traffic Optimization. 235 CSO: Cross Stratum Optimization. 237 GMPLS: Generalized Multiprotocol Label Switching. 239 i2aex: Infrastructure to application information exposure. 241 LSR: Label Switch Router. 243 MPLS: Multiprotocol Label Switching. 245 NCG: Network Control Gateway. 247 Network Stratum: The network stratum is the functional block which 248 manages and controls network resources and provides transport of 249 data between clients/end-users to and among application resources. 250 Network Resources are resources of any layer 3 or below (L1/L2/L3) 251 such as bandwidth, links, paths, path processing (creation, 252 deletion, and management), network databases, path computation, 253 admission control, and resource reservation capability. 255 NMS: Network Management System 257 PCC: Path Computation Client: any client application requesting a 258 path computation to be performed by a Path Computation Element. 260 PCE: Path Computation Element. An entity (component, application, 261 or network node) that is capable of computing a network path or 262 route based on a network graph and applying computational 263 constraints. 265 PCEP: Path Computation Element Communication Protocol. 267 TE: Traffic Engineering. 269 TED: Traffic Engineering Database. 271 UNI: User Network Interface. 273 3. CSO enabled PCE Architecture 275 In the network stratum, the Network Control Gateway (NCG) serves as 276 the proxy gateway to the network. The NCG receives the query request 277 from the ACG, probes the network to test the capabilities for data 278 flows to/from particular points in the network, and gathers the 279 collective information of a variety of horizontal schemes implemented 280 in the network stratum. 
This is a horizontal query (Stage 2 in 281 Figure 2). 283 In this section we will describe how PCE fits in this horizontal 284 scheme. 286 A Path Computation Element (PCE) is an entity that is capable of 287 computing a network path or route based on a network graph, and of 288 applying computational constraints during the computation. 290 (1) NCG and PCE are co-located. 292 In this composite solution, the same node implements functionality of 293 both NCG and PCE. When a network stratum query is received from the 294 ACG (stage 1), this query is broken into one or more Path computation 295 requests and handled by the PCE functionality co-located with the 296 NCG. There is no need for PCEP protocol here. In this case, an 297 external PCE interface (e.g., CLI, SNMP, proprietary) needs to be 298 supported. This is out of the scope of this document. 300 +--------------------------------------------------+ 301 | -- -- -- -- -- -- -- -- -- | 302 | | | | | | | | | | | | | | | | | | | | 303 | -- -- -- -- -- -- -- -- -- | 304 | | 305 | Application Stratum | 306 | | 307 | +---------------------------------------+ | 308 | | | | 309 +----+ ACG +-----+ 310 | | 311 +------*---*----------------------------+ 312 | | 313 | | 314 | | 315 +------*---*----------------------------+ 316 | +----------+ +----------+ | 317 +----+ + *----------* * +-----+ 318 | | | NCG | | PCE | | | 319 | | | *----------* * | | 320 | | +----------+ +----------+ | | 321 | | | | 322 | +---------------------------------------+ | 323 | | 324 | Network Stratum | 325 | -- -- -- -- -- -- -- -- -- | 326 | | | | | | | | | | | | | | | | | | | | 327 | -- -- -- -- -- -- -- -- -- | 328 +--------------------------------------------------+ 330 Figure 3: NCG and PCE Collocated 332 (2) NCG and external PCE 334 In this solution, an external node implements PCE functionality. 
335 Network stratum query received from the ACG (stage 1) is converted 336 into Path computation requests at the NCG and relayed to the external 337 PCE using the PCEP [RFC5440]. In this case the NCG includes Path 338 Computation Client (PCC) functionalities. 340 +--------------------------------------------------+ 341 | -- -- -- -- -- -- -- -- -- | 342 | | | | | | | | | | | | | | | | | | | | 343 | -- -- -- -- -- -- -- -- -- | 344 | | 345 | Application Stratum | 346 | | 347 | +---------------------------------------+ | 348 | | | | 349 +----+ ACG +-----+ 350 | | 351 +------*---*----------------------------+ 352 | | 353 | | 354 | | 355 +------*---*-------+ 356 | +----------+ | +----------+ 357 +----+ | | *------* *--------+ 358 | | | NCG | | | PCE | | 359 | | | | *------* * | 360 | | +----------+ | +----------+ | 361 | | | | 362 | +------------------+ | 363 | | 364 | Network Stratum | 365 | -- -- -- -- -- -- -- -- -- | 366 | | | | | | | | | | | | | | | | | | | | 367 | -- -- -- -- -- -- -- -- -- | 368 +--------------------------------------------------+ 370 Figure 4: NCG and external PCE 372 PCE has the capability to compute constrained paths between a source 373 and one or more destination(s), optionally providing the value of the 374 metrics associated to the computed path(s). Thus it can fit very 375 well in the horizontal query stage of CSO. A PCE MAY have further 376 capability to do multi-layer and/or inter-domain path computation 377 which can be further utilized. NCG which understands the vertical 378 query and the presence of applications constraints can break the 379 application request into suitable path computation request which PCE 380 understands. 
In this scenario, the PCE MAY have no knowledge of 381 applications and provide only network-related metrics to the NCG: the 382 NCG (or the ACG for an application-centric model) is in charge of 383 correlating the network quotations with the application layer 384 information to achieve the global CSO objective. 386 With this architecture, the NCG can request from the PCE different 387 computation modes that are not currently supported. For 388 instance, the NCG may send the PCE a multi-destination and multi-source 389 path computation request. This scenario arises when there are many 390 possible Data Center choices for a given application request and 391 there could be multiple sources for this request. Multi-destination 392 with a single source (a.k.a. anycast) is the default case of 393 multi-destination and multi-source path computation. 395 In addition, with this architecture, the NCG may have different sets of 396 objectives and constraints from those of typical path computation requests. 397 For instance, multi-criteria objective functions that combine the 398 bandwidth requirement and latency may be very useful for some 399 applications. [PCE-SERVICE-AWARE] describes the extensions to PCEP to 400 carry latency, latency variation, and loss as constraints for 401 end-to-end path computation. 403 In a Stateful PCE (see [PCE-STATEFUL]), there is a strict 404 synchronization of the network states (in terms of topology and 405 resource information), and of the set of computed paths and reserved 406 resources in use in the network. In other words, the PCE utilizes 407 information from the TED as well as information about existing paths 408 (for example, TE LSPs) in the network when processing new requests. 409 A Stateful PCE will be a very important tool to achieve the goals of 410 cross stratum optimization, as it maintains the status of the final path 411 selected after cross (application and network) optimization.
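As an illustration of the multi-source, multi-destination selection described above, the following sketch (Python; the function and its inputs are purely illustrative and not defined by any protocol) shows how an NCG-like decision point could correlate per-pair network metrics returned by a PCE with application-stratum information, such as Data Center IT load, to pick the end points:

```python
from itertools import product

def select_endpoints(sources, data_centers, path_metric, it_load):
    """Hypothetical cross-stratum selection: combine a network metric
    (e.g., path latency quoted by the PCE for each source/DC pair) with
    an application-stratum metric (e.g., current IT load of each DC)
    using a simple additive objective."""
    best = None
    for src, dc in product(sources, data_centers):
        cost = path_metric[(src, dc)] + it_load[dc]
        if best is None or cost < best[0]:
            best = (cost, src, dc)
    return best[1], best[2]  # chosen (source, data center) pair
```

A real NCG would of course use the objective functions and constraints negotiated with the ACG rather than this simple sum; the sketch only shows why the PCE must be able to answer a multi-source, multi-destination query in one pass.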
413 As a Stateful PCE would keep both the LSP ID and the application ID 414 associated with the LSP, it will make path computation more efficient 415 in terms of resource usage and computation time. Moreover, a Stateful 416 PCE would have an accurate snapshot of network resource information 417 and as such can better adapt to changes. This may be 418 important for some applications that require a stringent performance 419 objective. 421 In conclusion - 423 o NCG can use the PCE to do path computation based on constraints 424 from multiple sources and destinations. 426 o Stateful PCE can help in maintaining the status of the final cross 427 optimized path. It can also help the NCG in maintaining the 428 relationship between the application request and the setup path. In case of 429 any change to the path, the Stateful PCE and NCG can cooperate and 430 take suitable action. 432 4. Path Computation and Setup Procedure 434 The path computation flow is shown in Figure 5. 436 1. The end-user of an application would contact the application gateway (ACG) 437 with its requirements. 439 2. The ACG would further query the NCG to obtain the underlying network 440 status and quotations (offers) for the network connectivity 441 services. 443 3. The NCG would break the vertical request into suitable horizontal 444 path computation request(s). 446 4. The PCE would provide the result to the NCG. 448 5. The NCG would abstract the computation result and provide it to the ACG. 450 6. The NCG and ACG would cooperate to finalize the path that needs to be 451 set up. 453 7. Note that the final decision can be made either in the ACG or 454 the NCG depending on the mode of operation. With the application-centric 455 mode, minimal data center/IT resource information would flow from 456 the ACG to the NCG while the ACG collects network-abstracted information from 457 the NCG to choose the optimal application-network resources.
With 458 the network-centric mode, the ACG would supply maximal data center/IT 459 resource information to the NCG so that the NCG, in conjunction with the PCE, 460 would determine the optimal mixed set of application and network 461 resources. In the latter case, the PCE could support 462 application/IT-based constrained computation capability beyond 463 network path computation. This requires further PCE capabilities 464 to receive and process data center/IT resource information, 465 possibly in conjunction with network information. 467 +----------+ 1 +---------------------------------------+ 468 | |-------->| | 469 | User | | ACG | 470 | |<--------| | 471 +----------+ 6 +---------------------------------------+ 472 ^ | 473 | 2| 474 | | +----------+ 3 +----------+ 475 | +->| |--------->| | 476 | | NCG | | PCE | 477 +-----| |<---------| | 478 5 +----------+ 4 +----------+ 480 Figure 5: Path Computation Flow 482 In this section we analyze the mechanisms to finally set up the 483 cross stratum optimized path. 485 4.1. Path Setup Using NMS 487 After the ACG and NCG have decided the path that needs to be set up, the NCG can 488 send a request to the NMS asking it to relay the message to the head end LSR 489 (also a PCC) to set up the pre-computed path. Once the path signaling 490 is completed and the LSP is set up, the PCC should relay the status of the 491 LSP to the Stateful PCE. 493 In this mechanism we can reuse the existing NMS to establish the 494 path. Any updates to or deletion of such a path would be made via the 495 NMS. 497 The head end LSR (PCC) 'H' is always the owner of the path. 499 See Figure 6 for this scenario.
501 +----------+ +---------------------------------------+ 502 | |-------->| | 503 | User | | ACG | 504 | |<--------| | 505 +----------+ +---------------------------------------+ 506 ^ | 507 +-----------------+--+------------------------------------+ 508 |+----------+ | | +----------+ +----------+| 509 || | | +->| |--------->| || 510 || NMS | +-----| NCG | | PCE || 511 || |<----------| |<---------| || 512 |+----------+ +----------+ +----------+| 513 | | ^ | 514 | | +------------------------------------+ | 515 | | | Network Stratum | 516 | | -- -- -- -- -- -- -- -- -- | 517 | +----->|H | | | | | | | | | | | | | | | | | | 518 | -- -- -- -- -- -- -- -- -- | 519 +---------------------------------------------------------+ 521 Figure 6: Path Setup Using NMS 523 4.2. Path Setup Using a Network Control Plane 525 A network control plane (e.g. GMPLS) MAY be used to automatically 526 establish the cross optimized path between the selected end points. 527 This control plane MAY be triggered via - 529 o NCG to Control Plane: GMPLS UNI or other protocols 531 o Control Plane to Head end Router: GMPLS Control Channel Interface 532 (CCI). Suitable protocol extensions are needed to achieve this. 534 See Figure 7 for this scenario. 
536 +----------+ +---------------------------------------+ 537 | |-------->| | 538 | User | | ACG | 539 | |<--------| | 540 +----------+ +---------------------------------------+ 541 ^ | 542 +-----------------+--+------------------------------------+ 543 |+----------+ | | +----------+ +----------+| 544 || GMPLS | | +->| |--------->| || 545 || Control | +-----| NCG | | PCE || 546 || plane |<----------| |<---------| || 547 |+----------+ +----------+ +----------+| 548 | | ^ | 549 | | +------------------------------------+ | 550 | | | Network Stratum | 551 | | -- -- -- -- -- -- -- -- -- | 552 | +----->|H | | | | | | | | | | | | | | | | | | 553 | -- -- -- -- -- -- -- -- -- | 554 +---------------------------------------------------------+ 556 Figure 7: Path Setup Using Centralized Control Plane 558 After cross optimization, ACG and NCG will select the suitable end 559 points, (the path is already calculated by PCE), this path is 560 conveyed to the head end LSR which signals the path and notify the 561 status to the Stateful PCE. Later NCG can send suitable message to 562 tear down the path. 564 Using centralized control plane can make the NCG responsible for the 565 LSP. Head end LSR signals and maintains the status but the 566 establishment and tear-down are initiated by the control plane. This 567 would have an obvious advantage in managing the setup paths. The 568 Stateful PCE will maintain the TED as well as the status of setup 569 LSP. NCG through centralized control plane can further 570 setup/teardown/modify/re-optimize those paths. 572 4.3. Path Setup using PCE 574 A Stateful PCE extension MAY be developed to communicate the cross 575 optimized path to the head end LSR. Current PCEP protocol requires 576 PCC to trigger Path request and PCE to provide reply. Even in 577 Stateful PCE, PCC must delegate the LSP to a PCE, a PCE never 578 initiate path setup. An extension to PCEP protocol MAY let PCE 579 notify to PCC (Head end LSR) to establish the path. 
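The PCE-triggered setup described above (the PCE notifies the head end LSR to establish the pre-computed path, and the LSR signals the LSP and reports its status back to the Stateful PCE) can be sketched as follows. This is a minimal Python illustration under assumed class and method names; it models the message flow only, not actual PCEP encodings:

```python
class HeadEndLSR:
    """Acts as the PCC in this sketch."""
    def setup(self, lsp_id, path):
        # RSVP-TE signaling along the computed path would happen here;
        # for the sketch, assume signaling succeeds.
        return "UP"

class StatefulPCE:
    def __init__(self):
        self.lsp_db = {}  # LSP database: lsp_id -> (path, status)

    def initiate(self, pcc, lsp_id, path):
        """Hypothetical 'initiate' notification: the PCE asks the PCC to
        set up a pre-computed path (the PCEP extension discussed above)."""
        status = pcc.setup(lsp_id, path)   # PCC signals the LSP
        self.report(lsp_id, path, status)  # PCC reports LSP state back
        return status

    def report(self, lsp_id, path, status):
        # The Stateful PCE keeps the LSP state in sync with the network.
        self.lsp_db[lsp_id] = (path, status)
```

Because the Stateful PCE records every initiated LSP in its database, the NCG can later ask it to tear down or re-optimize the path, which is the property this section relies on.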
581 NCG via PCE and PCEP protocol can establish and tear-down LSP as 582 shown in Figure 8. [PCE_INITIATED] is one such attempt to extend 583 PCEP. 585 +----------+ +---------------------------------------+ 586 | |-------->| | 587 | User | | ACG | 588 | |<--------| | 589 +----------+ +---------------------------------------+ 590 ^ | 591 +-----------------+--+------------------------------------+ 592 | | | +----------+ +----------+| 593 | | +->| |--------->| || 594 | | | NCG | | PCE || 595 | +-----| |<---------| || 596 | +----------+ +----------+| 597 | +---------------------------------------+ ^ | 598 | | +------------------------------------+ | 599 | | | Network Stratum | 600 | | -- -- -- -- -- -- -- -- -- | 601 | +->|H | | | | | | | | | | | | | | | | | | 602 | -- -- -- -- -- -- -- -- -- | 603 +---------------------------------------------------------+ 605 Figure 8: Path Setup using PCE 607 4.4. Path Setup Using a Software Defined Network controller 609 A logically centralized Software Defined Network (SDN) controller MAY 610 be used to properly configure in an automatic way the traffic 611 forwarding rules that allow the end to end communication across the 612 Network Stratum. 614 Figure 9 shows this scenario. 
616 +----------+ +---------------------------------------+ 617 | |-------->| | 618 | User | | ACG | 619 | |<--------| | 620 +----------+ +---------------------------------------+ 621 ^ | 622 +-----------------+--+------------------------------------+ 623 | | | +----------+ +----------+| 624 | | +->| |--------->| || 625 | | | NCG | | PCE || 626 | +-----| |<---------| || 627 | +----------+ +----------+| 628 | | ^ | 629 | v | | 630 | +----------------------------+ | 631 | +-| SDN Controller |--+ | 632 | | +----------------------------+ | | 633 | | | | | | | | | | 634 | v v v v v v v v | 635 | -- -- -- -- -- -- -- -- | 636 | | | | | | | | | | | | | | | | | | 637 | -- -- -- -- -- -- -- -- | 638 | | 639 | Network Stratum | 640 | | 641 +---------------------------------------------------------+ 643 Figure 9: Path Setup using SDN 645 A direct interface between the SDN Controller and the PCE could be 646 present in the architecture shown in Figure 9. 648 As a result of the interaction between the ACG and NCG (including the PCE 649 processing), the NCG is able to instruct the SDN Controller to 650 populate a number of forwarding rules on the network devices for 651 building the end-to-end path. 653 5. Other Consideration 655 5.1. Inter-domain 657 5.1.1. One Application Domain with Multiple Network Domains 659 The underlying network connecting the data centers MAY be made up of 660 multiple domains (AS and Area). In this case an inter-domain path 661 computation is required.
663 +----------+ +---------------------------------------+ 664 | |-------->| | 665 | User | | ACG | 666 | |<--------| | 667 +----------+ +---------------------------------------+ 668 ^ | 669 | | 670 +--------------+ +--+--+------------------------------------+ 671 | +----------+| | | | +----------+ +----------+| 672 | | || | | +->| |--------->| || 673 | | PCE || | | | NCG | | PCE || 674 | | || | +-----| |<---------| || 675 | +----+-----+| | +----------+ +----+-----+| 676 | | | | | | 677 +-------+------+ +-----------------------------------+------+ 678 | | 679 | | 680 |<---------------pcep session----------------->| 681 | | 683 Figure 10: Multi-domain Scenario 685 [RFC5441] describes an inter-domain path computation with cooperating 686 PCEs which can be enhanced and utilized in CSO enabled path 687 computation. 689 5.1.2. Multiple Application Domains with Multiple Network Domains 691 Underlying network connecting the datacenters MAY be made up of 692 multiple domains (AS and Area) as well as applications domains and 693 ACG MAY be distributed. In such case multiple ACG and NCG will be 694 involved in cross optimizing. This needs to be analyzed further. 696 5.1.2.1. ACG talks to multiple NCGs 698 As shown in Figure 11, ACG where the request originates may 699 communicate with multiple NCG to get the network information from 700 multiple domains to be cross optimized. 
702 Application stratum 703 +---------------------------+ +---------------------------+ 704 | | | | 705 | | | | 706 | | | | 707 | | | | 708 | | | | 709 | +----------------------+ | | +----------------------+ | 710 | | | | | | | | 711 +--+ ACG +-+ +--+ ACG +-+ 712 | | | | 713 +-+-+-------------+-+--+ +-------+-+------------+ 714 | | | +------------+ | | 715 | | +------------+ | | | 716 +-+-+--------+ +-----+ +-+-----+-+--+ +-----+ 717 +--+ +---+ +-+ +--+ +----+ ++ 718 | | NCG |---| | | | | NCG |----| || 719 | | |---| | | | | |----| || 720 | +------------+ | PCE | | | +------------+ | PCE || 721 | | | | | | || 722 | | |<+--+------------------->| || 723 | +-----+ | | +-----+| 724 |Domain 1 | |Domain 2 | 725 +---------------------------+ +---------------------------+ 726 Network Stratum 728 Figure 11: ACG talks to multiple NCG 730 5.1.2.2. ACG talks to the primary NCG, which talks to the other NCG of 731 different domains 733 As shown in Figure 12, ACG communicated only to the primary NCG, 734 which may gather network information from multiple NCG and then 735 communicate consolidated information to ACG. 
737 Application stratum 738 +---------------------------+ +---------------------------+ 739 | | | | 740 | | | | 741 | | | | 742 | | | | 743 | | | | 744 | +----------------------+ | | +----------------------+ | 745 | | | | | | | | 746 +--+ ACG +-+ +--+ ACG +-+ 747 | | | | 748 +-+-+------------------+ +-------+-+------------+ 749 | | | | 750 | | | | 751 +-+-+--------+ +-----+ +-------+-+--+ +-----+ 752 +--+ +---+ +-+ +--+ +----+ ++ 753 | | NCG |---| | | | | NCG |----| || 754 | | |---| | | | | |----| || 755 | +------+-----+ | PCE | | | +---+--------+ | PCE || 756 | | | | | | | | || 757 | | | |<+--+------+------------>| || 758 | | +-----+ | | | +-----+| 759 |Domain 1 | | |Domain|2 | 760 +---------+-----------------+ +------+--------------------+ 761 | | Network Stratum 762 | | 763 |<------------------------->| 764 | | 766 Figure 12: Primary NCG talks to other NCG 768 5.1.3. Federation of SDN domains 770 In this case, the Data Centers are federated building a community 771 cloud. In each Data Center, the connection to the network stratum 772 that interconnects the Data Center federation is done by means of one 773 or more devices controllable through an SDN controller particular for 774 that Data Center. 776 The NCG, then, interacts with a number of separated SDN controllers, 777 orchestrating their operation in order to perform the service 778 requested by the ACG in an optimized way. 780 Figure 13 shows this scenario. 
782 +----------+ +---------------------------------------+ 783 | |-------->| | 784 | User | | ACG | 785 | |<--------| | 786 +----------+ +---------------------------------------+ 787 | ^ 788 | | 789 +----------+--+-------------------------+ 790 | | | | 791 | v | | 792 | +----------+ +----------+| 793 | | |-------->| || 794 | | NCG | | PCE || 795 | | |<--------| || 796 | +----------+ +----------+| 797 | | ^ | ^ | 798 | | | | | | 799 +-------+-+----+-+----------------------+ 800 | | | | 801 +-------------+ | | +-------------+ 802 | +-------------+ +-------------+ | 803 | | | | 804 v | v | 805 +--------------------+ +--------------------+ 806 +- | SDN Controller DC1 | . . . | SDN Controller DCN | -+ 807 | +--------------------+ +--------------------+ | 808 | | 809 | Federated Data Centers | 810 +-------------------------------------------------------------+ 812 Figure 13: NCG orchestration of separate SDN domains 814 5.1.4. Nesting of multi-layer SDN domains 816 A different scenario for multi-domain interconnection could arise from 817 the deployment of multi-layered, multi-domain networks (where the 818 domains may be technology-, administrative-, or vendor-specific 819 (vendor islands)) supporting end-to-end connectivity at the Network 820 Stratum. Each of those domains can be controlled by a distinct SDN 821 controller adapted to the specifics of the technology under control. 823 The NCG requests path computation from a multi-layer PCE, which takes 824 such diversity into consideration and provides an integrated computation 825 of the best path according to application constraints. The NCG 826 instructs a primary SDN controller which, apart from configuring the 827 elements it directly controls, is able to communicate 828 with other SDN controllers with responsibility over other domains.
829 Such communication can be achieved through specific methods over a 830 pre-defined South Bound Interface or East/West Interface (out 831 of the scope of this document). 833 The following figure shows this scenario. 835 +----------+ +---------------------------------------+ 836 | |------->| | 837 | User | | ACG | 838 | |<-------| | 839 +----------+ +---------------------------------------+ 840 | ^ 841 | | 842 +----------+--+-------------------------+ 843 | | | | 844 | v | | 845 | +----------+ +----------+ | 846 | | |------>| Multi- | | 847 | | NCG | | Layer | | 848 | | |<------| PCE | | 849 | +----------+ +----------+ | 850 | | ^ | 851 | | | | 852 +----------+--+-------------------------+ 853 | | 854 v | 855 +----------+ 856 | SDN |----------+ 857 |Controller| | 858 +----------+ | 859 ^ +----------+ 860 | | SDN | 861 v |Controller| 862 Layer-N +----------+ 863 Resources ^ 864 | 865 v 866 Layer-N-1 867 Resources 869 Figure 14: Nested multi-layer SDN domains 871 5.2. Bottleneck 873 In optical networks, all PCE messages are sent over the control channel. 874 In stateful PCE deployments, it is observed that, in case of a major link or 875 node failure, a large number of PCEP messages are sent from all PCCs to the PCE. This 876 consumes a large amount of the control channel's bandwidth. 878 The PCE may become a common point of failure and a bottleneck. A PCE/NCG/ACG 879 failure, as well as a link failure disrupting connectivity, could be 880 highly disruptive to the system. 882 The solution should focus on reducing such bottlenecks. 884 5.3. Relationship to ABNO 886 [ABNO] demonstrates cross-stratum application/network optimization 887 for the data center use case with the PCE at the heart of the Application- 888 Based Network Operations (ABNO) architecture. It further highlights 889 the interaction between the various ABNO components and the PCE to achieve 890 this use case. 892 5.4.
Relationship to ACTN 894 [ACTN] describes the framework for abstraction and control of 895 transport networks (ACTN) using a hierarchy of controllers. The 896 Physical Network Controller (PNC) or Virtual Network Controller (VNC) 897 is equivalent to the NCG in the CSO framework; both rely on the PCE for 898 network optimization. 900 6. IANA Considerations 902 None. This is an informational document. 904 7. Security Considerations 906 TBD 908 8. Manageability Considerations 910 TBD 912 9. Acknowledgements 914 Part of the work in this document has been funded by the European 915 Community's Seventh Framework Programme projects XIFI (L.M. 916 Contreras and O. Gonzalez), under grant agreement n. 604590, and 917 GEYSERS (N. Ciulli and L.M. Contreras), under grant agreement n. 918 248657. 920 10. References 921 10.1. Normative References 923 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 924 Requirement Levels", BCP 14, RFC 2119, March 1997. 926 10.2. Informative References 928 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 929 Element (PCE)-Based Architecture", RFC 4655, August 2006. 931 [RFC5440] Vasseur, JP. and JL. Le Roux, "Path Computation Element 932 (PCE) Communication Protocol (PCEP)", RFC 5440, March 933 2009. 935 [RFC5441] Vasseur, JP., Zhang, R., Bitar, N., and JL. Le Roux, "A 936 Backward-Recursive PCE-Based Computation (BRPC) Procedure 937 to Compute Shortest Constrained Inter-Domain Traffic 938 Engineering Label Switched Paths", RFC 5441, April 2009. 940 [CSO-DATACNTR] 941 Lee, Y., Bernstein, G., So, N., Kim, T., Shiomoto, K., and 942 O. Gonzalez-de-Dios, "Research Proposal for Cross Stratum 943 Optimization (CSO) between Data Centers and Networks. 944 (draft-lee-cross-stratum-optimization-datacenter-00)", 945 March 2011. 947 [CSO-PROBLEM] 948 Lee, Y., Bernstein, G., So, N., Hares, S., Xia, F., 949 Shiomoto, K., and O. Gonzalez-de-Dios, "Problem Statement 950 for Cross-Layer Optimization.
(draft-lee-cross-layer- 951 optimization-problem-02)", January 2011. 953 [NS-QUERY] 954 Lee, Y., Bernstein, G., So, N., McDysan, D., Kim, T., 955 Shiomoto, K., and O. Gonzalez-de-Dios, "Problem Statement 956 for Network Stratum Query. (draft-lee-network-stratum- 957 query-problem-02)", April 2011. 959 [CSO-PCE-REQT] 960 Tovar, A., Contreras, L., Landi, G., and N. Ciulli, "Path 961 Computation Requirements for Cross-Stratum-Optimization. 962 (draft-tovar-cso-path-computation-requirements-00)", 963 October 2011. 965 [PCE-SERVICE-AWARE] 966 Dhody, D., Wu, Q., Manral, V., Ali, Z., and K. Kumaki, 967 "Extensions to the Path Computation Element Communication 968 Protocol (PCEP) to compute service aware Label Switched 969 Path (LSP). (draft-ietf-pce-pcep-service-aware-06)", 970 December 2014. 972 [PCE-STATEFUL] 973 Crabbe, E., Medved, J., Varga, R., and I. Minei, "PCEP 974 Extensions for Stateful PCE. (draft-ietf-pce-stateful-pce- 975 10)", October 2014. 977 [ALTO-APPNET] 978 Lee, Y., Bernstein, G., Varga, T., Madhavan, S., Dhody, 979 D., and Q. Wu, "ALTO Extensions to Support Application and 980 Network Resource Information Exchange for High Bandwidth 981 Applications. (draft-lee-alto-app-net-info-exchange-04)", 982 October 2013. 984 [PCE_INITIATED] 985 Crabbe, E., Minei, I., Sivabalan, S., and R. Varga, "PCEP 986 Extensions for PCE-initiated LSP Setup in a Stateful PCE 987 Model. (draft-ietf-pce-pce-initiated-lsp-02)", July 2013. 989 [ACTN] Ceccarelli, D., Fang, L., Lee, Y., Lopez, D., Belotti, S., 990 and D. King, "Framework for Abstraction and Control of 991 Transport Networks", draft-ceccarelli-actn-framework-06 992 (work in progress), December 2014. 994 [ABNO] King, D. and A. Farrel, "A PCE-based Architecture for 995 Application-based Network Operations", draft-farrkingel- 996 pce-abno-architecture-15 (work in progress), January 2015. 
998 Authors' Addresses 1000 Dhruv Dhody 1001 Huawei Technologies 1002 Divyashree Techno Park, Whitefield 1003 Bangalore, Karnataka 560037 1004 India 1006 EMail: dhruv.ietf@gmail.com 1007 Young Lee 1008 Huawei Technologies 1009 1700 Alma Drive, Suite 500 1010 Plano, TX 75075 1011 USA 1013 EMail: leeyoung@huawei.com 1015 Luis M. Contreras 1016 Telefonica I+D 1017 Ronda de la Comunicacion, s/n 1018 Sur-3 building, 3rd floor 1019 Madrid 28050 1020 Spain 1022 EMail: lmcm@tid.es 1024 Oscar Gonzalez de Dios 1025 Telefonica I+D 1026 Ronda de la Comunicacion, s/n 1027 Sur-3 building, 3rd floor 1028 Madrid 28050 1029 Spain 1031 EMail: ogondio@tid.es 1033 Nicola Ciulli 1034 Nextworks 1036 EMail: n.ciulli@nextworks.it