TEAS Working Group                                               A. Wang
Internet-Draft                                             China Telecom
Intended status: Informational                                  X. Huang
Expires: May 1, 2020                                              C. Kou
                                                                    BUPT
                                                                   Z. Li
                                                            China Mobile
                                                                   P. Mi
                                                     Huawei Technologies
                                                        October 29, 2019

      Scenarios and Simulation Results of PCE in Native IP Network
                 draft-ietf-teas-native-ip-scenarios-12

Abstract

   Requirements for providing end-to-end (E2E) performance assurance
   are emerging within service provider networks.  While there are
   various technology solutions, there is no single solution that can
   fulfill these requirements for a native IP network.  In particular,
   there is a need for a universal E2E solution that can cover both
   intra- and inter-domain scenarios.

   One feasible E2E traffic engineering solution is the addition of
   central control in a native IP network.  This document describes
   various complex scenarios and simulation results when applying the
   Path Computation Element (PCE) in a native IP network.  This
   solution, referred to as Centralized Control Dynamic Routing (CCDR),
   integrates the advantage of using distributed protocols and the
   power of a centralized control technology, providing traffic
   engineering for native IP networks in a manner that applies equally
   to intra- and inter-domain scenarios.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 1, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  CCDR Scenarios
     3.1.  QoS Assurance for Hybrid Cloud-based Application
     3.2.  Link Utilization Maximization
     3.3.  Traffic Engineering for Multi-Domain
     3.4.  Network Temporal Congestion Elimination
   4.  CCDR Simulation
     4.1.  Case Study for CCDR Algorithm
     4.2.  Topology Simulation
     4.3.  Traffic Matrix Simulation
     4.4.  CCDR End-to-End Path Optimization
     4.5.  Network Temporal Congestion Elimination
   5.  CCDR Deployment Consideration
   6.  Security Considerations
   7.  IANA Considerations
   8.  Contributors
   9.  Acknowledgement
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   A service provider network is composed of thousands of routers that
   run distributed protocols to exchange reachability information.  The
   path toward a destination network is mainly calculated, and
   controlled, by these distributed protocols.  These protocols are
   robust enough to support most applications; however, they have
   difficulty supporting the complexities needed for traffic
   engineering applications, e.g., E2E performance assurance or
   maximizing the link utilization within an IP network.

   Multiprotocol Label Switching (MPLS) using Traffic Engineering (TE)
   technology (MPLS-TE) [RFC3209] is one solution for traffic
   engineering networks, but it introduces an MPLS network and related
   technology, which would be an overlay of the IP network.  MPLS-TE
   technology is often used for Label Switched Path (LSP) protection
   and complex path set-up within a domain.  It has not been widely
   deployed for meeting E2E (especially inter-domain) dynamic
   performance assurance requirements for an IP network.

   Segment Routing [RFC8402] is another solution that integrates some
   advantages of using a distributed protocol and a centralized control
   technology, but it requires the underlying network, especially the
   provider edge router, to do in-depth label push and pop actions, and
   it adds complexity when coexisting with a non-Segment Routing
   network.  Additionally, it can only maneuver the E2E paths for MPLS
   and IPv6 traffic via different mechanisms.

   Deterministic Networking (DetNet) [RFC8578] is another possible
   solution.  It is primarily focused on providing bounded latency for
   a flow and introduces additional requirements on the domain edge
   router.  The current DetNet scope is within one domain.
The use cases defined in this document do not require the additional
   complexity of deterministic properties and so differ from the DetNet
   use cases.

   This document describes several scenarios for a native IP network
   where a Centralized Control Dynamic Routing (CCDR) framework can
   produce qualitative improvements in efficiency without requiring a
   change of the data-plane behavior on the router.  Using knowledge of
   the BGP (Border Gateway Protocol) session-specific prefixes
   advertised by a router, the network topology, and the near-real-time
   link utilization information from network management systems, a
   central PCE is able to compute an optimal path and give the underlay
   routers the destination address to use to reach the BGP next hop,
   such that the distributed routing protocol will use the computed
   path via the traditional recursive lookup procedure.  Some results
   from simulations of path optimization are also presented, to
   concretely illustrate a variety of scenarios where CCDR shows
   significant improvement over traditional distributed routing
   protocols.

   This document is the base document of the following two drafts: the
   universal solution draft, which is suitable for intra-domain and
   inter-domain TE scenarios, is described in
   [I-D.ietf-teas-pce-native-ip]; the related protocol extensions are
   described in [I-D.ietf-pce-pcep-extension-native-ip].

2.  Terminology

   This document uses the following terms defined in [RFC5440]: PCE.

   The following terms are defined in this document:

   o  BRAS: Broadband Remote Access Server

   o  CD: Congestion Degree

   o  CR: Core Router

   o  CCDR: Centralized Control Dynamic Routing

   o  E2E: End to End

   o  IDC: Internet Data Center

   o  MAN: Metro Area Network

   o  QoS: Quality of Service

   o  SR: Service Router

   o  TE: Traffic Engineering

   o  UID: Utilization Increment Degree

   o  WAN: Wide Area Network

3.  CCDR Scenarios

   The following sections describe various deployment scenarios where
   applying the CCDR framework is intuitively expected to produce
   improvements, based on the macro-scale properties of the framework
   and the scenario.

3.1.  QoS Assurance for Hybrid Cloud-based Application

   With the emergence of cloud computing technologies, enterprises are
   putting more and more services on a public-oriented cloud
   environment while keeping core business within their private cloud.
   The communication between the private and public cloud sites spans
   the Wide Area Network (WAN).  The bandwidth requirements between
   them are variable, and the background traffic between these two
   sites varies over time.  Enterprise applications require assurance
   of the E2E Quality of Service (QoS) performance on demand for
   variable bandwidth services.

   CCDR, which integrates the merits of distributed protocols and the
   power of centralized control, is suitable for this scenario.  The
   possible solution framework is illustrated below:

             +------------------------+
             | Cloud Based Application|
             +------------------------+
                        |
                  +-----------+
                  |    PCE    |
                  +-----------+
                        |
                        |
                 //--------------\\
             /////                \\\\\
    Private ||      Distributed      || Public
    Cloud   |     Control Network     | Cloud
    Site     \\\\\                /////  Site
                 \\--------------//

           Figure 1: Hybrid Cloud Communication Scenario

   As illustrated in Figure 1, the source and destination of the
   "Cloud Based Application" traffic are located at the "Private Cloud
   Site" and the "Public Cloud Site", respectively.

   By default, the traffic path between the private and public cloud
   sites is determined by the distributed control network.
When an application requires E2E QoS assurance, it can send these
   requirements to the PCE and let the PCE compute one E2E path, based
   on the underlying network topology and the real traffic information,
   to accommodate the application's QoS requirements.  Section 4.4 of
   this document describes the simulation results for this use case.

3.2.  Link Utilization Maximization

   Network topology within a Metro Area Network (MAN) is generally a
   star, as illustrated in Figure 2, with different devices connected
   to different customer types.  The traffic from these customers is
   often in a tidal pattern, with the links between the Core Router
   (CR)/Broadband Remote Access Server (BRAS) and CR/Service Router
   (SR) experiencing congestion in different periods, because the
   subscribers under the BRAS often use the network at night, while
   the leased-line users under the SR often use the network during the
   daytime.  The link between the BRAS/SR and the CR must satisfy the
   maximum traffic volume between them, respectively, and this causes
   these links to often be under-utilized.

                     +--------+
                     |   CR   |
                     +----|---+
                          |
          |--------|--------|--------|
          |        |        |        |
       +--|-+    +-|+    +--|-+    +-|+
       |BRAS|    |SR|    |BRAS|    |SR|
       +----+    +--+    +----+    +--+

         Figure 2: Star-mode Network Topology within MAN

   If we consider connecting the BRAS/SR with a local link loop (which
   is usually lower cost) and controlling the overall MAN topology with
   the CCDR framework, we can exploit the tidal phenomena between the
   BRAS/CR and SR/CR links, maximizing the utilization of these central
   trunk links (which are usually higher cost than the local loops).
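   The capacity argument behind this tidal sharing can be made concrete
   with a small back-of-the-envelope script.  The hourly load profiles
   below are invented for illustration (the document gives no measured
   figures); the point is only that trunk capacity provisioned for the
   peak of the combined load is smaller than capacity provisioned for
   each traffic type's peak separately:

```python
# Illustrative tidal-load calculation.  When night-peaking BRAS traffic
# and day-peaking SR leased-line traffic can share trunk capacity under
# central control, the CR-side capacity must cover the peak of the SUM
# of the loads, not the sum of the individual peaks.
# All numbers below are hypothetical, for illustration only.

# Hypothetical load in Gb/s, sampled every 3 hours (0h, 3h, ..., 21h).
bras_load = [40, 30, 10, 10, 15, 20, 35, 45]   # residential: night peak
sr_load   = [5, 5, 15, 40, 45, 40, 20, 10]     # leased line: day peak

# Provisioning each trunk link for its own peak:
separate = max(bras_load) + max(sr_load)        # 45 + 45 = 90 Gb/s

# Sharing capacity via a local loop plus CCDR traffic scheduling:
shared = max(b + s for b, s in zip(bras_load, sr_load))

print(f"separately provisioned capacity: {separate} Gb/s")
print(f"shared (tidal) capacity needed:  {shared} Gb/s")
```

   With these assumed profiles, the shared trunk needs 60 Gb/s instead
   of 90 Gb/s; the saving grows as the two load patterns become more
   anti-correlated.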

                            +-------+
                      ------|  PCE  |
                      |     +-------+
                 +----|---+
                 |   CR   |
                 +----|---+
                      |
          |--------|--------|--------|
          |        |        |        |
       +--|-+    +-|+    +--|-+    +-|+
       |BRAS-----SR|     |BRAS-----SR|
       +----+    +--+    +----+    +--+

         Figure 3: Link Utilization Maximization via CCDR

3.3.  Traffic Engineering for Multi-Domain

   Service provider networks are often composed of different domains,
   interconnected with each other, forming a very complex topology as
   illustrated in Figure 4.  Due to the traffic pattern to/from the MAN
   and IDC, the utilization of the links between them is often
   asymmetric.  It is almost impossible to balance the utilization of
   these links via a distributed protocol, but this imbalance can be
   overcome with the CCDR framework.

      +---+                   +---+
      |MAN|-------------------|IDC|
      +-|-+                   +-|-+
        |      ----------       |
        ------| BackBone |-------
        |      ----------       |
        |                       |
      +-|-+                   +-|-+
      |IDC|-------------------|MAN|
      +---+                   +---+

    Figure 4: Traffic Engineering for Complex Multi-Domain Topology

   A solution for this scenario requires gathering NetFlow
   information, analyzing the source/destination ASes, and determining
   the main cause of the congested link(s).  After this, the operator
   can use external Border Gateway Protocol (eBGP) sessions to schedule
   the traffic among the different domains according to the solution
   described in the CCDR framework.

3.4.  Network Temporal Congestion Elimination

   In more general situations, there is often temporal congestion
   within the service provider's network, for example due to daily or
   weekly periodic bursts, or large events that are scheduled well in
   advance.  Such congestion phenomena often appear regularly, and if
   the service provider has methods to mitigate them, it will certainly
   improve their network operations capabilities and increase
   satisfaction for their customers.
CCDR is also suitable for such scenarios, as the controller can
   schedule traffic away from the congested links, lowering their
   utilization during these times.  Section 4.5 describes the
   simulation results of this scenario.

4.  CCDR Simulation

   The following sections describe a specific case study to illustrate
   the workings of the CCDR algorithm with concrete paths/metrics, as
   well as a procedure for generating topology and traffic matrices
   and the results from simulations applying CCDR for E2E QoS (assured
   path and congestion elimination) over the generated topologies and
   traffic matrices.  In all cases examined, the CCDR algorithm
   produces a qualitatively significant improvement over the reference
   (OSPF) algorithm, suggesting that CCDR will have broad
   applicability.

   The structure and scale of the simulated topology are similar to
   those of real networks.  Multiple different traffic matrices were
   generated to simulate different congestion conditions in the
   network.  Only one of them is illustrated, since the others produce
   similar results.

4.1.  Case Study for CCDR Algorithm

   In this section we consider a specific network topology as a case
   study, examining the paths selected by OSPF and CCDR and evaluating
   how and why the paths differ.  Figure 5 depicts the topology of the
   network in this case.  There are 8 forwarding devices in the
   network.  The original cost and utilization of each link are marked
   in the figure; for example, the original cost and utilization of
   link (1,2) are 3 and 50%, respectively.  There are two flows, f1
   and f2, both from node 1 to node 8.  For simplicity, it is assumed
   that the bandwidth of each link in the network is 10 Mb/s.  The
   flow rate of f1 is 1 Mb/s, and the flow rate of f2 is 2 Mb/s.  The
   congestion threshold of each link is 90%.
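   As a cross-check of the metrics above, the candidate paths can be
   computed with a plain cost-only Dijkstra search, which is the
   behavior attributed to OSPF/IS-IS below.  This is an illustrative
   sketch, not the simulation code used for this document:

```python
import heapq

# Link costs from Figure 5 (the figure labels each link cost/utilization;
# only the cost matters to a plain Dijkstra/OSPF computation).
cost = {
    (1, 2): 3, (2, 3): 4, (3, 8): 6,
    (2, 4): 7, (4, 7): 5, (7, 8): 5,
    (1, 5): 3, (5, 6): 5, (6, 7): 3,
}
graph = {}
for (u, v), c in cost.items():          # links are bidirectional
    graph.setdefault(u, []).append((v, c))
    graph.setdefault(v, []).append((u, c))

def dijkstra(src, dst):
    """Return (total cost, path) of the least-cost path."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == dst:
            return d, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph[node]:
            if nxt not in seen:
                heapq.heappush(heap, (d + c, nxt, path + [nxt]))

print(dijkstra(1, 8))   # (13, [1, 2, 3, 8]) -- the OSPF path p1
```

   The three candidate paths cost 13 (p1: 1->2->3->8), 16 (p2:
   1->5->6->7->8), and 20 (p3: 1->2->4->7->8), so a cost-only
   computation always picks p1, regardless of utilization.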

   If the OSPF protocol (IS-IS behaves similarly, since it also uses
   Dijkstra's algorithm) is applied in the network, the two flows from
   node 1 to node 8 can only use the OSPF path (p1: 1->2->3->8),
   because Dijkstra's algorithm considers only the original cost of
   each link.  Since CCDR considers cost and utilization
   simultaneously, the OSPF path will not be selected, due to the
   severe congestion of link (2,3).  In this case, f1 will select the
   path (p2: 1->5->6->7->8), since the new cost of this path is better
   than that of the OSPF path.  Moreover, path p2 is also better than
   the path (p3: 1->2->4->7->8) for flow f1.  However, f2 will not
   select the same path, since that would cause new congestion on link
   (6,7).  As a result, f2 will select the path (p3: 1->2->4->7->8).

                      +-----+     6/50%     +-----+
               +------|  3  |---------------|  8  |
               |4/95% +-----+               +-----+
               |                               |
               |                               | 5/60%
    +-----+ 3/50% +-----+  7/60%  +-----+ 5/45% +-----+
    |  1  |-------|  2  |---------|  4  |-------|  7  |
    +-----+       +-----+         +-----+       +-----+
       |                                           |
       | 3/60% +-----+      5/55%      +-----+ 3/75% |
       +-------|  5  |-----------------|  6  |-------+
               +-----+                 +-----+

   Each link is labeled with its original cost/utilization.  Flows f1
   and f2 both enter at node 1 from the edge nodes and exit at node 8.

   (a) Dijkstra's algorithm (OSPF/IS-IS): both f1 and f2 follow
       p1: 1->2->3->8.

   (b) CCDR algorithm: f1 follows p2: 1->5->6->7->8, and f2 follows
       p3: 1->2->4->7->8.

              Figure 5: Case Study for CCDR's Algorithm

4.2.  Topology Simulation

   Moving on from the specific case study, we now consider a class of
   networks more representative of real deployments, with a fully-
   linked core network that serves to connect edge nodes, which
   themselves connect to only a subset of the core.  An example of
   such a topology is shown in Figure 6, for the case of 4 core nodes
   and 5 edge nodes.  The CCDR simulations presented in this work use
   topologies involving 100 core nodes and 400 edge nodes.  While the
   resulting graph does not fit on this page, this scale of network is
   similar to what is deployed in production environments.

                    +----+
                   /|Edge|\
                  | +----+ |
                  |        |
                  |        |
    +----+     +----+    +----+
    |Edge|-----|Core|----|Core|---------+
    +----+     +----+    +----+         |
        /        | \      /  |          |
   +----+        |  \    /   |          |
   |Edge|        |    X      |          |
   +----+        |  /    \   |          |
        \        | /      \  |          |
    +----+     +----+    +----+         |
    |Edge|-----|Core|----|Core|         |
    +----+     +----+    +----+         |
                 |           |          |
                 |           +------\ +----+
                 |            ------|Edge|
                 +-----------------/ +----+

                  Figure 6: Topology of Simulation

   For the simulations, the number of links connecting one edge node
   to the set of core nodes is randomly chosen between 2 and 30, and
   the total number of links is more than 20000.  Each link has a
   congestion threshold, which can be arbitrarily set to (e.g.) 90% of
   the nominal link capacity without affecting the simulation results.

4.3.  Traffic Matrix Simulation

   For each topology, a traffic matrix is generated based on the link
   capacity of the topology.  It can result in many kinds of
   situations, such as congestion, mild congestion, and non-
   congestion.

   In the CCDR simulation, the dimension of the traffic matrix is
   500*500 (100 core nodes plus 400 edge nodes).  About 20% of links
   are overloaded when the Open Shortest Path First (OSPF) protocol is
   used in the network.

4.4.  CCDR End-to-End Path Optimization

   The CCDR E2E path optimization aims to find the path with the
   lowest metric value in which the utilization of every link on the
   path remains well below that link's congestion threshold.  Based on
   the current state of the network, the PCE within the CCDR framework
   combines the shortest-path algorithm with the penalty methods of
   classical optimization and graph theory.

   Given an unscheduled background traffic matrix, when a set of new
   flows comes into the network, the E2E path optimization finds the
   optimal paths for them.  The selected paths bring the least
   congestion degree to the network.

   The link Utilization Increment Degree (UID), when the new flows are
   added into the network, is shown in Figure 7.  The first graph in
   Figure 7 is the UID with OSPF and the second graph is the UID with
   CCDR E2E path optimization.  The average UID of the first graph is
   more than 30%.  After path optimization, the average UID is less
   than 5%.  The results show that the CCDR E2E path optimization
   yields a dramatic decrease in UID relative to the path chosen based
   on OSPF.

   While real-world results invariably differ from simulations (for
   example, real-world topologies are likely to exhibit correlation in
   the attachment patterns for edge nodes to the core, which is not
   reflected in these results), the dramatic nature of the improvement
   in UID and the choice of simulated topology to resemble real-world
   conditions suggest that real-world deployments will also experience
   significant improvement in UID results.
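   The path selection described above can be sketched as a
   utilization-penalized shortest-path search over the Figure 5 case
   study.  The additive-penalty form below is an assumption made for
   illustration: the document states only that CCDR combines a
   shortest-path algorithm with penalty methods, and the actual scheme
   is described in [PTCS]:

```python
import heapq

# Figure 5 links as (cost, utilization); bandwidth is 10 Mb/s per link.
links = {
    (1, 2): (3, 0.50), (2, 3): (4, 0.95), (3, 8): (6, 0.50),
    (2, 4): (7, 0.60), (4, 7): (5, 0.45), (7, 8): (5, 0.60),
    (1, 5): (3, 0.60), (5, 6): (5, 0.55), (6, 7): (3, 0.75),
}
BW, THRESHOLD = 10.0, 0.90

def penalized_path(flow_mbps, links):
    """Dijkstra from node 1 to node 8 on cost plus a penalty that makes
    any link the flow would push past the threshold prohibitively
    expensive.  The penalty form is an illustrative guess, not [PTCS]."""
    graph = {}
    for (u, v), (c, util) in links.items():
        new_util = util + flow_mbps / BW    # utilization if flow is added
        w = c + (1000 if new_util > THRESHOLD else 0)
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    heap, seen = [(0, 1, [1])], set()
    while heap:
        d, node, path = heapq.heappop(heap)
        if node == 8:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            heapq.heappush(heap, (d + w, nxt, path + [nxt]))

p_f1 = penalized_path(1.0, links)           # place f1 (1 Mb/s) first
# Account for f1's load before placing f2 (2 Mb/s), since the flows are
# scheduled in turn.
for u, v in zip(p_f1, p_f1[1:]):
    key = (u, v) if (u, v) in links else (v, u)
    c, util = links[key]
    links[key] = (c, util + 1.0 / BW)
p_f2 = penalized_path(2.0, links)

print(p_f1)   # [1, 5, 6, 7, 8]  (p2)
print(p_f2)   # [1, 2, 4, 7, 8]  (p3)
```

   With these assumed penalties, the sketch reproduces the case-study
   outcome: f1 is placed on p2, after which p2 can no longer absorb f2
   without congesting link (6,7), so f2 is placed on p3.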

   [Figure 7 consists of two scatter plots of UID (%) against flow
   number (0 to 1000), on the same 0-60% vertical scale.  In the first
   plot (OSPF), the UID values are densely spread between roughly 20%
   and 60%.  In the second plot (CCDR E2E path optimization), almost
   all flows show a UID near 0%, with only a handful of points below
   5%.]

      Figure 7: Simulation Result with E2E Path Optimization

4.5.  Network Temporal Congestion Elimination

   During the simulations, different degrees of network congestion
   were considered.  To examine the effect of CCDR on link congestion,
   we consider the Congestion Degree (CD) of a link, defined as the
   link utilization beyond its threshold.

   The CCDR congestion elimination performance is shown in Figure 8.
   The first graph is the CD distribution before the process of
   congestion elimination.  The average CD of all congested links is
   about 20%.  The second graph shown in Figure 8 is the CD
   distribution after using the congestion elimination process.  It
   shows that only 12 links among the total of 20000 links exceed the
   threshold, and all the CD values are less than 3%.  Thus, after
   scheduling the traffic away from the congested paths, the degree of
   network congestion is greatly reduced and the network utilization
   is balanced.

   [Figure 8 consists of two scatter plots of CD (%) against link
   number (0 to 20000), on the same 0-20% vertical scale.  Before
   congestion elimination, the CD values of the congested links are
   densely spread between 0% and roughly 20%.  After congestion
   elimination, only a few isolated links remain above threshold, all
   with CD values below 3%.]

      Figure 8: Simulation Result with Congestion Elimination

   It is clear that, by using an active path-computation mechanism
   that takes observed link traffic/congestion into account, the
   occurrence of congestion events can be greatly reduced.  Only when
   a preponderance of links in the network are near their congestion
   threshold will the central controller be unable to find a clear
   path, as opposed to a static metric-based procedure, which will
   produce congested paths once a single bottleneck approaches its
   capacity.  More detailed information about the algorithm can be
   found in [PTCS].

5.  CCDR Deployment Consideration

   The above CCDR scenarios and simulation results demonstrate that a
   single general solution can be found that copes with multiple
   complex situations.  The specific situations considered are not
   known to have any special properties, so it is expected that the
   benefits demonstrated will have general applicability.
   Accordingly, the integrated use of a centralized controller for the
   more complex optimal path computations in a native IP network
   should result in significant improvements without impacting the
   underlay network infrastructure.

   For intra-domain or inter-domain native IP TE scenarios, the
   deployment of a CCDR solution is similar, with the centralized
   controller being able to compute paths and no changes required to
   the underlying network infrastructure.  This universal deployment
   characteristic can facilitate a generic traffic engineering
   solution, where operators do not need to differentiate between
   intra-domain and inter-domain TE cases.

   To deploy the CCDR solution, the PCE should collect the underlay
   network topology dynamically, for example via BGP-LS [RFC7752].  It
   also needs to gather the network traffic information periodically
   from the network management platform.  The simulation results show
   that the PCE can compute the E2E optimal path within seconds; thus,
   it can cope with changes of the underlay network on the scale of
   minutes.
Meeting more agile requirements would require increasing the
   sampling rate of the underlay network and decreasing the detection
   and notification intervals of the underlay network.  The methods
   used to gather this information and to decrease its latency are out
   of the scope of this document.

6.  Security Considerations

   This document considers mainly the integration of distributed
   protocols and the central control capability of a PCE.  While it
   certainly can ease the management of the network in the various
   traffic engineering scenarios described in this document,
   centralized control also introduces a new point that may be
   attacked.  Solutions for CCDR scenarios need to consider protection
   of the PCE and of the communication with the underlay devices.

   [RFC5440] and [RFC8253] provide additional information.

   The control priority and the interaction process should also be
   carefully designed for the combination of the distributed protocols
   and central control.  Generally, central control instructions
   should have higher priority than the forwarding actions determined
   by the distributed protocols.  When the communication between the
   PCE and the underlay devices fails, the distributed protocols
   should take over control of the underlay network.
   [I-D.ietf-teas-pce-native-ip] provides more considerations
   corresponding to the solution.

7.  IANA Considerations

   This document does not require any IANA actions.

8.  Contributors

   Lu Huang contributed to the content of this document.

9.  Acknowledgement

   The authors would like to thank Deborah Brungard, Adrian Farrel,
   Huaimo Chen, Vishnu Beeram, and Lou Berger for their support and
   comments on this document.

   Thanks to Benjamin Kaduk for his careful review and valuable
   suggestions on this document.  Thanks also to Roman Danyliw, Alvaro
   Retana, and Eric Vyncke for their views and comments.

10.  References

10.1.  Normative References

   [RFC5440]  Vasseur, JP., Ed. and JL. Le Roux, Ed., "Path Computation
              Element (PCE) Communication Protocol (PCEP)", RFC 5440,
              DOI 10.17487/RFC5440, March 2009,
              <https://www.rfc-editor.org/info/rfc5440>.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A.,
              and S. Ray, "North-Bound Distribution of Link-State and
              Traffic Engineering (TE) Information Using BGP",
              RFC 7752, DOI 10.17487/RFC7752, March 2016,
              <https://www.rfc-editor.org/info/rfc7752>.

   [RFC8253]  Lopez, D., Gonzalez de Dios, O., Wu, Q., and D. Dhody,
              "PCEPS: Usage of TLS to Provide a Secure Transport for
              the Path Computation Element Communication Protocol
              (PCEP)", RFC 8253, DOI 10.17487/RFC8253, October 2017,
              <https://www.rfc-editor.org/info/rfc8253>.

10.2.  Informative References

   [I-D.ietf-pce-pcep-extension-native-ip]
              Wang, A., Khasanov, B., Cheruathur, S., Zhu, C., and S.
              Fang, "PCEP Extension for Native IP Network", draft-ietf-
              pce-pcep-extension-native-ip-04 (work in progress),
              August 2019.

   [I-D.ietf-teas-pce-native-ip]
              Wang, A., Zhao, Q., Khasanov, B., Chen, H., and R.
              Mallya, "PCE in Native IP Network", draft-ietf-teas-pce-
              native-ip-04 (work in progress), August 2019.

   [PTCS]     Zhang, P., Xie, K., Kou, C., Huang, X., Wang, A., and Q.
              Sun, "A Practical Traffic Control Scheme With Load
              Balancing Based on PCE Architecture", IEEE Access,
              DOI 10.1109/ACCESS.2019.2902610, March 2019.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan,
              V., and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
              Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
              <https://www.rfc-editor.org/info/rfc3209>.

   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
              Decraene, B., Litkowski, S., and R. Shakir, "Segment
              Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
              July 2018, <https://www.rfc-editor.org/info/rfc8402>.

   [RFC8578]  Grossman, E., Ed., "Deterministic Networking Use Cases",
              RFC 8578, DOI 10.17487/RFC8578, May 2019,
              <https://www.rfc-editor.org/info/rfc8578>.

Authors' Addresses

   Aijun Wang
   China Telecom
   Beiqijia Town, Changping District
   Beijing, Beijing  102209
   China

   Email: wangaj3@chinatelecom.cn

   Xiaohong Huang
   Beijing University of Posts and Telecommunications
   No.10 Xitucheng Road, Haidian District
   Beijing
   China

   Email: huangxh@bupt.edu.cn

   Caixia Kou
   Beijing University of Posts and Telecommunications
   No.10 Xitucheng Road, Haidian District
   Beijing
   China

   Email: koucx@lsec.cc.ac.cn

   Zhenqiang Li
   China Mobile
   32 Xuanwumen West Ave, Xicheng District
   Beijing  100053
   China

   Email: li_zhenqiang@hotmail.com

   Penghui Mi
   Huawei Technologies
   Tower C of Bldg.2, Cloud Park, No.2013 of Xuegang Road
   Shenzhen, Bantian, Longgang District  518129
   China

   Email: mipenghui@huawei.com