TEAS Working Group                                               A. Wang
Internet-Draft                                             China Telecom
Intended status: Informational                                  X. Huang
Expires: April 11, 2020                                           C. Kou
                                                                    BUPT
                                                                   Z. Li
                                                            China Mobile
                                                                   P. Mi
                                                     Huawei Technologies
                                                         October 9, 2019

     Scenarios and Simulation Results of PCE in Native IP Network
                 draft-ietf-teas-native-ip-scenarios-10

Abstract

   Requirements for providing End-to-End (E2E) performance assurance
   are emerging within service provider networks.  While various
   technology solutions exist, there is no single solution that can
   fulfill these requirements for a native IP network.  One universal
   E2E solution that covers both intra-domain and inter-domain
   scenarios is needed.

   One feasible E2E traffic engineering solution is the addition of
   central control in a native IP network.  This document describes
   various complex scenarios and simulation results when applying the
   Path Computation Element (PCE) in a native IP network.  This
   solution, referred to as Centralized Control Dynamic Routing (CCDR),
   integrates the advantages of distributed protocols with the power of
   centralized control technology.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on April 11, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  CCDR Scenarios
     3.1.  QoS Assurance for Hybrid Cloud-based Application
     3.2.  Link Utilization Maximization
     3.3.  Traffic Engineering for Multi-Domain
     3.4.  Network Temporal Congestion Elimination
   4.  CCDR Simulation
     4.1.  Case Study for the CCDR Algorithm
     4.2.  Topology Simulation
     4.3.  Traffic Matrix Simulation
     4.4.  CCDR End-to-End Path Optimization
     4.5.  Network Temporal Congestion Elimination
   5.  CCDR Deployment Consideration
   6.  Security Considerations
   7.  IANA Considerations
   8.  Contributors
   9.  Acknowledgement
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Authors' Addresses

1.  Introduction

   A service provider network is composed of thousands of routers that
   run distributed protocols to exchange reachability information.  The
   path towards a destination network is mainly calculated, and
   controlled, by these distributed protocols.  These protocols are
   robust enough to support most applications, but they have difficulty
   supporting the complexities needed for traffic engineering
   applications, e.g., E2E performance assurance or maximizing the link
   utilization within an IP network.

   Multiprotocol Label Switching (MPLS) using Traffic Engineering (TE)
   technology (MPLS-TE) [RFC3209] is one solution for a traffic-
   engineered network, but it introduces an MPLS network and related
   technology that would be an overlay of the IP network.  MPLS-TE
   technology is often used for Label Switched Path (LSP) protection
   and complex path set-up within a domain.  It has not been widely
   deployed to meet E2E (especially inter-domain) dynamic performance
   assurance requirements for an IP network.
   Segment Routing [RFC8402] is another solution that integrates some
   advantages of using a distributed protocol and centralized control
   technology, but it requires the underlying network, especially the
   provider edge routers, to push and pop a deep label stack, and it
   adds complexity when coexisting with a non-Segment Routing network.
   Additionally, it can only maneuver the E2E paths for MPLS and IPv6
   traffic via different mechanisms.

   Deterministic Networking (DetNet) [RFC8578] is another possible
   solution.  It is primarily focused on providing bounded latency for
   a flow and introduces additional requirements on the domain edge
   router.  The current DetNet scope is within one domain.  The use
   cases defined in this document do not require the additional
   complexity of deterministic properties and so differ from the DetNet
   use cases.

   This draft describes scenarios for a native IP network that a
   Centralized Control Dynamic Routing (CCDR) framework can easily
   solve, without requiring a change of the data plane behavior on the
   router.  It also provides path optimization simulation results to
   illustrate the applicability of the CCDR framework.

   This draft is the base document of the following two drafts: the
   universal solution draft, which is suitable for intra-domain and
   inter-domain TE scenarios, is described in
   [I-D.ietf-teas-pce-native-ip]; the related protocol extensions are
   described in [I-D.ietf-pce-pcep-extension-native-ip].

2.  Terminology

   This document uses the following term defined in [RFC5440]: PCE.

   The following terms are defined in this document:

   o  BRAS: Broadband Remote Access Server

   o  CD: Congestion Degree

   o  CR: Core Router

   o  CCDR: Centralized Control Dynamic Routing

   o  E2E: End to End

   o  IDC: Internet Data Center

   o  MAN: Metro Area Network

   o  QoS: Quality of Service

   o  SR: Service Router

   o  TE: Traffic Engineering

   o  UID: Utilization Increment Degree

   o  WAN: Wide Area Network

3.  CCDR Scenarios

   The following sections describe various deployment scenarios for
   applying the CCDR framework.

3.1.  QoS Assurance for Hybrid Cloud-based Application

   With the emergence of cloud computing technologies, enterprises are
   putting more and more services in the public cloud environment,
   while keeping their core business within their private cloud.  The
   communication between the private and public cloud sites spans the
   Wide Area Network (WAN).  The bandwidth requirements between them
   are variable, and the background traffic between these two sites
   varies over time.  Enterprise applications require assurance of the
   E2E Quality of Service (QoS) performance on demand for such variable
   bandwidth services.

   CCDR, which integrates the merits of distributed protocols and the
   power of centralized control, is suitable for this scenario.
   The possible solution framework is illustrated below:

              +------------------------+
              | Cloud Based Application|
              +------------------------+
                          |
                    +-----------+
                    |    PCE    |
                    +-----------+
                          |
                          |
                      //--------------\\
                  /////                \\\\\
   Private Cloud ||     Distributed      || Public Cloud
       Site      ||   Control Network    ||     Site
                  \\\\\                /////
                      \\--------------//

          Figure 1: Hybrid Cloud Communication Scenario

   As illustrated in Figure 1, the source and destination of the "Cloud
   Based Application" traffic are located at the "Private Cloud Site"
   and the "Public Cloud Site" respectively.

   By default, the traffic path between the private and public cloud
   sites is determined by the distributed control network.  When an
   application requires E2E QoS assurance, it can send these
   requirements to the PCE and let the PCE compute one E2E path, based
   on the underlying network topology and real traffic information,
   that accommodates the application's QoS requirements.  Section 4.4
   of this document describes the simulation results for this use case.

3.2.  Link Utilization Maximization

   Network topology within a Metro Area Network (MAN) generally follows
   a star mode, as illustrated in Figure 2, with different types of
   devices connected to different types of customers.  The traffic from
   these customers often shows a tidal pattern: the links between the
   Core Router (CR) and the Broadband Remote Access Server (BRAS), and
   between the CR and the Service Router (SR), experience congestion in
   different periods, because the subscribers under the BRAS mostly use
   the network at night, while the dedicated-line users under the SR
   mostly use the network during the daytime.  The links between the
   BRAS/SR and the CR must each be dimensioned for the maximum traffic
   volume between them, and this causes these links to often be under-
   utilized.

                          +--------+
                          |   CR   |
                          +---|----+
                              |
             -----------------|----------------
             |         |              |       |
           +-|--+    +-|-+          +-|--+  +-|-+
           |BRAS|    |SR |          |BRAS|  |SR |
           +----+    +---+          +----+  +---+

          Figure 2: Star-mode Network Topology within MAN

   If we consider connecting the BRAS/SR with a local link loop (which
   usually has a lower cost) and controlling the overall MAN topology
   with the CCDR framework, we can exploit the tidal pattern on the
   BRAS/CR and SR/CR links, maximizing the utilization of these links
   (which usually have a higher cost).

                              +-------+
                         -----|  PCE  |
                         |    +-------+
                    +----|---+
                    |   CR   |
                    +----|---+
                         |
             ------------|---------------------
             |         |              |       |
           +-|--+    +-|-+          +-|--+  +-|-+
           |BRAS------SR |          |BRAS-----SR |
           +----+    +---+          +----+  +---+

         Figure 3: Link Utilization Maximization via CCDR

3.3.  Traffic Engineering for Multi-Domain

   Service provider networks are often comprised of different domains,
   interconnected with each other, forming a very complex topology as
   illustrated in Figure 4.  Due to the traffic pattern to/from the MAN
   and IDC, the utilization of the links between them is often
   asymmetric.  It is almost impossible to balance the utilization of
   these links via a distributed protocol, but this imbalance can be
   overcome with the CCDR framework.
          +---+                 +---+
          |MAN|-----------------|IDC|
          +-|-+                 +-|-+
            |     ----------      |
            ------|BackBone|-------
            |     ----------      |
            |                     |
          +-|-+                 +-|-+
          |IDC|-----------------|MAN|
          +---+                 +---+

    Figure 4: Traffic Engineering for Complex Multi-Domain Topology

   A solution for this scenario requires the gathering of NetFlow
   information, analysis of the source/destination Autonomous System
   (AS), and determination of the main cause of the congested link.
   After this, the operator can use external Border Gateway Protocol
   (eBGP) sessions to schedule the traffic among the different domains
   according to the solution described in the CCDR framework.

3.4.  Network Temporal Congestion Elimination

   In more general situations, there is often temporal congestion
   within the service provider's network.  Such congestion phenomena
   often appear repeatedly, and if the service provider has a method to
   mitigate them, it will certainly improve its network operations
   capability and increase satisfaction for its customers.  CCDR is
   also suitable for such scenarios: the controller can schedule
   traffic away from the congested links, lowering their utilization
   during these periods.  Section 4.5 describes the simulation results
   for this scenario.

4.  CCDR Simulation

   The following sections describe a case study that illustrates the
   CCDR algorithm, the topology and traffic matrix generation process,
   and the optimization results for E2E QoS-assured paths and
   congestion elimination in the applied scenarios.

   The structure and scale of the simulated topology are similar to
   those of a real network.  Several traffic matrices were generated to
   simulate different congestion conditions in the network; only one of
   them is illustrated here.

4.1.  Case Study for the CCDR Algorithm

   Figure 5 depicts the topology of the network used for the case study
   of the CCDR algorithm.  There are 8 forwarding devices in the
   network.  The original cost and utilization of each link are marked
   in the figure; for example, the original cost and utilization of
   link (1,2) are 3 and 50% respectively.  There are two flows, f1 and
   f2, both from node 1 to node 8.  For simplicity, it is assumed that
   the bandwidth of every link in the network is 10 Mb/s.  The flow
   rate of f1 is 1 Mb/s, and the flow rate of f2 is 2 Mb/s.  The
   congestion threshold of a link is 90% utilization.

   If the OSPF protocol is applied in the network (IS-IS behaves
   similarly, because it also uses Dijkstra's algorithm), the two flows
   from node 1 to node 8 can only use the OSPF path (p1: 1->2->3->8),
   because Dijkstra's algorithm mainly considers the original cost of
   each link.  Since CCDR considers cost and utilization
   simultaneously, the OSPF path will not be selected, due to the
   severe congestion of link (2,3).  In this case, f1 will select the
   path (p2: 1->5->6->7->8), since the new cost of this path is better
   than that of the OSPF path.  Moreover, path p2 is also better than
   the path (p3: 1->2->4->7->8) for flow f1.  However, f2 will not
   select the same path, since that would cause new congestion on link
   (6,7).  As a result, f2 will select the path (p3: 1->2->4->7->8).
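   The selection logic above can be approximated by a simple sequential
   computation: exclude any link whose utilization would exceed the 90%
   threshold after carrying the new flow, run a shortest-path
   computation over the remaining links, and update the link
   utilizations before placing the next flow.  The Python sketch below
   is illustrative only: it is not the simulation code behind this
   document, it models the utilization constraint as a hard exclusion
   rather than the exact penalty-based cost, and the cost/utilization
   of link (7,8) is an assumed value because that label is not legible
   in Figure 5.

      import heapq

      # (cost, utilization) per directed link; every link has a
      # capacity of 10 Mb/s and a congestion threshold of 90%.
      # Values are transcribed from Figure 5; (7,8) is assumed.
      links = {
          (1, 2): (3, 0.50), (2, 3): (4, 0.95), (3, 8): (6, 0.50),
          (2, 4): (7, 0.60), (4, 3): (5, 0.60), (4, 7): (5, 0.45),
          (7, 8): (5, 0.50), (1, 5): (3, 0.60), (5, 6): (5, 0.55),
          (6, 7): (3, 0.75),
      }
      CAPACITY, THRESHOLD = 10.0, 0.90

      def shortest_admissible_path(links, src, dst, rate):
          """Dijkstra over the links that stay below the threshold
          once the new flow of the given rate is added to them."""
          adj = {}
          for (a, b), (cost, util) in links.items():
              if util + rate / CAPACITY <= THRESHOLD:
                  adj.setdefault(a, []).append((b, cost))
          queue, seen = [(0, src, [src])], set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == dst:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, c in adj.get(node, []):
                  heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
          return None, None

      for name, rate in (("f1", 1.0), ("f2", 2.0)):
          cost, path = shortest_admissible_path(links, 1, 8, rate)
          print(name, "->", path, "total cost", cost)
          for a, b in zip(path, path[1:]):   # place the flow
              c, u = links[(a, b)]
              links[(a, b)] = (c, u + rate / CAPACITY)

   With these inputs, f1 is placed on 1->5->6->7->8 and f2 on
   1->2->4->7->8, matching the CCDR result in Figure 5, while a purely
   cost-based computation would put both flows on 1->2->3->8.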
                             +-----+            +-----+
                             |  3  |------------|  8  |
                             +-----+   6/50%    +-----+
                            /       \              |
                     4/95% /         \ 5/60%       |
                          /           \            |
     f1,f2   +-----+   +-----+   +-----+       +-----+
     ======> |  1  |---|  2  |---|  4  |-------|  7  |
             +-----+   +-----+   +-----+       +-----+
                |  3/50%     7/60%       5/45%    |
                |                                 |
                | 3/60% +-----+ 5/55% +-----+     | 3/75%
                +-------|  5  |-------|  6  |-----+
                        +-----+       +-----+

      (a) With Dijkstra's algorithm (OSPF/IS-IS), both f1 and f2
          follow the path p1: 1->2->3->8.

      (b) With the CCDR algorithm, f1 follows the path
          p2: 1->5->6->7->8, and f2 follows the path
          p3: 1->2->4->7->8.

             Figure 5: Case Study for the CCDR Algorithm

4.2.  Topology Simulation

   The network topology mainly contains node and link information.
   Nodes used in the simulation are of two types: core nodes and edge
   nodes.  The core nodes are fully meshed with each other.  The edge
   nodes are connected only with some of the core nodes.  Figure 6 is a
   topology example with 4 core nodes and 5 edge nodes.  In the CCDR
   simulation, 100 core nodes and 400 edge nodes are generated.

                       +----+
                      /|Edge|\
                     | +----+ |
                     |        |
                     |        |
          +----+   +----+   +----+
          |Edge|---|Core|---|Core|---------+
          +----+   +----+   +----+         |
                  /  |  \   /  |           |
         +----+  /   |   \ /   |           |
         |Edge|--    |    X    |           |
         +----+  \   |   / \   |           |
                  \  |  /   \  |           |
          +----+   +----+   +----+         |
          |Edge|---|Core|---|Core|         |
          +----+   +----+   +----+         |
                     |         |           |
                     |         +------\  +----+
                     |                 ---|Edge|
                     +------------------/ +----+

                 Figure 6: Topology of Simulation

   The number of links connecting one edge node to the set of core
   nodes is chosen randomly between 2 and 30, and the total number of
   links is more than 20000.  Each link has a congestion threshold.

4.3.  Traffic Matrix Simulation

   The traffic matrix is generated based on the link capacities of the
   topology.  It can produce many kinds of situations, such as
   congestion, mild congestion, and non-congestion.

   In the CCDR simulation, the dimension of the traffic matrix is
   500*500.  About 20% of the links are overloaded when the Open
   Shortest Path First (OSPF) protocol is used in the network.

4.4.  CCDR End-to-End Path Optimization

   The CCDR E2E path optimization finds the best path, which has the
   lowest metric value while keeping each link of the path well below
   that link's congestion threshold.  Based on the current state of the
   network, the PCE within the CCDR framework combines the shortest
   path algorithm with penalty theory from classical optimization and
   graph theory.
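   One way to picture this combination is as an extra penalty term
   added to each link's original cost, growing as the link's projected
   utilization approaches or exceeds its congestion threshold, so that
   a shortest-path computation naturally steers new flows away from
   (nearly) congested links while still preferring low-cost paths
   elsewhere.  The exact penalty function used in the simulation is not
   specified in this document (see [PTCS] for the underlying scheme);
   the short Python sketch below shows one plausible form of such a
   penalized link weight, purely for illustration.

      def penalized_weight(cost, util, rate, capacity,
                           threshold=0.9, penalty=1000.0):
          """Original link cost plus a penalty that rises sharply
          once the projected utilization exceeds the threshold."""
          projected = util + rate / capacity
          excess = max(0.0, projected - threshold)
          return cost + penalty * excess

      # A low-cost but nearly full link (cost 4, 95% utilized)
      # becomes far less attractive than a moderately priced,
      # lightly loaded one (cost 5, 55% utilized).
      print(round(penalized_weight(4, 0.95, 1.0, 10.0), 1))   # 154.0
      print(round(penalized_weight(5, 0.55, 1.0, 10.0), 1))   # 5.0

   With the Figure 5 values as transcribed in the earlier sketch, a
   shortest-path computation over such weights, updating utilizations
   as each flow is placed, yields the same path choices as the
   Section 4.1 case study.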
   Given an unscheduled background traffic matrix, when a set of new
   flows enters the network, the E2E path optimization finds the
   optimal paths for them.  The selected paths bring the least
   congestion degree to the network.

   The link Utilization Increment Degree (UID) when the new flows are
   added to the network is shown in Figure 7.  The first graph in
   Figure 7 is the UID with OSPF, and the second graph is the UID with
   CCDR E2E path optimization.  The average UID of the first graph is
   more than 30%.  After path optimization, the average UID is less
   than 5%.  The results show that the CCDR E2E path optimization
   achieves a striking decrease in UID relative to the paths chosen
   based on OSPF.

        +-----------------------------------------------------------+
        |                                        *     *     *     *|
      60|                       *      *      *      *      *      *|
        |*   *   **   *   *    *   *    *   **    *   *   *   *   **|
        |*  *  **  *  *  **  ***  **  *  *  **  *  * * ** * * *** **|
        |*  *  *  **  *  ** ** *** *** ** **** ** *** **** ** *** **|
      40|*  *  * ***** ** *** *** *** ** **** ** *** ***** ****** **|
  UID(%)|*  * ******* ** *** *** ******* **** ** *** ***** *********|
        |*** ******* ** **** *********** *********** ***************|
        |******************* *********** *********** ***************|
      20|******************* ***************************************|
        |******************* ***************************************|
        |***********************************************************|
        |***********************************************************|
       0+-----------------------------------------------------------+
        0    100   200  300   400   500  600   700  800   900  1000

        +-----------------------------------------------------------+
        |                                                           |
      60|                                                           |
        |                                                           |
        |                                                           |
        |                                                           |
      40|                                                           |
  UID(%)|                                                           |
        |                                                           |
        |                                                           |
      20|                                                           |
        |                                                          *|
        |                                                        * *|
        |                         *    *    *    *    *   **   *   *|
       0+-----------------------------------------------------------+
        0    100   200  300   400   500  600   700  800   900  1000
                               Flow Number

        Figure 7: Simulation Result of E2E Path Optimization

4.5.  Network Temporal Congestion Elimination

   Different degrees of network congestion were simulated.  The
   Congestion Degree (CD) is defined as the amount by which a link's
   utilization exceeds its congestion threshold.
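   Read this way, both the UID used in Section 4.4 and the CD used here
   are simple functions of per-link utilization.  The small Python
   sketch below shows one plausible reading of these definitions; it is
   an interpretation for illustration only, since the precise formulas
   used in the simulation are not spelled out in this document.

      def utilization_increment_degree(util_before, util_after):
          """UID: how much a link's utilization rises once the new
          flows have been routed over it (in percentage points)."""
          return max(0.0, util_after - util_before) * 100.0

      def congestion_degree(utilization, threshold=0.90):
          """CD: the amount by which a link's utilization exceeds
          its congestion threshold (0 if below the threshold)."""
          return max(0.0, utilization - threshold) * 100.0

      # Example: a link at 93% utilization with a 90% threshold
      # has a CD of about 3%.
      print(round(congestion_degree(0.93), 2))   # 3.0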
   The CCDR congestion elimination performance is shown in Figure 8.
   The first graph is the CD distribution before the congestion
   elimination process.  The average CD of all congested links is about
   20%.  The second graph in Figure 8 is the CD distribution after the
   congestion elimination process.  It shows that only 12 links among
   the total of 20000 links still exceed the threshold, and all the CD
   values are less than 3%.  Thus, after scheduling the traffic away
   from the congested paths, the degree of network congestion is
   greatly reduced and the network utilization is in balance.

                    Before congestion elimination
        +-----------------------------------------------------------+
        |                                   *   **   *   **   **   *|
      20|                             *   *   ****   *   **   **   *|
        |*    *    **    *    **   **   ****   *   *****   *********|
        |*  *  *  *  *  ****  ****** * ** *** **********************|
      15|*  * * ** * ** **** ********* *****************************|
        |*  * ****** ******* ********* *****************************|
  CD(%) |* ********* ******* ***************************************|
      10|* ********* ***********************************************|
        |*********** ***********************************************|
        |***********************************************************|
       5|***********************************************************|
        |***********************************************************|
        |***********************************************************|
       0+-----------------------------------------------------------+
        0             0.5            1            1.5            2

                    After congestion elimination
        +-----------------------------------------------------------+
        |                                                           |
      20|                                                           |
        |                                                           |
        |                                                           |
      15|                                                           |
        |                                                           |
  CD(%) |                                                           |
      10|                                                           |
        |                                                           |
        |                                                           |
       5|                                                           |
        |                                                           |
        |          *    **    *    *    *    **    *    **    *     |
       0+-----------------------------------------------------------+
        0             0.5            1            1.5            2
                            Link Number(*10000)

       Figure 8: Simulation Result with Congestion Elimination

   More detailed information about the algorithm can be found in
   [PTCS].

5.  CCDR Deployment Consideration

   The CCDR scenarios and simulation results above demonstrate that it
   is feasible to find one general solution to cope with various
   complex situations.  The integrated use of a centralized controller
   for the more complex optimal path computations in a native IP
   network results in significant improvements without impacting the
   underlay network infrastructure.

   For both intra-domain and inter-domain native IP TE scenarios, the
   deployment of the CCDR solution is similar.  This universal
   deployment characteristic makes it easier for the operator to tackle
   traffic engineering issues in one general manner.  To deploy the
   CCDR solution, the PCE should collect the underlay network topology
   dynamically, for example via BGP-LS [RFC7752].  It also needs to
   gather the network traffic information periodically from the network
   management platform.  The simulation results show that the PCE can
   compute the E2E optimal path within seconds; it can therefore cope
   with changes in the underlay network on a timescale of minutes.
   More agile requirements call for a higher sampling rate of the
   underlay network and a shorter detection and notification interval
   for underlay network changes.  The methods used to gather this
   information, and to decrease the latency of its collection, are out
   of the scope of this draft.

6.  Security Considerations

   This document mainly considers the integration of distributed
   protocols and the central control capability of a PCE.  While this
   combination can certainly ease the management of the network in the
   various traffic engineering scenarios described in this document,
   the centralized control point also introduces a new target that may
   be attacked.  Solutions for CCDR scenarios need to consider
   protection of the PCE and of the communication with the underlay
   devices.

   [RFC5440] and [RFC8253] provide additional information.

   The control priority and interaction process should also be
   carefully designed for the combination of distributed protocols and
   central control.
   Generally, the central control instructions should have higher
   priority than the forwarding actions determined by the distributed
   protocols.  When the communication between the PCE and the underlay
   devices is interrupted, the distributed protocols should take over
   control of the underlay network.  [I-D.ietf-teas-pce-native-ip]
   provides more considerations corresponding to the solution.

7.  IANA Considerations

   This document does not require any IANA actions.

8.  Contributors

   Lu Huang contributed to the content of this draft.

9.  Acknowledgement

   The authors would like to thank Deborah Brungard, Adrian Farrel,
   Huaimo Chen, Vishnu Beeram, and Lou Berger for their support and
   comments on this draft.

   Thanks to Benjamin Kaduk, Roman Danyliw, Alvaro Retana, and Eric
   Vyncke for their views and comments.

10.  References

10.1.  Normative References

   [RFC5440]  Vasseur, JP., Ed. and JL. Le Roux, Ed., "Path Computation
              Element (PCE) Communication Protocol (PCEP)", RFC 5440,
              DOI 10.17487/RFC5440, March 2009,
              <https://www.rfc-editor.org/info/rfc5440>.

   [RFC7752]  Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A.,
              and S. Ray, "North-Bound Distribution of Link-State and
              Traffic Engineering (TE) Information Using BGP",
              RFC 7752, DOI 10.17487/RFC7752, March 2016,
              <https://www.rfc-editor.org/info/rfc7752>.

   [RFC8253]  Lopez, D., Gonzalez de Dios, O., Wu, Q., and D. Dhody,
              "PCEPS: Usage of TLS to Provide a Secure Transport for
              the Path Computation Element Communication Protocol
              (PCEP)", RFC 8253, DOI 10.17487/RFC8253, October 2017,
              <https://www.rfc-editor.org/info/rfc8253>.

10.2.  Informative References

   [I-D.ietf-pce-pcep-extension-native-ip]
              Wang, A., Khasanov, B., Cheruathur, S., Zhu, C., and S.
              Fang, "PCEP Extension for Native IP Network", draft-ietf-
              pce-pcep-extension-native-ip-04 (work in progress),
              August 2019.

   [I-D.ietf-teas-pce-native-ip]
              Wang, A., Zhao, Q., Khasanov, B., Chen, H., and R.
              Mallya, "PCE in Native IP Network", draft-ietf-teas-pce-
              native-ip-04 (work in progress), August 2019.

   [PTCS]     Zhang, P., Xie, K., Kou, C., Huang, X., Wang, A., and Q.
              Sun, "A Practical Traffic Control Scheme With Load
              Balancing Based on PCE Architecture", IEEE Access,
              18526773, DOI 10.1109/ACCESS.2019.2902610, March 2019.

   [RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
              and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
              Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001,
              <https://www.rfc-editor.org/info/rfc3209>.

   [RFC8402]  Filsfils, C., Ed., Previdi, S., Ed., Ginsberg, L.,
              Decraene, B., Litkowski, S., and R. Shakir, "Segment
              Routing Architecture", RFC 8402, DOI 10.17487/RFC8402,
              July 2018, <https://www.rfc-editor.org/info/rfc8402>.

   [RFC8578]  Grossman, E., Ed., "Deterministic Networking Use Cases",
              RFC 8578, DOI 10.17487/RFC8578, May 2019,
              <https://www.rfc-editor.org/info/rfc8578>.
Authors' Addresses

   Aijun Wang
   China Telecom
   Beiqijia Town, Changping District
   Beijing, Beijing  102209
   China

   Email: wangaj3@chinatelecom.cn

   Xiaohong Huang
   Beijing University of Posts and Telecommunications
   No.10 Xitucheng Road, Haidian District
   Beijing
   China

   Email: huangxh@bupt.edu.cn

   Caixia Kou
   Beijing University of Posts and Telecommunications
   No.10 Xitucheng Road, Haidian District
   Beijing
   China

   Email: koucx@lsec.cc.ac.cn

   Zhenqiang Li
   China Mobile
   32 Xuanwumen West Ave, Xicheng District
   Beijing  100053
   China

   Email: li_zhenqiang@hotmail.com

   Penghui Mi
   Huawei Technologies
   Tower C of Bldg.2, Cloud Park, No.2013 of Xuegang Road
   Shenzhen, Bantian, Longgang District  518129
   China

   Email: mipenghui@huawei.com