1 Network Working Group A. Farrel (Ed.) 2 Internet-Draft J. Drake 3 Intended status: Standards Track Juniper Networks 4 Expires: May 20, 2015 5 N. Bitar 6 Verizon Networks 8 G. Swallow 9 Cisco Systems, Inc. 11 D. Ceccarelli 12 Ericsson 14 X. 
Zhang 15 Huawei 16 November 20, 2014 18 Problem Statement and Architecture for Information Exchange 19 Between Interconnected Traffic Engineered Networks 21 draft-ietf-ccamp-interconnected-te-info-exchange-01.txt 23 Abstract 25 In Traffic Engineered (TE) systems, it is sometimes desirable to 26 establish an end-to-end TE path with a set of constraints (such as 27 bandwidth) across one or more networks from a source to a destination. 28 TE information is the data relating to nodes and TE links that is 29 used in the process of selecting a TE path. The availability of TE 30 information is usually limited to within a network (such as an IGP 31 area) often referred to as a domain. 33 In order to determine the potential to establish a TE path through a 34 series of connected networks, it is necessary to have available a 35 certain amount of TE information about each network. This need not 36 be the full set of TE information available within each network, but 37 does need to express the potential of providing TE connectivity. This 38 subset of TE information is called TE reachability information. 40 This document sets out the problem statement and architecture for the 41 exchange of TE information between interconnected TE networks in 42 support of end-to-end TE path establishment. For reasons that are 43 explained in the document, this work is limited to simple TE 44 constraints and information that determine TE reachability. 46 Status of This Memo 48 This Internet-Draft is submitted in full conformance with the 49 provisions of BCP 78 and BCP 79. 51 Internet-Drafts are working documents of the Internet Engineering 52 Task Force (IETF). Note that other groups may also distribute 53 working documents as Internet-Drafts. The list of current Internet- 54 Drafts is at http://datatracker.ietf.org/drafts/current/. 56 Internet-Drafts are draft documents valid for a maximum of six months 57 and may be updated, replaced, or obsoleted by other documents at any 58 time. 
It is inappropriate to use Internet-Drafts as reference 59 material or to cite them other than as "work in progress." 61 Copyright Notice 63 Copyright (c) 2014 IETF Trust and the persons identified as the 64 document authors. All rights reserved. 66 This document is subject to BCP 78 and the IETF Trust's Legal 67 Provisions Relating to IETF Documents 68 (http://trustee.ietf.org/license-info) in effect on the date of 69 publication of this document. Please review these documents 70 carefully, as they describe your rights and restrictions with respect 71 to this document. Code Components extracted from this document must 72 include Simplified BSD License text as described in Section 4.e of 73 the Trust Legal Provisions and are provided without warranty as 74 described in the Simplified BSD License. 76 Table of Contents 78 1. Introduction ................................................. 5 79 1.1. Terminology ................................................ 6 80 1.1.1. TE Paths and TE Connections .............................. 6 81 1.1.2. TE Metrics and TE Attributes ............................. 6 82 1.1.3. TE Reachability .......................................... 6 83 1.1.4. Domain ................................................... 7 84 1.1.5. Aggregation .............................................. 7 85 1.1.6. Abstraction .............................................. 7 86 1.1.7. Abstract Link ............................................ 7 87 1.1.8. Abstraction Layer Network ................................ 8 88 2. Overview of Use Cases ........................................ 8 89 2.1. Peer Networks .............................................. 8 90 2.1.1. Where is the Destination? ................................ 9 91 2.2. Client-Server Networks ..................................... 10 92 2.3. Dual-Homing ................................................ 12 93 2.4. Requesting Connectivity .................................... 13 94 2.4.1. 
Discovering Server Network Information ................... 15 95 3. Problem Statement ............................................ 15 96 3.1. Use of Existing Protocol Mechanisms ........................ 16 97 3.2. Policy and Filters ......................................... 16 98 3.3. Confidentiality ............................................ 17 99 3.4. Information Overload ....................................... 17 100 3.5. Issues of Information Churn ................................ 18 101 3.6. Issues of Aggregation ...................................... 19 102 3.7. Virtual Network Topology ................................... 20 103 4. Existing Work ................................................ 21 104 4.1. Per-Domain Path Computation ................................ 21 105 4.2. Crankback .................................................. 22 106 4.3. Path Computation Element ................................... 23 107 4.4. GMPLS UNI and Overlay Networks ............................. 24 108 4.5. Layer One VPN .............................................. 25 109 4.6. VNT Manager and Link Advertisement ......................... 25 110 4.7. What Else is Needed and Why? ............................... 26 111 5. Architectural Concepts ....................................... 27 112 5.1. Basic Components ........................................... 27 113 5.1.1. Peer Interconnection ..................................... 27 114 5.1.2. Client-Server Interconnection ............................ 27 115 5.2. TE Reachability ............................................ 28 116 5.3. Abstraction not Aggregation ................................ 29 117 5.3.1. Abstract Links ........................................... 30 118 5.3.2. The Abstraction Layer Network ............................ 30 119 5.3.3. Abstraction in Client-Server Networks..................... 33 120 5.3.4. Abstraction in Peer Networks ............................. 34 121 5.4. 
Considerations for Dynamic Abstraction ..................... 40 122 5.5. Requirements for Advertising Links and Nodes ............... 40 123 5.6. Addressing Considerations .................................. 40 124 6. Building on Existing Protocols ............................... 41 125 6.1. BGP-LS ..................................................... 41 126 6.2. IGPs ....................................................... 41 127 6.3. RSVP-TE .................................................... 41 128 7. Applicability to Optical Domains and Networks ................. 42 129 8. Modeling the User-to-Network Interface ....................... 43 130 9. Abstraction in L3VPN Multi-AS Environments ................... 47 131 10. Scoping Future Work ......................................... 49 132 10.1. Not Solving the Internet .................................. 49 133 10.2. Working With "Related" Domains ............................ 49 134 10.3. Not Finding Optimal Paths in All Situations ............... 49 135 10.4. Not Breaking Existing Protocols ........................... 49 136 10.5. Sanity and Scaling ........................................ 49 137 11. Manageability Considerations ................................ 50 138 12. IANA Considerations ......................................... 50 139 13. Security Considerations ..................................... 50 140 14. Acknowledgements ............................................ 50 141 15. References .................................................. 50 142 15.1. Informative References .................................... 50 143 Authors' Addresses ............................................... 54 144 Contributors ..................................................... 55 146 1. Introduction 148 Traffic Engineered (TE) systems such as MPLS-TE [RFC2702] and GMPLS 149 [RFC3945] offer a way to establish paths through a network in a 150 controlled way that reserves network resources on specified links. 
151 TE paths are computed by examining the Traffic Engineering Database 152 (TED) and selecting a sequence of links and nodes that are capable of 153 meeting the requirements of the path to be established. The TED is 154 constructed from information distributed by the IGP running in the 155 network, for example OSPF-TE [RFC3630] or ISIS-TE [RFC5305]. 157 It is sometimes desirable to establish an end-to-end TE path that 158 crosses more than one network or administrative domain as described 159 in [RFC4105] and [RFC4216]. In these cases, the availability of TE 160 information is usually limited to within each network. Such networks 161 are often referred to as Domains [RFC4726] and we adopt that 162 definition in this document: viz. 164 For the purposes of this document, a domain is considered to be any 165 collection of network elements within a common sphere of address 166 management or path computational responsibility. Examples of such 167 domains include IGP areas and Autonomous Systems. 169 In order to determine the potential to establish a TE path through a 170 series of connected domains and to choose the appropriate domain 171 connection points through which to route a path, it is necessary to 172 have available a certain amount of TE information about each domain. 173 This need not be the full set of TE information available within each 174 domain, but does need to express the potential of providing TE 175 connectivity. This subset of TE information is called TE 176 reachability information. The TE reachability information can be 177 exchanged between domains based on the information gathered from the 178 local routing protocol, filtered by configured policy, or statically 179 configured. 181 This document sets out the problem statement and architecture for the 182 exchange of TE information between interconnected TE domains in 183 support of end-to-end TE path establishment. 
The scope of this 184 document is limited to the simple TE constraints and information 185 (such as TE metrics, hop count, bandwidth, delay, shared risk) 186 necessary to determine TE reachability: supporting multiple 187 additional constraints that might qualify the reachability can 188 significantly complicate the aggregation of information and undermine 189 the stability of the mechanism used to present potential connectivity, 190 as is explained in the body of this document. 192 1.1. Terminology 194 This section introduces some key terms that need to be understood to 195 arrive at a common understanding of the problem space. Some of the 196 terms are defined in more detail in the sections that follow (in 197 which case forward pointers are provided) and some terms are taken 198 from definitions that already exist in other RFCs (in which case 199 references are given, but no apology is made for repeating or 200 summarizing the definitions here). 202 1.1.1. TE Paths and TE Connections 204 A TE connection is a Label Switched Path (LSP) through an MPLS-TE or 205 GMPLS network that directs traffic along a particular path (the TE 206 path) in order to provide a specific service such as bandwidth 207 guarantee, separation of traffic, or resilience between a well-known 208 pair of end points. 210 1.1.2. TE Metrics and TE Attributes 212 TE metrics and TE attributes are terms applied to parameters of links 213 (and possibly nodes) in a network that is traversed by TE 214 connections. The TE metrics and TE attributes are used by path 215 computation algorithms to select the TE paths that the TE connections 216 traverse. Provisioning a TE connection through a network may result 217 in dynamic changes to the TE metrics and TE attributes of the links 218 and nodes in the network. 
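
As an illustration only (this sketch forms no part of any protocol specification, and all link names and numbers in it are invented), the way TE metrics and attributes drive path computation, and the way provisioning dynamically changes them, might be sketched as follows:

```python
import heapq

# Toy TED: directed links as (src, dst) -> TE attributes.
# "metric" is the TE metric; "bw" is the available bandwidth.
ted = {
    ("A", "B"): {"metric": 10, "bw": 100},
    ("B", "C"): {"metric": 10, "bw": 40},
    ("A", "C"): {"metric": 30, "bw": 100},
}

def compute_te_path(ted, src, dst, bw_needed):
    """Shortest path by TE metric, using only links with enough bandwidth."""
    adj = {}
    for (u, v), attrs in ted.items():
        if attrs["bw"] >= bw_needed:          # prune links failing the constraint
            adj.setdefault(u, []).append((v, attrs["metric"]))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in adj.get(node, []):
            heapq.heappush(queue, (cost + metric, nxt, path + [nxt]))
    return None                               # no TE reachability at this bandwidth

def provision(ted, path, bw):
    """Provisioning a TE connection dynamically changes the links' attributes."""
    for u, v in zip(path, path[1:]):
        ted[(u, v)]["bw"] -= bw

# A 50-unit request cannot use B->C (only 40 available): the direct link wins.
print(compute_te_path(ted, "A", "C", 50))   # (30, ['A', 'C'])
# A 20-unit request prefers the cheaper two-hop path...
cost, path = compute_te_path(ted, "A", "C", 20)
provision(ted, path, 20)                     # ...and provisioning it updates the TED
print(ted[("B", "C")]["bw"])                 # 20
```

The pruning step is what distinguishes TE path computation from plain shortest-path routing: constraints (here, bandwidth) are applied before the metric is optimized.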
220 These terms are also sometimes used to describe the end-to-end 221 characteristics of a TE connection and can be derived formulaically 222 from the metrics and attributes of the links and nodes that the TE 223 connection traverses. Thus, for example, the end-to-end delay for a 224 TE connection is usually considered to be the sum of the delay on 225 each link that the connection traverses. 227 1.1.3. TE Reachability 229 In an IP network, reachability is the ability to deliver a packet to 230 a specific address or prefix. That is, the existence of an IP path 231 to that address or prefix. TE reachability is the ability to reach a 232 specific address along a TE path. More specifically, it is the 233 ability to establish a TE connection in an MPLS-TE or GMPLS sense. 234 Thus we talk about TE reachability as the potential of providing TE 235 connectivity. 237 TE reachability may be unqualified (there is a TE path, but no 238 information about available resources or other constraints is 239 supplied) which is helpful especially in determining a path to a 240 destination that lies in an unknown domain, or may be qualified by TE 241 attributes and TE metrics such as hop count, available bandwidth, 242 delay, shared risk, etc. 244 1.1.4. Domain 246 As defined in [RFC4726], a domain is any collection of network 247 elements within a common sphere of address management or path 248 computational responsibility. Examples of such domains include 249 Interior Gateway Protocol (IGP) areas and Autonomous Systems (ASes). 251 1.1.5. Aggregation 253 The concept of aggregation is discussed in Section 3.6. In 254 aggregation, multiple network resources from a domain are represented 255 outside the domain as a single entity. Thus multiple links and nodes 256 forming a TE connection may be represented as a single link, or a 257 collection of nodes and links (perhaps the whole domain) may be 258 represented as a single node with its attachment links. 260 1.1.6. 
Abstraction 262 Section 5.3 introduces the concept of abstraction and distinguishes 263 it from aggregation. Abstraction may be viewed as "policy-based 264 aggregation" where the policies are applied to overcome the issues 265 with aggregation as identified in Section 3 of this document. 267 Abstraction is the process of applying policy to the available TE 268 information within a domain, to produce selective information that 269 represents the potential ability to connect across the domain. Thus, 270 abstraction does not necessarily offer all possible connectivity 271 options, but presents a general view of potential connectivity 272 according to the policies that determine how the domain's 273 administrator wants to allow the domain resources to be used. 275 1.1.7. Abstract Link 277 An abstract link is the representation of the characteristics of a 278 path between two nodes in a domain produced by abstraction. The 279 abstract link is advertised outside that domain as a TE link for use 280 in signaling in other domains. Thus, an abstract link represents 281 the potential to connect between a pair of nodes. 283 More details of abstract links are provided in Section 5.3.1. 285 1.1.8. Abstraction Layer Network 287 The abstraction layer network is introduced in Section 5.3.2. It may 288 be seen as a brokerage layer network between one or more server 289 networks and one or more client networks. The abstraction layer 290 network is the collection of abstract links that provide potential 291 connectivity across the server network(s) and on which path 292 computation can be performed to determine edge-to-edge paths that 293 provide connectivity as links in the client network. 
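
A hedged sketch (invented numbers; the 50% policy cap is purely hypothetical) of how an abstract link's advertised characteristics could be derived from an edge-to-edge path across a server domain: end-to-end delay is the sum over the hops, usable bandwidth is the minimum over the hops, and policy then limits what is exposed outside the domain:

```python
# Hops of one edge-to-edge path across the server domain (invented values).
server_path = [
    {"delay_ms": 2, "bw": 100},
    {"delay_ms": 5, "bw": 60},
    {"delay_ms": 1, "bw": 80},
]

def abstract_link(path_links, policy_bw_fraction):
    """Derive an abstract link: delay sums over hops, bandwidth is the
    minimum over hops, scaled down by policy before advertisement."""
    delay = sum(l["delay_ms"] for l in path_links)
    bw = min(l["bw"] for l in path_links) * policy_bw_fraction
    return {"delay_ms": delay, "bw": bw}

# The domain administrator's policy exposes only half the spare capacity.
adv = abstract_link(server_path, policy_bw_fraction=0.5)
print(adv)  # {'delay_ms': 8, 'bw': 30.0}
```

This is the sense in which abstraction is "policy-based aggregation": the arithmetic is ordinary aggregation, but the policy fraction controls how much of the domain's potential connectivity is actually offered.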
295 In the simplest case, the abstraction layer network is just a set of 296 edge-to-edge connections (i.e., abstract links), but to make the use 297 of server resources more flexible, the abstract links might not all 298 extend from edge to edge, but might offer connectivity between server 299 nodes to form a more complex network. 301 2. Overview of Use Cases 303 2.1. Peer Networks 305 The peer network use case can be most simply illustrated by the 306 example in Figure 1. A TE path is required between the source (Src) 307 and destination (Dst), which are located in different domains. There 308 are two points of interconnection between the domains, and selecting 309 the wrong point of interconnection can lead to a sub-optimal path, or 310 even fail to make a path available. 312 For example, when Domain A attempts to select a path, it may 313 determine that adequate bandwidth is available from Src through both 314 interconnection points x1 and x2. It may pick the path through x1 315 for local policy reasons: perhaps the TE metric is smaller. However, 316 if there is no connectivity in Domain Z from x1 to Dst, the path 317 cannot be established. Techniques such as crankback (see Section 318 4.2) may be used to alleviate this situation, but do not lead to 319 rapid setup or guaranteed optimality. Furthermore, RSVP signaling 320 creates state in the network that is immediately removed by the 321 crankback procedure. Frequent events of such a kind impact 322 scalability in a non-deterministic manner. 324 -------------- -------------- 325 | Domain A | x1 | Domain Z | 326 | ----- +----+ ----- | 327 | | Src | +----+ | Dst | | 328 | ----- | x2 | ----- | 329 -------------- -------------- 331 Figure 1 : Peer Networks 333 There are countless more complicated examples of the problem of peer 334 networks. Figure 2 shows the case where there is a simple mesh of 335 domains. 
Clearly, to find a TE path from Src to Dst, Domain A must 336 not select a path leaving through interconnect x1 since Domain B has 337 no connectivity to Domain Z. Furthermore, in deciding whether to 338 select interconnection x2 (through Domain C) or interconnection x3 339 through Domain D, Domain A must be sensitive to the TE connectivity 340 available through each of Domains C and D, as well as the TE 341 connectivity from each of interconnections x4 and x5 to Dst within 342 Domain Z. 344 -------------- 345 | Domain B | 346 | | 347 | | 348 /-------------- 349 / 350 / 351 /x1 352 --------------/ -------------- 353 | Domain A | | Domain Z | 354 | | -------------- | | 355 | ----- | x2| Domain C | x4| ----- | 356 | | Src | +---+ +---+ | Dst | | 357 | ----- | | | | ----- | 358 | | -------------- | | 359 --------------\ /-------------- 360 \x3 / 361 \ / 362 \ /x5 363 \--------------/ 364 | Domain D | 365 | | 366 | | 367 -------------- 369 Figure 2 : Peer Networks in a Mesh 371 Of course, many network interconnection scenarios are going to be a 372 combination of the situations expressed in these two examples. There 373 may be a mesh of domains, and the domains may have multiple points of 374 interconnection. 376 2.1.1. Where is the Destination? 378 A variation of the problems expressed in Section 2.1 arises when the 379 source domain (Domain A in both figures) does not know where the 380 destination is located. That is, when the domain in which the 381 destination node is located is not known to the source domain. 383 This is most easily seen in consideration of Figure 2 where the 384 decision about which interconnection to select needs to be based on 385 building a path toward the destination domain. Yet this can only be 386 achieved if it is known in which domain the destination node lies, or 387 at least if there is some indication in which direction the 388 destination lies. This function is obviously provided in IP networks 389 by inter-domain routing [RFC4271]. 
391 2.2. Client-Server Networks 393 Two major classes of use case relate to the client-server 394 relationship between networks. These use cases have sometimes been 395 referred to as overlay networks. 397 The first group of use cases, shown in Figure 3, occurs when domains 398 belonging to one network are connected by a domain belonging to 399 another network. In this scenario, once connections (or tunnels) are 400 formed across the lower layer network, the domains of the upper layer 401 network can be merged into a single domain by running IGP adjacencies 402 over the tunnels, and treating the tunnels as links in the higher 403 layer network. The TE relationship between the domains (higher and 404 lower layer) in this case is reduced to determining which tunnels to 405 set up, how to trigger them, how to route them, and what capacity to 406 assign them. As the demands in the higher layer network vary, these 407 tunnels may need to be modified. Section 2.4 explains in a little 408 more detail how connectivity may be requested. 410 -------------- -------------- 411 | Domain A | | Domain Z | 412 | | | | 413 | ----- | | ----- | 414 | | Src | | | | Dst | | 415 | ----- | | ----- | 416 | | | | 417 --------------\ /-------------- 418 \x1 x2/ 419 \ / 420 \ / 421 \---------------/ 422 | Server Domain | 423 | | 424 | | 425 --------------- 427 Figure 3 : Client-Server Networks 429 The second class of use case of client-server networking is for 430 Virtual Private Networks (VPNs). In this case, as opposed to the 431 former one, it is assumed that the client network has a different 432 address space than that of the server layer where non-overlapping IP 433 addresses between the client and the server networks cannot be 434 guaranteed. A simple example is shown in Figure 4. The VPN sites 435 comprise a set of domains that are interconnected over a core domain, 436 the provider network. 
438 -------------- -------------- 439 | Domain A | | Domain Z | 440 | (VPN site) | | (VPN site) | 441 | | | | 442 | ----- | | ----- | 443 | | Src | | | | Dst | | 444 | ----- | | ----- | 445 | | | | 446 --------------\ /-------------- 447 \x1 x2/ 448 \ / 449 \ / 450 \---------------/ 451 | Core Domain | 452 | | 453 | | 454 /---------------\ 455 / \ 456 / \ 457 /x3 x4\ 458 --------------/ \-------------- 459 | Domain B | | Domain C | 460 | (VPN site) | | (VPN site) | 461 | | | | 462 | | | | 463 -------------- -------------- 465 Figure 4 : A Virtual Private Network 467 Note that in the use cases shown in Figures 3 and 4 the client layer 468 domains may (and, in fact, probably do) operate as a single connected 469 network. 471 Both use cases in this section become "more interesting" when 472 combined with the use case in Section 2.1. That is, when the 473 connectivity between higher layer domains or VPN sites is provided 474 by a sequence or mesh of lower layer domains. Figure 5 shows how 475 this might look in the case of a VPN. 477 ------------ ------------ 478 | Domain A | | Domain Z | 479 | (VPN site) | | (VPN site) | 480 | ----- | | ----- | 481 | | Src | | | | Dst | | 482 | ----- | | ----- | 483 | | | | 484 ------------\ /------------ 485 \x1 x2/ 486 \ / 487 \ / 488 \---------- ----------/ 489 | Domain X |x5 | Domain Y | 490 | (core) +---+ (core) | 491 | | | | 492 | +---+ | 493 | |x6 | | 494 /---------- ----------\ 495 / \ 496 / \ 497 /x3 x4\ 498 ------------/ \------------ 499 | Domain B | | Domain C | 500 | (VPN site) | | (VPN site) | 501 | | | | 502 ------------ ------------ 504 Figure 5 : A VPN Supported Over Multiple Server Domains 506 2.3. Dual-Homing 508 A further complication may be added to the client-server relationship 509 described in Section 2.2 by considering what happens when a client 510 domain is attached to more than one server domain, or has two points 511 of attachment to a server domain. Figure 6 shows an example of this 512 for a VPN. 
514 ------------ 515 | Domain A | 516 | (VPN site) | 517 ------------ | ----- | 518 | Domain B | | | Src | | 519 | (VPN site) | | ----- | 520 | | | | 521 ------------\ -+--------+- 522 \x1 | | 523 \ x2| |x3 524 \ | | ------------ 525 \--------+- -+-------- | Domain Z | 526 | Domain X | x8 | Domain Y | x4 | (VPN site) | 527 | (core) +----+ (core) +----+ ----- | 528 | | | | | | Dst | | 529 | +----+ +----+ ----- | 530 | | x9 | | x5 | | 531 /---------- ----------\ ------------ 532 / \ 533 / \ 534 /x6 x7\ 535 ------------/ \------------ 536 | Domain C | | Domain D | 537 | (VPN site) | | (VPN site) | 538 | | | | 539 ------------ ------------ 541 Figure 6 : Dual-Homing in a Virtual Private Network 543 2.4. Requesting Connectivity 545 This relationship between domains can be entirely under the control 546 of management processes, dynamically triggered by the client network, 547 or some hybrid of these cases. In the management case, the server 548 network may be requested to establish a set of LSPs to provide client 549 layer connectivity. In the dynamic case, the client may make a 550 request to the server network exerting a range of controls over the 551 paths selected in the server network. This range extends from no 552 control (i.e., a simple request for connectivity), through a set of 553 constraints (such as latency, path protection, etc.), up to and 554 including full control of the path and resources used in the server 555 network (i.e., the use of explicit paths with label subobjects). 557 There are various models by which a server network can be requested 558 to set up the connections that support a service provided to the 559 client network. These requests may come from management systems, 560 directly from the client network control plane, or through some 561 intermediary broker such as the Virtual Network Topology Manager 562 discussed in Section 4.6. 564 The trigger that causes the request to the server layer is also 565 flexible. 
It could be that the client layer discovers a pressing 566 need for server layer resources (such as the desire to provision an 567 end-to-end connection in the client layer, or severe congestion on 568 a specific path), or it might be that a planning application has 569 considered how best to optimize traffic in the client network or 570 how to handle a predicted traffic demand. 572 In all cases, the relationship between client and server networks is 573 subject to policy so that server resources are under the 574 administrative control of the operator of the server layer network 575 and are only used to support a client layer network in ways that the 576 server layer operator approves. 578 As just noted, connectivity requests issued to a server network may 579 include varying degrees of constraint upon the choice of path that 580 the server network can implement. 582 o Basic Provisioning is a simple request for connectivity. The only 583 constraints are the end points of the connection and the capacity 584 (bandwidth) that the connection will support for the client layer. 585 In the case of some server networks, even the bandwidth component 586 of a basic provisioning request is superfluous because the server 587 layer has no facility to vary bandwidth, but can offer connectivity 588 only at a default capacity. 590 o Basic Provisioning with Optimization is a service request that 591 indicates one or more metrics that the server layer must optimize 592 in its selection of a path. Metrics may be hop count, path length, 593 summed TE metric, jitter, delay, or any number of 594 technology-specific constraints. 596 o Basic Provisioning with Optimization and Constraints enhances the 597 optimization process to apply absolute constraints to functions of 598 the path metrics. For example, a connection may be requested that 599 optimizes for the shortest path, but in any case requests that the 600 end-to-end delay be less than a certain value. 
Equally, 601 optimization may be expressed in terms of the impact on the network. 602 For example, a service may be requested in order to leave maximal 603 flexibility to satisfy future service requests. 605 o Fate Diversity requests ask for the server layer to provide a path 606 that does not use any network resources (usually links and nodes) 607 that share fate (i.e., can fail as the result of a single event) as 608 the resources used by another connection. This allows the client 609 layer to construct protection services over the server layer 610 network, for example by establishing virtual links that are known 611 to be fate diverse. The connections that have diverse paths need 612 not share end points. 614 o Provisioning with Fate Sharing is the exact opposite of Fate 615 Diversity. In this case two or more connections are requested 616 to follow the same path in the server network. This may be requested, 617 for example, to create a bundled or aggregated link in the client 618 layer where each component of the client layer composite link is 619 required to have the same server layer properties (metrics, delay, 620 etc.) and the same failure characteristics. 622 o Concurrent Provisioning enables the inter-related connection 623 requests described in the previous two bullets to be enacted 624 through a single, compound service request. 626 o Service Resilience requests the server layer to provide 627 connectivity for which the server layer takes responsibility to 628 recover from faults. The resilience may be achieved through the 629 use of link-level protection, segment protection, end-to-end 630 protection, or recovery mechanisms. 632 2.4.1. Discovering Server Network Information 634 Although the topology and resource availability information of a 635 server network may be hidden from the client network, the service 636 request interface may support features that report details about the 637 services and potential services that the server network supports. 
639 o Reporting of path details, service parameters, and issues such as 640 path diversity of LSPs that support deployed services allows the 641 client network to understand to what extent its requests were 642 satisfied. This is particularly important when the requests were 643 made as "best effort". 645 o A server network may support requests of the form "if I was to ask 646 you for this service, would you be able to provide it?" That is, 647 a service request that does everything except actually provision 648 the service. 650 3. Problem Statement 652 The problem statement presented in this section is as much about the 653 issues that may arise in any solution (and so have to be avoided) 654 and the features that are desirable within a solution, as it is about 655 the actual problem to be solved. 657 The problem can be stated very simply and with reference to the use 658 cases presented in the previous section. 660 A mechanism is required that allows TE-path computation in one 661 domain to make informed choices about the TE-capabilities and exit 662 points from the domain when signaling an end-to-end TE path that 663 will extend across multiple domains. 665 Thus, the problem is one of information collection and presentation, 666 not about signaling. Indeed, the existing signaling mechanisms for 667 TE LSP establishment are likely to prove adequate [RFC4726] with the 668 possibility of minor extensions. 670 An interesting annex to the problem is how the path is made available 671 for use. For example, in the case of a client-server network, the 672 path established in the server network needs to be made available as 673 a TE link to provide connectivity in the client network. 675 3.1. Use of Existing Protocol Mechanisms 677 TE information may currently be distributed in a domain by TE 678 extensions to one of the two IGPs as described in OSPF-TE [RFC3630] 679 and ISIS-TE [RFC5305]. 
TE information may be exported from a domain 680 (for example, northbound) using link state extensions to BGP 681 [I-D.ietf-idr-ls-distribution]. 683 It is desirable that a solution to the problem described in this 684 document does not require the implementation of a new, network-wide 685 protocol. Instead, it would be advantageous to make use of an 686 existing protocol that is commonly implemented on network nodes and 687 is currently deployed, or to use existing computational elements such 688 as Path Computation Elements (PCEs). This has many benefits in 689 network stability, time to deployment, and operator training. 691 It is recognized, however, that existing protocols are unlikely to be 692 immediately suitable to this problem space without some protocol 693 extensions. Extending protocols must be done with care and with 694 consideration for the stability of existing deployments. In extreme 695 cases, a new protocol can be preferable to a messy hack of an 696 existing protocol. 698 3.2. Policy and Filters 700 A solution must be amenable to the application of policy and filters. 701 That is, the operator of a domain that is sharing information with 702 another domain must be able to apply controls to what information is 703 shared. Furthermore, the operator of a domain that has information 704 shared with it must be able to apply policies and filters to the 705 received information. 707 Additionally, the path computation within a domain must be able to 708 weight the information received from other domains according to local 709 policy such that the resultant computed path meets the local 710 operator's needs and policies rather than those of the operators of 711 other domains. 713 3.3. Confidentiality 715 A feature of the policy described in Section 3.2 is that an operator 716 of a domain may desire to keep confidential the details about its 717 internal network topology and loading. This information could be 718 construed as commercially sensitive.
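The export controls described in Sections 3.2 and 3.3 amount to filtering and trimming TE information before it leaves a domain. A minimal sketch follows; the attribute names and the shape of the policy are purely hypothetical illustrations, not any defined protocol structure:

```python
# Illustrative sketch of export-policy filtering applied to TE link
# records before they are shared with another domain. All field names
# ("confidential", "exported_attributes", etc.) are assumptions.

def filter_te_links(te_links, policy):
    """Apply an operator's export policy to a list of TE link records."""
    shared = []
    for link in te_links:
        # Withhold links the operator has marked as confidential.
        if link.get("confidential"):
            continue
        # Export only the attributes the policy permits.
        shared.append({k: v for k, v in link.items()
                       if k in policy["exported_attributes"]})
    return shared

policy = {"exported_attributes": {"src", "dst", "bandwidth"}}
links = [
    {"src": "x1", "dst": "x2", "bandwidth": 10, "srlg": 42},
    {"src": "x2", "dst": "x3", "bandwidth": 40, "confidential": True},
]
# Only the first link is exported, stripped of its SRLG detail.
```

The receiving domain would apply its own policies to the imported records in the same fashion, as Section 3.2 requires of both parties.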
720 Although it is possible that TE information exchange will take place 721 only between parties that have significant trust, there are also use 722 cases (such as the VPN supported over multiple server domains 723 described in Section 2.4) where information will be shared between 724 domains that have a commercial relationship, but a low level of 725 trust. 727 Thus, it must be possible for a domain to limit the information shared 728 to just that which the computing domain needs to know, with the 729 understanding that the less information that is made available, the more 730 likely it is that the result will be a less optimal path and/or more 731 crankback events. 733 3.4. Information Overload 735 One reason that networks are partitioned into separate domains is to 736 reduce the set of information that any one router has to handle. 737 This also applies to the volume of information that routing protocols 738 have to distribute. 740 Over the years routers have become more sophisticated with greater 741 processing capabilities and more storage, the control channels on 742 which routing messages are exchanged have become higher capacity, and 743 the routing protocols (and their implementations) have become more 744 robust. Thus, some of the arguments in favor of dividing a network 745 into domains may have been reduced. Conversely, however, the size of 746 networks continues to grow dramatically with a consequent increase in 747 the total amount of routing-related information available. 748 Additionally, in this case, the problem space spans two or more 749 networks. 751 Any solution to the problems voiced in this document must be aware of 752 the issues of information overload. If the solution were simply to 753 share all TE information between all domains in the network, the 754 effect from the point of view of the information load would be to 755 create one single flat network domain.
Thus the solution must 756 deliver enough information to make the computation practical (i.e., 757 to solve the problem), but not so much as to overload the receiving 758 domain. Furthermore, the solution cannot simply rely on the policies 759 and filters described in Section 3.2 because such filters might not 760 always be enabled. 762 3.5. Issues of Information Churn 764 As LSPs are set up and torn down, the available TE resources on links 765 in the network change. In order to reliably compute a TE path 766 through a network, the computation point must have an up-to-date view 767 of the available TE resources. However, collecting this information 768 may result in considerable load on the distribution protocol and 769 churn in the stored information. In order to deal with this problem 770 even in a single domain, updates are sent at periodic intervals or 771 whenever there is a significant change in resources, whichever 772 happens first. 774 Consider, for example, that a TE LSP may traverse ten links in a 775 network. When the LSP is set up or torn down, the resources 776 available on each link will change resulting in a new advertisement 777 of the link's capabilities and capacity. If the arrival rate of new 778 LSPs is relatively fast, and the hold times relatively short, the 779 network may be in a constant state of flux. Note that the 780 problem here is not limited to churn within a single domain, since 781 the information shared between domains will also be changing. 782 Furthermore, the information that one domain needs to share with 783 another may change as the result of LSPs that are contained within or 784 cross the first domain but which are of no direct relevance to the 785 domain receiving the TE information. 787 In packet networks, where the capacity of an LSP is often a small 788 fraction of the resources available on any link, this issue is 789 partially addressed by the advertising routers. 
They can apply a 790 threshold so that they do not bother to update the advertisement of 791 available resources on a link if the change is less than a configured 792 percentage of the total (or alternatively, the remaining) resources. 793 The updated information in that case will be disseminated based on an 794 update interval rather than a resource change event. 796 In non-packet networks, where link resources are physical switching 797 resources (such as timeslots or wavelengths) the capacity of an LSP 798 may more frequently be a significant percentage of the available link 799 resources. Furthermore, in some switching environments, it is 800 necessary to achieve end-to-end resource continuity (such as using 801 the same wavelength on the whole length of an LSP), so it is far more 802 desirable to keep the TE information held at the computation points 803 up-to-date. Fortunately, non-packet networks tend to be quite a bit 804 smaller than packet networks, the arrival rates of non-packet LSPs 805 are much lower, and the hold times considerably longer. Thus the 806 information churn may be sustainable. 808 3.6. Issues of Aggregation 810 One possible solution to the issues raised in other sub-sections of 811 this section is to aggregate the TE information shared between 812 domains. Two aggregation mechanisms are often considered: 814 - Virtual node model. In this view, the domain is aggregated as if 815 it was a single node (or router / switch). Its links to other 816 domains are presented as real TE links, but the model assumes that 817 any LSP entering the virtual node through a link can be routed to 818 leave the virtual node through any other link (although recent work 819 on "limited cross-connect switches" may help with this problem 820 [I-D.ietf-ccamp-general-constraint-encode]). 822 - Virtual link model. In this model, the domain is reduced to a set 823 of edge-to-edge TE links. 
Thus, when computing a path for an LSP 824 that crosses the domain, a computation point can see which domain 825 entry points can be connected to which others and with what TE 826 attributes. 828 It is of the nature of aggregation that information is removed from 829 the system. This can cause inaccuracies and failed path computation. 830 For example, in the virtual node model there might not actually be a 831 TE path available between a pair of domain entry points, but the 832 model lacks the sophistication to represent this "limited cross- 833 connect capability" within the virtual node. On the other hand, in 834 the virtual link model it may prove very hard to aggregate multiple 835 link characteristics: for example, there may be one path available 836 with high bandwidth, and another with low delay, but this does not 837 mean that the connectivity should be assumed or advertised as having 838 both high bandwidth and low delay. 840 The trick to this multidimensional problem, therefore, is to 841 aggregate in a way that retains as much useful information as 842 possible while removing the data that is not needed. An important 843 part of this trick is a clear understanding of what information is 844 actually needed. 846 It should also be noted in the context of Section 3.5 that changes in 847 the information within a domain may have a bearing on what aggregated 848 data is shared with another domain. Thus, while the data shared is 849 reduced, the aggregation algorithm (operating on the routers 850 responsible for sharing information) may be heavily exercised. 852 3.7. Virtual Network Topology 854 The terms "virtual topology" and "virtual network topology" have 855 become overloaded in a relatively short time. We draw on [RFC5212] 856 and [RFC5623] for inspiration to provide a definition for use in this 857 document.
Our definition is based on the fact that a topology at the 860 client network layer is constructed of nodes and links. Typically, 861 the nodes are routers in the client layer, and the links are data 862 links. However, a layered network provides connectivity through the 863 lower layer as LSPs, and these LSPs can provide links in the client 864 layer. Furthermore, those LSPs may have been established in advance, 865 or might be LSPs that could be set up if required. This leads to the 866 definition: 868 A Virtual Network Topology (VNT) is made up of links in a network 869 layer. Those links may be realized as direct data links or as 870 multi-hop connections (LSPs) in a lower network layer. Those 871 underlying LSPs may be established in advance or created on demand. 873 The creation and management of a VNT requires interaction with 874 management and policy. Activity is needed in both the client and 875 server layers: 877 - In the server layer, LSPs need to be set up either in advance in 878 response to management instructions or in answer to dynamic 879 requests subject to policy considerations. 881 - In the server layer, evaluation of available TE resources can lead 882 to the announcement of potential connectivity (i.e., LSPs that 883 could be set up on demand). 885 - In the client layer, connectivity (lower layer LSPs or potential 886 LSPs) needs to be announced in the IGP as a normal TE link. Such 887 links may or may not be made available to IP routing, but they are 888 never made available to IP routing until fully instantiated. 890 - In the client layer, requests to establish lower layer LSPs need to 891 be made either when links supported by potential LSPs are about to 892 be used (i.e., when a higher layer LSP is signaled to cross the 893 link, the setup of the lower layer LSP is triggered), or when the 894 client layer determines it needs more connectivity or capacity.
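The triggered-setup behavior in the last bullet above can be sketched informally. This is illustrative only: the data structures and the bandwidth-based policy rule are assumptions for the sketch, not a defined mechanism:

```python
# Hedged sketch: a client-layer LSP is signaled to cross a link that is
# supported by a potential (not yet instantiated) lower-layer LSP.
# The trigger to set up that lower-layer LSP is subject to server-layer
# policy. All names and the policy rule are hypothetical.

def use_client_link(link, server_policy):
    """Return True if the link can be used, instantiating its
    lower-layer LSP on demand when server-layer policy permits."""
    if link["instantiated"]:
        return True
    # The server-layer policy point decides whether to honour the
    # trigger, e.g. declining to activate 100G optical resources
    # for a micro-flow.
    if server_policy(link):
        link["instantiated"] = True  # set up the lower-layer LSP
        return True
    return False

# Example policy: only instantiate when the requested bandwidth is at
# least 10% of the lower-layer resource being committed.
policy = lambda link: link["requested_bw"] >= 0.1 * link["capacity"]

micro = {"instantiated": False, "requested_bw": 0.001, "capacity": 100}
bulk = {"instantiated": False, "requested_bw": 40, "capacity": 100}
```

Under this sketch the micro-flow is refused while the bulk request triggers instantiation, which is the kind of operator control discussed next.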
896 It is fundamental to the use of a VNT that there is a policy point 897 at the lower-layer node responsible for the instantiation of a lower- 898 layer LSP. At the moment that the setup of a lower-layer LSP is 899 triggered, whether from a client-layer management tool or from 900 signaling in the client layer, the server layer must be able to apply 901 policy to determine whether to actually set up the LSP. Thus, fears 902 that a micro-flow in the client layer might cause the activation of 903 100G optical resources in the server layer can be completely 904 controlled by the policy of the server layer network's operator (and 905 could even be subject to commercial terms). 907 These activities require an architecture and protocol elements as 908 well as management components and policy elements. 910 4. Existing Work 912 This section briefly summarizes relevant existing work that is used 913 to route TE paths across multiple domains. 915 4.1. Per-Domain Path Computation 917 The per-domain mechanism of path establishment is described in 918 [RFC5152] and its applicability is discussed in [RFC4726]. In 919 summary, this mechanism assumes that each domain entry point is 920 responsible for computing the path across the domain, but that 921 details of the path in the next domain are left to the next domain 922 entry point. The computation may be performed directly by the entry 923 point or may be delegated to a computation server. 925 This basic mode of operation can run into many of the issues 926 described alongside the use cases in Section 2. However, in practice 927 it can be used effectively with a little operational guidance. 929 For example, RSVP-TE [RFC3209] includes the concept of a "loose hop" 930 in the explicit path that is signaled. This allows the original 931 request for an LSP to list the domains or even domain entry points to 932 include on the path. Thus, in the example in Figure 1, the source 933 can be told to use the interconnection x2.
Then the source computes 934 the path from itself to x2, and initiates the signaling. When the 935 signaling message reaches Domain Z, the entry point to the domain 936 computes the remaining path to the destination and continues the 937 signaling. 939 Another alternative suggested in [RFC5152] is to make TE routing 940 attempt to follow inter-domain IP routing. Thus, in the example 941 shown in Figure 2, the source would examine the BGP routing 942 information to determine the correct interconnection point for 943 forwarding IP packets, and would use that to compute and then signal 944 a path for Domain A. Each domain in turn would apply the same 945 approach so that the path is progressively computed and signaled 946 domain by domain. 948 Although the per-domain approach has many issues and drawbacks in 949 terms of achieving optimal (or, indeed, any) paths, it has been the 950 mainstay of inter-domain LSP set-up to date. 952 4.2. Crankback 954 Crankback addresses one of the main issues with per-domain path 955 computation: what happens when an initial path is selected that 956 cannot be completed toward the destination? For example, what 957 happens if, in Figure 2, the source attempts to route the path 958 through interconnection x2, but Domain C does not have the right TE 959 resources or connectivity to route the path further? 961 Crankback for MPLS-TE and GMPLS networks is described in [RFC4920] 962 and is based on a concept similar to the Acceptable Label Set 963 mechanism described for GMPLS signaling in [RFC3473]. When a node 964 (i.e., a domain entry point) is unable to compute a path further 965 across the domain, it returns an error message in the signaling 966 protocol that states where the blockage occurred (link identifier, 967 node identifier, domain identifier, etc.) and gives some clues about 968 what caused the blockage (bad choice of label, insufficient bandwidth 969 available, etc.). 
This information allows a previous computation 970 point to select an alternative path, or to aggregate crankback 971 information and return it upstream to a previous computation point. 973 Crankback is a very powerful mechanism and can be used to find an 974 end-to-end path in a multi-domain network if one exists. 976 On the other hand, crankback can be quite resource-intensive as 977 signaling messages and path setup attempts may "wander around" in the 978 network attempting to find the correct path for a long time. Since 979 RSVP-TE signaling ties up network resources for partially 980 established LSPs, since network conditions may be in flux, and most 981 particularly since LSP setup within well-known time limits is highly 982 desirable, crankback is not a popular mechanism. 984 Furthermore, even if crankback can always find an end-to-end path, it 985 does not guarantee to find the optimal path. (Note that there have 986 been some academic proposals to use signaling-like techniques to 987 explore the whole network in order to find optimal paths, but these 988 tend to place even greater burdens on network processing.) 990 4.3. Path Computation Element 992 The Path Computation Element (PCE) is introduced in [RFC4655]. It is 993 an abstract functional entity that computes paths. Thus, in the 994 example of per-domain path computation (Section 4.1) the source node 995 and each domain entry point is a PCE. On the other hand, the PCE can 996 also be realized as a separate network element (a server) to which 997 computation requests can be sent using the Path Computation Element 998 Communication Protocol (PCEP) [RFC5440]. 1000 Each PCE has responsibility for computations within a domain, and has 1001 visibility of the attributes within that domain. This immediately 1002 enables per-domain path computation with the opportunity to off-load 1003 complex, CPU-intensive, or memory-intensive computation functions 1004 from routers in the network.
But the use of PCE in this way does not 1005 solve any of the problems articulated in Sections 4.1 and 4.2. 1007 Two significant mechanisms for cooperation between PCEs have been 1008 described. These mechanisms are intended to specifically address the 1009 problems of computing optimal end-to-end paths in multi-domain 1010 environments. 1012 - The Backward-Recursive PCE-Based Computation (BRPC) mechanism 1013 [RFC5441] involves cooperation between the set of PCEs along the 1014 inter-domain path. Each one computes the possible paths from 1015 domain entry point (or source node) to domain exit point (or 1016 destination node) and shares the information with its upstream 1017 neighbor PCE which is able to build a tree of possible paths 1018 rooted at the destination. The PCE in the source domain can 1019 select the optimal path. 1021 BRPC is sometimes described as "crankback at computation time". It 1022 is capable of determining the optimal path in a multi-domain 1023 network, but depends on knowing the domain that contains the 1024 destination node. Furthermore, the mechanism can become quite 1025 complicated and involve a lot of data in a mesh of interconnected 1026 domains. Thus, BRPC is most often proposed for a simple mesh of 1027 domains and specifically for a path that will cross a known 1028 sequence of domains, but where there may be a choice of domain 1029 interconnections. In this way, BRPC would only be applied to 1030 Figure 2 if a decision had been made (externally) to traverse 1031 Domain C rather than Domain D (notwithstanding that it could 1032 functionally be used to make that choice itself), but BRPC could be 1033 used very effectively to select between interconnections x1 and x2 1034 in Figure 1. 1036 - Hierarchical PCE (H-PCE) [RFC6805] offers a parent PCE that is 1037 responsible for navigating a path across the domain mesh and for 1038 coordinating intra-domain computations by the child PCEs 1039 responsible for each domain. 
This approach makes computing an end- 1040 to-end path across a mesh of domains far more tractable. However, 1041 it still leaves unanswered the issue of determining the location of 1042 the destination (i.e., discovering the destination domain) as 1043 described in Section 2.1.1. Furthermore, it raises the question of 1044 who operates the parent PCE especially in networks where the 1045 domains are under different administrative and commercial control. 1047 It should also be noted that [RFC5623] discusses how PCE is used in a 1048 multi-layer network with coordination between PCEs operating at each 1049 network layer. Further issues and considerations of the use of PCE 1050 can be found in [RFC7399]. 1052 4.4. GMPLS UNI and Overlay Networks 1054 [RFC4208] defines the GMPLS User-to-Network Interface (UNI) to 1055 present a routing boundary between an overlay network and the core 1056 network, i.e. the client-server interface. In the client network, 1057 the nodes connected directly to the core network are known as edge 1058 nodes, while the nodes in the server network are called core nodes. 1060 In the overlay model defined by [RFC4208] the core nodes act as a 1061 closed system and the edge nodes do not participate in the routing 1062 protocol instance that runs among the core nodes. Thus the UNI 1063 allows access to and limited control of the core nodes by edge nodes 1064 that are unaware of the topology of the core nodes. This respects 1065 the operational and layer boundaries while scaling the network. 1067 [RFC4208] does not define any routing protocol extension for the 1068 interaction between core and edge nodes but allows for the exchange 1069 of reachability information between them. In terms of a VPN, the 1070 client network can be considered as the customer network comprised 1071 of a number of disjoint sites, and the edge nodes match the VPN CE 1072 nodes. Similarly, the provider network in the VPN model is 1073 equivalent to the server network. 
1075 [RFC4208] is, therefore, a signaling-only solution that allows edge 1076 nodes to request connectivity across the core network, and leaves the 1077 core network to select the paths and set up the core LSPs. This 1078 solution is supplemented by a number of signaling extensions such as 1079 [RFC4874], [RFC5553], [I-D.ietf-ccamp-xro-lsp-subobject], 1080 [I-D.ietf-ccamp-rsvp-te-srlg-collect], and 1081 [I-D.ietf-ccamp-te-metric-recording] to give the edge node more 1082 control over the LSP that the core network will set up by exchanging 1083 information about core LSPs that have been established and by 1084 allowing the edge nodes to supply additional constraints on the core 1085 LSPs that are to be set up. 1087 Nevertheless, in this UNI/overlay model, the edge node has limited 1088 information about precisely what LSPs could be set up across the core, 1089 and what TE services (such as diverse routes for end-to-end 1090 protection, end-to-end bandwidth, etc.) can be supported. 1092 4.5. Layer One VPN 1094 A Layer One VPN (L1VPN) is a service offered by a core layer 1 1095 network to provide layer 1 connectivity (TDM, LSC) between two or 1096 more customer networks in an overlay service model [RFC4847]. 1098 As in the UNI case, the customer edge has some control over the 1099 establishment and type of the connectivity. In the L1VPN context 1100 three different service models have been defined, classified by the 1101 semantics of information exchanged over the customer interface: 1102 Management Based, Signaling Based (a.k.a. basic), and Signaling and 1103 Routing service model (a.k.a. enhanced). 1105 In the management based model, all edge-to-edge connections are set 1106 up using configuration and management tools. This is not a dynamic 1107 control plane solution and need not concern us here.
1109 In the signaling based service model [RFC5251] the CE-PE interface 1110 allows only for signaling message exchange, and the provider network 1111 does not export any routing information about the core network. VPN 1112 membership is known a priori (presumably through configuration) or is 1113 discovered using a routing protocol [RFC5195], [RFC5252], [RFC5523], 1114 as is the relationship between CE nodes and ports on the PE. This 1115 service model is much in line with GMPLS UNI as defined in [RFC4208]. 1117 In the enhanced model there is an additional limited exchange of 1118 routing information over the CE-PE interface between the provider 1119 network and the customer network. The enhanced model considers four 1120 different types of service models, namely: Overlay Extension, Virtual 1121 Node, Virtual Link and Per-VPN service models. All of these 1122 represent particular cases of the TE information aggregation and 1123 representation. 1125 4.6. VNT Manager and Link Advertisement 1127 As discussed in Section 3.7, operation of a VNT requires policy and 1128 management input. In order to handle this, [RFC5623] introduces the 1129 concept of the Virtual Network Topology Manager (VNTM). This is a 1130 functional component that applies policy to requests from client 1131 networks (or agents of the client network, such as a PCE) for the 1132 establishment of LSPs in the server network to provide connectivity 1133 in the client network. 1135 The VNTM would, in fact, form part of the provisioning path for all 1136 server network LSPs whether they are set up ahead of client network 1137 demand or triggered by end-to-end client network LSP signaling. 1139 An important companion to this function is determining how the LSP 1140 set up across the server network is made available as a TE link in 1141 the client network. Obviously, if the LSP is established using 1142 management intervention, the subsequent client network TE link can 1143 also be configured manually. 
However, if the LSP is signaled 1144 dynamically there is a need for the end points to exchange the link 1145 properties that they should advertise within the client network, and 1146 in the case of a server network that supports more than one client, 1147 it will be necessary to indicate which client or clients can use the 1148 link. This capability is provided in [RFC6107]. 1150 Note that a potential server network LSP that is advertised as a TE 1151 link in the client network might be determined dynamically by 1152 the edge nodes. In this case there will need to be some effort to 1153 ensure that both ends of the link have the same view of the available 1154 TE resources, or else the advertised link will be asymmetrical. 1156 4.7. What Else is Needed and Why? 1158 As can be seen from Sections 4.1 through 4.6, a lot of effort has 1159 focused on client-server networks as described in Figure 3. Far less 1160 consideration has been given to network peering or the combination of 1161 the two use cases. 1163 Various work has been suggested to extend the definition of the UNI 1164 such that routing information can be passed across the interface. 1165 However, this approach seems to break the architectural concept of 1166 network separation that the UNI facilitates. 1168 Other approaches are working toward a flattening of the network with 1169 complete visibility into the server networks being made available in 1170 the client network. These approaches, while functional, ignore the 1171 main reasons for introducing network separation in the first place. 1173 The remainder of this document introduces a new approach based on 1174 network abstraction that allows a server network to use its own 1175 knowledge of its resources and topology combined with its own 1176 policies to determine what edge-to-edge connectivity capabilities it 1177 will inform the client networks about. 1179 5. Architectural Concepts 1181 5.1.
Basic Components 1183 This section revisits the use cases from Section 2 to present the 1184 basic architectural components that provide connectivity in the 1185 peer and client-server cases. These component models can then be 1186 used in later sections to enable discussion of a solution 1187 architecture. 1189 5.1.1. Peer Interconnection 1191 Figure 7 shows the basic architectural concepts for connecting across 1192 peer networks. Nodes from four networks are shown: A1 and A2 come 1193 from one network; B1, B2, and B3 from another network; etc. The 1194 interfaces between the networks (sometimes known as External Network- 1195 to-Network Interfaces - ENNIs) are A2-B1, B3-C1, and C3-D1. 1197 The objective is to be able to support an end-to-end connection A1- 1198 to-D2. This connection is for TE connectivity. 1200 As shown in the figure, LSP tunnels that span the transit networks 1201 are used to achieve the required connectivity. These transit LSPs 1202 form the key building blocks of the end-to-end connectivity. 1204 The transit tunnels can be used as hierarchical LSPs [RFC4206] to 1205 carry the end-to-end LSP, or can become stitching segments [RFC5150] 1206 of the end-to-end LSP. The transit tunnels B1-B3 and C1-C3 can be 1207 presented as abstract links as discussed in Section 5.3. 1209 : : : 1210 Network A : Network B : Network C : Network D 1211 : : : 1212 -- -- -- -- -- -- -- -- -- -- 1213 |A1|--|A2|---|B1|--|B2|--|B3|---|C1|--|C2|--|C3|---|D1|--|D2| 1214 -- -- | | -- | | | | -- | | -- -- 1215 | |========| | | |========| | 1216 -- -- -- -- 1218 Key 1219 --- Direct connection between two nodes 1220 === LSP tunnel across transit network 1222 Figure 7 : Architecture for Peering 1224 5.1.2. Client-Server Interconnection 1226 Figure 8 shows the basic architectural concepts for a client-server 1227 network. The client network nodes are C1, C2, CE1, CE2, C3, and C4. 1228 The core network nodes are CN1, CN2, CN3, and CN4.
The interfaces 1229 CE1-CN1 and CE2-CN2 are the interfaces between the client and core 1230 networks. 1232 The objective is to be able to support an end-to-end connection, 1233 C1-to-C4, in the client network. This connection may support TE or 1234 normal IP forwarding. To achieve this, CE1 is to be connected to CE2 1235 by a link in the client layer that is supported by a core network 1236 LSP. 1238 As shown in the figure, two LSPs are used to achieve the required 1239 connectivity. One LSP is set up across the core from CN1 to CN2. 1240 This core LSP then supports a three-hop LSP from CE1 to CE2 with its 1241 middle hop being the core LSP. It is this LSP that is presented as a 1242 link in the client network. 1244 The practicalities of how the CE1-CE2 LSP is carried across the core 1245 LSP may depend on the switching and signaling options available in 1246 the core network. The LSP may be tunneled down the core LSP using 1247 the mechanisms of a hierarchical LSP [RFC4206], or the LSP segments 1248 CE1-CN1 and CN2-CE2 may be stitched to the core LSP as described in 1249 [RFC5150]. 1251 : : 1252 Client Network : Core Network : Client Network 1253 : : 1254 -- -- --- --- -- -- 1255 |C1|--|C2|--|CE1|................................|CE2|--|C3|--|C4| 1256 -- -- | | --- --- | | -- -- 1257 | |---|CN1|================|CN4|---| | 1258 --- | | --- --- | | --- 1259 | |--|CN2|--|CN3|--| | 1260 --- --- --- --- 1262 Key 1263 --- Direct connection between two nodes 1264 ... CE-to-CE LSP tunnel 1265 === LSP tunnel across the core 1267 Figure 8 : Architecture for Client-Server Network 1269 5.2. TE Reachability 1271 As described in Section 1.1, TE reachability is the ability to reach 1272 a specific address along a TE path. The knowledge of TE reachability 1273 enables an end-to-end TE path to be computed. 
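A qualified TE reachability check of this kind can be sketched as a constrained graph search. The representation of the TE information as a simple adjacency map, and the single bandwidth qualifier, are assumptions made purely for illustration:

```python
# Hedged sketch of qualified TE reachability (Section 5.2): is there a
# path from node X to node Y whose links all offer at least min_bw?
# A real implementation would draw on the full set of TE attributes.
from collections import deque

def te_reachable(te_links, src, dst, min_bw):
    """Breadth-first search over TE links with at least min_bw free."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr, bw in te_links.get(node, []):
            if bw >= min_bw and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

# Hypothetical TE information: X reaches Y only through A, and the
# A-Y link has just 2 units of bandwidth available.
te_links = {"X": [("A", 10)], "A": [("Y", 2)]}
```

With this data, Y is TE-reachable from X at 2 units of bandwidth but not at 5, illustrating how the same pair of points can be reachable or unreachable depending on the qualifying TE attributes.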
1275 In a single network, TE reachability is derived from the Traffic 1276 Engineering Database (TED) that is the collection of all TE 1277 information about all TE links in the network. The TED is usually 1278 built from the data exchanged by the IGP, although it can be 1279 supplemented by configuration and inventory details especially in 1280 transport networks. 1282 In multi-network scenarios, TE reachability information can be 1283 described as "You can get from node X to node Y with the following 1284 TE attributes." For transit cases, nodes X and Y will be edge nodes 1285 of the transit network, but it is also important to consider the 1286 information about the TE connectivity between an edge node and a 1287 specific destination node. 1289 TE reachability may be unqualified (there is a TE path), or may be 1290 qualified by TE attributes such as TE metrics, hop count, available 1291 bandwidth, delay, shared risk, etc. 1293 TE reachability information can be exchanged between networks so that 1294 nodes in one network can determine whether they can establish TE 1295 paths across or into another network. Such exchanges are subject to 1296 a range of policies imposed by the advertiser (for security and 1297 administrative control) and by the receiver (for scalability and 1298 stability). 1300 5.3. Abstraction not Aggregation 1302 Aggregation is the process of synthesizing from available 1303 information. Thus, the virtual node and virtual link models 1304 described in Section 3.6 rely on processing the information available 1305 within a network to produce the aggregate representations of links 1306 and nodes that are presented to the consumer. As described in 1307 Section 3, dynamic aggregation is subject to a number of pitfalls. 1309 In order to distinguish the architecture described in this document 1310 from the previous work on aggregation, we use the term "abstraction" 1311 in this document. 
The process of abstraction is one of applying 1312 policy to the available TE information within a domain, to produce 1313 selective information that represents the potential ability to 1314 connect across the domain. 1316 Abstraction does not offer all possible connectivity options (refer 1317 to Section 3.6), but does present a general view of potential 1318 connectivity. Abstraction may have a dynamic element, but is not 1319 intended to keep pace with the changes in TE attribute availability 1320 within the network. 1322 Thus, when relying on an abstraction to compute an end-to-end path, 1323 the process might not deliver a usable path. That is, there is no 1324 actual guarantee that the abstractions are current or feasible. 1326 While abstraction uses available TE information, it is subject to 1327 policy and management choices. Thus, not all potential connectivity 1328 will be advertised to each client. The filters may depend on 1329 commercial relationships, the risk of disclosing confidential 1330 information, and concerns about what use is made of the connectivity 1331 that is offered. 1333 5.3.1. Abstract Links 1335 An abstract link is a measure of the potential to connect a pair of 1336 points with certain TE parameters. An abstract link may be realized 1337 by an existing LSP, or may represent the possibility of setting up an 1338 LSP. 1340 When looking at a network such as that in Figure 8, the link from CN1 1341 to CN4 may be an abstract link. If the LSP has already been set up, 1342 it is easy to advertise it as a link with known TE attributes: policy 1343 will have been applied in the server network to decide what LSP to 1344 set up. If the LSP has not yet been established, the potential for 1345 an LSP can be abstracted from the TE information in the core network 1346 subject to policy, and the resultant potential LSP can be advertised. 
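The derivation of an abstract link subject to policy can be sketched as follows. This is a speculative illustration only: the policy fields, the `abstract_link` helper, and the bandwidth values are all invented, and no real management interface is implied.

```python
# Illustrative sketch: summarizing a feasible server-network path as a
# single abstract link between edge nodes, subject to a simple local
# policy. All names and values here are assumptions for the example.

server_links = {
    ("CN1", "CN2"): {"bw": 10},
    ("CN2", "CN3"): {"bw": 10},
    ("CN3", "CN4"): {"bw": 10},
}
policy = {"advertise": True, "max_advertised_bw": 8}

def abstract_link(edge_a, edge_b, path):
    """Summarize a potential server-network path as one abstract link."""
    if not policy["advertise"]:
        return None  # policy may suppress the advertisement entirely
    # The advertised bandwidth is the path bottleneck, capped by policy so
    # that the server network does not over-commit to its clients.
    bottleneck = min(server_links[hop]["bw"] for hop in path)
    return {"ends": (edge_a, edge_b),
            "bw": min(bottleneck, policy["max_advertised_bw"]),
            "realized": False}  # no LSP set up yet: only the potential

link = abstract_link("CN1", "CN4",
                     [("CN1", "CN2"), ("CN2", "CN3"), ("CN3", "CN4")])
print(link)
```

Note the `realized` flag: as the text above describes, the abstract link may represent an existing LSP or merely the potential to set one up.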
1348 Since the client nodes do not have visibility into the core network, 1349 they must rely on abstraction information delivered to them by the 1350 core network. That is, the core network will report on the potential 1351 for connectivity. 1353 5.3.2. The Abstraction Layer Network 1355 Figure 9 introduces the Abstraction Layer Network. This construct 1356 separates the client layer resources (nodes C1, C2, C3, and C4, and 1357 the corresponding links) from the server layer resources (nodes CN1, 1358 CN2, CN3, and CN4, and the corresponding links). Additionally, the 1359 architecture introduces an intermediary layer called the Abstraction 1360 Layer. The Abstraction Layer contains the client layer edge nodes 1361 (C2 and C3), the server layer edge nodes (CN1 and CN4), the client- 1362 server links (C2-CN1 and CN4-C3) and the abstract link CN1-CN4. 1364 -- -- -- -- 1365 |C1|--|C2| |C3|--|C4| Client Network 1366 -- | | | | -- 1367 | | | | . . . . . . . . . . . 1368 | | | | 1369 | | | | 1370 | | --- --- | | Abstraction 1371 | |---|CN1|================|CN4|---| | Layer Network 1372 -- | | | | -- 1373 | | | | . . . . . . . . . . . . . . 1374 | | | | 1375 | | | | 1376 | | --- --- | | Server Network 1377 | |--|CN2|--|CN3|--| | 1378 --- --- --- --- 1380 Key 1381 --- Direct connection between two nodes 1382 === Abstract link 1384 Figure 9 : Architecture for Abstraction Layer Network 1386 The client layer network is able to operate as normal. Connectivity 1387 across the network can either be found or not found based on links 1388 that appear in the client layer TED. If connectivity cannot be 1389 found, end-to-end LSPs cannot be set up. This failure may be 1390 reported but no dynamic action is taken by the client layer. 1392 The server network layer also operates as normal. LSPs across the 1393 server layer are set up in response to management commands or in 1394 response to signaling requests.
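The separation of the three topologies can be rendered as three distinct link sets, one per layer, following Figure 9. This is the author's toy illustration (the `connected` helper is invented): only the Abstraction Layer sees both the client-server edge links and the abstract link CN1-CN4.

```python
# A toy rendering of the three TEDs of Figure 9: client, abstraction
# layer, and server. The client layer alone is partitioned; only the
# abstraction layer provides a path from C2 to C3.
client_ted = {("C1", "C2"), ("C3", "C4")}           # no C2-C3 link yet
abstraction_ted = {("C2", "CN1"), ("CN1", "CN4"),   # CN1-CN4 is abstract
                   ("CN4", "C3")}
server_ted = {("CN1", "CN2"), ("CN2", "CN3"), ("CN3", "CN4")}

def connected(ted, a, b):
    """Reachability by flooding over an undirected link set."""
    frontier, seen = {a}, set()
    while frontier:
        n = frontier.pop()
        seen.add(n)
        for x, y in ted:
            if n == x and y not in seen: frontier.add(y)
            if n == y and x not in seen: frontier.add(x)
    return b in seen

print(connected(client_ted, "C1", "C4"))       # client layer alone fails
print(connected(abstraction_ted, "C2", "C3"))  # path C2-CN1-CN4-C3 exists
```

This matches the behavior described above: with no suitable link in the client layer TED, the end-to-end LSP cannot be set up until the Abstraction Layer provides one.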
1396 The Abstraction Layer consists of the physical links between the 1397 two networks, and also the abstract links. The abstract links are 1398 created by the server network according to local policy and represent 1399 the potential connectivity that could be created across the server 1400 network and which the server network is willing to make available for 1401 use by the client network. Thus, in this example, the diameter of 1402 the Abstraction Layer Network is only three hops, but an instance of 1403 an IGP could easily be run so that all nodes participating in the 1404 Abstraction Layer (and in particular the client network edge nodes) 1405 can see the TE connectivity in the layer. 1407 When the client layer needs additional connectivity it can make a 1408 request to the Abstraction Layer Network. For example, the operator 1409 of the client network may want to create a link from C2 to C3. The 1410 Abstraction Layer can see the potential path C2-CN1-CN4-C3, and asks 1411 the server layer to realise the abstract link CN1-CN4. The server 1412 layer provisions the LSP CN1-CN2-CN3-CN4 and makes the LSP available 1413 as a hierarchical LSP to turn the abstract link into a link that can 1414 be used in the client network. The Abstraction Layer can then set up 1415 an LSP C2-CN1-CN4-C3 using stitching or tunneling, and make the LSP 1416 available as a virtual link in the client network. 1418 Sections 5.3.3 and 5.3.4 show how this model is used to satisfy the 1419 requirements for connectivity in client-server networks and in peer 1420 networks. 1422 5.3.2.1. Nodes in the Abstraction Layer Network 1424 Figure 9 shows a very simplified network diagram and the reader would 1425 be forgiven for thinking that only Client Network edge nodes and 1426 Server Network edge nodes may appear in the Abstraction Layer 1427 Network. But this is not the case: other nodes from the Server 1428 Network may be present. 
This allows the Abstraction Layer network 1429 to be more complex than a full mesh with access spokes. 1431 Thus, as shown in Figure 10, a transit node in the Server Network 1432 (here the node is CN3) can be exposed as a node in the Abstraction 1433 Layer Network with Abstract Links connecting it to other nodes in 1434 the Abstraction Layer Network. Of course, in the network shown in 1435 Figure 10, there is little if any value in exposing CN3, but if it 1436 had other Abstract Links to other nodes in the Abstraction Layer 1437 Network and/or direct connections to Client Network nodes, then the 1438 resulting network would be richer. 1440 -- -- -- -- Client 1441 |C1|--|C2| |C3|--|C4| Network 1442 -- | | | | -- 1443 | | | | . . . . . . . . . 1444 | | | | 1445 | | | | 1446 | | --- --- --- | | Abstraction 1447 | |--|CN1|========|CN3|========|CN5|--| | Layer Network 1448 -- | | | | | | -- 1449 | | | | | | . . . . . . . . . . . . 1450 | | | | | | 1451 | | | | | | Server 1452 | | --- | | --- | | Network 1453 | |--|CN2|-| |-|CN4|--| | 1454 --- --- --- --- --- 1456 Figure 10 : Abstraction Layer Network with Additional Node 1458 It should be noted that the nodes included in the Abstraction Layer 1459 network in this way are not "Abstract Nodes" in the sense of a 1460 virtual node described in Section 3.6. While it is the case that 1461 the policy point responsible for advertising Server Network resources 1462 into the Abstraction Layer Network could choose to advertise Abstract 1463 Nodes in place of real physical nodes, it is believed that doing so 1464 would introduce significant complexity in terms of: 1466 - Coordination between all of the external interfaces of the Abstract 1467 Node 1469 - Management of changes in the Server Network that lead to limited 1470 capabilities to reach (cross-connect) across the Abstract Node. 
It 1471 may be noted that recent work on limited cross-connect capabilities 1472 such as exist in asymmetrical switches could be used to represent 1473 the limitations in an Abstract Node 1474 [I-D.ietf-ccamp-general-constraint-encode], 1475 [I-D.ietf-ccamp-gmpls-general-constraints-ospf-te]. 1477 5.3.3. Abstraction in Client-Server Networks 1479 Section 5.3.2 has already introduced the concept of the Abstraction 1480 Layer Network through an example of a simple layered network. But it 1481 may be helpful to expand on the example using a slightly more complex 1482 network. 1484 Figure 11 shows a multi-layer network comprising client nodes 1485 (labeled as Cn for n= 0 to 9) and server nodes (labeled as Sn for 1486 n = 1 to 9). 1488 -- -- 1489 |C3|---|C4| 1490 /-- --\ 1491 -- -- -- -- --/ \-- 1492 |C1|---|C2|---|S1|---|S2|----|S3| |C5| 1493 -- /-- --\ --\ --\ /-- 1494 / \-- \-- \-- --/ -- 1495 / |S4| |S5|----|S6|---|C6|---|C7| 1496 / /-- --\ /-- /-- -- 1497 --/ -- --/ -- \--/ --/ 1498 |C8|---|C9|---|S7|---|S8|----|S9|---|C0| 1499 -- -- -- -- -- -- 1501 Figure 11 : An example Multi-Layer Network 1503 If the network in Figure 11 is operated as separate client and server 1504 networks then the client layer topology will appear as shown in 1505 Figure 12. As can be clearly seen, the network is partitioned and 1506 there is no way to set up an LSP from a node on the left hand side 1507 (say C1) to a node on the right hand side (say C7). 1509 -- -- 1510 |C3|---|C4| 1511 -- --\ 1512 -- -- \-- 1513 |C1|---|C2| |C5| 1514 -- /-- /-- 1515 / --/ -- 1516 / |C6|---|C7| 1517 / /-- -- 1518 --/ -- --/ 1519 |C8|---|C9| |C0| 1520 -- -- -- 1522 Figure 12 : Client Layer Topology Showing Partitioned Network 1524 For reference, Figure 13 shows the corresponding server layer 1525 topology. 
1527 -- -- -- 1528 |S1|---|S2|----|S3| 1529 --\ --\ --\ 1530 \-- \-- \-- 1531 |S4| |S5|----|S6| 1532 /-- --\ /-- 1533 --/ -- \--/ 1534 |S7|---|S8|----|S9| 1535 -- -- -- 1537 Figure 13 : Server Layer Topology 1539 Operating on the TED for the server layer, a management entity or a 1540 software component may apply policy and consider what abstract links 1541 it might offer for use by the client layer. To do this it obviously 1542 needs to be aware of the connections between the layers (there is no 1543 point in offering an abstract link S2-S8 since this could not be of 1544 any use in this example). 1546 In our example, after consideration of which LSPs could be set up in 1547 the server layer, four abstract links are offered: S1-S3, S3-S6, 1548 S1-S9, and S7-S9. These abstract links are shown as double lines on 1549 the resulting topology of the Abstraction Layer Network in Figure 14. 1550 As can be seen, two of the links must share part of a path (S1-S9 1551 must share with either S1-S3 or with S7-S9). This could be achieved 1552 using distinct resources (for example, separate lambdas) where the 1553 paths are common, but it could also be done using resource sharing. 1555 That would mean that when both S1-S3 and S7-S9 are realized as links 1556 carrying Abstraction Layer LSPs, the link S1-S9 can no longer be 1557 used. 1559 -- 1560 |C3| 1561 /-- 1562 -- -- --/ 1563 |C2|---|S1|==========|S3| 1564 -- --\\ --\\ 1565 \\ \\ 1566 \\ \\-- -- 1567 \\ |S6|---|C6| 1568 \\ -- -- 1569 -- -- \\-- -- 1570 |C9|---|S7|=====|S9|---|C0| 1571 -- -- -- -- 1573 Figure 14 : Abstraction Layer Network with Abstract Links 1575 The separate IGP instance running in the Abstraction Layer Network 1576 means that this topology is visible at the edge nodes (C2, C3, C6, C9, 1577 and C0) as well as at a PCE if one is present. 1579 Now the client layer is able to make requests to the Abstraction 1580 Layer Network to provide connectivity.
In our example, it requests 1581 that C2 is connected to C3 and that C2 is connected to C0. This 1582 results in several actions: 1584 1. The management component for the Abstraction Layer Network asks 1585 its PCE to compute the paths necessary to make the connections. 1586 This yields C2-S1-S3-C3 and C2-S1-S9-C0. 1588 2. The management component for the Abstraction Layer Network 1589 instructs C2 to start the signaling process for the new LSPs in 1590 the Abstraction Layer. 1592 3. C2 signals the LSPs for setup using the explicit routes 1593 C2-S1-S3-C3 and C2-S1-S9-C0. 1595 4. When the signaling messages reach S1 (in our example, both LSPs 1596 traverse S1) the Abstraction Layer Network may find that the 1597 necessary underlying LSPs (S1-S2-S3 and S1-S2-S5-S9) have not 1598 been established since it is not a requirement that an abstract 1599 link be backed up by a real LSP. In this case, S1 computes the 1600 paths of the underlying LSPs and signals them. 1602 5. Once the server layer LSPs have been established, S1 can continue 1603 to signal the Abstraction Layer LSPs either using the server layer 1604 LSPs as tunnels or as stitching segments. 1606 -- -- 1607 |C3|-|C4| 1608 /-- --\ 1609 / \-- 1610 -- --/ |C5| 1611 |C1|---|C2| /-- 1612 -- /--\ --/ -- 1613 / \ |C6|---|C7| 1614 / \ /-- -- 1615 / \--/ 1616 --/ -- |C0| 1617 |C8|---|C9| -- 1618 -- -- 1620 Figure 15 : Connected Client Layer Network with Additional Links 1622 6. Finally, once the Abstraction Layer LSPs have been set up, the 1623 client layer can be informed and can start to advertise the 1624 new TE links C2-C3 and C2-C0. The resulting client layer topology 1625 is shown in Figure 15. 1627 7. Now the client layer can compute an end-to-end path from C1 to C7. 1629 5.3.3.1 Macro Shared Risk Link Groups 1631 Network links often share fate with one or more other links. That 1632 is, a scenario that may cause a link to fail could cause one or more 1633 other links to fail.
This may occur, for example, if the links are 1634 supported by the same fiber bundle, or if some links are routed down 1635 the same duct or in a common piece of infrastructure such as a 1636 bridge. A common way to identify the links that may share fate is to 1637 label them as belonging to a Shared Risk Link Group (SRLG) [RFC4202]. 1639 TE links created from LSPs in lower layers may also share fate, and 1640 it can be hard for a client network to know about this problem 1641 because it does not know the topology of the server network or the 1642 path of the server layer LSPs that are used to create the links in 1643 the client network. 1645 For example, looking at the example used in Section 5.3.3 and 1646 considering the two abstract links S1-S3 and S1-S9 there is no way 1647 for the client layer to know whether the links C2-C0 and C2-C3 share 1648 fate. Clearly, if the client layer uses these links to provide a 1649 link-diverse end-to-end protection scheme, it needs to know that the 1650 links actually share a piece of network infrastructure (the server 1651 layer link S1-S2). 1653 Per [RFC4202], an SRLG represents a shared physical network resource 1654 upon which the normal functioning of a link depends. Multiple SRLGs 1655 can be identified and advertised for every TE link in a network. 1656 However, this can produce a scalability problem in a multi-layer 1657 network that equates to advertising in the client layer the server 1658 layer route of each TE link. 1660 Macro SRLGs (MSRLGs) address this scaling problem and are a form of 1661 abstraction performed at the same time that the abstract links are 1662 derived. In this way, only the links that actually share resources 1663 in the server layer need to be advertised rather than every link that 1664 potentially shares resources.
This saving is possible because the 1665 abstract links are formulated on behalf of the server layer by a 1666 central management agency that is aware of all of the link 1667 abstractions being offered. 1669 It may be noted that a less optimal alternative path for the abstract 1670 link S1-S9 exists in the server layer (S1-S4-S7-S8-S9). It would 1671 be possible for the client layer request for connectivity C2-C0 to 1672 ask that the path be maximally disjoint from the path C2-C3. 1673 While nothing can be done about the shared link C2-S1, the 1674 Abstraction Layer could request that the server layer instantiate the 1675 link S1-S9 to be diverse from the link S1-S3, and this request could 1676 be honored if the server layer policy allows. 1678 5.3.3.2 A Server with Multiple Clients 1680 A single server network may support multiple client networks. This 1681 is not an uncommon state of affairs, for example, when the server 1682 network provides connectivity for multiple customers. 1684 In this case, the abstraction provided by the server layer may vary 1685 considerably according to the policies and commercial relationships 1686 with each customer. This variance would lead to a separate 1687 Abstraction Layer Network maintained to support each client network. 1689 On the other hand, it may be that multiple clients are subject to the 1690 same policies and the abstraction can be identical. In this case, a 1691 single Abstraction Layer Network can support more than one client. 1693 The choices here are made as an operational issue by the server layer 1694 network. 1696 5.3.3.3 A Client with Multiple Servers 1698 A single client network may be supported by multiple server networks. 1699 The server networks may provide connectivity between different parts 1700 of the client network or may provide parallel (redundant) 1701 connectivity for the client network.
1703 In this case the Abstraction Layer Network should contain the 1704 abstract links from all server networks so that it can make suitable 1705 computations and create the correct TE links in the client network. 1706 That is, the relationship between client network and Abstraction 1707 Layer Network should be one-to-one. 1709 Note that SRLGs and MSRLGs may be very hard to describe in the case 1710 of multiple server layer networks because the abstraction points will 1711 not know whether the resources in the various server layers share 1712 physical locations. 1714 5.3.4. Abstraction in Peer Networks 1716 Peer networks exist in many situations in the Internet. Packet 1717 networks may peer as IGP areas (levels) or as ASes. Transport 1718 networks (such as optical networks) may peer to provide 1719 concatenations of optical paths through single vendor environments 1720 (see Section 7). Figure 16 shows a simple example of three peer 1721 networks (A, B, and C) each comprising a few nodes. 1723 Network A : Network B : Network C 1724 : : 1725 -- -- -- : -- -- -- : -- -- 1726 |A1|---|A2|----|A3|---|B1|---|B2|---|B3|---|C1|---|C2| 1727 -- --\ /-- : -- /--\ -- : -- -- 1728 \--/ : / \ : 1729 |A4| : / \ : 1730 --\ : / \ : 1731 -- \-- : --/ \-- : -- -- 1732 |A5|---|A6|---|B4|----------|B6|---|C3|---|C4| 1733 -- -- : -- -- : -- -- 1734 : : 1735 : : 1737 Figure 16 : A Network Comprising Three Peer Networks 1739 As discussed in Section 2, peered networks do not share visibility of 1740 their topologies or TE capabilities for scaling and confidentiality 1741 reasons. That means, in our example, that computing a path from A1 1742 to C4 can be impossible without the aid of cooperating PCEs or some 1743 form of crankback. 1745 But it is possible to produce abstract links for the reachability 1746 across transit peer networks and instantiate an Abstraction Layer 1747 Network.
That network can be enhanced with specific reachability 1748 information if a destination network is partitioned as is the case 1749 with Network C in Figure 16. 1751 Suppose Network B decides to offer three abstract links B1-B3, B4-B3, 1752 and B4-B6. The Abstraction Layer Network could then be constructed 1753 to look like the network in Figure 17. 1755 -- -- -- -- 1756 |A3|---|B1|====|B3|----|C1| 1757 -- -- //-- -- 1758 // 1759 // 1760 // 1761 -- --// -- -- 1762 |A6|---|B4|=====|B6|---|C3| 1763 -- -- -- -- 1765 Figure 17 : Abstraction Layer Network for the Peer Network Example 1767 Using a process similar to that described in Section 5.3.3, Network A 1768 can request connectivity to Network C and the abstract links can be 1769 instantiated as tunnels across the transit network, and edge-to-edge 1770 LSPs can be set up to join the two networks. Furthermore, if Network 1771 C is partitioned, reachability information can be exchanged to allow 1772 Network A to select the correct edge-to-edge LSP as shown in Figure 1773 18. 1775 Network A : Network C 1776 : 1777 -- -- -- : -- -- 1778 |A1|---|A2|----|A3|=========|C1|.....|C2| 1779 -- --\ /-- : -- -- 1780 \--/ : 1781 |A4| : 1782 --\ : 1783 -- \-- : -- -- 1784 |A5|---|A6|=========|C3|.....|C4| 1785 -- -- : -- -- 1787 Figure 18 : Tunnel Connections to Network C with TE Reachability 1789 Peer networking cases can be made far more complex by dual homing 1790 between network peering nodes (for example, A3 might connect to B1 1791 and B4 in Figure 17) and by the networks themselves being arranged in 1792 a mesh (for example, A6 might connect to B4 and C1 in Figure 17). 1794 These additional complexities can be handled gracefully by the 1795 Abstraction Layer Network model. 1797 Further examples of abstraction in peer networks can be found in 1798 Sections 7 and 9. 1800 5.4. Considerations for Dynamic Abstraction 1802 1804 5.5. 
Requirements for Advertising Links and Nodes 1806 The Abstraction Layer Network is "just another network layer". The 1807 links and nodes in the network need to be advertised along with their 1808 associated TE information (metrics, bandwidth, etc.) so that the 1809 topology is disseminated and so that routing decisions can be made. 1811 This requires a routing protocol running between the nodes in the 1812 Abstraction Layer Network. Note that this routing information 1813 exchange could be piggy-backed on an existing routing protocol 1814 instance, or use a new instance (or even a new protocol). Clearly, 1815 the information exchanged is only that which has been created as 1816 part of the abstraction function according to policy. 1818 It should be noted that in some cases Abstract Link enablement is on- 1819 demand and all that is advertised in the topology for the Abstraction 1820 Layer Network is the potential for an Abstract Link to be set up. In 1821 this case we may ponder how the routing protocol will advertise 1822 topology information over a link that is not yet established. In 1823 other words, there must be a communication channel between the 1824 participating nodes so that the routing protocol messages can flow. 1825 The answer is that control plane connectivity exists in the Server 1826 Network and on the client-server edge links, and this can be used to 1827 carry the routing protocol messages for the Abstraction Layer 1828 Network. The same consideration applies to the advertisement, in the 1829 Client Network, of the potential connectivity that the Abstraction 1830 Layer Network can provide. 1832 5.6. Addressing Considerations 1834 [Editor Note: Need to work up some text on addressing to cover the case 1835 of each domain having a different (potentially overlapping) address 1836 space and the need for inter-domain addressing. In fact, this should be 1837 quite simple but needs discussion.
1838 Also needed is a discussion of the case where two client networks share 1839 an abstraction network (section 5.3.3.2). How does addressing work here? 1840 Are there security issues?] 1841 6. Building on Existing Protocols 1843 This section is not intended to prejudge a solutions framework or any 1844 applicability work. It does, however, very briefly serve to note the 1845 existence of protocols that could be examined for applicability to 1846 serve in realizing the model described in this document. 1848 The general principle of protocol re-use is preferred over the 1849 invention of new protocols or additional protocol extensions as 1850 mentioned in Section 3.1. 1852 6.1. BGP-LS 1854 BGP-LS is a set of extensions to BGP described in 1855 [I-D.ietf-idr-ls-distribution]. Its purpose is to announce topology 1856 information from one network to a "north-bound" consumer. 1857 Application of BGP-LS to date has focused on a mechanism to build a 1858 TED for a PCE. However, BGP's mechanisms would also serve well to 1859 advertise Abstract Links from a Server Network into the Abstraction 1860 Layer Network, or to advertise potential connectivity from the 1861 Abstraction Layer Network to the Client Network. 1863 6.2. IGPs 1865 Both OSPF and IS-IS have been extended through a number of RFCs to 1866 advertise TE information. Additionally, both protocols are capable 1867 of running in a multi-instance mode either as ships that pass in the 1868 night (i.e., completely separate instances using different addresses) 1869 or as dual instances on the same address space. This means that 1870 either IGP could probably be used as the routing protocol in the 1871 Abstraction Layer Network. 1873 6.3. RSVP-TE 1875 RSVP-TE signaling can be used to set up traffic engineered LSPs to 1876 serve as hierarchical LSPs in the core network providing Abstract 1877 Links for the Abstraction Layer Network as described in [RFC4206].
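The on-demand realization of an abstract link by such a hierarchical LSP (as in step 4 of the walk-through in Section 5.3.3) can be sketched in outline. This is a speculative illustration of the control flow only: the function names and the hard-coded server-layer path are invented, and no real RSVP-TE API is implied.

```python
# Hedged sketch of on-demand abstract link realization: when an
# abstraction-layer LSP first crosses an unrealized abstract link, the
# server network sets up the backing hierarchical LSP. Names invented.

abstract_links = {("CN1", "CN4"): {"realized": False, "path": None}}

def realize(link):
    """Set up the server-layer hierarchical LSP backing an abstract link."""
    entry = abstract_links[link]
    if not entry["realized"]:
        # In reality this path would be computed in the server layer and
        # the LSP signaled with RSVP-TE; here it is simply hard-coded.
        entry["path"] = ["CN1", "CN2", "CN3", "CN4"]
        entry["realized"] = True  # the link can now carry traffic
    return entry["path"]

def signal_al_lsp(explicit_route):
    """Signal an abstraction-layer LSP, realizing abstract hops on demand."""
    for a, b in zip(explicit_route, explicit_route[1:]):
        if (a, b) in abstract_links:
            realize((a, b))  # tunnel or stitch through the server LSP
    return explicit_route

signal_al_lsp(["C2", "CN1", "CN4", "C3"])
print(abstract_links[("CN1", "CN4")])
```

The key point mirrored here is that advertising an abstract link does not require the backing LSP to exist in advance; it is instantiated when first needed.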
1878 Similarly, the CE-to-CE LSP tunnel across the Abstraction Layer 1879 Network can be established using RSVP-TE without any protocol 1880 extensions. 1882 Furthermore, the procedures in [RFC6107] allow the dynamic signaling 1883 of the purpose of any LSP that is established. This means that 1884 when an LSP tunnel is set up, the two ends can coordinate into which 1885 routing protocol instance it should be advertised, and can also agree 1886 on the addressing to be used to identify the link that will be 1887 created. 1889 7. Applicability to Optical Domains and Networks 1891 Many optical networks are arranged as a set of small domains. Each 1892 domain is a cluster of nodes, usually from the same equipment vendor 1893 and with the same properties. The domain may be constructed as a 1894 mesh or a ring, or maybe as an interconnected set of rings. 1896 The network operator seeks to provide end-to-end connectivity across 1897 a network constructed from multiple domains, and so (of course) the 1898 domains are interconnected. In a network under management control 1899 such as through an Operations Support System (OSS), each domain is 1900 under the operational control of a Network Management System (NMS). 1901 In this way, an end-to-end path may be commissioned by the OSS 1902 instructing each NMS, and the NMSes setting up the path fragments 1903 across the domains. 1905 However, in a system that uses a control plane, there is a need for 1906 integration between the domains. 1908 Consider a simple domain, D1, as shown in Figure 19. In this case, 1909 the nodes A through F are arranged in a topological ring. Suppose 1910 that there is a control plane in use in this domain, and that OSPF is 1911 used as the TE routing protocol.
1913 ----------------- 1914 | D1 | 1915 | B---C | 1916 | / \ | 1917 | / \ | 1918 | A D | 1919 | \ / | 1920 | \ / | 1921 | F---E | 1922 | | 1923 ----------------- 1925 Figure 19 : A Simple Optical Domain 1927 Now consider that the operator's network is built from a mesh of such 1928 domains, D1 through D7, as shown in Figure 20. It is possible that 1929 these domains share a single, common instance of OSPF in which case 1930 there is nothing further to say because that OSPF instance will 1931 distribute sufficient information to build a single TED spanning the 1932 whole network, and an end-to-end path can be computed. A more likely 1933 scenario is that each domain is running its own OSPF instance. In 1934 this case, each is able to handle the peculiarities (or rather, 1935 advanced functions) of each vendor's equipment capabilities. 1937 ------ ------ ------ ------ 1938 | | | | | | | | 1939 | D1 |---| D2 |---| D3 |---| D4 | 1940 | | | | | | | | 1941 ------\ ------\ ------\ ------ 1942 \ | \ | \ | 1943 \------ \------ \------ 1944 | | | | | | 1945 | D5 |---| D6 |---| D7 | 1946 | | | | | | 1947 ------ ------ ------ 1949 Figure 20 : A Mesh of Simple Optical Domains 1951 The question now is how to combine the multiple sets of information 1952 distributed by the different OSPF instances. Three possible models 1953 suggest themselves based on pre-existing routing practices. 1955 o In the first model (the Area-Based model) each domain is treated as 1956 a separate OSPF area. The end-to-end path will be specified to 1957 traverse multiple areas, and each area will be left to determine 1958 the path across the nodes in the area. The feasibility of an end- 1959 to-end path (and, thus, the selection of the sequence of areas and 1960 their interconnections) can be derived using hierarchical PCE.
1962 This approach, however, fits poorly with established use of the 1963 OSPF area: in this form of optical network, the interconnection 1964 points between domains are likely to be links; and the mesh of 1965 domains is far more interconnected and unstructured than we are 1966 used to seeing in the normal area-based routing paradigm. 1968 Furthermore, while hierarchical PCE may be able to solve this type 1969 of network, the effort involved may be considerable for more than a 1970 small collection of domains. 1972 o Another approach (the AS-Based model) treats each domain as a 1973 separate Autonomous System (AS). The end-to-end path will be 1974 specified to traverse multiple ASes, and each AS will be left to 1975 determine the path across the AS. 1977 This model sits more comfortably with the established routing 1978 paradigm, but causes a massive escalation of ASes in the global 1979 Internet. It would, in practice, require that the operator used 1980 private AS numbers [RFC6996] of which there are plenty. 1982 Then, as suggested in the Area-Based model, hierarchical PCE 1983 could be used to determine the feasibility of an end-to-end path 1984 and to derive the sequence of domains and the points of 1985 interconnection to use. But, just as in that other model, the 1986 scalability of the hierarchical PCE approach must be questioned. 1988 Furthermore, determining the mesh of domains (i.e., the inter-AS 1989 connections) conventionally requires the use of BGP as an inter- 1990 domain routing protocol. However, not only is BGP not normally 1991 available on optical equipment, but this approach indicates that 1992 the TE properties of the inter-domain links would need to be 1993 distributed and updated using BGP: something for which it is not 1994 well suited. 1996 o The third approach (the ASON model) follows the architectural 1997 model set out by the ITU-T [G.8080] and uses the routing protocol 1998 extensions described in [RFC6827]. 
In this model the concept of 1999 "levels" is introduced to OSPF. Referring back to Figure 20, each 2000 OSPF instance running in a domain would be construed as a "lower 2001 level" OSPF instance and would leak routes into a "higher level" 2002 instance of the protocol that runs across the whole network. 2004 This approach handles the awkwardness of representing the domains 2005 as areas or ASes by simply considering them as domains running 2006 distinct instances of OSPF. Routing advertisements flow "upward" 2007 from the domains to the high level OSPF instance giving it a full 2008 view of the whole network and allowing end-to-end paths to be 2009 computed. Routing advertisements may also flow "downward" from the 2010 network-wide OSPF instance to any one domain so that it has 2011 visibility of the connectivity of the whole network. 2013 While architecturally satisfying, this model suffers from having to 2014 handle the different characteristics of different equipment 2015 vendors. The advertisements coming from each low level domain 2016 would be meaningless when distributed into the other domains, and 2017 the high level domain would need to be kept up-to-date with the 2018 semantics of each new release of each vendor's equipment. 2019 Additionally, the scaling issues associated with a well-meshed 2020 network of domains each with many entry and exit points and each 2021 with network resources that are continually being updated reduce 2022 to the same problem as noted in the virtual link model. 2023 Furthermore, in the event that the domains are under control of 2024 different administrations, the domains would not want to distribute 2025 the details of their topologies and TE resources. 2027 Practically, this third model turns out to be very close to the 2028 methodology described in this document.
As noted in Section 7.1 of 2029 [RFC6827], there are policy rules that can be applied to define 2030 exactly what information is exported from or imported to a low level 2031 OSPF instance. The document even notes that some forms of 2032 aggregation may be appropriate. Thus, we can apply the following 2033 simplifications to the mechanisms defined in RFC 6827: 2035 - Zero information is imported to low level domains. 2037 - Low level domains export only abstracted links as defined in this 2038 document and according to local abstraction policy and with 2039 appropriate removal of vendor-specific information. 2041 - There is no need to formally define routing levels within OSPF. 2043 - Export of abstracted links from the domains to the network-wide 2044 routing instance (the abstraction routing layer) can take place 2045 through any mechanism including BGP-LS or direct interaction 2046 between OSPF implementations. 2048 With these simplifications, it can be seen that the framework defined 2049 in this document can be constructed from the architecture discussed 2050 in RFC 6827, but without needing any of the protocol extensions that 2051 that document defines. Thus, using the terminology and concepts 2052 already established, the problem may be solved as shown in Figure 21. 2053 The abstraction layer network is constructed from the inter-domain 2054 links, the domain border nodes, and the abstracted (cross-domain) 2055 links. 2057 Abstraction Layer 2058 -- -- -- -- -- -- 2059 | |===========| |--| |===========| |--| |===========| | 2060 | | | | | | | | | | | | 2061 ..| |...........| |..| |...........| |..| |...........| |...... 2062 | | | | | | | | | | | | 2063 | | -- -- | | | | -- -- | | | | -- -- | | 2064 | |_| |_| |_| | | |_| |_| |_| | | |_| |_| |_| | 2065 | | | | | | | | | | | | | | | | | | | | | | | | 2066 -- -- -- -- -- -- -- -- -- -- -- -- 2067 Domain 1 Domain 2 Domain 3 2068 Key Optical Layer 2069 ...
Layer separation 2070 --- Physical link 2071 === Abstract link 2073 Figure 21 : The Optical Network Implemented Through the 2074 Abstraction Layer Network 2076 8. Modeling the User-to-Network Interface 2078 The User-to-Network Interface (UNI) is an important architectural 2079 concept in many implementations and deployments of client-server 2080 networks, especially those where the client and server network have 2081 different technologies. The UNI is described in [G.8080], 2082 and the GMPLS approach to the UNI is documented in [RFC4208]. Other 2083 GMPLS-related documents describe the application of GMPLS to specific 2084 UNI scenarios: for example, [RFC6005] describes how GMPLS can support 2085 a UNI that provides access to Ethernet services. 2087 Figure 1 of [RFC6005] is reproduced here as Figure 22. It shows the 2088 Ethernet UNI reference model, and that figure can serve as an example 2089 for all similar UNIs. In this case, the UNI is an interface between 2090 client network edge nodes and the server network. It should be noted 2091 that neither the client network nor the server network need be an 2092 Ethernet switching network. 2094 There are three network layers in this model: the client network, the 2095 "Ethernet service network", and the server network. The so-called 2096 Ethernet service network consists of links comprising the UNI links 2097 and the tunnels across the server network, and nodes comprising the 2098 client network edge nodes and various server nodes. That is, the 2099 Ethernet service network is equivalent to the Abstraction Layer 2100 Network with the UNI links being the physical links between the 2101 client and server networks, and the client edge nodes taking the 2102 role of UNI Client-side (UNI-C) and the server edge nodes acting as 2103 the UNI Network-side (UNI-N) nodes.
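Purely as an informal illustration of this layer mapping (the sketch below is not part of any protocol specification; the graph representation and all node names are invented for the example), the "Ethernet service network" can be modeled as an Abstraction Layer Network whose links are the UNI links plus the tunnels across the server network, over which a path between client edge nodes is an ordinary path computation:

```python
# Illustrative sketch only: model the "Ethernet service network" as an
# Abstraction Layer Network whose links are the UNI links (EN-CN) plus
# the tunnels across the server network (CN-CN), and whose nodes are
# the client edge nodes (UNI-C) and server edge nodes (UNI-N).
# All identifiers are hypothetical; no protocol behavior is implied.

from collections import deque

def build_service_network(uni_links, server_tunnels):
    """Build an adjacency map for the abstraction-layer (service) network."""
    adj = {}
    for a, b in list(uni_links) + list(server_tunnels):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def find_path(adj, src, dst):
    """Breadth-first search returning one feasible path, or None."""
    frontier = deque([[src]])
    visited = {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(adj.get(path[-1], ())):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A fragment of the reference model: two edge nodes (UNI-C), two server
# nodes (UNI-N), the UNI links between them, and a tunnel across the
# server network acting as an abstract link.
uni_links = [("EN1", "CN1"), ("EN2", "CN2")]
server_tunnels = [("CN1", "CN2")]

adj = build_service_network(uni_links, server_tunnels)
print(find_path(adj, "EN1", "EN2"))  # ['EN1', 'CN1', 'CN2', 'EN2']
```

A dual-homed edge node would simply appear in this graph with two UNI links, so selecting diverse paths becomes a routine computation within the abstraction-layer network rather than requiring new signaling or routing exchanges at the UNI.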
2105 Client Client 2106 Network +----------+ +-----------+ Network 2107 -------------+ | | | | +------------- 2108 +----+ | | +-----+ | | +-----+ | | +----+ 2109 ------+ | | | | | | | | | | | | +------ 2110 ------+ EN +-+-----+--+ CN +-+----+--+ CN +--+-----+-+ EN +------ 2111 | | | +--+--| +-+-+ | | +--+-----+-+ | 2112 +----+ | | | +--+--+ | | | +--+--+ | | +----+ 2113 | | | | | | | | | | 2114 -------------+ | | | | | | | | +------------- 2115 | | | | | | | | 2116 -------------+ | | | | | | | | +------------- 2117 | | | +--+--+ | | | +--+--+ | | 2118 +----+ | | | | | | +--+--+ | | | +----+ 2119 ------+ +-+--+ | | CN +-+----+--+ CN | | | | +------ 2120 ------+ EN +-+-----+--+ | | | | +--+-----+-+ EN +------ 2121 | | | | +-----+ | | +-----+ | | | | 2122 +----+ | | | | | | +----+ 2123 | +----------+ +-----------+ | 2124 -------------+ Server Network(s) +------------- 2125 Client UNI UNI Client 2126 Network <-----> <-----> Network 2127 Scope of This Document 2129 Legend: EN - Client Edge Node 2130 CN - Server Node 2132 Figure 22 : Ethernet UNI Reference Model 2134 An issue that is often raised concerns how a dual-homed client edge 2135 node (such as that shown at the bottom left-hand corner of Figure 22) 2136 can make determinations about how it connects across the UNI. This 2137 can be particularly important when reachability across the server 2138 network is limited or when two diverse paths are desired (for 2139 example, to provide protection). However, in the model described in 2140 this document, the edge node (the UNI-C) is part of the Abstraction 2141 Layer Network and can see sufficient topology information to make 2142 these decisions. If the approach introduced in this document is used 2143 to model the UNI as described in this section, there is no need to 2144 enhance the signaling protocols at the GMPLS UNI nor to add routing 2145 exchanges at the UNI. 2147 9.
Abstraction in L3VPN Multi-AS Environments 2149 Serving layer-3 VPNs (L3VPNs) across a multi-AS or multi-operator 2150 environment currently presents a significant planning challenge. 2151 Figure 6 shows the general case of the problem that needs to be 2152 solved. This section shows how the Abstraction Layer Network can 2153 address this problem. 2155 In the VPN architecture, the CE nodes are the client network edge 2156 nodes, and the PE nodes are the server network edge nodes. The 2157 Abstraction Layer Network is made up of the CE nodes, the CE-PE 2158 links, the PE nodes, and PE-PE tunnels that are the Abstract Links. 2160 In the multi-AS or multi-operator case, the Abstraction Layer Network 2161 also includes the PEs (maybe ASBRs) at the edges of the multiple 2162 server networks, and the PE-PE (maybe inter-AS) links. This gives 2163 rise to the architecture shown in Figure 23. 2165 ........... ............. 2166 VPN Site : : VPN Site 2167 -- -- : : -- -- 2168 |C1|-|CE| : : |CE|-|C2| 2169 -- | | : : | | -- 2170 | | : : | | 2171 | | : : | | 2172 | | : : | | 2173 | | : -- -- -- -- : | | 2174 | |----|PE|=========|PE|---|PE|=====|PE|----| | 2175 -- : | | | | | | | | : -- 2176 ........... | | | | | | | | ............ 2177 | | | | | | | | 2178 | | | | | | | | 2179 | | | | | | | | 2180 | | - - | | | | - | | 2181 | |-|P|-|P|-| | | |-|P|-| | 2182 -- - - -- -- - -- 2184 Figure 23 : The Abstraction Layer Network for a Multi-AS VPN 2186 The policy for adding Abstract Links to the Abstraction Layer Network 2187 will be driven substantially by the needs of the VPN. Thus, when a 2188 new VPN site is added and the existing Abstraction Layer Network 2189 cannot support the required connectivity, a new Abstract Link will be 2190 created out of the underlying network. 2192 It is important to note that each VPN instance can have a separate 2193 Abstraction Layer Network. This means that the Server Network 2194 resources can be partitioned and that traffic can be kept separate.
2195 This can be achieved even when VPN sites from different VPNs connect 2196 at the same PE. Alternatively, multiple VPNs can share the same 2197 Abstraction Layer Network if that is operationally preferable. 2199 Lastly, just as for the UNI discussed in Section 8, the issue of 2200 dual-homing of VPN sites is a function of the Abstraction Layer 2201 Network and so is just a normal routing problem in that network. 2203 10. Scoping Future Work 2205 This section is provided to help guide the work on this problem and to 2206 ensure that oceans are not knowingly boiled. 2208 10.1. Not Solving the Internet 2210 The scope of the use cases and problem statement in this document is 2211 limited to "some small set of interconnected domains." In 2212 particular, it is not the objective of this work to turn the whole 2213 Internet into one large, interconnected TE network. 2215 10.2. Working With "Related" Domains 2217 Following on from Section 10.1, the intention of this work is to solve 2218 the TE interconnectivity for only "related" domains. Such domains 2219 may be under common administrative operation (such as IGP areas 2220 within a single AS, or ASes belonging to a single operator), or may 2221 have a direct commercial arrangement for the sharing of TE 2222 information to provide specific services. Thus, in both cases, there 2223 is a strong opportunity for the application of policy. 2225 10.3. Not Finding Optimal Paths in All Situations 2227 As has been well described in this document, abstraction necessarily 2228 involves compromises and removal of information. That means that it 2229 is not possible to guarantee that an end-to-end path over 2230 interconnected TE domains follows the absolute optimal (by any measure 2231 of optimality) path. This is taken as understood, and future work 2232 should not attempt to achieve such paths, which can only be found by a 2233 full examination of all network information across all connected 2234 networks. 2236 10.4.
Not Breaking Existing Protocols 2238 It is a clear objective of this work to not break existing protocols. 2239 The Internet relies on the stability of a few key routing protocols, 2240 and so it is critical that any new work must not make these protocols 2241 brittle or unstable. 2243 10.5. Sanity and Scaling 2245 All of the above points play into a final observation. This work is 2246 intended to bite off a small problem for some relatively simple use 2247 cases as described in Section 2. It is not intended that this work 2248 will be immediately (or even soon) extended to cover many large 2249 interconnected domains. Obviously the solution should as far as 2250 possible be designed to be extensible and scalable; however, it is 2251 also reasonable to make trade-offs in favor of utility and 2252 simplicity. 2254 11. Manageability Considerations 2256 2258 12. IANA Considerations 2260 This document makes no requests for IANA action. The RFC Editor may 2261 safely remove this section. 2263 13. Security Considerations 2265 2267 14. Acknowledgements 2269 Thanks to Igor Bryskin for useful discussions in the early stages of 2270 this work. 2272 Thanks to Gert Grammel for discussions on the extent of aggregation 2273 in abstract nodes and links. 2275 Thanks to Deborah Brungard, Dieter Beller, Dhruv Dhody, and 2276 Vallinayakam Somasundaram for review and input. 2278 Particular thanks to Vishnu Pavan Beeram for detailed discussions and 2279 white-board scribbling that made many of the ideas in this document 2280 come to life. 2282 Text in Section 5.3.3 is freely adapted from the work of Igor 2283 Bryskin, Wes Doonan, Vishnu Pavan Beeram, John Drake, Gert Grammel, 2284 Manuel Paul, Ruediger Kunze, Friedrich Armbruster, Cyril Margaria, 2285 Oscar Gonzalez de Dios, and Daniele Ceccarelli in 2286 [I-D.beeram-ccamp-gmpls-enni] for which the authors of this document 2287 express their thanks. 2289 15. References 2291 15.1.
Informative References 2293 [G.8080] ITU-T, "Architecture for the automatically switched optical 2294 network (ASON)", Recommendation G.8080. 2296 [I-D.beeram-ccamp-gmpls-enni] 2297 Bryskin, I., Beeram, V. P., Drake, J. et al., "Generalized 2298 Multiprotocol Label Switching (GMPLS) External Network 2299 Network Interface (E-NNI): Virtual Link Enhancements for 2300 the Overlay Model", draft-beeram-ccamp-gmpls-enni, work in 2301 progress. 2303 [I-D.ietf-ccamp-general-constraint-encode] 2304 Bernstein, G., Lee, Y., Li, D., and Imajuku, W., "General 2305 Network Element Constraint Encoding for GMPLS Controlled 2306 Networks", draft-ietf-ccamp-general-constraint-encode, work 2307 in progress. 2309 [I-D.ietf-ccamp-gmpls-general-constraints-ospf-te] 2310 Zhang, F., Lee, Y., Han, J., Bernstein, G., and Xu, Y., 2311 "OSPF-TE Extensions for General Network Element 2312 Constraints", draft-ietf-ccamp-gmpls-general-constraints- 2313 ospf-te, work in progress. 2315 [I-D.ietf-ccamp-rsvp-te-srlg-collect] 2316 Zhang, F. (Ed.) and O. Gonzalez de Dios (Ed.), "RSVP-TE 2317 Extensions for Collecting SRLG Information", draft-ietf- 2318 ccamp-rsvp-te-srlg-collect, work in progress. 2320 [I-D.ietf-ccamp-te-metric-recording] 2321 Z. Ali, et al., "Resource ReserVation Protocol-Traffic 2322 Engineering (RSVP-TE) extension for recording TE Metric of 2323 a Label Switched Path," draft-ali-ccamp-te-metric- 2324 recording, work in progress. 2326 [I-D.ietf-ccamp-xro-lsp-subobject] 2327 Z. Ali, et al., "Resource ReserVation Protocol-Traffic 2328 Engineering (RSVP-TE) LSP Route Diversity using Exclude 2329 Routes," draft-ali-ccamp-xro-lsp-subobject, work in 2330 progress. 2332 [I-D.ietf-idr-ls-distribution] 2333 Gredler, H., Medved, J., Previdi, S., Farrel, A., and Ray, 2334 S., "North-Bound Distribution of Link-State and TE 2335 Information using BGP", draft-ietf-idr-ls-distribution, 2336 work in progress.
2338 [RFC2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and 2339 McManus, J., "Requirements for Traffic Engineering Over 2340 MPLS", RFC 2702, September 1999. 2342 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 2343 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 2344 Tunnels", RFC 3209, December 2001. 2346 [RFC3473] L. Berger, "Generalized Multi-Protocol Label Switching 2347 (GMPLS) Signaling Resource ReserVation Protocol-Traffic 2348 Engineering (RSVP-TE) Extensions", RFC 3473, January 2003. 2350 [RFC3630] Katz, D., Kompella, K., and Yeung, D., "Traffic Engineering 2351 (TE) Extensions to OSPF Version 2", RFC 3630, September 2352 2003. 2354 [RFC3945] Mannie, E., (Ed.), "Generalized Multi-Protocol Label 2355 Switching (GMPLS) Architecture", RFC 3945, October 2004. 2357 [RFC4105] Le Roux, J.-L., Vasseur, J.-P., and Boyle, J., 2358 "Requirements for Inter-Area MPLS Traffic Engineering", 2359 RFC 4105, June 2005. 2361 [RFC4202] Kompella, K. and Y. Rekhter, "Routing Extensions in Support 2362 of Generalized Multi-Protocol Label Switching (GMPLS)", 2363 RFC 4202, October 2005. 2365 [RFC4206] Kompella, K. and Y. Rekhter, "Label Switched Paths (LSP) 2366 Hierarchy with Generalized Multi-Protocol Label Switching 2367 (GMPLS) Traffic Engineering (TE)", RFC 4206, October 2005. 2369 [RFC4208] Swallow, G., Drake, J., Ishimatsu, H., and Y. Rekhter, 2370 "User-Network Interface (UNI): Resource ReserVation 2371 Protocol-Traffic Engineering (RSVP-TE) Support for the 2372 Overlay Model", RFC 4208, October 2005. 2374 [RFC4216] Zhang, R., and Vasseur, J.-P., "MPLS Inter-Autonomous 2375 System (AS) Traffic Engineering (TE) Requirements", 2376 RFC 4216, November 2005. 2378 [RFC4271] Rekhter, Y., Li, T., and Hares, S., "A Border Gateway 2379 Protocol 4 (BGP-4)", RFC 4271, January 2006. 2381 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 2382 Element (PCE)-Based Architecture", RFC 4655, August 2006.
2384 [RFC4726] Farrel, A., Vasseur, J.-P., and Ayyangar, A., "A Framework 2385 for Inter-Domain Multiprotocol Label Switching Traffic 2386 Engineering", RFC 4726, November 2006. 2388 [RFC4847] T. Takeda (Ed.), "Framework and Requirements for Layer 1 2389 Virtual Private Networks," RFC 4847, April 2007. 2391 [RFC4874] Lee, CY., Farrel, A., and S. De Cnodder, "Exclude Routes - 2392 Extension to Resource ReserVation Protocol-Traffic 2393 Engineering (RSVP-TE)", RFC 4874, April 2007. 2395 [RFC4920] Farrel, A., Satyanarayana, A., Iwata, A., Fujita, N., and 2396 Ash, G., "Crankback Signaling Extensions for MPLS and GMPLS 2397 RSVP-TE", RFC 4920, July 2007. 2399 [RFC5150] Ayyangar, A., Kompella, K., Vasseur, JP., and A. Farrel, 2400 "Label Switched Path Stitching with Generalized 2401 Multiprotocol Label Switching Traffic Engineering (GMPLS 2402 TE)", RFC 5150, February 2008. 2404 [RFC5152] Vasseur, JP., Ayyangar, A., and Zhang, R., "A Per-Domain 2405 Path Computation Method for Establishing Inter-Domain 2406 Traffic Engineering (TE) Label Switched Paths (LSPs)", 2407 RFC 5152, February 2008. 2409 [RFC5195] Ould-Brahim, H., Fedyk, D., and Y. Rekhter, "BGP-Based 2410 Auto-Discovery for Layer-1 VPNs", RFC 5195, June 2008. 2412 [RFC5212] Shiomoto, K., Papadimitriou, D., Le Roux, JL., Vigoureux, 2413 M., and D. Brungard, "Requirements for GMPLS-Based Multi- 2414 Region and Multi-Layer Networks (MRN/MLN)", RFC 5212, July 2415 2008. 2417 [RFC5251] Fedyk, D., Rekhter, Y., Papadimitriou, D., Rabbat, R., and 2418 L. Berger, "Layer 1 VPN Basic Mode", RFC 5251, July 2008. 2420 [RFC5252] Bryskin, I. and L. Berger, "OSPF-Based Layer 1 VPN Auto- 2421 Discovery", RFC 5252, July 2008. 2423 [RFC5305] Li, T., and Smit, H., "IS-IS Extensions for Traffic 2424 Engineering", RFC 5305, October 2008. 2426 [RFC5440] Vasseur, JP. and Le Roux, JL., "Path Computation Element 2427 (PCE) Communication Protocol (PCEP)", RFC 5440, March 2009. 
2429 [RFC5441] Vasseur, JP., Zhang, R., Bitar, N., and Le Roux, JL., "A 2430 Backward-Recursive PCE-Based Computation (BRPC) Procedure 2431 to Compute Shortest Constrained Inter-Domain Traffic 2432 Engineering Label Switched Paths", RFC 5441, April 2009. 2434 [RFC5523] L. Berger, "OSPFv3-Based Layer 1 VPN Auto-Discovery", RFC 2435 5523, April 2009. 2437 [RFC5553] Farrel, A., Bradford, R., and JP. Vasseur, "Resource 2438 Reservation Protocol (RSVP) Extensions for Path Key 2439 Support", RFC 5553, May 2009. 2441 [RFC5623] Oki, E., Takeda, T., Le Roux, JL., and A. Farrel, 2442 "Framework for PCE-Based Inter-Layer MPLS and GMPLS Traffic 2443 Engineering", RFC 5623, September 2009. 2445 [RFC6005] Berger, L., and D. Fedyk, "Generalized MPLS (GMPLS) Support 2446 for Metro Ethernet Forum and G.8011 User Network Interface 2447 (UNI)", RFC 6005, October 2010. 2449 [RFC6107] Shiomoto, K., and A. Farrel, "Procedures for Dynamically 2450 Signaled Hierarchical Label Switched Paths", RFC 6107, 2451 February 2011. 2453 [RFC6805] King, D., and A. Farrel, "The Application of the Path 2454 Computation Element Architecture to the Determination of a 2455 Sequence of Domains in MPLS and GMPLS", RFC 6805, November 2456 2012. 2458 [RFC6827] Malis, A., Lindem, A., and D. Papadimitriou, "Automatically 2459 Switched Optical Network (ASON) Routing for OSPFv2 2460 Protocols", RFC 6827, January 2013. 2462 [RFC6996] J. Mitchell, "Autonomous System (AS) Reservation for 2463 Private Use", BCP 6, RFC 6996, July 2013. 2465 [RFC7399] Farrel, A. and D. King, "Unanswered Questions in the Path 2466 Computation Element Architecture", RFC 7399, October 2014. 2468 Authors' Addresses 2470 Adrian Farrel 2471 Juniper Networks 2472 EMail: adrian@olddog.co.uk 2474 John Drake 2475 Juniper Networks 2476 EMail: jdrake@juniper.net 2478 Nabil Bitar 2479 Verizon 2480 40 Sylvan Road 2481 Waltham, MA 02145 2482 EMail: nabil.bitar@verizon.com 2483 George Swallow 2484 Cisco Systems, Inc.
2485 1414 Massachusetts Ave 2486 Boxborough, MA 01719 2487 EMail: swallow@cisco.com 2489 Daniele Ceccarelli 2490 Ericsson 2491 Via A. Negrone 1/A 2492 Genova - Sestri Ponente 2493 Italy 2494 EMail: daniele.ceccarelli@ericsson.com 2496 Xian Zhang 2497 Huawei Technologies 2498 Email: zhang.xian@huawei.com 2500 Contributors 2502 Gert Grammel 2503 Juniper Networks 2504 Email: ggrammel@juniper.net 2506 Vishnu Pavan Beeram 2507 Juniper Networks 2508 Email: vbeeram@juniper.net 2510 Oscar Gonzalez de Dios 2511 Email: ogondio@tid.es 2513 Fatai Zhang 2514 Email: zhangfatai@huawei.com 2516 Zafar Ali 2517 Email: zali@cisco.com 2519 Rajan Rao 2520 Email: rrao@infinera.com 2522 Sergio Belotti 2523 Email: sergio.belotti@alcatel-lucent.com 2525 Diego Caviglia 2526 Email: diego.caviglia@ericsson.com 2528 Jeff Tantsura 2529 Email: jeff.tantsura@ericsson.com 2530 Khuzema Pithewan 2531 Email: kpithewan@infinera.com 2533 Cyril Margaria 2534 Email: cyril.margaria@googlemail.com 2536 Victor Lopez 2537 Email: vlopez@tid.es