idnits 2.17.1 draft-ietf-rtgwg-bgp-pic-05.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- == There are 5 instances of lines with non-RFC6890-compliant IPv4 addresses in the document. If these are example addresses, they should be changed. -- The document has examples using IPv4 documentation addresses according to RFC6890, but does not use any IPv6 documentation addresses. Maybe there should be IPv6 examples, too? Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == The document doesn't use any RFC 2119 keywords, yet seems to have RFC 2119 boilerplate text. == The document seems to contain a disclaimer for pre-RFC5378 work, but was first submitted on or after 10 November 2008. The disclaimer is usually necessary only for documents that revise or obsolete older RFCs, and that take significant amounts of text from those RFCs. If you can contact all authors of the source material and they are willing to grant the BCP78 rights to the IETF Trust, you can and should remove the disclaimer. Otherwise, the disclaimer is needed and you can ignore this comment. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (May 25, 2017) is 2499 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- ** Obsolete normative reference: RFC 3107 (ref. '4') (Obsoleted by RFC 8277) == Outdated reference: A later version (-15) exists of draft-ietf-idr-add-paths-12 == Outdated reference: A later version (-22) exists of draft-ietf-spring-segment-routing-mpls-02 Summary: 1 error (**), 0 flaws (~~), 6 warnings (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 Network Working Group A. Bashandy, Ed. 2 Internet Draft C. Filsfils 3 Intended status: Informational Cisco Systems 4 Expires: November 2017 P. Mohapatra 5 Sproute Networks 6 May 25, 2017 8 BGP Prefix Independent Convergence 9 draft-ietf-rtgwg-bgp-pic-05.txt 11 Abstract 13 In the network comprising thousands of iBGP peers exchanging millions 14 of routes, many routes are reachable via more than one next-hop. 15 Given the large scaling targets, it is desirable to restore traffic 16 after failure in a time period that does not depend on the number of 17 BGP prefixes. In this document we proposed an architecture by which 18 traffic can be re-routed to ECMP or pre-calculated backup paths in a 19 timeframe that does not depend on the number of BGP prefixes. The 20 objective is achieved through organizing the forwarding data 21 structures in a hierarchical manner and sharing forwarding elements 22 among the maximum possible number of routes. 
The proposed technique 23 achieves prefix independent convergence while ensuring incremental 24 deployment, complete automation, and zero management and provisioning 25 effort. It is noteworthy to mention that the benefits of BGP-PIC are 26 hinged on the existence of more than one path whether as ECMP or 27 primary-backup. 29 Status of this Memo 31 This Internet-Draft is submitted in full conformance with the 32 provisions of BCP 78 and BCP 79. 34 This document may contain material from IETF Documents or IETF 35 Contributions published or made publicly available before November 36 10, 2008. The person(s) controlling the copyright in some of this 37 material may not have granted the IETF Trust the right to allow 38 modifications of such material outside the IETF Standards Process. 39 Without obtaining an adequate license from the person(s) 40 controlling the copyright in such materials, this document may not 41 be modified outside the IETF Standards Process, and derivative 42 works of it may not be created outside the IETF Standards Process, 43 except to format it for publication as an RFC or to translate it 44 into languages other than English. 46 Internet-Drafts are working documents of the Internet Engineering 47 Task Force (IETF), its areas, and its working groups. Note that 48 other groups may also distribute working documents as Internet- 49 Drafts. 51 Internet-Drafts are draft documents valid for a maximum of six 52 months and may be updated, replaced, or obsoleted by other 53 documents at any time. It is inappropriate to use Internet-Drafts 54 as reference material or to cite them other than as "work in 55 progress." 57 The list of current Internet-Drafts can be accessed at 58 http://www.ietf.org/ietf/1id-abstracts.txt 60 The list of Internet-Draft Shadow Directories can be accessed at 61 http://www.ietf.org/shadow.html 63 This Internet-Draft will expire on November 25, 2017. 65 Copyright Notice 67 Copyright (c) 2017 IETF Trust and the persons identified as the 68 document authors. All rights reserved. 70 This document is subject to BCP 78 and the IETF Trust's Legal 71 Provisions Relating to IETF Documents 72 (http://trustee.ietf.org/license-info) in effect on the date of 73 publication of this document. Please review these documents 74 carefully, as they describe your rights and restrictions with 75 respect to this document. Code Components extracted from this 76 document must include Simplified BSD License text as described in 77 Section 4.e of the Trust Legal Provisions and are provided without 78 warranty as described in the Simplified BSD License. 80 Table of Contents 82 1. Introduction...................................................3 83 1.1. Conventions used in this document.........................4 84 1.2. Terminology...............................................4 85 2. Overview.......................................................6 86 2.1. Dependency................................................6 87 2.1.1. Hierarchical Hardware FIB............................6 88 2.1.2. Availability of more than one primary or secondary BGP 89 next-hops...................................................7 90 2.2. BGP-PIC Illustration......................................7 91 3. Constructing the Shared Hierarchical Forwarding Chain..........9 92 3.1. Constructing the BGP-PIC forwarding Chain.................9 93 3.2. Example: Primary-Backup Path Scenario....................10 94 4. Forwarding Behavior...........................................11 95 5. 
Handling Platforms with Limited Levels of Hierarchy...........12 96 5.1. Flattening the Forwarding Chain..........................12 97 5.2. Example: Flattening a forwarding chain...................14 98 6. Forwarding Chain Adjustment at a Failure......................21 99 6.1. BGP-PIC core.............................................22 100 6.2. BGP-PIC edge.............................................23 101 6.2.1. Adjusting forwarding Chain in egress node failure...23 102 6.2.2. Adjusting Forwarding Chain on PE-CE link Failure....23 103 6.3. Handling Failures for Flattened Forwarding Chains........24 104 7. Properties....................................................25 105 7.1. Coverage.................................................25 106 7.1.1. A remote failure on the path to a BGP next-hop......25 107 7.1.2. A local failure on the path to a BGP next-hop.......25 108 7.1.3. A remote iBGP next-hop fails........................26 109 7.1.4. A local eBGP next-hop fails.........................26 110 7.2. Performance..............................................26 111 7.3. Automated................................................27 112 7.4. Incremental Deployment...................................27 113 8. Security Considerations.......................................27 114 9. IANA Considerations...........................................27 115 10. Conclusions..................................................27 116 11. References...................................................28 117 11.1. Normative References....................................28 118 11.2. Informative References..................................28 119 12. Acknowledgments..............................................29 120 Appendix A. Perspective..........................................30 122 1. Introduction 124 As a path vector protocol, BGP propagates reachability serially. 125 Hence BGP convergence speed is limited by the time taken to 126 serially propagate reachability information from the point of 127 failure to the device that must re-converge. BGP speakers exchange 128 reachability information about prefixes[2][3] and, for labeled 129 address families, namely AFI/SAFI 1/4, 2/4, 1/128, and 2/128, an 130 edge router assigns local labels to prefixes and associates the 131 local label with each advertised prefix such as L3VPN [8], 6PE 132 [9], and Softwire [7] using BGP label unicast technique[4]. A BGP 133 speaker then applies the path selection steps to choose the best 134 path. In modern networks, it is not uncommon to have a prefix 135 reachable via multiple edge routers. In addition to proprietary 136 techniques, multiple techniques have been proposed to allow for 137 BGP to advertise more than one path for a given prefix 138 [6][11][12], whether in the form of equal cost multipath or 139 primary-backup. Another common and widely deployed scenario is 140 L3VPN with multi-homed VPN sites with unique Route Distinguisher. 141 It is advantageous to utilize the commonality among paths used by 142 NLRIs to significantly improve convergence in case of topology 143 modifications. 145 This document proposes a hierarchical and shared forwarding chain 146 organization that allows traffic to be restored to pre-calculated 147 alternative equal cost primary path or backup path in a time 148 period that does not depend on the number of BGP prefixes. 
The 149 technique relies on internal router behavior that is completely 150 transparent to the operator and can be incrementally deployed and 151 enabled with zero operator intervention. 153 1.1. Conventions used in this document 155 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL 156 NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" 157 in this document are to be interpreted as described in RFC-2119 158 [1]. 160 In this document, these words will appear with that interpretation 161 only when in ALL CAPS. Lower case uses of these words are not to 162 be interpreted as carrying RFC-2119 significance. 164 1.2. Terminology 166 This section defines the terms used in this document. For ease of 167 use, we will use terms similar to those used by L3VPN [8]. 169 o BGP prefix: A prefix P/m (of any AFI/SAFI) that a BGP speaker 170 has a path for. 172 o IGP prefix: A prefix P/m (of any AFI/SAFI) for which a path is 173 learnt via an Interior Gateway Protocol, such as OSPF and ISIS. 174 The prefix may be learnt directly through the IGP or 175 redistributed from other protocol(s) 177 o CE: An external router through which an egress PE can reach a 178 prefix P/m. 180 o Ingress PE, "iPE": A BGP speaker that learns about a prefix 181 through an iBGP peer and chooses an egress PE as the next-hop for 182 the prefix. 184 o Path: The next-hop in a sequence of nodes starting from the 185 current node and ending with the destination node or network 186 identified by the prefix. The nodes may not be directly 187 connected. 189 o Recursive path: A path consisting only of the IP address of the 190 next-hop without the outgoing interface. Subsequent lookups are 191 necessary to determine the outgoing interface and a directly 192 connected next-hop 194 o Non-recursive path: A path consisting of the IP address of a 195 directly connected next-hop and outgoing interface 197 o Primary path: A recursive or non-recursive path that can be 198 used all the time as long as a walk starting from this path can 199 end at an adjacency. A prefix can have more than one primary 200 path 202 o Backup path: A recursive or non-recursive path that can be used 203 only after some or all primary paths become unreachable 205 o Leaf: A container data structure for a prefix or local label. 206 Alternatively, it is the data structure that contains prefix 207 specific information. 209 o IP leaf: The leaf corresponding to an IPv4 or IPv6 prefix 211 o Label leaf: The leaf corresponding to a locally allocated label 212 such as the VPN label on an egress PE [8]. 214 o Pathlist: An array of paths used by one or more prefixes to forward 215 traffic to destination(s) covered by an IP prefix. Each path in 216 the pathlist carries its "path-index" that identifies its 217 position in the array of paths. In general, the value of the 218 "path-index" stored in a path may not necessarily be the same as 219 the position of the path in the pathlist. For example, 220 the 3rd path may carry a path-index value of 1. 222 o A pathlist may contain a mix of primary and backup paths 224 o OutLabel-List: Each labeled prefix is associated with an 225 OutLabel-List. The OutLabel-List is an array of one or more 226 outgoing labels and/or label actions where each label or label 227 action has a 1-to-1 correspondence to a path in the pathlist. 228 Label actions are: push the label, pop the label, swap the 229 incoming label with the label in the OutLabel-List entry, or 230 do not push anything at all in the case of "unlabeled".
The prefix 231 may be an IGP or BGP prefix 233 o Adjacency: The layer 2 encapsulation leading to the layer 3 234 directly connected next-hop 236 o Dependency: An object X is said to be a dependent or child of 237 object Y if there is at least one forwarding chain where the 238 forwarding engine must visit the object X before visiting the 239 object Y in order to forward a packet. Note that if object X is 240 a child of object Y, then Y cannot be deleted unless object X 241 is no longer a dependent/child of object Y 243 o Route: A prefix with one or more paths associated with it. 244 Hence the minimum set of objects needed to construct a route is 245 a leaf and a pathlist. 247 2. Overview 249 The idea of BGP-PIC is based on two pillars: 251 o A shared hierarchical forwarding chain: It is not uncommon to see 252 multiple destinations reachable via the same list of next- 253 hops. Instead of having a separate list of next-hops for each 254 destination, all destinations sharing the same list of next-hops 255 can point to a single copy of this list, thereby allowing fast 256 convergence by making changes to a single shared list of next- 257 hops rather than to a possibly large number of destinations. Because 258 paths in a pathlist may be recursive, a hierarchy is formed 259 between the pathlist and the resolving prefix whereby the pathlist 260 depends on the resolving prefix. 262 o A forwarding plane that supports multiple levels of indirection: 263 A forwarding chain that starts with a destination and ends with an 264 outgoing interface is not a simple flat structure. Instead a 265 forwarding entry is constructed via multiple levels of 266 dependency. A BGP NLRI uses a recursive next-hop, which in turn 267 resolves via an IGP next-hop, which in turn resolves via an 268 adjacency consisting of one or more outgoing interface(s) and 269 next-hop(s). 271 Designing a forwarding plane that constructs multi-level forwarding 272 chains with maximal sharing of forwarding objects allows rerouting a 273 large number of destinations by modifying a small number of objects, 274 thereby achieving convergence in a time frame that does not depend 275 on the number of destinations. For example, if the IGP prefix that 276 resolves a recursive next-hop is updated, there is no need to update 277 the possibly large number of BGP NLRIs that use this recursive next- 278 hop. 280 2.1. Dependency 282 This section describes the required functionality in the forwarding 283 and control planes to support BGP-PIC as described in this document. 285 2.1.1. Hierarchical Hardware FIB 287 BGP PIC requires hierarchical hardware FIB support: for each BGP 288 forwarded packet, a BGP leaf is looked up, then a BGP Pathlist is 289 consulted, then an IGP Pathlist, then an Adjacency. 291 An alternative method consists of "flattening" the dependencies when 292 programming the BGP destinations into the HW FIB, resulting in 293 potentially eliminating both the BGP Path-List and IGP Path-List 294 consultation. Such an approach decreases the number of memory 295 lookups per forwarding operation at the expense of HW FIB memory 296 increase (flattening means less sharing hence duplication), loss of 297 ECMP properties (flattening means less pathlist entropy) and loss of 298 BGP PIC properties. 300 2.1.2. Availability of more than one primary or secondary BGP next-hops 302 When the primary BGP next-hop fails, BGP PIC depends on the 303 availability of a pre-computed and pre-installed secondary BGP next- 304 hop in the BGP Pathlist.
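To make the hierarchy of Section 2.1.1 and the pre-installed secondary next-hop concrete, the following sketch models the shared forwarding chain objects in Python. It is purely illustrative; the class and field names (Leaf, Pathlist, Path, Adjacency) are assumptions made for this example and do not describe any particular router implementation.

   # Illustrative model of the hierarchical, shared forwarding chain.
   # All names are hypothetical; this is not a router implementation.

   class Adjacency:
       def __init__(self, interface, l2_nexthop):
           self.interface = interface        # outgoing interface
           self.l2_nexthop = l2_nexthop      # directly connected next-hop

   class Path:
       def __init__(self, path_index, parent, backup=False):
           self.path_index = path_index      # index into the leaf's OutLabel-List
           self.parent = parent              # IGP leaf (recursive) or Adjacency
           self.backup = backup              # pre-installed backup path
           self.usable = True                # cleared when the path fails

   class Pathlist:
       def __init__(self, paths):
           self.paths = paths                # shared by many leaves

       def pick(self, flow_hash):
           # Prefer usable primary paths; fall back to pre-installed backups.
           usable = [p for p in self.paths if p.usable and not p.backup]
           if not usable:
               usable = [p for p in self.paths if p.usable]
           return usable[flow_hash % len(usable)]

   class Leaf:
       def __init__(self, prefix, pathlist, out_labels=None):
           self.prefix = prefix
           self.pathlist = pathlist            # parent pathlist (shared object)
           self.out_labels = out_labels or []  # one entry per path-index

   # Two BGP leaves share one pathlist whose secondary path is already
   # installed; a next-hop failure only clears Path.usable in one object.
   igp1 = Leaf("192.0.2.1/32", Pathlist([Path(0, Adjacency("I1", "IGP-NH1"))]))
   igp2 = Leaf("192.0.2.2/32", Pathlist([Path(0, Adjacency("I2", "IGP-NH2"))]))
   bgp_pathlist = Pathlist([Path(0, igp1), Path(1, igp2, backup=True)])
   vpn1 = Leaf("11.1.1.0/24", bgp_pathlist, ["VPN-L11", "VPN-L21"])
   vpn2 = Leaf("11.1.2.0/24", bgp_pathlist, ["VPN-L12", "VPN-L22"])

Forwarding a packet for either VPN prefix walks leaf, shared BGP pathlist, IGP leaf, and adjacency in turn; when the path via 192.0.2.1 fails, clearing its "usable" flag in the single shared pathlist redirects both prefixes at once.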
306 The existence of a secondary next-hop is clear for the following 307 reason: a service caring for network availability will require two 308 disjoint network connections and hence two BGP next-hops. 310 The BGP distribution of the secondary next-hop is available thanks 311 to the following BGP mechanisms: Add-Path [11], BGP Best-External 312 [6], diverse path [12], and the frequent use in VPN deployments of 313 different VPN RDs per PE. It is noteworthy to mention that the 314 availability of another BGP path does not mean that all failure 315 scenarios can be covered by simply forwarding traffic to the 316 available secondary path. The discussion of how to cover various 317 failure scenarios is beyond the scope of this document. 319 2.2. BGP-PIC Illustration 321 To illustrate the two pillars above as well as the platform 322 dependency, we will use an example of a simple multihomed L3VPN [8] 323 prefix in a BGP-free core running LDP [5] or segment routing over an 324 MPLS forwarding plane [14]. 326 +--------------------------------+ 327 | | 328 | ePE2 (IGP-IP2 192.0.2.2, Loopback) 329 | | \ 330 | | \ 331 | | \ 332 iPE | CE....VRF "Blue", ASnum 65000 333 | | / (VPN-IP1 11.1.1.0/24) 334 | | / (VPN-IP2 11.1.2.0/24) 335 | LDP/Segment-Routing Core | / 336 | ePE1 (IGP-IP1 192.0.2.1, Loopback) 337 | | 338 +--------------------------------+ 339 Figure 1 VPN prefix reachable via multiple PEs 341 Referring to Figure 1, suppose the iPE (the ingress PE) receives 342 NLRIs for the VPN prefixes VPN-IP1 and VPN-IP2 from two egress PEs, 343 ePE1 and ePE2, with next-hops BGP-NH1 and BGP-NH2, respectively. 344 Assume that ePE1 advertises the VPN labels VPN-L11 and VPN-L12 while 345 ePE2 advertises the VPN labels VPN-L21 and VPN-L22 for VPN-IP1 and 346 VPN-IP2, respectively. Suppose that BGP-NH1 and BGP-NH2 are resolved 347 via the IGP prefixes IGP-IP1 and IGP-IP2, where each happens to have 2 348 ECMP paths with IGP-NH1 and IGP-NH2 reachable via the interfaces I1 349 and I2, respectively. Suppose that the local labels (whether LDP [5] or 350 segment routing [14]) on the downstream LSRs for IGP-IP1 are IGP-L11 351 and IGP-L12 while those for IGP-IP2 are IGP-L21 and IGP-L22. As such, the 352 routing table at iPE is as follows: 354 65000:11.1.1.0/24 355 via ePE1 (192.0.2.1), VPN Label: VPN-L11 356 via ePE2 (192.0.2.2), VPN Label: VPN-L21 358 65000:11.1.2.0/24 359 via ePE1 (192.0.2.1), VPN Label: VPN-L12 360 via ePE2 (192.0.2.2), VPN Label: VPN-L22 362 192.0.2.1/32 363 via Core, Label: IGP-L11 364 via Core, Label: IGP-L12 366 192.0.2.2/32 367 via Core, Label: IGP-L21 368 via Core, Label: IGP-L22 370 Based on the above routing table, a hierarchical forwarding chain 371 can be constructed as shown in Figure 2. 373 IP Leaf: Pathlist: IP Leaf: Pathlist: 374 -------- +-------+ -------- +----------+ 375 VPN-IP1-->|BGP-NH1|-->IGP-IP1(BGP NH1)--->|IGP NH1,I1|--->Adjacency1 376 | |BGP-NH2|-->.... | |IGP NH2,I2|--->Adjacency2 377 | +-------+ | +----------+ 378 | | 379 | | 380 v v 381 OutLabel-List: OutLabel-List: 382 +----------------------+ +----------------------+ 383 |VPN-L11 (VPN-IP1, NH1)| |IGP-L11 (IGP-IP1, NH1)| 384 |VPN-L12 (VPN-IP1, NH2)| |IGP-L12 (IGP-IP1, NH2)| 385 +----------------------+ +----------------------+ 387 Figure 2 Shared Hierarchical Forwarding Chain at iPE 389 The forwarding chain depicted in Figure 2 illustrates the first 390 pillar, which is sharing and hierarchy. We can see that the BGP 391 pathlist consisting of BGP-NH1 and BGP-NH2 is shared by all NLRIs 392 reachable via ePE1 and ePE2.
As such, it is possible to make changes 393 to the pathlist without having to make changes to the NLRIs. For 394 example, if BGP-NH2 becomes unreachable, there is no need to modify 395 any of the possibly large number of NLRIs. Instead only the shared 396 pathlist needs to be modified. Likewise, due to the hierarchical 397 structure of the forwarding chain, it is possible to make 398 modifications to the IGP routes without having to make any changes 399 to the BGP NLRIs. For example, if the interface "I2" goes down, only 400 the shared IGP pathlist needs to be updated, but none of the IGP 401 prefixes sharing the IGP pathlist nor the BGP NLRIs using the IGP 402 prefixes for resolution need to be modified. 404 Figure 2 can also be used to illustrate the second BGP-PIC pillar. 405 Having a deep forwarding chain such as the one illustrated in Figure 406 2 requires a forwarding plane that is capable of accessing multiple 407 levels of indirection in order to calculate the outgoing 408 interface(s) and next-hop(s). While a deeper forwarding chain 409 minimizes the re-convergence time on topology change, there will 410 always exist platforms with limited capabilities that impose 411 a limit on the depth of the forwarding chain. Section 5 describes 412 how to gracefully trade off convergence speed with the number of 413 hierarchical levels to support platforms with different 414 capabilities. 416 3. Constructing the Shared Hierarchical Forwarding Chain 418 Constructing the forwarding chain is an application of the two 419 pillars described in Section 2. This section describes how to 420 construct the forwarding chain in a hierarchical and shared manner. 422 3.1. Constructing the BGP-PIC forwarding Chain 424 The whole process starts when BGP downloads a prefix to FIB. The 425 prefix contains one or more outgoing paths. For certain labeled 426 prefixes, such as VPN [8] prefixes, each path may be associated with 427 an outgoing label and the prefix itself may be assigned a local 428 label. The list of outgoing paths defines a pathlist. If such a 429 pathlist does not already exist, then FIB creates a new pathlist, 430 otherwise the existing pathlist is used. The BGP prefix is added as 431 a dependent of the pathlist. 433 The previous step constructs the upper part of the hierarchical 434 forwarding chain. The forwarding chain is completed by resolving the 435 paths of the pathlist. A BGP path usually consists of a next-hop. 436 The next-hop is resolved by finding a matching IGP prefix. 438 The end result is a hierarchical shared forwarding chain where the 439 BGP pathlist is shared by all BGP prefixes that use the same list of 440 paths and the IGP prefix is shared by all pathlists that have a path 441 resolving via that IGP prefix. It is noteworthy to mention that the 442 forwarding chain is constructed without any operator intervention at 443 all. 445 The remainder of this section goes over an example to illustrate the 446 applicability of BGP-PIC in a primary-backup path scenario. 448 3.2. Example: Primary-Backup Path Scenario 450 Consider the egress PE ePE1 in the case of the multi-homed VPN 451 prefixes in the BGP-free core depicted in Figure 1. Suppose ePE1 452 determines that the primary path is the external path but the backup 453 path is the iBGP path to the other PE ePE2 with next-hop BGP-NH2. 454 ePE1 constructs the forwarding chain depicted in Figure 3. We are 455 only showing a single VPN prefix for simplicity.
But all prefixes 456 that are multihomed to ePE1 and ePE2 share the BGP pathlist. 458 BGP OutLabel Array 459 VPN-L11 +---------+ 460 (Label-leaf)---+---->|Unlabeled| 461 | +---------+ 462 | | VPN-L21 | 463 | | (swap) | 464 | +---------+ 465 | 466 | BGP Pathlist 467 | +------------+ Connected route 468 | | CE-NH |------>(to the CE) 469 | |path-index=0| 470 | +------------+ 471 | | VPN-NH2 | 472 VPN-IP1 -----+------------------>| (backup) |------>IGP Leaf 473 (IP prefix leaf) |path-index=1| (Towards ePE2) 474 | +------------+ 475 | 476 | BGP OutLabel Array 477 | +---------+ 478 +------------->|Unlabeled| 479 +---------+ 480 | VPN-L21 | 481 | (push) | 482 +---------+ 484 Figure 3 : VPN Prefix Forwarding Chain with eiBGP paths on egress PE 485 The example depicted in Figure 3 differs from the example in Figure 486 2 in two main aspects. First, as long as the primary path towards 487 the CE (external path) is useable, it will be the only path used for 488 forwarding while the OutLabel-List contains both the unlabeled label 489 (primary path) and the VPN label (backup path) advertised by the 490 backup path ePE2. The second aspect is presence of the label leaf 491 corresponding to the VPN prefix. This label leaf is used to match 492 VPN traffic arriving from the core. Note that the label leaf shares 493 the pathlist with the IP prefix. 495 4. Forwarding Behavior 497 This section explains how the forwarding plane uses the hierarchical 498 shared forwarding chain to forward a packet. 500 When a packet arrives at a router, it matches a leaf. A labeled 501 packet matches a label leaf while an IP packet matches an IP prefix 502 leaf. The forwarding engines walks the forwarding chain starting 503 from the leaf until the walk terminates on an adjacency. Thus when a 504 packet arrives, the chain is walked as follows: 506 1. Lookup the leaf based on the destination address or the label at 507 the top of the packet 509 2. Retrieve the parent pathlist of the leaf 511 3. Pick the outgoing path "Pi" from the list of resolved paths in 512 the pathlist. The method by which the outgoing path is picked is 513 beyond the scope of this document (e.g. flow-preserving hash 514 exploiting entropy within the MPLS stack and IP header). Let the 515 "path-index" of the outgoing path "Pi" be "j". 517 4. If the prefix is labeled, use the "path-index" "j" to retrieve 518 the jth label "Lj" stored the jth entry in the OutLabel-List and 519 apply the label action of the label on the packet (e.g. for VPN 520 label on the ingress PE, the label action is "push"). As 521 mentioned in Section 1.2, the value of the "path-index" stored 522 in path may not necessarily be the same value of the location of 523 the path in the pathlist. 525 5. Move to the parent of the chosen path "Pi" 527 6. If the chosen path "Pi" is recursive, move to its parent prefix 528 and go to step 2 530 7. If the chosen path is non-recursive move to its parent adjacency. 531 Otherwise go to the next step. 533 8. Encapsulate the packet in the layer string specified by the 534 adjacency and send the packet out. 536 Let's apply the above forwarding steps to the forwarding chain 537 depicted in Figure 2 in Section 2. Suppose a packet arrives at 538 ingress PE iPE from an external neighbor. Assume the packet matches 539 the VPN prefix VPN-IP1. While walking the forwarding chain, the 540 forwarding engine applies a hashing algorithm to choose the path and 541 the hashing at the BGP level yields path 0 while the hashing at the 542 IGP level yields path 1. 
In that case, the packet will be sent out 543 of interface I2 with the label stack "IGP-L12,VPN-L11". 545 5. Handling Platforms with Limited Levels of Hierarchy 547 This section describes the construction of the forwarding chain if a 548 platform does not support the number of recursion levels required to 549 resolve the NLRIs. There are two main design objectives: 551 o Being able to reduce the number of hierarchical levels from any 552 arbitrary value to a smaller arbitrary value that can be 553 supported by the forwarding engine 555 o Minimal modifications to the forwarding algorithm due to such 556 reduction. 558 5.1. Flattening the Forwarding Chain 560 Let's consider a pathlist associated with the leaf "R1" consisting 561 of the list of paths <P1, P2, ..., Pi, ..., Pn>. Assume that the leaf "R1" has 562 an Outlabel-list <L1, L2, ..., Li, ..., Ln>. Suppose the path Pi is a 563 recursive path that resolves via a prefix represented by the leaf 564 "R2". The leaf "R2" itself is pointing to a pathlist consisting of 565 the paths <Q1, Q2, ..., Qm>. 567 If the platform supports the number of hierarchy levels of the 568 forwarding chain, then a packet that uses the path "Pi" will be 569 forwarded as follows: 571 1. The forwarding engine is now at leaf "R1" 573 2. So it moves to its parent pathlist, which contains the list <P1, P2, ..., Pi, ..., Pn>. 576 3. The forwarding engine applies a hashing algorithm and picks the 577 path "Pi". So now the forwarding engine is at the path "Pi" 579 4. The forwarding engine retrieves the label "Li" from the outlabel- 580 list attached to the leaf "R1" and applies the label action 582 5. The path "Pi" uses the leaf "R2" 583 6. The forwarding engine walks forward to the leaf "R2" for 584 resolution 586 7. The forwarding plane performs a hash to pick a path among the 587 pathlist of the leaf "R2", which is <Q1, Q2, ..., Qm> 589 8. Suppose the forwarding engine picks the path "Qj" 591 9. Now the forwarding engine continues the walk to the parent of 592 "Qj" 594 Suppose the platform cannot support the number of hierarchy levels 595 in the forwarding chain. FIB needs to reduce the number of hierarchy 596 levels. The idea of reducing the number of hierarchy levels is to 597 "flatten" two chain levels into a single level. The "flattening" 598 steps are as follows: 600 1. FIB wants to reduce the number of levels used by "Pi" by 1 602 2. FIB walks to the parent of "Pi", which is the leaf "R2" 604 3. FIB extracts the parent pathlist of the leaf "R2", which is <Q1, Q2, ..., Qm> 607 4. FIB also extracts the OutLabel-list(R2) associated with the leaf 608 "R2". Remember that OutLabel-list(R2) = <VL1, VL2, ..., VLm> 610 5. FIB replaces the path "Pi" with the list of paths <Q1, Q2, ..., Qm> 613 6. Hence the path list now becomes "<P1, P2, ..., P(i-1), Q1, Q2, ..., Qm, P(i+1), ..., Pn>" 616 7. The path index stored inside the locations "Q1", "Q2", ..., "Qm" 617 must all be "i" because the index "i" refers to the label "Li" 618 associated with leaf "R1" 620 8. FIB attaches an OutLabel-list with the new pathlist as follows: 621 <empty, ..., empty, VL1, VL2, ..., VLm, empty, ..., empty>. The size of the label list associated with the 623 flattened pathlist equals the size of the pathlist. Hence there 624 is a 1-1 mapping between every path in the "flattened" pathlist 625 and the OutLabel-list associated with it. 627 It is noteworthy to mention that the labels in the outlabel-list 628 associated with the "flattened" pathlist may be stored in the same 629 memory location as the path itself to avoid additional memory 630 access. But that is an implementation detail that is beyond the 631 scope of this document.
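To make the eight flattening steps above concrete, here is a small sketch in Python. It is illustrative only; the dictionary-based path model and the function name are assumptions made for this example rather than part of any implementation.

   # Illustrative sketch of the flattening steps in Section 5.1.
   # A path is a small dict; "path_index" indexes the OutLabel-list of
   # the leaf (R1) that uses the pathlist being flattened.

   def flatten_path(pathlist, i, r2_pathlist, r2_out_labels):
       """Replace the recursive path at position i with the paths of its
       parent leaf R2 and return (flattened pathlist, OutLabel-list
       attached to the flattened pathlist)."""
       flattened = []
       attached_labels = []
       for pos, path in enumerate(pathlist):
           if pos != i:
               flattened.append(dict(path))      # unchanged path
               attached_labels.append("empty")   # no extra label pushed
           else:
               for q, vl in zip(r2_pathlist, r2_out_labels):
                   new_path = dict(q)
                   # Step 7: the path-index stays "i" so that the label
                   # "Li" of leaf R1 is still picked for these paths.
                   new_path["path_index"] = path["path_index"]
                   flattened.append(new_path)
                   attached_labels.append(vl)    # Step 8: label of R2's path
       return flattened, attached_labels

   # R1's pathlist <P1, P2> where P2 resolves via leaf R2 with <Q1, Q2>
   p_list = [{"nh": "P1", "path_index": 0}, {"nh": "R2", "path_index": 1}]
   q_list = [{"nh": "Q1", "path_index": 0}, {"nh": "Q2", "path_index": 1}]
   flat, labels = flatten_path(p_list, 1, q_list, ["VL1", "VL2"])
   # flat   -> P1, Q1, Q2 with path_index 0, 1, 1
   # labels -> ["empty", "VL1", "VL2"]

Applying the function once removes one level of recursion; applying it again to any remaining recursive path reduces the depth further, mirroring the recursive application described below.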
633 The same steps can be applied to all paths in the pathlist so that all paths are "flattened" thereby reducing the 635 number of hierarchical levels by one. Note that that "flattening" a 636 pathlist pulls in all paths of the parent paths, a desired feature 637 to utilize all ECMP/UCMP paths at all levels. A platform that has a 638 limit on the number of paths in a pathlist for any given leaf may 639 choose to reduce the number paths using methods that are beyond the 640 scope of this document. 642 The steps can be recursively applied to other paths at the same 643 levels or other levels to recursively reduce the number of 644 hierarchical levels to an arbitrary value so as to accommodate the 645 capability of the forwarding engine. 647 Because a flattened pathlist may have an associated OutLabel-list 648 the forwarding behavior has to be slightly modified. The 649 modification is done by adding the following step right after step 4 650 in Section 4. 652 5. If there is an OutLabel-list associated with the pathlist, then 653 if the path "Pi" is chosen by the hashing algorithm, retrieve the 654 label at location "i" in that OutLabel-list and apply the label 655 action of that label on the packet 657 In the next subsection, we apply the steps in this subsection to a 658 sample scenario. 660 5.2. Example: Flattening a forwarding chain 662 This example uses a case of inter-AS option C [8] where there are 3 663 levels of hierarchy. Figure 4 illustrates the sample topology. To 664 force 3 levels of hierarchy, the ASBRs on the ingress domain (domain 665 1) advertise the core routers of the egress domain (domain 2) to the 666 ingress PE (iPE) via BGP-LU [4] instead of redistributing them into 667 the IGP of domain 1. The end result is that the ingress PE (iPE) has 668 2 levels of recursion for the VPN prefix VPN-IP1 and VPN2-IP2. 670 Domain 1 Domain 2 671 +-------------+ +-------------+ 672 | | | | 673 | LDP/SR Core | | LDP/SR core | 674 | | | | 675 | (192.0.1.1) | | 676 | ASBR11---------ASBR21........ePE1(192.0.2.1) 677 | | \ / | . . |\ 678 | | \ / | . . | \ 679 | | \ / | . . | \ 680 | | \/ | .. | \VPN-IP1 (11.1.1.0/24) 681 | | /\ | . . | /VRF "Blue" ASn: 65000 682 | | / \ | . . | / 683 | | / \ | . . | / 684 | | / \ | . . |/ 685 iPE ASBR12---------ASBR22........ePE2 (192.0.2.2) 686 | (192.0.1.2) | |\ 687 | | | | \ 688 | | | | \ 689 | | | | \VRF "Blue" ASn: 65000 690 | | | | /VPN-IP2 (11.1.2.0/24) 691 | | | | / 692 | | | | / 693 | | | |/ 694 | ASBR13---------ASBR23........ePE3(192.0.2.3) 695 | (192.0.1.3) | | 696 | | | | 697 | | | | 698 +-------------+ +-------------+ 699 <============ <========= <============ 700 Advertise ePEx Advertise Redistribute 701 Using iBGP-LU ePEx Using IGP into 702 eBGP-LU BGP 704 Figure 4 : Sample 3-level hierarchy topology 706 We will make the following assumptions about connectivity 708 o In "domain 2", both ASBR21 and ASBR22 can reach both ePE1 and 709 ePE2 using the same distance 711 o In "domain 2", only ASBR23 can reach ePE3 713 o In "domain 1", iPE (the ingress PE) can reach ASBR11, ASBR12, and 714 ASBR13 via IGP using the same distance. 
716 We will make the following assumptions about the labels: 717 o The VPN labels advertised by ePE1 and ePE2 for prefix VPN-IP1 are 718 VPN-L11 and VPN-L21, respectively 720 o The VPN labels advertised by ePE2 and ePE3 for prefix VPN-IP2 are 721 VPN-L22 and VPN-L32, respectively 723 o The labels advertised by ASBR11 to iPE using BGP-LU [4] for the 724 egress PEs ePE1 and ePE2 are LASBR11(ePE1) and LASBR11(ePE2), 725 respectively. 727 o The labels advertised by ASBR12 to iPE using BGP-LU [4] for the 728 egress PEs ePE1 and ePE2 are LASBR12(ePE1) and LASBR12(ePE2), 729 respectively 731 o The label advertised by ASBR13 to iPE using BGP-LU [4] for the 732 egress PE ePE3 is LASBR13(ePE3) 734 o The IGP labels advertised by the next hops directly connected to 735 iPE towards ASBR11, ASBR12, and ASBR13 in the core of domain 1 736 are IGP-L11, IGP-L12, and IGP-L13, respectively. 738 Based on these connectivity assumptions and the topology in Figure 739 4, the routing table on iPE is: 741 65000:11.1.1.0/24 742 via ePE1 (192.0.2.1), VPN Label: VPN-L11 743 via ePE2 (192.0.2.2), VPN Label: VPN-L21 744 65000:11.1.2.0/24 745 via ePE2 (192.0.2.2), VPN Label: VPN-L22 746 via ePE3 (192.0.2.3), VPN Label: VPN-L32 748 192.0.2.1/32 (ePE1) 749 Via ASBR11, BGP-LU Label: LASBR11(ePE1) 750 Via ASBR12, BGP-LU Label: LASBR12(ePE1) 751 192.0.2.2/32 (ePE2) 752 Via ASBR11, BGP-LU Label: LASBR11(ePE2) 753 Via ASBR12, BGP-LU Label: LASBR12(ePE2) 754 192.0.2.3/32 (ePE3) 755 Via ASBR13, BGP-LU Label: LASBR13(ePE3) 757 192.0.1.1/32 (ASBR11) 758 via Core, Label: IGP-L11 759 192.0.1.2/32 (ASBR12) 760 via Core, Label: IGP-L12 761 192.0.1.3/32 (ASBR13) 762 via Core, Label: IGP-L13 764 The diagram in Figure 5 illustrates the forwarding chain in iPE 765 assuming that the forwarding hardware in iPE supports 3 levels of 766 hierarchy. The leaves corresponding to the ASBRs in domain 1 767 (ASBR11, ASBR12, and ASBR13) are at the bottom of the hierarchy. 768 There are a few important points: 770 o Because the hardware supports the required depth of hierarchy, 771 the size of a pathlist equals the size of the label list 772 associated with the leaves using this pathlist 774 o The index inside the pathlist entry indicates the label that will 775 be picked from the Outlabel-List associated with the child leaf 776 if that path is chosen by the forwarding engine hashing function.
778 Outlabel-List Outlabel-List 779 For VPN-IP1 For VPN-IP2 780 +------------+ +--------+ +-------+ +------------+ 781 | VPN-L11 |<---| VPN-IP1| |VPN-IP2|-->| VPN-L22 | 782 +------------+ +---+----+ +---+---+ +------------+ 783 | VPN-L21 | | | | VPN-L32 | 784 +------------+ | | +------------+ 785 | | 786 V V 787 +---+---+ +---+---+ 788 | 0 | 1 | | 0 | 1 | 789 +-|-+-\-+ +-/-+-\-+ 790 | \ / \ 791 | \ / \ 792 | \ / \ 793 | \ / \ 794 v \ / \ 795 +-----+ +-----+ +-----+ 796 +----+ ePE1| |ePE2 +-----+ | ePE3+-----+ 797 | +--+--+ +-----+ | +--+--+ | 798 v | / v | v 799 +-------------+ | / +-------------+ | +-------------+ 800 |LASBR11(ePE1)| | / |LASBR11(ePE2)| | |LASBR13(ePE3)| 801 +-------------+ | / +-------------+ | +-------------+ 802 |LASBR12(ePE1)| | / |LASBR12(ePE2)| | Outlabel-List 803 +-------------+ | / +-------------+ | For ePE3 804 Outlabel-List | / Outlabel-List | 805 For ePE1 | / For ePE2 | 806 | / | 807 | / | 808 | / | 809 v / v 810 +---+---+ Shared Pathlist +---+ Pathlist 811 | 0 | 1 | For ePE1 and ePE2 | 0 | For ePE3 812 +-|-+-\-+ +-|-+ 813 | \ | 814 | \ | 815 | \ | 816 | \ | 817 v \ v 818 +------+ +------+ +------+ 819 +---+ASBR11| |ASBR12+--+ |ASBR13+---+ 820 | +------+ +------+ | +------+ | 821 v v v 822 +-------+ +-------+ +-------+ 823 |IGP-L11| |IGP-L12| |IGP-L13| 824 +-------+ +-------+ +-------+ 826 Figure 5 : Forwarding Chain for hardware supporting 3 Levels 828 Now suppose the hardware on iPE (the ingress PE) supports 2 levels 829 of hierarchy only. In that case, the 3-levels forwarding chain in 830 Figure 5 needs to be "flattened" into 2 levels only. 832 Outlabel-List Outlabel-List 833 For VPN-IP1 For VPN-IP2 834 +------------+ +-------+ +-------+ +------------+ 835 | VPN-L11 |<---|VPN-IP1| | VPN-IP2|--->| VPN-L22 | 836 +------------+ +---+---+ +---+---+ +------------+ 837 | VPN-L21 | | | | VPN-L32 | 838 +------------+ | | +------------+ 839 | | 840 | | 841 | | 842 Flattened | | Flattened 843 pathlist V V pathlist 844 +===+===+ +===+===+===+ +=============+ 845 +--------+ 0 | 1 | | 0 | 0 | 1 +---->|LASBR11(ePE2)| 846 | +=|=+=\=+ +=/=+=/=+=\=+ +=============+ 847 v | \ / / \ |LASBR12(ePE2)| 848 +=============+ | \ +-----+ / \ +=============+ 849 |LASBR11(ePE1)| | \/ / \ |LASBR13(ePE3)| 850 +=============+ | /\ / \ +=============+ 851 |LASBR12(ePE1)| | / \ / \ 852 +=============+ | / \ / \ 853 | / \ / \ 854 | / + + \ 855 | + | | \ 856 | | | | \ 857 v v v v \ 858 +------+ +------+ +------+ 859 +----|ASBR11| |ASBR12+---+ |ASBR13+---+ 860 | +------+ +------+ | +------+ | 861 v v v 862 +-------+ +-------+ +-------+ 863 |IGP-L11| |IGP-L12| |IGP-L13| 864 +-------+ +-------+ +-------+ 866 Figure 6 : Flattening 3 levels to 2 levels of Hierarchy on iPE 868 Figure 6 represents one way to "flatten" a 3 levels hierarchy into 869 two levels. There are few important points: 871 o As mentioned in Section 5.1 a flattened pathlist may have label 872 lists associated with them. The size of the label list associated 873 with a flattened pathlist equals the size of the pathlist. Hence 874 it is possible that an implementation includes these label lists 875 in the flattened pathlist itself 877 o Again as mentioned in Section 5.1, the size of a flattened 878 pathlist may not be equal to the size of the OutLabel-lists of 879 leaves using the flattened pathlist. So the indices inside a 880 flattened pathlist still indicate the label index in the 881 Outlabel-Lists of the leaves using that pathlist. 
Because the 882 size of the flattened pathlist may be different from the size of 883 the OutLabel-lists of the leaves, the indices may be repeated. 885 o Let's take a look at the flattened pathlist used by the prefix 886 "VPN-IP2". The pathlist associated with the prefix "VPN-IP2" has 887 three entries. 889 o The first and second entry have index "0". This is because 890 both entries correspond to ePE2. Hence when the hashing performed 891 by the forwarding engine results in using the first or the second 892 entry in the pathlist, the forwarding engine will pick the 893 correct VPN label "VPN-L22", which is the label advertised by 894 ePE2 for the prefix "VPN-IP2" 896 o The third entry has the index "1". This is because the third 897 entry corresponds to ePE3. Hence when the hashing performed 898 by the forwarding engine results in using the third entry in 899 the flattened pathlist, the forwarding engine will pick the 900 correct VPN label "VPN-L32", which is the label advertised by 901 "ePE3" for the prefix "VPN-IP2" 903 Now let's try and apply the forwarding steps in Section 4 together 904 with the additional step in Section 5.1 to the flattened forwarding 905 chain illustrated in Figure 6. 907 o Suppose a packet arrives at "iPE" and matches the VPN prefix 908 "VPN-IP2" 910 o The forwarding engine walks to the parent of "VPN-IP2", which 911 is the flattened pathlist, and applies a hashing algorithm to pick 912 a path 914 o Suppose the hashing by the forwarding engine picks the second 915 path in the flattened pathlist associated with the leaf "VPN- 916 IP2". 918 o Because the second path has the index "0", the label "VPN-L22" is 919 pushed on the packet 921 o Next the forwarding engine picks the second label from the 922 OutLabel-List associated with the flattened pathlist. Hence the 923 next label that is pushed is "LASBR12(ePE2)" 925 o The forwarding engine now moves to the parent of the flattened 926 pathlist corresponding to the second path. The parent is the IGP 927 label leaf corresponding to "ASBR12" 929 o So the packet is forwarded towards the ASBR "ASBR12" and the IGP 930 label at the top will be "IGP-L12" 932 Based on the above steps, a packet arriving at iPE and destined to 933 the prefix VPN-IP2 reaches its destination as follows: 935 o iPE sends the packet along the shortest path towards ASBR12 with 936 the following label stack starting from the top: {IGP-L12, 937 LASBR12(ePE2), VPN-L22}. 939 o The penultimate hop of ASBR12 pops the top label "IGP-L12". Hence the 940 packet arrives at ASBR12 with the label stack {LASBR12(ePE2), 941 VPN-L22} where "LASBR12(ePE2)" is the top label. 943 o ASBR12 swaps "LASBR12(ePE2)" with the label "LASBR22(ePE2)", 944 which is the label advertised by ASBR22 for ePE2 (the egress 945 PE). 947 o ASBR22 receives the packet with "LASBR22(ePE2)" at the top. 949 o Hence ASBR22 swaps "LASBR22(ePE2)" with the IGP label for ePE2 950 advertised by the next-hop towards ePE2 in domain 2, and sends 951 the packet along the shortest path towards ePE2. 953 o The penultimate hop of ePE2 pops the top label. Hence ePE2 954 receives the packet with the label VPN-L22 at the top 956 o ePE2 pops "VPN-L22" and sends the packet as a pure IP packet 957 towards the destination VPN-IP2. 959 6.
Forwarding Chain Adjustment at a Failure 961 The hierarchical and shared structure of the forwarding chain 962 explained in the previous section allows modifying a small number of 963 forwarding chain objects to re-route traffic to a pre-calculated 964 equal-cost or backup path without the need to modify the possibly 965 very large number of BGP prefixes. In this section, we go over 966 various core and edge failure scenarios to illustrate how FIB 967 manager can utilize the forwarding chain structure to achieve BGP 968 prefix independent convergence. 970 6.1. BGP-PIC core 972 This section describes the adjustments to the forwarding chain when 973 a core link or node fails but the BGP next-hop remains reachable. 975 There are two case: remote link failure and attached link failure. 976 Node failures are treated as link failures. 978 When a remote link or node fails, IGP on the ingress PE receives 979 advertisement indicating a topology change so IGP re-converges to 980 either find a new next-hop and/or outgoing interface or remove the 981 path completely from the IGP prefix used to resolve BGP next-hops. 982 IGP and/or LDP download the modified IGP leaves with modified 983 outgoing labels for labeled core. 985 When a local link fails, FIB manager detects the failure almost 986 immediately. The FIB manager marks the impacted path(s) as unusable 987 so that only useable paths are used to forward packets. Hence only 988 IGP pathlists with paths using the failed local link need to be 989 modified. All other pathlists are not impacted. Note that in this 990 particular case there is actually no need even to backwalk to IGP 991 leaves to adjust the OutLabel-Lists because FIB can rely on the 992 path-index stored in the useable paths in the pathlist to pick the 993 right label. 995 It is noteworthy to mention that because FIB manager modifies the 996 forwarding chain starting from the IGP leaves only. BGP pathlists 997 and leaves are not modified. Hence traffic restoration occurs within 998 the time frame of IGP convergence, and, for local link failure, 999 assuming a backup path has been precomputed, within the timeframe of 1000 local detection (e.g. 50ms). Examples of solutions that pre- 1001 computing backup paths are IP FRR [16] remote LFA [17], Ti-LFA [15] 1002 and MRT [18] or eBGP path having a backup path [10]. 1004 Let's apply the procedure mentioned in this subsection to the 1005 forwarding chain depicted in Figure 2. Suppose a remote link failure 1006 occurs and impacts the first ECMP IGP path to the remote BGP next- 1007 hop. Upon IGP convergence, the IGP pathlist used by the BGP next-hop 1008 is updated to reflect the new topology (one path instead of two). As 1009 soon as the IGP convergence is effective for the BGP next-hop entry, 1010 the new forwarding state is immediately available to all dependent 1011 BGP prefixes. The same behavior would occur if the failure was local 1012 such as an interface going down. As soon as the IGP convergence is 1013 complete for the BGP next-hop IGP route, all its BGP depending 1014 routes benefit from the new path. In fact, upon local failure, if 1015 LFA protection is enabled for the IGP route to the BGP next-hop and 1016 a backup path was pre-computed and installed in the pathlist, upon 1017 the local interface failure, the LFA backup path is immediately 1018 activated (e.g. sub-50msec) and thus protection benefits all the 1019 depending BGP traffic through the hierarchical forwarding dependency 1020 between the routes. 1022 6.2. 
BGP-PIC edge 1024 This section describes the adjustments to the forwarding chains as a 1025 result of edge node or edge link failure. 1027 6.2.1. Adjusting forwarding Chain in egress node failure 1029 When an edge node fails, IGP on neighboring core nodes sends route 1030 updates indicating that the edge node is no longer reachable. IGP 1031 running on the iBGP peers instructs FIB to remove the IP and label 1032 leaves corresponding to the failed edge node from FIB. So the FIB 1033 manager performs the following steps: 1035 o FIB manager deletes the IGP leaf corresponding to the failed edge 1036 node 1038 o FIB manager backwalks to all dependent BGP pathlists and marks 1039 the path(s) using the deleted IGP leaf as unresolved 1041 o Note that there is no need to modify the possibly large number of 1042 BGP leaves because each path in the pathlist carries its path 1043 index and hence the correct outgoing label will be picked. 1044 Consider for example the forwarding chain depicted in Figure 2. 1045 If the 1st BGP path becomes unresolved, then the forwarding 1046 engine will only use the second path for forwarding. Yet the path 1047 index of that single resolved path will still be 1 and hence the 1048 label VPN-L12 will be pushed. 1050 6.2.2. Adjusting Forwarding Chain on PE-CE link Failure 1052 Suppose the link between an edge router and its external peer fails. 1053 There are two scenarios: (1) the edge node attached to the failed 1054 link performs next-hop self and (2) the edge node attached to the 1055 failed link advertises the IP address of the failed link as the next-hop 1056 attribute to its iBGP peers. 1058 In the first case, the rest of the iBGP peers will remain unaware of the 1059 link failure and will continue to forward traffic to the edge node 1060 until the edge node attached to the failed link withdraws the BGP 1061 prefixes. If the destination prefixes are multi-homed to another 1062 iBGP peer, say ePE2, then the FIB manager on the edge router detecting 1063 the link failure applies the following steps: 1065 o FIB manager backwalks to the BGP pathlists and marks the path through 1066 the failed link to the external peer as unresolved 1068 o Hence traffic will be forwarded using the backup path towards ePE2 1070 o For labeled traffic: 1072 o The Outlabel-List attached to the BGP leaf already contains 1073 an entry corresponding to the backup path. 1075 o The label entry in the OutLabel-List corresponding to the 1076 internal path to the backup egress PE has a swap action to the 1077 label advertised by the backup egress PE 1079 o For an arriving labeled packet (e.g. VPN), the top label is 1080 swapped with the label advertised by the backup egress PE and the 1081 packet is sent towards that backup egress PE 1083 o For unlabeled traffic, packets are simply redirected towards the 1084 backup egress PE. 1086 In the second case where the edge router uses the IP address of the 1087 failed link as the BGP next-hop, the edge router will still perform 1088 the previous steps. But, unlike the case of next-hop self, IGP on 1089 the edge node attached to the failed link informs the rest of the iBGP peers that the IP address 1090 of the failed link is no longer reachable. Hence the FIB manager on 1091 the iBGP peers will delete the IGP leaf corresponding to the IP prefix 1092 of the failed link. The behavior of the iBGP peers will be identical 1093 to the case of edge node failure outlined in Section 6.2.1.
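To illustrate the adjustments described in Sections 6.2.1 and 6.2.2, the following sketch shows, in Python, how marking a single path in a shared BGP pathlist as unresolved redirects every dependent prefix at once. It is an illustrative model only; the object layout and function name are assumptions, not an actual FIB manager interface.

   # Illustrative sketch: an edge failure is absorbed by updating the
   # shared pathlist(s) only; the possibly very large number of BGP
   # leaves that depend on them is never touched.

   def mark_path_unresolved(failed_parent, dependent_pathlists):
       """Backwalk from a failed BGP next-hop (deleted IGP leaf or failed
       PE-CE link) and mark the paths that use it as unresolved."""
       touched = 0
       for pathlist in dependent_pathlists:
           for path in pathlist["paths"]:
               if path["parent"] == failed_parent:
                   path["resolved"] = False
                   touched += 1
       return touched           # number of paths, not number of prefixes

   # Shared pathlist of Figure 3: primary via the CE, backup via ePE2.
   shared = {"paths": [
       {"parent": "CE",   "path_index": 0, "resolved": True},
       {"parent": "ePE2", "path_index": 1, "resolved": True},
   ]}
   mark_path_unresolved("CE", [shared])
   usable = [p for p in shared["paths"] if p["resolved"]]
   # Only the backup path remains; its path-index is still 1, so the
   # correct entry of each leaf's OutLabel-List (e.g. the swap to
   # VPN-L21 in Figure 3) is picked without modifying any leaf.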
1095 It is noteworthy to mention that because the edge link failure is 1096 local to the edge router, sub-50 msec convergence can be achieved as 1097 described in [10]. 1099 Let's try to apply the case of next-hop self to the forwarding chain 1100 depicted in Figure 3. After failure of the link between ePE1 and CE, 1101 the forwarding engine will route traffic arriving from the core 1102 towards VPN-NH2 with path-index=1. A packet arriving from the core 1103 will contain the label VPN-L11 at top. The label VPN-L11 is swapped 1104 with the label VPN-L21 and the packet is forwarded towards ePE2. 1106 6.3. Handling Failures for Flattened Forwarding Chains 1108 As explained in the in Section 5 if the number of hierarchy levels 1109 of a platform cannot support the native number of hierarchy levels 1110 of a recursive forwarding chain, the instantiated forwarding chain 1111 is constructed by flattening two or more levels. Hence a 3 levels 1112 chain in Figure 5 is flattened into the 2 levels chain in Figure 6. 1114 While reducing the benefits of BGP-PIC, flattening one hierarchy 1115 into a shallower hierarchy does not always result in a complete loss 1116 of the benefits of the BGP-PIC. To illustrate this fact suppose 1117 ASBR12 is no longer reachable in domain 1. If the platform supports 1118 the full hierarchy depth, the forwarding chain is the one depicted 1119 in Figure 5 and hence the FIB manager needs to backwalk one level to 1120 the pathlist shared by "ePE1" and "ePE2" and adjust it. If the 1121 platform supports 2 levels of hierarchy, then a useable forwarding 1122 chain is the one depicted in Figure 6. In that case, if ASBR12 is no 1123 longer reachable, the FIB manager has to backwalk to the two 1124 flattened pathlists and updates both of them. 1126 The main observation is that the loss of convergence speed due to 1127 the loss of hierarchy depth depends on the structure of the 1128 forwarding chain itself. To illustrate this fact, let's take two 1129 extremes. Suppose the forwarding objects in level i+1 depend on the 1130 forwarding objects in level i. If every object on level i+1 depends 1131 on a separate object in level i, then flattening level i into level 1132 i+1 will not result in loss of convergence speed. Now let's take the 1133 other extreme. Suppose "n" objects in level i+1 depend on 1 object 1134 in level i. Now suppose FIB flattens level i into level i+1. If a 1135 topology change results in modifying the single object in level i, 1136 then FIB has to backwalk and modify "n" objects in the flattened 1137 level, thereby losing all the benefit of BGP-PIC. Experience shows 1138 that flattening forwarding chains usually results in moderate loss 1139 of BGP-PIC benefits. Further analysis is needed to corroborate and 1140 quantify this statement. 1142 7. Properties 1144 7.1. Coverage 1146 All the possible failures, except CE node failure, are covered, 1147 whether they impact a local or remote IGP path or a local or remote 1148 BGP next-hop as described in Section 6. This section provides 1149 details for each failure and now the hierarchical and shared FIB 1150 structure proposed in this document allows recovery that does not 1151 depend on number of BGP prefixes. 1153 7.1.1. A remote failure on the path to a BGP next-hop 1155 Upon IGP convergence, the IGP leaf for the BGP next-hop is updated 1156 upon IGP convergence and all the BGP depending routes leverage the 1157 new IGP forwarding state immediately. Details of this behavior can 1158 be found in Section 6.1. 
1160 This BGP resiliency property only depends on IGP convergence and is 1161 independent of the number of BGP prefixes impacted. 1163 7.1.2. A local failure on the path to a BGP next-hop 1165 Upon LFA protection, the IGP leaf for the BGP next-hop is updated to 1166 use the precomputed LFA backup path and all the BGP depending routes 1167 leverage this LFA protection. Details of this behavior can be found 1168 in Section 6.1. 1170 This BGP resiliency property only depends on LFA protection and is 1171 independent of the number of BGP prefixes impacted. 1173 7.1.3. A remote iBGP next-hop fails 1175 Upon IGP convergence, the IGP leaf for the BGP next-hop is deleted 1176 and all the depending BGP Path-Lists are updated to either use the 1177 remaining ECMP BGP best-paths or if none remains available to 1178 activate precomputed backups. Details about this behavior can be 1179 found in Section 6.2.1. 1181 This BGP resiliency property only depends on IGP convergence and is 1182 independent of the number of BGP prefixes impacted. 1184 7.1.4. A local eBGP next-hop fails 1186 Upon local link failure detection, the adjacency to the BGP next-hop 1187 is deleted and all the depending BGP pathlists are updated to either 1188 use the remaining ECMP BGP best-paths or if none remains available 1189 to activate precomputed backups. Details about this behavior can be 1190 found in Section 6.2.2. 1192 This BGP resiliency property only depends on local link failure 1193 detection and is independent of the number of BGP prefixes impacted. 1195 7.2. Performance 1197 When the failure is local (a local IGP next-hop failure or a local 1198 eBGP next-hop failure), a pre-computed and pre-installed backup is 1199 activated by a local-protection mechanism that does not depend on 1200 the number of BGP destinations impacted by the failure. Sub-50msec 1201 is thus possible even if millions of BGP routes are impacted. 1203 When the failure is remote (a remote IGP failure not impacting the 1204 BGP next-hop or a remote BGP next-hop failure), an alternate path is 1205 activated upon IGP convergence. All the impacted BGP destinations 1206 benefit from a working alternate path as soon as the IGP convergence 1207 occurs for their impacted BGP next-hop even if millions of BGP 1208 routes are impacted. 1210 Appendix A puts the BGP PIC benefits in perspective by providing 1211 some results using actual numbers. 1213 7.3. Automated 1215 The BGP PIC solution does not require any operator involvement. The 1216 process is entirely automated as part of the FIB implementation. 1218 The salient points enabling this automation are: 1220 o Extension of the BGP Best Path to compute more than one primary 1221 ([11]and [12]) or backup BGP next-hop ([6] and [13]). 1223 o Sharing of BGP Path-list across BGP destinations with same 1224 primary and backup BGP next-hop 1226 o Hierarchical indirection and dependency between BGP pathlist and 1227 IGP pathlist 1229 7.4. Incremental Deployment 1231 As soon as one router supports BGP PIC solution, it benefits from 1232 all its benefits without any requirement for other routers to 1233 support BGP PIC. 1235 8. Security Considerations 1237 The behavior described in this document is internal functionality 1238 to a router that result in significant improvement to convergence 1239 time as well as reduction in CPU and memory used by FIB while not 1240 showing change in basic routing and forwarding functionality. 
As 1241 such no additional security risk is introduced by using the 1242 mechanisms proposed in this document. 1244 9. IANA Considerations 1246 No requirements for IANA 1248 10. Conclusions 1250 This document proposes a hierarchical and shared forwarding chain 1251 structure that allows achieving BGP prefix independent 1252 convergence, and in the case of locally detected failures, sub-50 1253 msec convergence. A router can construct the forwarding chains in 1254 a completely transparent manner with zero operator intervention 1255 thereby supporting smooth and incremental deployment. 1257 11. References 1259 11.1. Normative References 1261 [1] Bradner, S., "Key words for use in RFCs to Indicate 1262 Requirement Levels", BCP 14, RFC 2119, March 1997. 1264 [2] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway Protocol 1265 4 (BGP-4), RFC 4271, January 2006 1267 [3] Bates, T., Chandra, R., Katz, D., and Rekhter Y., 1268 "Multiprotocol Extensions for BGP", RFC 4760, January 2007 1270 [4] Y. Rekhter and E. Rosen, " Carrying Label Information in BGP- 1271 4", RFC 3107, May 2001 1273 [5] Andersson, L., Minei, I., and B. Thomas, "LDP Specification", 1274 RFC 5036, October 2007 1276 11.2. Informative References 1278 [6] Marques,P., Fernando, R., Chen, E, Mohapatra, P., Gredler, H., 1279 "Advertisement of the best external route in BGP", draft-ietf- 1280 idr-best-external-05.txt, January 2012. 1282 [7] Wu, J., Cui, Y., Metz, C., and E. Rosen, "Softwire Mesh 1283 Framework", RFC 5565, June 2009. 1285 [8] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private 1286 Networks (VPNs)", RFC 4364, February 2006. 1288 [9] De Clercq, J. , Ooms, D., Prevost, S., Le Faucheur, F., 1289 "Connecting IPv6 Islands over IPv4 MPLS Using IPv6 Provider 1290 Edge Routers (6PE)", RFC 4798, February 2007 1292 [10] O. Bonaventure, C. Filsfils, and P. Francois. "Achieving sub- 1293 50 milliseconds recovery upon bgp peering link failures, " 1294 IEEE/ACM Transactions on Networking, 15(5):1123-1135, 2007 1296 [11] D. Walton, A. Retana, E. Chen, J. Scudder, "Advertisement of 1297 Multiple Paths in BGP", draft-ietf-idr-add-paths-12.txt, 1298 November 2015 1300 [12] R. Raszuk, R. Fernando, K. Patel, D. McPherson, K. Kumaki, 1301 "Distribution of diverse BGP paths", RFC 6774, November 2012 1303 [13] P. Mohapatra, R. Fernando, C. Filsfils, and R. Raszuk, "Fast 1304 Connectivity Restoration Using BGP Add-path", draft-pmohapat- 1305 idr-fast-conn-restore-03, Jan 2013 1307 [14] C. Filsfils, S. Previdi, A. Bashandy, B. Decraene, S. 1308 Litkowski, M. Horneffer, R. Shakir, J. Tansura, E. Crabbe 1309 "Segment Routing with MPLS data plane", draft-ietf-spring- 1310 segment-routing-mpls-02 (work in progress), October 2015 1312 [15] C. Filsfils, S. Previdi, A. Bashandy, B. Decraene, " Topology 1313 Independent Fast Reroute using Segment Routing", draft- 1314 francois-spring-segment-routing-ti-lfa-02 (work in progress), 1315 August 2015 1317 [16] M. Shand and S. Bryant, "IP Fast Reroute Framework", RFC 5714, 1318 January 2010 1320 [17] S. Bryant, C. Filsfils, S. Previdi, M. Shand, N So, " Remote 1321 Loop-Free Alternate (LFA) Fast Reroute (FRR)", RFC 7490 April 1322 2015 1324 [18] A. Atlas, C. Bowers, G. Enyedi, " An Architecture for IP/LDP 1325 Fast-Reroute Using Maximally Redundant Trees", draft-ietf- 1326 rtgwg-mrt-frr-architecture-10 (work in progress), February 1327 2016 1329 12. 
Acknowledgments 1331 Special thanks to Neeraj Malhotra and Yuri Tsier for the valuable 1332 help. 1334 Special thanks to Bruno Decraene for the valuable comments. 1336 This document was prepared using 2-Word-v2.0.template.dot. 1338 Authors' Addresses 1340 Ahmed Bashandy 1341 Cisco Systems 1342 170 West Tasman Dr, San Jose, CA 95134, USA 1343 Email: bashandy@cisco.com 1345 Clarence Filsfils 1346 Cisco Systems 1347 Brussels, Belgium 1348 Email: cfilsfil@cisco.com 1350 Prodosh Mohapatra 1351 Sproute Networks 1352 Email: mpradosh@yahoo.com 1354 Appendix A. Perspective 1356 The following table puts the BGP PIC benefits in perspective 1357 assuming: 1359 o 1M impacted BGP prefixes 1361 o IGP convergence ~ 500 msec 1363 o local protection ~ 50msec 1365 o FIB Update per BGP destination ~ 100usec conservative, 1367 ~ 10usec optimistic 1369 o BGP Convergence per BGP destination ~ 200usec conservative, 1371 ~ 100usec optimistic 1373 Without PIC With PIC 1375 Local IGP Failure 10 to 100sec 50msec 1377 Local BGP Failure 100 to 200sec 50msec 1379 Remote IGP Failure 10 to 100sec 500msec 1381 Remote BGP Failure 100 to 200sec 500msec 1383 Upon local IGP next-hop failure or remote IGP next-hop failure, the 1384 existing primary BGP next-hop is intact and usable; hence the 1385 resiliency only depends on the ability of the FIB mechanism to 1386 reflect the new path to the BGP next-hop to the depending BGP 1387 destinations. Without BGP PIC, a conservative back-of-the-envelope 1388 estimation for this FIB update is 100usec per BGP destination. An 1389 optimistic estimation is 10usec per entry. 1391 Upon local BGP next-hop failure or remote BGP next-hop failure, 1392 without the BGP PIC mechanism, a new BGP Best-Path needs to be 1393 recomputed and new updates need to be sent to peers. This depends on 1394 BGP processing time that will be shared between best-path 1395 computation, RIB update and peer update. A conservative back-of-the-1396 envelope estimation for this is 200usec per BGP destination. An 1397 optimistic estimation is 100usec per entry.
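The figures in the table above follow from simple arithmetic on the listed assumptions. The short sketch below (Python, illustrative only; variable names chosen for this example) reproduces the orders of magnitude.

   # Back-of-the-envelope reproduction of the table above (times in seconds).
   PREFIXES = 1_000_000
   IGP_CONVERGENCE = 0.5            # ~500 msec
   LOCAL_PROTECTION = 0.05          # ~50 msec
   FIB_UPDATE = (10e-6, 100e-6)     # optimistic .. conservative, per destination
   BGP_PROCESSING = (100e-6, 200e-6)

   igp_driven = tuple(PREFIXES * t for t in FIB_UPDATE)       # 10 .. 100 sec
   bgp_driven = tuple(PREFIXES * t for t in BGP_PROCESSING)   # 100 .. 200 sec

   print("Local IGP failure :", igp_driven, "without PIC,", LOCAL_PROTECTION, "with PIC")
   print("Local BGP failure :", bgp_driven, "without PIC,", LOCAL_PROTECTION, "with PIC")
   print("Remote IGP failure:", igp_driven, "without PIC,", IGP_CONVERGENCE, "with PIC")
   print("Remote BGP failure:", bgp_driven, "without PIC,", IGP_CONVERGENCE, "with PIC")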