Network Working Group                                  A. Bashandy, Ed.
Internet Draft                                               C. Filsfils
Intended status: Informational                             Cisco Systems
Expires: November 2017                                      P. Mohapatra
                                                        Sproute Networks
                                                            May 22, 2017

                   BGP Prefix Independent Convergence
                      draft-ietf-rtgwg-bgp-pic-04.txt

Abstract

   In a network comprising thousands of iBGP peers exchanging millions
   of routes, many routes are reachable via more than one next-hop.
   Given the large scaling targets, it is desirable to restore traffic
   after failure in a time period that does not depend on the number of
   BGP prefixes. This document proposes an architecture by which
   traffic can be re-routed to ECMP or pre-calculated backup paths in a
   timeframe that does not depend on the number of BGP prefixes. The
   objective is achieved by organizing the forwarding data structures
   in a hierarchical manner and sharing forwarding elements among the
   maximum possible number of routes.
The proposed technique 22 achieves prefix independent convergence while ensuring incremental 23 deployment, complete automation, and zero management and provisioning 24 effort. It is noteworthy to mention that the benefits of BGP-PIC are 25 hinged on the existence of more than one path whether as ECMP or 26 primary-backup. 28 Status of this Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 This document may contain material from IETF Documents or IETF 34 Contributions published or made publicly available before November 35 10, 2008. The person(s) controlling the copyright in some of this 36 material may not have granted the IETF Trust the right to allow 37 modifications of such material outside the IETF Standards Process. 38 Without obtaining an adequate license from the person(s) 39 controlling the copyright in such materials, this document may not 40 be modified outside the IETF Standards Process, and derivative 41 works of it may not be created outside the IETF Standards Process, 42 except to format it for publication as an RFC or to translate it 43 into languages other than English. 45 Internet-Drafts are working documents of the Internet Engineering 46 Task Force (IETF), its areas, and its working groups. Note that 47 other groups may also distribute working documents as Internet- 48 Drafts. 50 Internet-Drafts are draft documents valid for a maximum of six 51 months and may be updated, replaced, or obsoleted by other 52 documents at any time. It is inappropriate to use Internet-Drafts 53 as reference material or to cite them other than as "work in 54 progress." 56 The list of current Internet-Drafts can be accessed at 57 http://www.ietf.org/ietf/1id-abstracts.txt 59 The list of Internet-Draft Shadow Directories can be accessed at 60 http://www.ietf.org/shadow.html 62 This Internet-Draft will expire on November 22, 2017. 64 Copyright Notice 66 Copyright (c) 2017 IETF Trust and the persons identified as the 67 document authors. All rights reserved. 69 This document is subject to BCP 78 and the IETF Trust's Legal 70 Provisions Relating to IETF Documents 71 (http://trustee.ietf.org/license-info) in effect on the date of 72 publication of this document. Please review these documents 73 carefully, as they describe your rights and restrictions with 74 respect to this document. Code Components extracted from this 75 document must include Simplified BSD License text as described in 76 Section 4.e of the Trust Legal Provisions and are provided without 77 warranty as described in the Simplified BSD License. 79 Table of Contents 81 1. Introduction...................................................3 82 1.1. Conventions used in this document.........................4 83 1.2. Terminology...............................................4 84 2. Overview.......................................................6 85 2.1. Dependency................................................6 86 2.1.1. Hierarchical Hardware FIB............................6 87 2.1.2. Availability of more than one primary or secondary BGP 88 next-hops...................................................7 89 2.2. BGP-PIC Illustration......................................7 90 3. Constructing the Shared Hierarchical Forwarding Chain..........9 91 3.1. Constructing the BGP-PIC forwarding Chain.................9 92 3.2. Example: Primary-Backup Path Scenario....................10 93 4. Forwarding Behavior...........................................11 94 5. 
Handling Platforms with Limited Levels of Hierarchy...........12 95 5.1. Flattening the Forwarding Chain..........................12 96 5.2. Example: Flattening a forwarding chain...................14 97 6. Forwarding Chain Adjustment at a Failure......................21 98 6.1. BGP-PIC core.............................................22 99 6.2. BGP-PIC edge.............................................23 100 6.2.1. Adjusting forwarding Chain in egress node failure...23 101 6.2.2. Adjusting Forwarding Chain on PE-CE link Failure....23 102 6.3. Handling Failures for Flattened Forwarding Chains........24 103 7. Properties....................................................25 104 7.1. Coverage.................................................25 105 7.1.1. A remote failure on the path to a BGP next-hop......25 106 7.1.2. A local failure on the path to a BGP next-hop.......25 107 7.1.3. A remote iBGP next-hop fails........................26 108 7.1.4. A local eBGP next-hop fails.........................26 109 7.2. Performance..............................................26 110 7.3. Automated................................................27 111 7.4. Incremental Deployment...................................27 112 8. Security Considerations.......................................27 113 9. IANA Considerations...........................................27 114 10. Conclusions..................................................27 115 11. References...................................................28 116 11.1. Normative References....................................28 117 11.2. Informative References..................................28 118 12. Acknowledgments..............................................29 119 Appendix A. Perspective..........................................30 121 1. Introduction 123 As a path vector protocol, BGP propagates reachability serially. 124 Hence BGP convergence speed is limited by the time taken to 125 serially propagate reachability information from the point of 126 failure to the device that must re-converge. BGP speakers exchange 127 reachability information about prefixes[2][3] and, for labeled 128 address families, namely AFI/SAFI 1/4, 2/4, 1/128, and 2/128, an 129 edge router assigns local labels to prefixes and associates the 130 local label with each advertised prefix such as L3VPN [8], 6PE 131 [9], and Softwire [7] using BGP label unicast technique[4]. A BGP 132 speaker then applies the path selection steps to choose the best 133 path. In modern networks, it is not uncommon to have a prefix 134 reachable via multiple edge routers. In addition to proprietary 135 techniques, multiple techniques have been proposed to allow for 136 BGP to advertise more than one path for a given prefix 137 [6][11][12], whether in the form of equal cost multipath or 138 primary-backup. Another common and widely deployed scenario is 139 L3VPN with multi-homed VPN sites with unique Route Distinguisher. 140 It is advantageous to utilize the commonality among paths used by 141 NLRIs to significantly improve convergence in case of topology 142 modifications. 144 This document proposes a hierarchical and shared forwarding chain 145 organization that allows traffic to be restored to pre-calculated 146 alternative equal cost primary path or backup path in a time 147 period that does not depend on the number of BGP prefixes. 
The technique relies on internal router behavior that is completely transparent to the operator and can be incrementally deployed and enabled with zero operator intervention.

1.1. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC-2119 [1].

   In this document, these words will appear with that interpretation only when in ALL CAPS. Lower case uses of these words are not to be interpreted as carrying RFC-2119 significance.

1.2. Terminology

   This section defines the terms used in this document. For ease of use, we will use terms similar to those used by L3VPN [8].

   o BGP prefix: A prefix P/m (of any AFI/SAFI) that a BGP speaker has a path for.

   o IGP prefix: A prefix P/m (of any AFI/SAFI) that is learnt via an Interior Gateway Protocol, such as OSPF or ISIS. The prefix may be learnt directly through the IGP or redistributed from other protocol(s).

   o CE: An external router through which an egress PE can reach a prefix P/m.

   o Ingress PE, "iPE": A BGP speaker that learns about a prefix through an iBGP peer and chooses an egress PE as the next-hop for the prefix.

   o Path: The next-hop in a sequence of nodes starting from the current node and ending with the destination node or network identified by the prefix. The nodes may not be directly connected.

   o Recursive path: A path consisting only of the IP address of the next-hop without the outgoing interface. Subsequent lookups are necessary to determine the outgoing interface and a directly connected next-hop.

   o Non-recursive path: A path consisting of the IP address of a directly connected next-hop and the outgoing interface.

   o Primary path: A recursive or non-recursive path that can be used all the time as long as a walk starting from this path can end at an adjacency. A prefix can have more than one primary path.

   o Backup path: A recursive or non-recursive path that can be used only after some or all primary paths become unreachable.

   o Leaf: A container data structure for a prefix or local label. Alternatively, it is the data structure that contains prefix-specific information.

   o IP leaf: The leaf corresponding to an IPv4 or IPv6 prefix.

   o Label leaf: The leaf corresponding to a locally allocated label, such as the VPN label on an egress PE [8].

   o Pathlist: An array of paths used by one or more prefixes to forward traffic to the destination(s) covered by an IP prefix. Each path in the pathlist carries a "path-index" that identifies its position in the array of paths. In general, the value of the "path-index" stored in a path does not necessarily equal the position of the path in the pathlist. For example, the 3rd path may carry a path-index value of 1.

   o A pathlist may contain a mix of primary and backup paths.

   o OutLabel-List: Each labeled prefix is associated with an OutLabel-List. The OutLabel-List is an array of one or more outgoing labels and/or label actions where each label or label action has a 1-to-1 correspondence to a path in the pathlist. Label actions are: push the label, pop the label, swap the incoming label with the label in the OutLabel-List entry, or do not push anything at all in the case of "unlabeled". The prefix may be an IGP or BGP prefix.
   o Adjacency: The layer 2 encapsulation leading to the layer 3 directly connected next-hop.

   o Dependency: An object X is said to be a dependent or child of object Y if there is at least one forwarding chain where the forwarding engine must visit the object X before visiting the object Y in order to forward a packet. Note that if object X is a child of object Y, then Y cannot be deleted unless object X is no longer a dependent/child of object Y.

   o Route: A prefix with one or more paths associated with it. Hence the minimum set of objects needed to construct a route is a leaf and a pathlist.

2. Overview

   The idea of BGP-PIC is based on two pillars:

   o A shared hierarchical forwarding chain: It is not uncommon for multiple destinations to be reachable via the same list of next-hops. Instead of having a separate list of next-hops for each destination, all destinations sharing the same list of next-hops can point to a single copy of this list, thereby allowing fast convergence by making changes to a single shared list of next-hops rather than to a possibly large number of destinations. Because paths in a pathlist may be recursive, a hierarchy is formed between the pathlist and the resolving prefix whereby the pathlist depends on the resolving prefix.

   o A forwarding plane that supports multiple levels of indirection: A forwarding entry that starts with a destination and ends with an outgoing interface is not a simple flat structure. Instead, a forwarding entry is constructed via multiple levels of dependency. A BGP NLRI uses a recursive next-hop, which in turn resolves via an IGP next-hop, which in turn resolves via an adjacency consisting of one or more outgoing interface(s) and next-hop(s).

   Designing a forwarding plane that constructs multi-level forwarding chains with maximal sharing of forwarding objects allows rerouting a large number of destinations by modifying a small number of objects, thereby achieving convergence in a time frame that does not depend on the number of destinations. For example, if the IGP prefix that resolves a recursive next-hop is updated, there is no need to update the possibly large number of BGP NLRIs that use this recursive next-hop.

2.1. Dependency

   This section describes the required functionality in the forwarding and control planes to support BGP-PIC as described in this document.

2.1.1. Hierarchical Hardware FIB

   BGP PIC requires hierarchical hardware FIB support: for each BGP forwarded packet, a BGP leaf is looked up, then a BGP pathlist is consulted, then an IGP pathlist, then an adjacency.

   An alternative method consists of "flattening" the dependencies when programming the BGP destinations into the HW FIB, potentially eliminating both the BGP pathlist and IGP pathlist consultation. Such an approach decreases the number of memory lookups per forwarding operation at the expense of an increase in HW FIB memory (flattening means less sharing, hence duplication), loss of ECMP properties (flattening means less pathlist entropy), and loss of BGP PIC properties.

2.1.2. Availability of more than one primary or secondary BGP next-hops

   When the primary BGP next-hop fails, BGP PIC depends on the availability of a pre-computed and pre-installed secondary BGP next-hop in the BGP pathlist.
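   To make this dependency concrete, the following Python sketch models the lookup hierarchy of Section 2.1.1 (BGP leaf -> shared BGP pathlist -> IGP leaf -> IGP pathlist -> adjacency) together with a pre-installed secondary BGP next-hop. It is only an illustration under assumed names (Leaf, Pathlist, Adjacency, resolve); it does not describe any particular implementation.

   # Minimal sketch of a hierarchical, shared forwarding chain.
   # All class, field, and prefix names are illustrative assumptions.

   class Adjacency:
       def __init__(self, interface, l2_nexthop):
           self.interface = interface      # outgoing interface
           self.l2_nexthop = l2_nexthop    # directly connected next-hop

   class Pathlist:
       """Array of paths shared by many leaves; each entry carries its
       path-index and points to a parent object (a leaf for a recursive
       path, an adjacency for a non-recursive path)."""
       def __init__(self, paths):
           self.paths = paths              # list of (path_index, parent)

   class Leaf:
       """Container for a BGP or IGP prefix; points to a shared pathlist."""
       def __init__(self, prefix, pathlist):
           self.prefix = prefix
           self.pathlist = pathlist

   adj1 = Adjacency("I1", "IGP-NH1")
   igp_pathlist = Pathlist([(0, adj1)])
   igp_leaf_nh1 = Leaf("192.0.2.1/32", igp_pathlist)   # primary BGP next-hop
   igp_leaf_nh2 = Leaf("192.0.2.2/32", igp_pathlist)   # secondary BGP next-hop
                                                       # (shares the IGP pathlist
                                                       # for brevity)

   # One shared BGP pathlist: primary plus pre-installed secondary next-hop.
   bgp_pathlist = Pathlist([(0, igp_leaf_nh1), (1, igp_leaf_nh2)])

   # Thousands of BGP leaves can point to the single shared pathlist object.
   bgp_leaves = [Leaf("VPN-IP%d" % i, bgp_pathlist) for i in range(1, 4)]

   def resolve(leaf, pick=0):
       """Walk the chain from a leaf down to an adjacency."""
       obj = leaf
       while not isinstance(obj, Adjacency):
           obj = obj.pathlist if isinstance(obj, Leaf) else obj.paths[pick][1]
       return obj

   print(resolve(bgp_leaves[0]).interface)    # -> I1

   # If the primary BGP next-hop fails, only the shared pathlist is modified;
   # none of the (possibly millions of) BGP leaves are touched.
   bgp_pathlist.paths = [p for p in bgp_pathlist.paths if p[1] is not igp_leaf_nh1]
   print(resolve(bgp_leaves[2]).interface)    # still forwards, now via the
                                              # secondary next-hop

   The point of the sketch is that the failover touches one shared object, which is what makes the repair independent of the number of BGP prefixes.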
305 The existence of a secondary next-hop is clear for the following 306 reason: a service caring for network availability will require two 307 disjoint network connections hence two BGP next-hops. 309 The BGP distribution of the secondary next-hop is available thanks 310 to the following BGP mechanisms: Add-Path [11], BGP Best-External 311 [6], diverse path [12], and the frequent use in VPN deployments of 312 different VPN RD's per PE. It is noteworthy to mention that the 313 availability of another BGP path does not mean that all failure 314 scenarios can be covered by simply forwarding traffic to the 315 available secondary path. The discussion of how to cover various 316 failure scenarios is beyond the scope of this document 318 2.2. BGP-PIC Illustration 320 To illustrate the two pillars above as well as the platform 321 dependency, we will use an example of a simple multihomed L3VPN [8] 322 prefix in a BGP-free core running LDP [5] or segment routing over 323 MPLS forwarding plane [14]. 325 +--------------------------------+ 326 | | 327 | ePE2 (IGP-IP1 192.0.2.1, Loopback) 328 | | \ 329 | | \ 330 | | \ 331 iPE | CE....VRF "Blue", ASnum 65000 332 | | / (VPN-IP1 11.1.1.0/24) 333 | | / (VPN-IP2 11.1.2.0/24) 334 | LDP/Segment-Routing Core | / 335 | ePE1 (IGP-IP2 192.0.2.2, Loopback) 336 | | 337 +--------------------------------+ 338 Figure 1 VPN prefix reachable via multiple PEs 340 Referring to Figure 1, suppose the iPE (the ingress PE) receives 341 NLRIs for the VPN prefixes VPN-IP1 and VPN-IP2 from two egress PEs, 342 ePE1 and ePE2 with next-hop BGP-NH1 and BGP-NH2, respectively. 343 Assume that ePE1 advertise the VPN labels VPN-L11 and VPN-L12 while 344 ePE2 advertise the VPN labels VPN-L21 and VPN-L22 for VPN-IP1 and 345 VPN-IP2, respectively. Suppose that BGP-NH1 and BGP-NH2 are resolved 346 via the IGP prefixes IGP-IP1 and IGP-P2, where each happen to have 2 347 ECMP paths with IGP-NH1 and IGP-NH2 reachable via the interfaces I1 348 and I2, respectively. Suppose that local labels (whether LDP [5] or 349 segment routing [14]) on the downstream LSRs for IGP-IP1 are IGP-L11 350 and IGP-L12 while for IGP-P2 are IGP-L21 and IGP-L22. As such, the 351 routing table at iPE is as follows: 353 65000:11.1.1.0/24 354 via ePE1 (192.0.2.1), VPN Label: VPN-L11 355 via ePE2 (192.0.2.2), VPN Label: VPN-L21 357 65000:11.1.2.0/24 358 via ePE1 (192.0.2.1), VPN Label: VPN-L12 359 via ePE2 (192.0.2.2), VPN Label: VPN-L22 361 192.0.2.1/32 362 via Core, Label: IGP-L11 363 via Core, Label: IGP-L12 365 192.0.2.2/32 366 via Core, Label: IGP-L21 367 via Core, Label: IGP-L22 369 Based on the above routing table, a hierarchical forwarding chain 370 can be constructed as shown in Figure 2. 372 IP Leaf: Pathlist: IP Leaf: Pathlist: 373 -------- +-------+ -------- +----------+ 374 VPN-IP1-->|BGP-NH1|-->IGP-IP1(BGP NH1)--->|IGP NH1,I1|--->Adjacency1 375 | |BGP-NH2|-->.... | |IGP NH2,I2|--->Adjacency2 376 | +-------+ | +----------+ 377 | | 378 | | 379 v v 380 OutLabel-List: OutLabel-List: 381 +----------------------+ +----------------------+ 382 |VPN-L11 (VPN-IP1, NH1)| |IGP-L11 (IGP-IP1, NH1)| 383 |VPN-L12 (VPN-IP1, NH2)| |IGP-L12 (IGP-IP1, NH2)| 384 +----------------------+ +----------------------+ 386 Figure 2 Shared Hierarchical Forwarding Chain at iPE 388 The forwarding chain depicted in Figure 2 illustrates the first 389 pillar, which is sharing and hierarchy. We can see that the BGP 390 pathlist consisting of BGP-NH1 and BGP-NH2 is shared by all NLRIs 391 reachable via ePE1 and ePE2. 
As such, it is possible to make changes 392 to the pathlist without having to make changes to the NLRIs. For 393 example, if BGP-NH2 becomes unreachable, there is no need to modify 394 any of the possibly large number of NLRIs. Instead only the shared 395 pathlist needs to be modified. Likewise, due to the hierarchical 396 structure of the forwarding chain, it is possible to make 397 modifications to the IGP routes without having to make any changes 398 to the BGP NLRIs. For example, if the interface "I2" goes down, only 399 the shared IGP pathlist needs to be updated, but none of the IGP 400 prefixes sharing the IGP pathlist nor the BGP NLRIs using the IGP 401 prefixes for resolution need to be modified. 403 Figure 2 can also be used to illustrate the second BGP-PIC pillar. 404 Having a deep forwarding chain such as the one illustrated in Figure 405 2 requires a forwarding plane that is capable of accessing multiple 406 levels of indirection in order to calculate the outgoing 407 interface(s) and next-hops(s). While a deeper forwarding chain 408 minimizes the re-convergence time on topology change, there will 409 always exist platforms with limited capabilities and hence imposing 410 a limit on the depth of the forwarding chain. Section 5 describes 411 how to gracefully trade off convergence speed with the number of 412 hierarchical levels to support platforms with different 413 capabilities. 415 3. Constructing the Shared Hierarchical Forwarding Chain 417 Constructing the forwarding chain is an application of the two 418 pillars described in Section 2. This section describes how to 419 construct the forwarding chain in hierarchical shared manner 421 3.1. Constructing the BGP-PIC forwarding Chain 423 The whole process starts when BGP downloads a prefix to FIB. The 424 prefix contains one or more outgoing paths. For certain labeled 425 prefixes, such as VPN [8] prefixes, each path may be associated with 426 an outgoing label and the prefix itself may be assigned a local 427 label. The list of outgoing paths defines a pathlist. If such 428 pathlist does not already exist, then FIB creates a new pathlist, 429 otherwise the existing pathlist is used. The BGP prefix is added as 430 a dependent of the pathlist. 432 The previous step constructs the upper part of the hierarchical 433 forwarding chain. The forwarding chain is completed by resolving the 434 paths of the pathlist. A BGP path usually consists of a next-hop. 435 The next-hop is resolved by finding a matching IGP prefix. 437 The end result is a hierarchical shared forwarding chain where the 438 BGP pathlist is shared by all BGP prefixes that use the same list of 439 paths and the IGP prefix is shared by all pathlists that have a path 440 resolving via that IGP prefix. It is noteworthy to mention that the 441 forwarding chain is constructed without any operator intervention at 442 all. 444 The remainder of this section goes over an example to illustrate the 445 applicability of BGP-PIC in a primary-backup path scenario. 447 3.2. Example: Primary-Backup Path Scenario 449 Consider the egress PE ePE1 in the case of the multi-homed VPN 450 prefixes in the BGP-free core depicted in Figure 1. Suppose ePE1 451 determines that the primary path is the external path but the backup 452 path is the iBGP path to the other PE ePE2 with next-hop BGP-NH2. 453 ePE2 constructs the forwarding chain depicted in Figure 3. We are 454 only showing a single VPN prefix for simplicity. 
But all prefixes 455 that are multihomed to ePE1 and ePE2 share the BGP pathlist. 457 BGP OutLabel Array 458 VPN-L11 +---------+ 459 (Label-leaf)---+---->|Unlabeled| 460 | +---------+ 461 | | VPN-L21 | 462 | | (swap) | 463 | +---------+ 464 | 465 | BGP Pathlist 466 | +------------+ Connected route 467 | | CE-NH |------>(to the CE) 468 | |path-index=0| 469 | +------------+ 470 | | VPN-NH2 | 471 VPN-IP1 -----+------------------>| (backup) |------>IGP Leaf 472 (IP prefix leaf) |path-index=1| (Towards ePE2) 473 | +------------+ 474 | 475 | BGP OutLabel Array 476 | +---------+ 477 +------------->|Unlabeled| 478 +---------+ 479 | VPN-L21 | 480 | (push) | 481 +---------+ 483 Figure 3 : VPN Prefix Forwarding Chain with eiBGP paths on egress PE 485 The example depicted in Figure 3 differs from the example in Figure 486 2 in two main aspects. First, as long as the primary path towards 487 the CE (external path) is useable, it will be the only path used for 488 forwarding while the OutLabel-List contains both the unlabeled label 489 (primary path) and the VPN label (backup path) advertised by the 490 backup path ePE2. The second aspect is presence of the label leaf 491 corresponding to the VPN prefix. This label leaf is used to match 492 VPN traffic arriving from the core. Note that the label leaf shares 493 the pathlist with the IP prefix. 495 4. Forwarding Behavior 497 This section explains how the forwarding plane uses the hierarchical 498 shared forwarding chain to forward a packet. 500 When a packet arrives at a router, it matches a leaf. A labeled 501 packet matches a label leaf while an IP packet matches an IP prefix 502 leaf. The forwarding engines walks the forwarding chain starting 503 from the leaf until the walk terminates on an adjacency. Thus when a 504 packet arrives, the chain is walked as follows: 506 1. Lookup the leaf based on the destination address or the label at 507 the top of the packet 509 2. Retrieve the parent pathlist of the leaf 511 3. Pick the outgoing path "Pi" from the list of resolved paths in 512 the pathlist. The method by which the outgoing path is picked is 513 beyond the scope of this document (e.g. flow-preserving hash 514 exploiting entropy within the MPLS stack and IP header). Let the 515 "path-index" of the outgoing path "Pi" be "j". 517 4. If the prefix is labeled, use the "path-index" "j" to retrieve 518 the jth label "Lj" stored the jth entry in the OutLabel-List and 519 apply the label action of the label on the packet (e.g. for VPN 520 label on the ingress PE, the label action is "push"). As 521 mentioned in Section 1.2, the value of the "path-index" stored 522 in path may not necessarily be the same value of the location of 523 the path in the pathlist. 525 5. Move to the parent of the chosen path "Pi" 527 6. If the chosen path "Pi" is recursive, move to its parent prefix 528 and go to step 2 530 7. If the chosen path is non-recursive move to its parent adjacency. 531 Otherwise go to the next step. 533 8. Encapsulate the packet in the layer string specified by the 534 adjacency and send the packet out. 536 Let's apply the above forwarding steps to the forwarding chain 537 depicted in Figure 2 in Section 2. Suppose a packet arrives at 538 ingress PE iPE from an external neighbor. Assume the packet matches 539 the VPN prefix VPN-IP1. While walking the forwarding chain, the 540 forwarding engine applies a hashing algorithm to choose the path and 541 the hashing at the BGP level yields path 0 while the hashing at the 542 IGP level yields path 1. 
In that case, the packet will be sent out of interface I2 with the label stack "IGP-L12, VPN-L11".

5. Handling Platforms with Limited Levels of Hierarchy

   This section describes the construction of the forwarding chain if a platform does not support the number of recursion levels required to resolve the NLRIs. There are two main design objectives:

   o Being able to reduce the number of hierarchical levels from any arbitrary value to a smaller arbitrary value that can be supported by the forwarding engine

   o Minimal modifications to the forwarding algorithm due to such reduction

5.1. Flattening the Forwarding Chain

   Let's consider a pathlist associated with the leaf "R1" consisting of the list of paths <P1, P2, ..., Pn>. Assume that the leaf "R1" has an OutLabel-list <L1, L2, ..., Ln>. Suppose the path "Pi" is a recursive path that resolves via a prefix represented by the leaf "R2". The leaf "R2" itself points to a pathlist consisting of the paths <Q1, Q2, ..., Qm> and has an associated OutLabel-list(R2) = <L21, L22, ..., L2m>.

   If the platform supports the number of hierarchy levels of the forwarding chain, then a packet that uses the path "Pi" will be forwarded as follows:

   1. The forwarding engine is now at leaf "R1"

   2. So it moves to its parent pathlist, which contains the list <P1, P2, ..., Pn>

   3. The forwarding engine applies a hashing algorithm and picks the path "Pi". So now the forwarding engine is at the path "Pi"

   4. The forwarding engine retrieves the label "Li" from the OutLabel-list attached to the leaf "R1" and applies the label action

   5. The path "Pi" uses the leaf "R2"

   6. The forwarding engine walks forward to the leaf "R2" for resolution

   7. The forwarding plane performs a hash to pick a path among the pathlist of the leaf "R2", which is <Q1, Q2, ..., Qm>

   8. Suppose the forwarding engine picks the path "Qj"

   9. Now the forwarding engine continues the walk to the parent of "Qj"

   Suppose the platform cannot support the number of hierarchy levels in the forwarding chain. FIB needs to reduce the number of hierarchy levels. The idea of reducing the number of hierarchy levels is to "flatten" two chain levels into a single level. The "flattening" steps are as follows:

   1. FIB wants to reduce the number of levels used by "Pi" by 1

   2. FIB walks to the parent of "Pi", which is the leaf "R2"

   3. FIB extracts the parent pathlist of the leaf "R2", which is <Q1, Q2, ..., Qm>

   4. FIB also extracts the OutLabel-list(R2) associated with the leaf "R2". Remember that OutLabel-list(R2) = <L21, L22, ..., L2m>

   5. FIB replaces the path "Pi" with the list of paths <Q1, Q2, ..., Qm>

   6. Hence the pathlist now becomes <P1, ..., P(i-1), Q1, Q2, ..., Qm, P(i+1), ..., Pn>

   7. The path-index stored inside the locations "Q1", "Q2", ..., "Qm" must all be "i" because the index "i" refers to the label "Li" associated with leaf "R1"

   8. FIB attaches an OutLabel-list to the new pathlist such that the entries corresponding to the paths "Q1", "Q2", ..., "Qm" carry the labels "L21", "L22", ..., "L2m" from OutLabel-list(R2). The size of the label list associated with the flattened pathlist equals the size of the pathlist. Hence there is a 1-to-1 mapping between every path in the "flattened" pathlist and the OutLabel-list associated with it.

   It is noteworthy to mention that the labels in the OutLabel-list associated with the "flattened" pathlist may be stored in the same memory location as the path itself to avoid an additional memory access. But that is an implementation detail that is beyond the scope of this document.
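   The flattening steps above can be summarized with a short Python sketch. The function below is a hedged illustration using the same notation (R1's pathlist <P1, ..., Pn> and a resolving leaf R2 with paths <Q1, ..., Qm> and labels <L21, ..., L2m>); its name and data layout are assumptions made for the example, not part of any implementation.

   # Illustrative sketch of flattening one recursive path Pi of leaf R1
   # into the paths of its resolving leaf R2 (assumed names and layout).

   def flatten_path(r1_pathlist, i, r2_paths, r2_outlabels):
       """r1_pathlist  : list of (path_index, path) entries P1..Pn of R1
          i            : path-index of the recursive path Pi to flatten
          r2_paths     : paths Q1..Qm of the resolving leaf R2
          r2_outlabels : OutLabel-list(R2), i.e. L21..L2m, aligned with Q1..Qm
          Returns the flattened pathlist and its associated OutLabel-list."""
       flattened, outlabels = [], []
       for path_index, path in r1_pathlist:
           if path_index == i:
               # Steps 5-7: splice in Q1..Qm, all carrying path-index "i" so
               # that the forwarding engine still picks label Li from R1's
               # own OutLabel-list.
               for qj, l2j in zip(r2_paths, r2_outlabels):
                   flattened.append((i, qj))
                   outlabels.append(l2j)     # step 8: label toward R2 via Qj
           else:
               flattened.append((path_index, path))
               outlabels.append(None)        # non-flattened path: no extra label
       return flattened, outlabels

   # Example: R1 has paths <P1, P2, P3>; P2 (path-index 1) resolves via R2,
   # whose pathlist is <Q1, Q2> with OutLabel-list(R2) = <L21, L22>.
   flat, labels = flatten_path([(0, "P1"), (1, "P2"), (2, "P3")],
                               1, ["Q1", "Q2"], ["L21", "L22"])
   print(flat)    # [(0, 'P1'), (1, 'Q1'), (1, 'Q2'), (2, 'P3')]
   print(labels)  # [None, 'L21', 'L22', None]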
634 The same steps can be applied to all paths in the pathlist so that all paths are "flattened" thereby reducing the 636 number of hierarchical levels by one. Note that that "flattening" a 637 pathlist pulls in all paths of the parent paths, a desired feature 638 to utilize all ECMP/UCMP paths at all levels. A platform that has a 639 limit on the number of paths in a pathlist for any given leaf may 640 choose to reduce the number paths using methods that are beyond the 641 scope of this document. 643 The steps can be recursively applied to other paths at the same 644 levels or other levels to recursively reduce the number of 645 hierarchical levels to an arbitrary value so as to accommodate the 646 capability of the forwarding engine. 648 Because a flattened pathlist may have an associated OutLabel-list 649 the forwarding behavior has to be slightly modified. The 650 modification is done by adding the following step right after step 4 651 in Section 4. 653 5. If there is an OutLabel-list associated with the pathlist, then 654 if the path "Pi" is chosen by the hashing algorithm, retrieve the 655 label at location "i" in that OutLabel-list and apply the label 656 action of that label on the packet 658 In the next subsection, we apply the steps in this subsection to a 659 sample scenario. 661 5.2. Example: Flattening a forwarding chain 663 This example uses a case of inter-AS option C [8] where there are 3 664 levels of hierarchy. Figure 4 illustrates the sample topology. To 665 force 3 levels of hierarchy, the ASBRs on the ingress domain (domain 666 1) advertise the core routers of the egress domain (domain 2) to the 667 ingress PE (iPE) via BGP-LU [4] instead of redistributing them into 668 the IGP of domain 1. The end result is that the ingress PE (iPE) has 669 2 levels of recursion for the VPN prefix VPN-IP1 and VPN2-IP2. 671 Domain 1 Domain 2 672 +-------------+ +-------------+ 673 | | | | 674 | LDP/SR Core | | LDP/SR core | 675 | | | | 676 | (192.0.1.1) | | 677 | ASBR11---------ASBR21........ePE1(192.0.2.1) 678 | | \ / | . . |\ 679 | | \ / | . . | \ 680 | | \ / | . . | \ 681 | | \/ | .. | \VPN-IP1 (11.1.1.0/24) 682 | | /\ | . . | /VRF "Blue" ASn: 65000 683 | | / \ | . . | / 684 | | / \ | . . | / 685 | | / \ | . . |/ 686 iPE ASBR12---------ASBR22........ePE2 (192.0.2.2) 687 | (192.0.1.2) | |\ 688 | | | | \ 689 | | | | \ 690 | | | | \VRF "Blue" ASn: 65000 691 | | | | /VPN-IP2 (11.1.2.0/24) 692 | | | | / 693 | | | | / 694 | | | |/ 695 | ASBR13---------ASBR23........ePE3(192.0.2.3) 696 | (192.0.1.3) | | 697 | | | | 698 | | | | 699 +-------------+ +-------------+ 700 <============ <========= <============ 701 Advertise ePEx Advertise Redistribute 702 Using iBGP-LU ePEx Using IGP into 703 eBGP-LU BGP 705 Figure 4 : Sample 3-level hierarchy topology 707 We will make the following assumptions about connectivity 709 o In "domain 2", both ASBR21 and ASBR22 can reach both ePE1 and 710 ePE2 using the same distance 712 o In "domain 2", only ASBR23 can reach ePE3 714 o In "domain 1", iPE (the ingress PE) can reach ASBR11, ASBR12, and 715 ASBR13 via IGP using the same distance. 
We will make the following assumptions about the labels:

   o The VPN labels advertised by ePE1 and ePE2 for prefix VPN-IP1 are VPN-L11 and VPN-L21, respectively

   o The VPN labels advertised by ePE2 and ePE3 for prefix VPN-IP2 are VPN-L22 and VPN-L32, respectively

   o The labels advertised by ASBR11 to iPE using BGP-LU [4] for the egress PEs ePE1 and ePE2 are LASBR11(ePE1) and LASBR11(ePE2), respectively

   o The labels advertised by ASBR12 to iPE using BGP-LU [4] for the egress PEs ePE1 and ePE2 are LASBR12(ePE1) and LASBR12(ePE2), respectively

   o The label advertised by ASBR13 to iPE using BGP-LU [4] for the egress PE ePE3 is LASBR13(ePE3)

   o The IGP labels advertised by the next-hops directly connected to iPE towards ASBR11, ASBR12, and ASBR13 in the core of domain 1 are IGP-L11, IGP-L12, and IGP-L13, respectively

   Based on these connectivity assumptions and the topology in Figure 4, the routing table on iPE is:

   65000:11.1.1.0/24
       via ePE1 (192.0.2.1), VPN Label: VPN-L11
       via ePE2 (192.0.2.2), VPN Label: VPN-L21
   65000:11.1.2.0/24
       via ePE2 (192.0.2.2), VPN Label: VPN-L22
       via ePE3 (192.0.2.3), VPN Label: VPN-L32

   192.0.2.1/32 (ePE1)
       via ASBR11, BGP-LU Label: LASBR11(ePE1)
       via ASBR12, BGP-LU Label: LASBR12(ePE1)
   192.0.2.2/32 (ePE2)
       via ASBR11, BGP-LU Label: LASBR11(ePE2)
       via ASBR12, BGP-LU Label: LASBR12(ePE2)
   192.0.2.3/32 (ePE3)
       via ASBR13, BGP-LU Label: LASBR13(ePE3)

   192.0.1.1/32 (ASBR11)
       via Core, Label: IGP-L11
   192.0.1.2/32 (ASBR12)
       via Core, Label: IGP-L12
   192.0.1.3/32 (ASBR13)
       via Core, Label: IGP-L13

   The diagram in Figure 5 illustrates the forwarding chain in iPE assuming that the forwarding hardware in iPE supports 3 levels of hierarchy. The leaves corresponding to the ASBRs in domain 1 (ASBR11, ASBR12, and ASBR13) are at the bottom of the hierarchy. There are a few important points:

   o Because the hardware supports the required depth of hierarchy, the size of a pathlist equals the size of the label lists associated with the leaves using this pathlist

   o The index inside a pathlist entry indicates the label that will be picked from the OutLabel-list associated with the child leaf if that path is chosen by the forwarding engine hashing function.
779 Outlabel-List Outlabel-List 780 For VPN-IP1 For VPN-IP2 781 +------------+ +--------+ +-------+ +------------+ 782 | VPN-L11 |<---| VPN-IP1| |VPN-IP2|-->| VPN-L22 | 783 +------------+ +---+----+ +---+---+ +------------+ 784 | VPN-L21 | | | | VPN-L32 | 785 +------------+ | | +------------+ 786 | | 787 V V 788 +---+---+ +---+---+ 789 | 0 | 1 | | 0 | 1 | 790 +-|-+-\-+ +-/-+-\-+ 791 | \ / \ 792 | \ / \ 793 | \ / \ 794 | \ / \ 795 v \ / \ 796 +-----+ +-----+ +-----+ 797 +----+ ePE1| |ePE2 +-----+ | ePE3+-----+ 798 | +--+--+ +-----+ | +--+--+ | 799 v | / v | v 800 +-------------+ | / +-------------+ | +-------------+ 801 |LASBR11(ePE1)| | / |LASBR11(ePE2)| | |LASBR13(ePE3)| 802 +-------------+ | / +-------------+ | +-------------+ 803 |LASBR12(ePE1)| | / |LASBR12(ePE2)| | Outlabel-List 804 +-------------+ | / +-------------+ | For ePE3 805 Outlabel-List | / Outlabel-List | 806 For ePE1 | / For ePE2 | 807 | / | 808 | / | 809 | / | 810 v / v 811 +---+---+ Shared Pathlist +---+ Pathlist 812 | 0 | 1 | For ePE1 and ePE2 | 0 | For ePE3 813 +-|-+-\-+ +-|-+ 814 | \ | 815 | \ | 816 | \ | 817 | \ | 818 v \ v 819 +------+ +------+ +------+ 820 +---+ASBR11| |ASBR12+--+ |ASBR13+---+ 821 | +------+ +------+ | +------+ | 822 v v v 823 +-------+ +-------+ +-------+ 824 |IGP-L11| |IGP-L12| |IGP-L13| 825 +-------+ +-------+ +-------+ 827 Figure 5 : Forwarding Chain for hardware supporting 3 Levels 829 Now suppose the hardware on iPE (the ingress PE) supports 2 levels 830 of hierarchy only. In that case, the 3-levels forwarding chain in 831 Figure 5 needs to be "flattened" into 2 levels only. 833 Outlabel-List Outlabel-List 834 For VPN-IP1 For VPN-IP2 835 +------------+ +-------+ +-------+ +------------+ 836 | VPN-L11 |<---|VPN-IP1| | VPN-IP2|--->| VPN-L22 | 837 +------------+ +---+---+ +---+---+ +------------+ 838 | VPN-L21 | | | | VPN-L32 | 839 +------------+ | | +------------+ 840 | | 841 | | 842 | | 843 Flattened | | Flattened 844 pathlist V V pathlist 845 +===+===+ +===+===+===+ +=============+ 846 +--------+ 0 | 1 | | 0 | 0 | 1 +---->|LASBR11(ePE2)| 847 | +=|=+=\=+ +=/=+=/=+=\=+ +=============+ 848 v | \ / / \ |LASBR12(ePE2)| 849 +=============+ | \ +-----+ / \ +=============+ 850 |LASBR11(ePE1)| | \/ / \ |LASBR13(ePE3)| 851 +=============+ | /\ / \ +=============+ 852 |LASBR12(ePE1)| | / \ / \ 853 +=============+ | / \ / \ 854 | / \ / \ 855 | / + + \ 856 | + | | \ 857 | | | | \ 858 v v v v \ 859 +------+ +------+ +------+ 860 +----|ASBR11| |ASBR12+---+ |ASBR13+---+ 861 | +------+ +------+ | +------+ | 862 v v v 863 +-------+ +-------+ +-------+ 864 |IGP-L11| |IGP-L12| |IGP-L13| 865 +-------+ +-------+ +-------+ 867 Figure 6 : Flattening 3 levels to 2 levels of Hierarchy on iPE 869 Figure 6 represents one way to "flatten" a 3 levels hierarchy into 870 two levels. There are few important points: 872 o As mentioned in Section 5.1 a flattened pathlist may have label 873 lists associated with them. The size of the label list associated 874 with a flattened pathlist equals the size of the pathlist. Hence 875 it is possible that an implementation includes these label lists 876 in the flattened pathlist itself 878 o Again as mentioned in Section 5.1, the size of a flattened 879 pathlist may not be equal to the size of the OutLabel-lists of 880 leaves using the flattened pathlist. So the indices inside a 881 flattened pathlist still indicate the label index in the 882 Outlabel-Lists of the leaves using that pathlist. 
Because the size of the flattened pathlist may be different from the size of the OutLabel-lists of the leaves, the indices may be repeated.

   o Let's take a look at the flattened pathlist used by the prefix "VPN-IP2". This pathlist has three entries:

     o The first and second entries have index "0" because both entries correspond to ePE2. Hence, when the hashing performed by the forwarding engine results in using the first or the second entry in the pathlist, the forwarding engine will pick the correct VPN label "VPN-L22", which is the label advertised by ePE2 for the prefix "VPN-IP2"

     o The third entry has the index "1" because the third entry corresponds to ePE3. Hence, when the hashing performed by the forwarding engine results in using the third entry in the flattened pathlist, the forwarding engine will pick the correct VPN label "VPN-L32", which is the label advertised by ePE3 for the prefix "VPN-IP2"

   Now let's apply the forwarding steps in Section 4, together with the additional step in Section 5.1, to the flattened forwarding chain illustrated in Figure 6:

   o Suppose a packet arrives at iPE and matches the VPN prefix "VPN-IP2"

   o The forwarding engine walks to the parent of "VPN-IP2", which is the flattened pathlist, and applies a hashing algorithm to pick a path

   o Suppose the hashing by the forwarding engine picks the second path in the flattened pathlist associated with the leaf "VPN-IP2"

   o Because the second path has the index "0", the label "VPN-L22" is pushed on the packet

   o Next the forwarding engine picks the second label from the OutLabel-list associated with the flattened pathlist. Hence the next label that is pushed is "LASBR12(ePE2)"

   o The forwarding engine now moves to the parent of the flattened pathlist corresponding to the second path. The parent is the IGP leaf corresponding to "ASBR12"

   o So the packet is forwarded towards the ASBR "ASBR12" and the IGP label at the top will be "IGP-L12"

   Based on the above steps, a packet arriving at iPE and destined to the prefix VPN-IP2 reaches its destination as follows:

   o iPE sends the packet along the shortest path towards ASBR12 with the following label stack, starting from the top: {IGP-L12, LASBR12(ePE2), VPN-L22}

   o The penultimate hop of ASBR12 pops the top label "IGP-L12". Hence the packet arrives at ASBR12 with the label stack {LASBR12(ePE2), VPN-L22}, where "LASBR12(ePE2)" is the top label

   o ASBR12 swaps "LASBR12(ePE2)" with the label "LASBR22(ePE2)", which is the label advertised by ASBR22 for ePE2 (the egress PE)

   o ASBR22 receives the packet with "LASBR22(ePE2)" at the top

   o Hence ASBR22 swaps "LASBR22(ePE2)" with the IGP label for ePE2 advertised by the next-hop towards ePE2 in domain 2, and sends the packet along the shortest path towards ePE2

   o The penultimate hop of ePE2 pops the top label. Hence ePE2 receives the packet with the label "VPN-L22" at the top

   o ePE2 pops "VPN-L22" and sends the packet as a pure IP packet towards the destination VPN-IP2.
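   The walk-through above can be condensed into a few lines of Python. The sketch below mirrors Figure 6 and fixes the hash outcome to the second flattened path so that the output matches the example; all names come from the example, and the code only illustrates the lookup order, not a real FIB.

   # Reproduction of the Figure 6 walk-through for a packet destined to
   # VPN-IP2 (illustration only, not a FIB implementation).

   # Leaf VPN-IP2: OutLabel-list indexed by path-index.
   vpn_outlabels = ["VPN-L22", "VPN-L32"]        # index 0 -> ePE2, 1 -> ePE3

   # Flattened pathlist for VPN-IP2: (path-index, BGP-LU label, IGP leaf).
   flattened_pathlist = [
       (0, "LASBR11(ePE2)", "ASBR11"),
       (0, "LASBR12(ePE2)", "ASBR12"),
       (1, "LASBR13(ePE3)", "ASBR13"),
   ]

   # IGP leaves: label advertised by the directly connected next-hop.
   igp_labels = {"ASBR11": "IGP-L11", "ASBR12": "IGP-L12", "ASBR13": "IGP-L13"}

   def label_stack(hash_choice):
       """Return the imposed label stack, top of the stack first."""
       path_index, lu_label, igp_leaf = flattened_pathlist[hash_choice]
       stack = [vpn_outlabels[path_index]]       # VPN label chosen by path-index
       stack.insert(0, lu_label)                 # BGP-LU label from the
                                                 # flattened OutLabel-list
       stack.insert(0, igp_labels[igp_leaf])     # IGP label toward the ASBR
       return stack

   # The example assumes the hash picks the second flattened path (via ASBR12).
   print(label_stack(1))   # ['IGP-L12', 'LASBR12(ePE2)', 'VPN-L22']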
6. Forwarding Chain Adjustment at a Failure

   The hierarchical and shared structure of the forwarding chain explained in the previous section allows modifying a small number of forwarding chain objects to re-route traffic to a pre-calculated equal-cost or backup path without the need to modify the possibly very large number of BGP prefixes. In this section, we go over various core and edge failure scenarios to illustrate how the FIB manager can utilize the forwarding chain structure to achieve BGP prefix independent convergence.

6.1. BGP-PIC core

   This section describes the adjustments to the forwarding chain when a core link or node fails but the BGP next-hop remains reachable.

   There are two cases: remote link failure and attached (local) link failure. Node failures are treated as link failures.

   When a remote link or node fails, IGP on the ingress PE receives an advertisement indicating a topology change, so IGP re-converges to either find a new next-hop and/or outgoing interface or remove the path completely from the IGP prefix used to resolve BGP next-hops. IGP and/or LDP download the modified IGP leaves, with modified outgoing labels for a labeled core.

   When a local link fails, the FIB manager detects the failure almost immediately. The FIB manager marks the impacted path(s) as unusable so that only usable paths are used to forward packets. Hence only the IGP pathlists with paths using the failed local link need to be modified; all other pathlists are not impacted. Note that in this particular case there is no need even to backwalk to the IGP leaves to adjust the OutLabel-Lists, because FIB can rely on the path-index stored in the usable paths in the pathlist to pick the right label.

   It is noteworthy to mention that the FIB manager modifies the forwarding chain starting from the IGP leaves only; BGP pathlists and leaves are not modified. Hence traffic restoration occurs within the time frame of IGP convergence and, for a local link failure with a precomputed backup path, within the timeframe of local detection (e.g. 50ms). Examples of solutions that pre-compute backup paths are IP FRR [16], remote LFA [17], TI-LFA [15], MRT [18], and an eBGP path with a pre-computed backup path [10].

   Let's apply the procedure described in this subsection to the forwarding chain depicted in Figure 2. Suppose a remote link failure occurs and impacts the first ECMP IGP path to the remote BGP next-hop. Upon IGP convergence, the IGP pathlist used by the BGP next-hop is updated to reflect the new topology (one path instead of two). As soon as the IGP convergence is effective for the BGP next-hop entry, the new forwarding state is immediately available to all dependent BGP prefixes. The same behavior would occur if the failure was local, such as an interface going down: as soon as the IGP convergence is complete for the BGP next-hop IGP route, all its depending BGP routes benefit from the new path. In fact, upon a local failure, if LFA protection is enabled for the IGP route to the BGP next-hop and a backup path was pre-computed and installed in the pathlist, the LFA backup path is immediately activated (e.g. sub-50msec) and thus protection benefits all the depending BGP traffic through the hierarchical forwarding dependency between the routes.
6.2. BGP-PIC edge

   This section describes the adjustments to the forwarding chains as a result of an edge node or edge link failure.

6.2.1. Adjusting forwarding Chain in egress node failure

   When an edge node fails, IGP on the neighboring core nodes sends route updates indicating that the edge node is no longer reachable. IGP running on the iBGP peers instructs FIB to remove the IP and label leaves corresponding to the failed edge node. So the FIB manager performs the following steps:

   o The FIB manager deletes the IGP leaf corresponding to the failed edge node

   o The FIB manager backwalks to all dependent BGP pathlists and marks the path using the deleted IGP leaf as unresolved

   o Note that there is no need to modify the possibly large number of BGP leaves, because each path in the pathlist carries its path-index and hence the correct outgoing label will be picked. Consider for example the forwarding chain depicted in Figure 2. If the 1st BGP path becomes unresolved, then the forwarding engine will only use the second path for forwarding. Yet the path-index of that single resolved path will still be 1 and hence the label VPN-L12 will be pushed.

6.2.2. Adjusting Forwarding Chain on PE-CE link Failure

   Suppose the link between an edge router and its external peer fails. There are two scenarios: (1) the edge node attached to the failed link performs next-hop self, and (2) the edge node attached to the failed link advertises the IP address of the failed link as the next-hop attribute to its iBGP peers.

   In the first case, the rest of the iBGP peers remain unaware of the link failure and continue to forward traffic to the edge node until the edge node attached to the failed link withdraws the BGP prefixes. If the destination prefixes are multi-homed to another iBGP peer, say ePE2, then the FIB manager on the edge router detecting the link failure applies the following steps:

   o The FIB manager backwalks to the BGP pathlists and marks the path through the failed link to the external peer as unresolved

   o Hence traffic will be forwarded using the backup path towards ePE2

   o For labeled traffic:

     o The OutLabel-List attached to the BGP leaf already contains an entry corresponding to the backup path

     o The label entry in the OutLabel-List corresponding to the internal path to the backup egress PE has a swap action to the label advertised by the backup egress PE

     o For an arriving labeled packet (e.g. VPN), the top label is swapped with the label advertised by the backup egress PE and the packet is sent towards that backup egress PE

   o For unlabeled traffic, packets are simply redirected towards the backup egress PE

   In the second case, where the edge router uses the IP address of the failed link as the BGP next-hop, the edge router still performs the previous steps. But, unlike the case of next-hop self, IGP on the edge node attached to the failed link informs the rest of the iBGP peers that the IP address of the failed link is no longer reachable. Hence the FIB manager on the iBGP peers will delete the IGP leaf corresponding to the IP prefix of the failed link. The behavior of the iBGP peers will be identical to the case of edge node failure outlined in Section 6.2.1.
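   A rough Python sketch of the next-hop-self repair just described: the shared BGP pathlist holds the primary external path and the backup internal path, and marking the primary unresolved redirects traffic while the stored path-index keeps selecting the correct entry of each leaf's OutLabel-List. The class and variable names are assumptions made for the illustration, not taken from any implementation.

   # Illustrative sketch of PE-CE link failure handling on ePE1
   # (next-hop self case); names and structures are assumed.

   class Path:
       def __init__(self, path_index, description):
           self.path_index = path_index
           self.description = description
           self.resolved = True

   # Shared BGP pathlist: primary external path to the CE, backup via ePE2.
   primary = Path(0, "CE-NH (external)")
   backup = Path(1, "VPN-NH2 (towards ePE2)")
   bgp_pathlist = [primary, backup]

   # Per-prefix OutLabel-Lists, indexed by path-index (as in Figure 3):
   # entry 0 is "unlabeled" for the external path, entry 1 is the VPN label
   # advertised by the backup egress PE (swap/push action).
   outlabel_lists = {
       "VPN-IP1": ["unlabeled", "VPN-L21"],
       # ... every multi-homed prefix sharing this pathlist has its own list
   }

   def forward(prefix):
       """Pick the first usable path and the label selected by its path-index."""
       path = next(p for p in bgp_pathlist if p.resolved)
       return path.description, outlabel_lists[prefix][path.path_index]

   print(forward("VPN-IP1"))   # ('CE-NH (external)', 'unlabeled')

   # On PE-CE link failure the FIB manager only marks the shared path
   # unresolved; no per-prefix leaf or OutLabel-List is touched.
   primary.resolved = False
   print(forward("VPN-IP1"))   # ('VPN-NH2 (towards ePE2)', 'VPN-L21')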
1096 It is noteworthy to mention that because the edge link failure is 1097 local to the edge router, sub-50 msec convergence can be achieved as 1098 described in [10]. 1100 Let's try to apply the case of next-hop self to the forwarding chain 1101 depicted in Figure 3. After failure of the link between ePE1 and CE, 1102 the forwarding engine will route traffic arriving from the core 1103 towards VPN-NH2 with path-index=1. A packet arriving from the core 1104 will contain the label VPN-L11 at top. The label VPN-L11 is swapped 1105 with the label VPN-L21 and the packet is forwarded towards ePE2. 1107 6.3. Handling Failures for Flattened Forwarding Chains 1109 As explained in the in Section 5 if the number of hierarchy levels 1110 of a platform cannot support the native number of hierarchy levels 1111 of a recursive forwarding chain, the instantiated forwarding chain 1112 is constructed by flattening two or more levels. Hence a 3 levels 1113 chain in Figure 5 is flattened into the 2 levels chain in Figure 6. 1115 While reducing the benefits of BGP-PIC, flattening one hierarchy 1116 into a shallower hierarchy does not always result in a complete loss 1117 of the benefits of the BGP-PIC. To illustrate this fact suppose 1118 ASBR12 is no longer reachable in domain 1. If the platform supports 1119 the full hierarchy depth, the forwarding chain is the one depicted 1120 in Figure 5 and hence the FIB manager needs to backwalk one level to 1121 the pathlist shared by "ePE1" and "ePE2" and adjust it. If the 1122 platform supports 2 levels of hierarchy, then a useable forwarding 1123 chain is the one depicted in Figure 6. In that case, if ASBR12 is no 1124 longer reachable, the FIB manager has to backwalk to the two 1125 flattened pathlists and updates both of them. 1127 The main observation is that the loss of convergence speed due to 1128 the loss of hierarchy depth depends on the structure of the 1129 forwarding chain itself. To illustrate this fact, let's take two 1130 extremes. Suppose the forwarding objects in level i+1 depend on the 1131 forwarding objects in level i. If every object on level i+1 depends 1132 on a separate object in level i, then flattening level i into level 1133 i+1 will not result in loss of convergence speed. Now let's take the 1134 other extreme. Suppose "n" objects in level i+1 depend on 1 object 1135 in level i. Now suppose FIB flattens level i into level i+1. If a 1136 topology change results in modifying the single object in level i, 1137 then FIB has to backwalk and modify "n" objects in the flattened 1138 level, thereby losing all the benefit of BGP-PIC. Experience shows 1139 that flattening forwarding chains usually results in moderate loss 1140 of BGP-PIC benefits. Further analysis is needed to corroborate and 1141 quantify this statement. 1143 7. Properties 1145 7.1. Coverage 1147 All the possible failures, except CE node failure, are covered, 1148 whether they impact a local or remote IGP path or a local or remote 1149 BGP next-hop as described in Section 6. This section provides 1150 details for each failure and now the hierarchical and shared FIB 1151 structure proposed in this document allows recovery that does not 1152 depend on number of BGP prefixes. 1154 7.1.1. A remote failure on the path to a BGP next-hop 1156 Upon IGP convergence, the IGP leaf for the BGP next-hop is updated 1157 upon IGP convergence and all the BGP depending routes leverage the 1158 new IGP forwarding state immediately. Details of this behavior can 1159 be found in Section 6.1. 
1161 This BGP resiliency property only depends on IGP convergence and is 1162 independent of the number of BGP prefixes impacted. 1164 7.1.2. A local failure on the path to a BGP next-hop 1166 Upon LFA protection, the IGP leaf for the BGP next-hop is updated to 1167 use the precomputed LFA backup path and all the BGP depending routes 1168 leverage this LFA protection. Details of this behavior can be found 1169 in Section 6.1. 1171 This BGP resiliency property only depends on LFA protection and is 1172 independent of the number of BGP prefixes impacted. 1174 7.1.3. A remote iBGP next-hop fails 1176 Upon IGP convergence, the IGP leaf for the BGP next-hop is deleted 1177 and all the depending BGP Path-Lists are updated to either use the 1178 remaining ECMP BGP best-paths or if none remains available to 1179 activate precomputed backups. Details about this behavior can be 1180 found in Section 6.2.1. 1182 This BGP resiliency property only depends on IGP convergence and is 1183 independent of the number of BGP prefixes impacted. 1185 7.1.4. A local eBGP next-hop fails 1187 Upon local link failure detection, the adjacency to the BGP next-hop 1188 is deleted and all the depending BGP pathlists are updated to either 1189 use the remaining ECMP BGP best-paths or if none remains available 1190 to activate precomputed backups. Details about this behavior can be 1191 found in Section 6.2.2. 1193 This BGP resiliency property only depends on local link failure 1194 detection and is independent of the number of BGP prefixes impacted. 1196 7.2. Performance 1198 When the failure is local (a local IGP next-hop failure or a local 1199 eBGP next-hop failure), a pre-computed and pre-installed backup is 1200 activated by a local-protection mechanism that does not depend on 1201 the number of BGP destinations impacted by the failure. Sub-50msec 1202 is thus possible even if millions of BGP routes are impacted. 1204 When the failure is remote (a remote IGP failure not impacting the 1205 BGP next-hop or a remote BGP next-hop failure), an alternate path is 1206 activated upon IGP convergence. All the impacted BGP destinations 1207 benefit from a working alternate path as soon as the IGP convergence 1208 occurs for their impacted BGP next-hop even if millions of BGP 1209 routes are impacted. 1211 Appendix A puts the BGP PIC benefits in perspective by providing 1212 some results using actual numbers. 1214 7.3. Automated 1216 The BGP PIC solution does not require any operator involvement. The 1217 process is entirely automated as part of the FIB implementation. 1219 The salient points enabling this automation are: 1221 o Extension of the BGP Best Path to compute more than one primary 1222 ([11]and [12]) or backup BGP next-hop ([6] and [13]). 1224 o Sharing of BGP Path-list across BGP destinations with same 1225 primary and backup BGP next-hop 1227 o Hierarchical indirection and dependency between BGP pathlist and 1228 IGP pathlist 1230 7.4. Incremental Deployment 1232 As soon as one router supports BGP PIC solution, it benefits from 1233 all its benefits without any requirement for other routers to 1234 support BGP PIC. 1236 8. Security Considerations 1238 The behavior described in this document is internal functionality 1239 to a router that result in significant improvement to convergence 1240 time as well as reduction in CPU and memory used by FIB while not 1241 showing change in basic routing and forwarding functionality. 
As 1242 such no additional security risk is introduced by using the 1243 mechanisms proposed in this document. 1245 9. IANA Considerations 1247 No requirements for IANA 1249 10. Conclusions 1251 This document proposes a hierarchical and shared forwarding chain 1252 structure that allows achieving BGP prefix independent 1253 convergence, and in the case of locally detected failures, sub-50 1254 msec convergence. A router can construct the forwarding chains in 1255 a completely transparent manner with zero operator intervention 1256 thereby supporting smooth and incremental deployment. 1258 11. References 1260 11.1. Normative References 1262 [1] Bradner, S., "Key words for use in RFCs to Indicate 1263 Requirement Levels", BCP 14, RFC 2119, March 1997. 1265 [2] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway Protocol 1266 4 (BGP-4), RFC 4271, January 2006 1268 [3] Bates, T., Chandra, R., Katz, D., and Rekhter Y., 1269 "Multiprotocol Extensions for BGP", RFC 4760, January 2007 1271 [4] Y. Rekhter and E. Rosen, " Carrying Label Information in BGP- 1272 4", RFC 3107, May 2001 1274 [5] Andersson, L., Minei, I., and B. Thomas, "LDP Specification", 1275 RFC 5036, October 2007 1277 11.2. Informative References 1279 [6] Marques,P., Fernando, R., Chen, E, Mohapatra, P., Gredler, H., 1280 "Advertisement of the best external route in BGP", draft-ietf- 1281 idr-best-external-05.txt, January 2012. 1283 [7] Wu, J., Cui, Y., Metz, C., and E. Rosen, "Softwire Mesh 1284 Framework", RFC 5565, June 2009. 1286 [8] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private 1287 Networks (VPNs)", RFC 4364, February 2006. 1289 [9] De Clercq, J. , Ooms, D., Prevost, S., Le Faucheur, F., 1290 "Connecting IPv6 Islands over IPv4 MPLS Using IPv6 Provider 1291 Edge Routers (6PE)", RFC 4798, February 2007 1293 [10] O. Bonaventure, C. Filsfils, and P. Francois. "Achieving sub- 1294 50 milliseconds recovery upon bgp peering link failures, " 1295 IEEE/ACM Transactions on Networking, 15(5):1123-1135, 2007 1297 [11] D. Walton, A. Retana, E. Chen, J. Scudder, "Advertisement of 1298 Multiple Paths in BGP", draft-ietf-idr-add-paths-12.txt, 1299 November 2015 1301 [12] R. Raszuk, R. Fernando, K. Patel, D. McPherson, K. Kumaki, 1302 "Distribution of diverse BGP paths", RFC 6774, November 2012 1304 [13] P. Mohapatra, R. Fernando, C. Filsfils, and R. Raszuk, "Fast 1305 Connectivity Restoration Using BGP Add-path", draft-pmohapat- 1306 idr-fast-conn-restore-03, Jan 2013 1308 [14] C. Filsfils, S. Previdi, A. Bashandy, B. Decraene, S. 1309 Litkowski, M. Horneffer, R. Shakir, J. Tansura, E. Crabbe 1310 "Segment Routing with MPLS data plane", draft-ietf-spring- 1311 segment-routing-mpls-02 (work in progress), October 2015 1313 [15] C. Filsfils, S. Previdi, A. Bashandy, B. Decraene, " Topology 1314 Independent Fast Reroute using Segment Routing", draft- 1315 francois-spring-segment-routing-ti-lfa-02 (work in progress), 1316 August 2015 1318 [16] M. Shand and S. Bryant, "IP Fast Reroute Framework", RFC 5714, 1319 January 2010 1321 [17] S. Bryant, C. Filsfils, S. Previdi, M. Shand, N So, " Remote 1322 Loop-Free Alternate (LFA) Fast Reroute (FRR)", RFC 7490 April 1323 2015 1325 [18] A. Atlas, C. Bowers, G. Enyedi, " An Architecture for IP/LDP 1326 Fast-Reroute Using Maximally Redundant Trees", draft-ietf- 1327 rtgwg-mrt-frr-architecture-10 (work in progress), February 1328 2016 1330 12. 
Acknowledgments

   Special thanks to Neeraj Malhotra and Yuri Tsier for their valuable help.

   Special thanks to Bruno Decraene for his valuable comments.

   This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

   Ahmed Bashandy
   Cisco Systems
   170 West Tasman Dr, San Jose, CA 95134, USA
   Email: bashandy@cisco.com

   Clarence Filsfils
   Cisco Systems
   Brussels, Belgium
   Email: cfilsfil@cisco.com

   Prodosh Mohapatra
   Sproute Networks
   Email: mpradosh@yahoo.com

Appendix A. Perspective

   The following table puts the BGP PIC benefits in perspective assuming:

   o 1M impacted BGP prefixes

   o IGP convergence ~ 500 msec

   o Local protection ~ 50 msec

   o FIB update per BGP destination ~ 100 usec conservative, ~ 10 usec optimistic

   o BGP convergence per BGP destination ~ 200 usec conservative, ~ 100 usec optimistic

                          Without PIC        With PIC

   Local IGP Failure      10 to 100 sec      50 msec

   Local BGP Failure      100 to 200 sec     50 msec

   Remote IGP Failure     10 to 100 sec      500 msec

   Remote BGP Failure     100 to 200 sec     500 msec

   Upon a local IGP next-hop failure or a remote IGP next-hop failure, the existing primary BGP next-hop is intact and usable; hence the resiliency only depends on the ability of the FIB mechanism to reflect the new path to the BGP next-hop to the depending BGP destinations. Without BGP PIC, a conservative back-of-the-envelope estimation for this FIB update is 100 usec per BGP destination. An optimistic estimation is 10 usec per entry.

   Upon a local BGP next-hop failure or a remote BGP next-hop failure, without the BGP PIC mechanism, a new BGP best path needs to be recomputed and new updates need to be sent to peers. This depends on BGP processing time that will be shared between best-path computation, RIB update, and peer update. A conservative back-of-the-envelope estimation for this is 200 usec per BGP destination. An optimistic estimation is 100 usec per entry.
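   The table can be reproduced with a short back-of-the-envelope calculation. The Python sketch below simply multiplies the assumed per-prefix costs by the 1M impacted prefixes and compares the result with the IGP-convergence and local-protection figures; it adds no data beyond the numbers stated above.

   # Back-of-the-envelope reproduction of the Appendix A figures.

   prefixes = 1000000                  # impacted BGP prefixes
   igp_convergence = 0.5               # seconds
   local_protection = 0.05             # seconds (~50 msec)

   fib_update = (10e-6, 100e-6)        # per-prefix FIB update: optimistic, conservative
   bgp_processing = (100e-6, 200e-6)   # per-prefix BGP best-path/RIB/peer update

   # Without PIC, restoration time scales with the number of prefixes.
   print("IGP next-hop failure, no PIC: %d to %d sec"
         % (prefixes * fib_update[0], prefixes * fib_update[1]))          # 10 to 100 sec
   print("BGP next-hop failure, no PIC: %d to %d sec"
         % (prefixes * bgp_processing[0], prefixes * bgp_processing[1]))  # 100 to 200 sec

   # With PIC, restoration is bounded by local protection or IGP convergence,
   # independent of the number of prefixes.
   print("Local failure, with PIC:  %d msec" % (local_protection * 1000))
   print("Remote failure, with PIC: %d msec" % (igp_convergence * 1000))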