idnits 2.17.1 draft-ietf-grow-diverse-bgp-path-dist-03.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (January 4, 2011) is 4861 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Unused Reference: 'RFC2119' is defined on line 821, but no explicit reference was found in the text == Unused Reference: 'RFC5226' is defined on line 831, but no explicit reference was found in the text ** Obsolete normative reference: RFC 5226 (Obsoleted by RFC 8126) == Outdated reference: A later version (-15) exists of draft-ietf-idr-add-paths-04 == Outdated reference: A later version (-05) exists of draft-ietf-idr-best-external-02 -- Unexpected draft version: The latest known version of draft-ietf-idr-route-oscillation is -00, but you're referring to -01. == Outdated reference: A later version (-03) exists of draft-pmohapat-idr-fast-conn-restore-00 Summary: 1 error (**), 0 flaws (~~), 6 warnings (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 GROW Working Group R. Raszuk, Ed. 3 Internet-Draft R. Fernando 4 Intended status: Informational K. Patel 5 Expires: July 8, 2011 Cisco Systems 6 D. McPherson 7 Verisign 8 K. Kumaki 9 KDDI Corporation 10 January 4, 2011 12 Distribution of diverse BGP paths. 13 draft-ietf-grow-diverse-bgp-path-dist-03 15 Abstract 17 The BGP4 protocol specifies the selection and propagation of a single 18 best path for each prefix. As defined today BGP has no mechanisms to 19 distribute paths other than the best path between its speakers. This 20 behaviour results in a number of disadvantages for new applications and 21 services. 23 This document presents an alternative mechanism for solving the 24 problem based on the concept of parallel route reflector planes. 25 Such planes can be built in parallel or they can co-exist on the 26 current route reflection platforms. The document also compares existing 27 solutions and proposed ideas that enable distribution of more paths 28 than just the best path. 30 This proposal does not specify any changes to the BGP protocol 31 definition. It does not require upgrades to provider edge or core 32 routers nor does it need network wide upgrades. The authors believe 33 that the GROW WG would be the best place for this work. 35 Status of this Memo 37 This Internet-Draft is submitted in full conformance with the 38 provisions of BCP 78 and BCP 79. 40 Internet-Drafts are working documents of the Internet Engineering 41 Task Force (IETF). Note that other groups may also distribute 42 working documents as Internet-Drafts.
The list of current Internet- 43 Drafts is at http://datatracker.ietf.org/drafts/current/. 45 Internet-Drafts are draft documents valid for a maximum of six months 46 and may be updated, replaced, or obsoleted by other documents at any 47 time. It is inappropriate to use Internet-Drafts as reference 48 material or to cite them other than as "work in progress." 49 This Internet-Draft will expire on July 8, 2011. 51 Copyright Notice 53 Copyright (c) 2011 IETF Trust and the persons identified as the 54 document authors. All rights reserved. 56 This document is subject to BCP 78 and the IETF Trust's Legal 57 Provisions Relating to IETF Documents 58 (http://trustee.ietf.org/license-info) in effect on the date of 59 publication of this document. Please review these documents 60 carefully, as they describe your rights and restrictions with respect 61 to this document. Code Components extracted from this document must 62 include Simplified BSD License text as described in Section 4.e of 63 the Trust Legal Provisions and are provided without warranty as 64 described in the Simplified BSD License. 66 Table of Contents 68 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 4 69 2. History . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 70 2.1. BGP Add-Paths Proposal . . . . . . . . . . . . . . . . . . 4 71 3. Goals . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 72 4. Multi plane route reflection . . . . . . . . . . . . . . . . . 6 73 4.1. Co-located best and backup path RRs . . . . . . . . . . . 9 74 4.2. Randomly located best and backup path RRs . . . . . . . . 10 75 4.3. Multi plane route servers for Internet Exchanges . . . . . 13 76 5. Discussion on current models of IBGP route distribution . . . 13 77 5.1. Full Mesh . . . . . . . . . . . . . . . . . . . . . . . . 13 78 5.2. Confederations . . . . . . . . . . . . . . . . . . . . . . 15 79 5.3. Route reflectors . . . . . . . . . . . . . . . . . . . . . 15 80 6. Deployment considerations . . . . . . . . . . . . . . . . . . 15 81 7. Summary of benefits . . . . . . . . . . . . . . . . . . . . . 17 82 8. Applications . . . . . . . . . . . . . . . . . . . . . . . . . 18 83 9. Security considerations . . . . . . . . . . . . . . . . . . . 18 84 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 18 85 11. Contributors . . . . . . . . . . . . . . . . . . . . . . . . . 19 86 12. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 19 87 13. References . . . . . . . . . . . . . . . . . . . . . . . . . . 19 88 13.1. Normative References . . . . . . . . . . . . . . . . . . . 19 89 13.2. Informative References . . . . . . . . . . . . . . . . . . 20 90 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 21 92 1. Introduction 94 The current BGP4 [RFC4271] protocol specification allows for the 95 selection and propagation of only one best path for each prefix. The 96 BGP protocol as defined today has no mechanism to distribute paths other 97 than the best path between its speakers. This behaviour results in a 98 number of problems in the deployment of new applications and 99 services. 101 This document presents an alternative mechanism for solving the 102 problem based on the concept of parallel route reflector planes. It 103 also compares existing solutions and proposed ideas that enable 104 distribution of more paths than just the best path.
The parallel 105 route reflector planes solution brings very significant benefits at a 106 negligible capex and opex deployment cost as compared to the 107 alternative techniques and is being considered by a number of network 108 operators for deployment in their networks. 110 This proposal does not specify any changes to the BGP protocol 111 definition. It does not require upgrades to provider edge or core 112 routers nor does it need network wide upgrades. The only upgrade 113 required is the new functionality on the new or current route 114 reflectors. The authors believe that the GROW WG would be the best 115 place for this work. 117 2. History 119 The need to disseminate more paths than just the best path is 120 primarily driven by three requirements. The first is the problem of BGP 121 oscillations [I-D.ietf-idr-route-oscillation]. The second is the 122 desire to reduce the time of reachability restoration in the event 123 of a network or network element failure. The third requirement is to 124 enhance BGP load balancing capabilities. Those reasons have led to 125 the proposal of BGP add-paths [I-D.ietf-idr-add-paths]. 127 2.1. BGP Add-Paths Proposal 129 As it has been proven that distribution of only the best path of a 130 route is not sufficient to meet the needs of the continuously growing 131 number of services carried over BGP, the add-paths proposal was 132 submitted in 2002 to enable BGP to distribute more than one path. 133 This is achieved by including as a part of the NLRI an additional 134 four octet value called the Path Identifier. 136 The implication of this change on a BGP implementation is that it 137 must now maintain per path, instead of per prefix, peer advertisement 138 state to track which of the peers each path was advertised to. This 139 new requirement has its own memory and processing cost. Suffice to 140 say that by the end of 2009 none of the commercial BGP implementations 141 could claim to support the new add-path behaviour in production code, 142 in part because of this resource overhead. 144 An important observation is that distribution of more than one best 145 path by Autonomous System Border Routers (ASBRs) with multiple EBGP 146 peers attached, where no "next hop self" is set, may result in 147 best-path selection inconsistency within the autonomous system. 148 Therefore it is also required to attach the possible tie breakers in the form of a new 149 attribute and propagate those within the 150 domain. An example of such an attribute for the purpose of fast 151 connectivity restoration, addressing that very case of an ASBR injecting 152 multiple external paths into the IBGP mesh, has been presented and 153 discussed in the Fast Connectivity Restoration Using BGP Add-paths 154 [I-D.pmohapat-idr-fast-conn-restore] document. Based on the additionally 155 propagated information the best path selection is also recommended to be 156 modified to make sure that best and backup path selection within the 157 domain stays consistent. More discussion on this particular point 158 will be contained in the deployment considerations section below. In 159 the solution proposed in this document we observe that in order to 160 address most of the applications only the use of best-external 161 advertisement is required. For ASBRs which peer with multiple 162 upstream ASes, setting "next hop self" is recommended. 164 The add-paths protocol extensions have to be implemented by all the 165 routers within an AS in order for the system to work correctly.
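For illustration, the sketch below shows the kind of per-path NLRI encoding that the add-paths proposal relies on: a four octet Path Identifier prepended to each prefix, so that several paths for the same prefix can be carried and tracked individually. This is only a rough Python sketch based on the encoding described in [I-D.ietf-idr-add-paths]; the function names are invented for this example and are not taken from any implementation.

   import struct
   from typing import List, Tuple

   def encode_add_paths_nlri(entries: List[Tuple[int, int, bytes]]) -> bytes:
       """Encode NLRI entries as <Path Identifier (4 octets), prefix length,
       prefix>, which is the extension the add-paths proposal introduces."""
       out = bytearray()
       for path_id, plen, prefix in entries:
           out += struct.pack("!I", path_id)      # 4-octet Path Identifier
           out.append(plen)                       # prefix length in bits
           out += prefix[: (plen + 7) // 8]       # minimal number of prefix octets
       return bytes(out)

   def decode_add_paths_nlri(data: bytes) -> List[Tuple[int, int, bytes]]:
       """Inverse operation; returns (path_id, prefix_len, prefix_octets)."""
       entries, i = [], 0
       while i < len(data):
           path_id = struct.unpack_from("!I", data, i)[0]
           plen = data[i + 4]
           nbytes = (plen + 7) // 8
           entries.append((path_id, plen, data[i + 5: i + 5 + nbytes]))
           i += 5 + nbytes
       return entries

   # Two paths for 192.0.2.0/24 that differ only in their Path Identifier;
   # the receiver must now keep per-path (not per-prefix) advertisement state.
   wire = encode_add_paths_nlri([(1, 24, bytes([192, 0, 2])),
                                 (2, 24, bytes([192, 0, 2]))])
   assert decode_add_paths_nlri(wire) == [(1, 24, bytes([192, 0, 2])),
                                          (2, 24, bytes([192, 0, 2]))]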
It 166 remains quite a research topic to analyze benefits or risk associated 167 with partial add-paths deployments. The risk becomes even greater in 168 networks not using some form of edge to edge encapsulation. 170 The required code modifications include enhancements such as the Fast 171 Connectivity Restoration Using BGP Add-path 172 [I-D.pmohapat-idr-fast-conn-restore]. The deployment of such 173 technology in an entire service provider network requires software 174 and perhaps sometimes in the cases of End-of-Engineering or End-of- 175 Life equipment even hardware upgrades. Such operation may or may not 176 be economically feasible. Even if add-path functionality was 177 available today on all commercial routing equipment and across all 178 vendors, experience indicates that to achieve 100% deployment 179 coverage within any medium or large global network may easily take 180 years. 182 While it needs to be clearly acknowledged that the add-path mechanism 183 provides the most general way to address the problem of distributing 184 many paths between BGP speakers, this document provides a much easier 185 to deploy solution that requires no modification to the BGP protocol 186 where only a few additional paths may be required. The alternative 187 method presented is capable of addressing critical service provider 188 requirements for disseminating more than a single path across an AS 189 with a significantly lower deployment cost. 191 3. Goals 193 The proposal described in this document is not intended to compete 194 with add-paths. Instead if deployed it is to be used as a very easy 195 method to accommodate the majority of applications which may require 196 presence of alternative BGP exit points. 198 It is presented to network operators as a possible choice and 199 provides those operators who need additional paths today an 200 alternative from the need to transition to a full mesh. 202 It is intended as a way to buy more time allowing for a smoother and 203 gradual migration where router upgrades will be required for perhaps 204 different reasons. It will also allow the time required where 205 standard RP/RE memory size can easily accommodate the associated 206 overhead with other techniques without any compromises. 208 4. Multi plane route reflection 210 The idea contained in the proposal assumes the use of route 211 reflection within the network. Other techniques as described in the 212 following sections already provide means for distribution of 213 alternate paths today. 215 Let's observe today's picture of simple route reflected domain: 217 ASBR3 218 *** 219 * * 220 +------------* *-----------+ 221 | AS1 * * | 222 | *** | 223 | | 224 | | 225 | | 226 | RR1 *** RR2 | 227 | *** * * *** | 228 |* * * P * * *| 229 |* * * * * *| 230 | *** *** *** | 231 | | 232 | IBGP | 233 | | 234 | | 235 | *** *** | 236 | * * * * | 237 +-----* *---------* *----+ 238 * * * * 239 *** *** 240 ASBR1 ASBR2 241 EBGP 243 Figure1: Simple route reflection 245 Figure 1 shows an AS that is connected via EBGP peering at ASBR1 and 246 ASBR2 to an upstream AS or set of ASes. For a given destination "D" 247 ASBR1 and ASBR2 will each have an external path P1 and P2 248 respectively. The AS network uses two route reflectors RR1 and RR2 249 for redundancy reasons. The route reflectors propagate the single 250 BGP best path for each route to all clients. All ASBRs are clients 251 of RR1 and RR2. 253 Below are the possible cases of the path information that ASBR3 may 254 receive from route reflectors RR1 and RR2: 256 1. 
When the best path tie breaker is the IGP distance: When paths P1 and 257 P2 are considered to be equally good best path candidates the 258 selection will depend on the distance of the path next-hops from 259 the route reflector making the decision. Depending on the 260 positioning of the route reflectors in the IGP topology they may 261 choose the same best path or a different one. In such a case 262 ASBR3 may receive either the same path or different paths from 263 each of the route reflectors. 265 2. When the best path tie breaker is the Multi-Exit-Discriminator or Local 266 Preference: In this case only one path, from the preferred exit point 267 ASBR, will be available to the RRs, since the other peering ASBR will 268 consider the IBGP path as best and will not announce (or if 269 already announced will withdraw) its own external path. The 270 exception here is the use of the BGP Best-External proposal, which 271 will allow the stated ASBR to still propagate its own 272 external path to the RRs. Unfortunately the RRs will not be able to distribute 273 it any further to other clients as only the overall best path 274 will be reflected. 276 The proposed solution is based on the use of additional route 277 reflectors or new functionality enabled on the existing route 278 reflectors that instead of distributing the best path for each route 279 will distribute an alternative path other than the best. The best path 280 (main) reflector plane distributes the best path for each route as it 281 does today. The second plane distributes the second best path for 282 each route and so on. Distribution of N paths for each route can be 283 achieved by using N reflector planes. 285 Each plane of route reflectors is a logical entity and may or may not 286 be co-located with the existing best path route reflectors. Adding a 287 route reflector plane to a network may be as easy as enabling a 288 logical router partition, a new BGP process or just a new configuration 289 knob on an existing route reflector and configuring an additional 290 IBGP session from the current clients if required. There are no code 291 changes required on the route reflector clients for this mechanism to 292 work. It is easy to observe that the installation of one or more 293 additional route reflector control planes is much cheaper and 294 easier than upgrading 100s of route reflector clients in 295 the entire network to support a different protocol encoding. 297 Diverse path route reflectors need the new ability to calculate and 298 propagate the Nth best path instead of the overall best path. An 299 implementation is encouraged to enable this new functionality on a 300 per neighbor basis. 302 While this is an implementation detail, the code to calculate the Nth 303 best path is also required by other BGP solutions. For example in 304 the application of fast connectivity restoration BGP must calculate a 305 backup path for installation into the RIB and FIB ahead of the actual 306 failure. 308 To address the problem of external paths not being available to route 309 reflectors due to local preference or MED factors it is recommended 310 that ASBRs enable the best-external functionality in order to always 311 inject their external paths to the route reflectors.
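Before moving to the concrete deployment models, the following minimal sketch shows one way the Nth best path computation mentioned above could be performed: the usual decision process is simply re-run while excluding the paths already selected, so plane 1 advertises the first result, plane 2 the second, and so on. The attribute set and the tie-breaking order are heavily reduced for illustration, and none of the names come from an actual implementation.

   from typing import List, Optional

   class Path:
       def __init__(self, nexthop: str, local_pref: int, as_path_len: int,
                    igp_metric: int, router_id: str):
           self.nexthop = nexthop
           self.local_pref = local_pref
           self.as_path_len = as_path_len
           self.igp_metric = igp_metric
           self.router_id = router_id

   def better(a: Path, b: Path, ignore_igp_metric: bool = False) -> Path:
       """Tiny stand-in for the BGP decision process: higher LOCAL_PREF wins,
       then shorter AS_PATH, then (optionally) lower IGP metric to the next
       hop, then the lowest originating router-id as the final tie-breaker."""
       if a.local_pref != b.local_pref:
           return a if a.local_pref > b.local_pref else b
       if a.as_path_len != b.as_path_len:
           return a if a.as_path_len < b.as_path_len else b
       if not ignore_igp_metric and a.igp_metric != b.igp_metric:
           return a if a.igp_metric < b.igp_metric else b
       return a if a.router_id < b.router_id else b

   def nth_best(paths: List[Path], n: int,
                ignore_igp_metric: bool = False) -> Optional[Path]:
       """Return the Nth best path (n=1 is the classic best path) by running
       the selection repeatedly while excluding previously chosen paths."""
       remaining = list(paths)
       winner = None
       for _ in range(n):
           if not remaining:
               return None
           winner = remaining[0]
           for p in remaining[1:]:
               winner = better(winner, p, ignore_igp_metric)
           remaining.remove(winner)
       return winner

   # Plane 1 advertises nth_best(paths, 1), the 2nd plane nth_best(paths, 2).
   paths = [Path("192.0.2.1", 100, 2, 10, "10.0.0.1"),
            Path("192.0.2.2", 100, 2, 20, "10.0.0.2")]
   assert nth_best(paths, 2).nexthop == "192.0.2.2"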
313 4.1. Co-located best and backup path RRs 315 To simplify the description let's assume that we only use two route 316 reflector planes (N=2). When co-located, the additional 2nd best path 317 reflectors are connected to the network at the same points from the 318 perspective of the IGP as the existing best path RRs. Let's also 319 assume that best-external is enabled on all ASBRs. 321 ASBR3 322 *** 323 * * 324 +------------* *-----------+ 325 | AS1 * * | 326 | *** | 327 | | 328 | RR1 RR2 | 329 | *** *** | 330 |* * *** * *| 331 |* * * * * *| 332 | *** * P * *** | 333 |* * * * * *| 334 |* * *** * *| 335 | *** *** | 336 | RR1' IBGP RR2'| 337 | | 338 | | 339 | *** *** | 340 | * * * * | 341 +-----* *---------* *----+ 342 * * * * 343 *** *** 344 ASBR1 ASBR2 346 EBGP 348 Figure 2: Co-located 2nd best RR plane 350 The following is a list of configuration changes required to enable 351 the 2nd best path route reflector plane: 353 1. Unless the same RR1/RR2 platform is being used, adding RR1' and RR2' 354 either as logical or physical new control plane RRs at the same 355 IGP points as RR1 and RR2 respectively. 357 2. Enabling best-external on ASBRs 359 3. Enabling RR1' and RR2' for 2nd plane route reflection. 360 Alternatively, instructing the existing RR1 and RR2 to also calculate the 361 2nd best path. 363 4. Unless one of the existing RRs is turned to advertise only the 364 diverse path to its current clients, configuring new ASBR-RR' 365 IBGP sessions 367 The expected behaviour is that under any BGP condition the ASBR3 and 368 P routers will receive both paths P1 and P2 for destination D. The 369 availability of both paths will allow them to implement a number of 370 new services as listed in the applications section below. 372 As an alternative to fully meshing all RRs and RRs' an operator who 373 has a large number of reflectors deployed today may choose to peer 374 newly introduced RRs' to a hierarchical RR' which would be an IBGP 375 interconnect point within the 2nd plane as well as between planes. 377 One of the deployment models of this scenario can be achieved by a 378 simple upgrade of the existing route reflectors without the need to 379 deploy any new logical or physical platforms. Such an upgrade would 380 allow route reflectors to service both peers upgraded to add-paths as 381 well as those peers which can not be immediately upgraded, while at 382 the same time allowing distribution of more than the single best path. The 383 obvious protocol benefit of using existing RRs to distribute best and 384 diverse BGP paths towards their clients over different IBGP sessions 385 is the automatic assurance that such a client would always get 386 different paths, each with a different next hop. 388 The way to accomplish this would be to create a separate IBGP session 389 for each N-th BGP path. Such a session should preferably be terminated 390 at a different loopback address of the route reflector. At the BGP 391 OPEN stage of each such session a different bgp_router_id may be 392 used. Correspondingly, the route reflector should also allow its clients 393 to use the same bgp_router_id on each such session.
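The sketch below illustrates this per-plane session model, assuming an upgraded route reflector that terminates one IBGP session per plane on its own loopback, presents a per-plane bgp_router_id, and advertises on plane N only the Nth best path for every route. The addresses and helper names are purely illustrative and not taken from any implementation.

   from dataclasses import dataclass
   from typing import Dict, List

   @dataclass
   class PlaneSession:
       plane: int            # 1 = best path plane, 2 = 2nd best plane, ...
       local_loopback: str   # session terminated on a per-plane loopback
       local_router_id: str  # bgp_router_id announced in the OPEN for this plane

   def build_plane_sessions(loopbacks: List[str],
                            router_ids: List[str]) -> List[PlaneSession]:
       """One IBGP session per reflector plane towards a given client."""
       return [PlaneSession(plane=i + 1, local_loopback=lo, local_router_id=rid)
               for i, (lo, rid) in enumerate(zip(loopbacks, router_ids))]

   def advertisements(ranked_paths: Dict[str, List[str]],
                      sessions: List[PlaneSession]) -> Dict[int, Dict[str, str]]:
       """For each plane session pick the path ranked at that plane's position.
       ranked_paths maps prefix -> paths already ordered best, 2nd best, ..."""
       per_plane: Dict[int, Dict[str, str]] = {}
       for s in sessions:
           per_plane[s.plane] = {
               prefix: ranked[s.plane - 1]
               for prefix, ranked in ranked_paths.items()
               if len(ranked) >= s.plane
           }
       return per_plane

   # Two planes on one upgraded reflector: the client configures two IBGP
   # sessions, one per loopback, and is guaranteed a different path on each.
   sessions = build_plane_sessions(["198.51.100.1", "198.51.100.2"],
                                   ["198.51.100.1", "198.51.100.2"])
   adv = advertisements({"203.0.113.0/24": ["via ASBR1", "via ASBR2"]}, sessions)
   # adv[1] carries the best paths, adv[2] the diverse (2nd best) paths.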
395 4.2. Randomly located best and backup path RRs 397 Now let's consider a deployment case where an operator wishes to 398 enable a 2nd RR' plane using only a single additional router in a 399 different network location from its current route reflectors. This 400 model would be of particular use in networks where some form of end- 401 to-end encapsulation (IP or MPLS) is enabled between provider edge 402 routers. 404 Note that this model of operation assumes that the present best path 405 route reflectors are only control plane devices. If the route 406 reflector is in the data forwarding path then the implementation must 407 be able to clearly separate the Nth best-path selection from the 408 selection of the paths to be used for data forwarding. The basic 409 premise of this mode of deployment assumes that all reflector planes 410 have the same information to choose from, which includes the same set 411 of BGP paths. It also requires the ability to ignore the step of 412 comparison of the IGP metric to reach the BGP next hop during best- 413 path calculation. 415 ASBR3 416 *** 417 * * 418 +------------* *-----------+ 419 | AS1 * * | 420 | IBGP *** | 421 | | 422 | *** | 423 | * * | 424 | RR1 * P * RR2 | 425 | *** * * *** | 426 |* * *** * *| 427 |* * * *| 428 | *** RR' *** | 429 | *** | 430 | * * | 431 | * * | 432 | *** | 433 | *** *** | 434 | * * * * | 435 +-----* *---------* *----+ 436 * * * * 437 *** *** 438 ASBR1 ASBR2 440 EBGP 442 Figure 3: Experimental deployment of 2nd best RR 444 The following is a list of configuration changes required to enable 445 the 2nd best path route reflector RR' as a single platform or to 446 enable one of the existing control plane RRs for diverse-path 447 functionality: 449 1. If needed, adding RR' (logical or physical) as a new route reflector 450 anywhere in the network 452 2. Enabling best-external on ASBRs 454 3. Disabling the IGP metric check in BGP best path selection on all route 455 reflectors. 457 4. Enabling RR' or any of the existing RRs for 2nd plane path 458 calculation 460 5. If required, fully meshing newly added RRs' with all the other 461 reflectors in both planes. That condition does not apply if the 462 newly added RR'(s) already have peering to all ASBRs/PEs. 464 6. Unless one of the existing RRs is turned to advertise only the 465 diverse path to its current clients, configuring new ASBR-RR' 466 IBGP sessions 468 In this scenario the operator has the flexibility to introduce the 469 new additional route reflector functionality on any existing or new 470 hardware in the network. Any of the existing routers that are not 471 already members of the best path route reflector plane can be easily 472 configured to serve the 2nd plane either via a logical / 473 virtual router partition or by having their BGP implementation 474 compliant with this specification. 476 Even if the IGP metric is not taken into consideration when comparing 477 paths during the bestpath calculation, an implementation still has to 478 consider paths with unreachable nexthops as invalid. It is worth 479 pointing out that some implementations today already allow for 480 configuration which results in no IGP metric comparison during the 481 best path calculation. 483 The additional planes of route reflectors do not need to be fully 484 redundant as the primary one is. If we are preparing for a single 485 network failure event, a failure of a non-backed-up N-th best-path 486 route reflector would not result in a connectivity outage of the 487 actual data plane. The reason is that this would at most affect the 488 presence of a backup path (not an active one) on some parts of the 489 network. If the operator chooses to build the N-th best path plane 490 redundantly by installing not one, but two or more route reflectors 491 serving each additional plane, additional robustness will be 492 achieved. 494 As a result of this solution ASBR3 and other ASBRs peering to RR' 495 will be receiving the 2nd best path.
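The toy example below illustrates why step 3 of the list above (disabling the IGP metric check) matters in this deployment model: with the metric step enabled, two reflectors located at different points of the IGP topology may rank the same pair of paths differently, while ignoring that step gives every plane an identical ranking to start from. The decision process is reduced to two steps purely for illustration and all names are hypothetical.

   from typing import Dict, List

   def best_path(paths: List[str],
                 igp_cost_from_rr: Dict[str, int],
                 router_id: Dict[str, str],
                 ignore_igp_metric: bool) -> str:
       """Reduced decision process: optionally compare the IGP cost to the
       next hop (which differs per reflector location), then fall back to the
       lowest originating router-id, which is the same everywhere."""
       candidates = list(paths)
       if not ignore_igp_metric:
           lowest = min(igp_cost_from_rr[p] for p in candidates)
           candidates = [p for p in candidates if igp_cost_from_rr[p] == lowest]
       return min(candidates, key=lambda p: router_id[p])

   paths = ["P1", "P2"]                    # external paths via ASBR1 / ASBR2
   rid = {"P1": "10.0.0.1", "P2": "10.0.0.2"}
   cost_rr = {"P1": 10, "P2": 20}          # RR1 sits close to ASBR1
   cost_rr_prime = {"P1": 20, "P2": 10}    # RR' sits close to ASBR2

   # With the IGP metric step enabled the two vantage points disagree ...
   assert best_path(paths, cost_rr, rid, ignore_igp_metric=False) == "P1"
   assert best_path(paths, cost_rr_prime, rid, ignore_igp_metric=False) == "P2"
   # ... while ignoring it gives every plane the same ranking to start from.
   assert best_path(paths, cost_rr, rid, ignore_igp_metric=True) == \
          best_path(paths, cost_rr_prime, rid, ignore_igp_metric=True)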
497 Similarly to section 4.1, as an alternative to fully meshing all RRs and 498 RRs', an operator who may have a large number of reflectors already 499 deployed today may choose to peer newly introduced RRs' to a 500 hierarchical RR', which would be an IBGP interconnect point between 501 planes. 503 4.3. Multi plane route servers for Internet Exchanges 505 Another group of devices where the proposed multi-plane architecture 506 may be of particular applicability is EBGP route servers used at 507 many Internet exchange points. 509 In such cases 100s of ISPs are interconnected on a common LAN. 510 Instead of having 100s of direct EBGP sessions on each exchange 511 client, a single peering is created to the transparent route server. 512 The route server can only propagate a single best path. Mandating 513 the upgrade for 100s of different service providers in order to 514 implement add-path may be much more difficult than asking 515 them to provision one new EBGP session to an Nth best-path route 516 server plane. That will allow distribution of more than the single best 517 BGP path from a given route server to such an IX peer. 519 The solution proposed in this document fits very well with the 520 requirement of having broader EBGP path diversity among the members 521 of any Internet Exchange Point. 523 5. Discussion on current models of IBGP route distribution 525 In today's networks BGP4 operates as specified in [RFC4271]. 527 There are a number of technology choices for intra-AS BGP route 528 distribution: 530 1. Full mesh 532 2. Confederations 534 3. Route reflectors 536 5.1. Full Mesh 538 A full mesh, the most basic iBGP architecture, exists when all the 539 BGP speaking routers within the AS peer directly with all other BGP 540 speaking routers within the AS, irrespective of where a given router 541 resides within the AS (e.g., P router, PE router, etc.). 543 While this is the simplest intra-domain path distribution method, 544 historically there have been a number of challenges in realizing such 545 an IBGP full mesh in a large scale network. While some of these 546 challenges are no longer applicable today, some may still apply, 547 including the following: 549 1. Number of TCP sessions: The number of IBGP sessions on a single 550 router in a full mesh topology of a large scale service provider 551 can easily reach 100s. While on hardware and software used in 552 the late 70s, 80s and 90s such numbers could be of concern, today 553 customer requirements for the number of BGP sessions per box are 554 reaching 1000s. This is already an order of magnitude more than 555 the potential number of IBGP sessions. Advancements in hardware 556 and software used in production routers mean that running a full 557 mesh of IBGP sessions should not be dismissed due to the 558 resulting number of TCP sessions alone. 560 2. Provisioning: When operating and troubleshooting large networks 561 one of the top-most requirements is to keep the design as simple 562 as possible. When the autonomous system's network is composed of 563 hundreds of nodes it becomes very difficult to manually provision 564 a full mesh of IBGP sessions. Adding or removing a router 565 requires reconfiguration of all the other routers in the AS. 566 While this is a real concern today there is already work in 567 progress in the IETF to define IBGP peering automation through an 568 IBGP Auto Discovery [I-D.raszuk-idr-ibgp-auto-mesh] mechanism. 570 3.
Number of paths: Another concern when deploying a full IBGP mesh 571 is the number of BGP paths for each route that have to be stored 572 at every node. This number is very tightly related to the number 573 of external peerings of an AS, the use of local preference or 574 multi-exit-discriminator techniques and the presence of best- 575 external [I-D.ietf-idr-best-external] advertisement 576 configuration. If we make a rough assumption that the BGP4 path 577 data structure consumes about 80-100 bytes the resulting control 578 plane memory requirement for 500,000 IPv4 routes with one 579 additional external path is 38-48 MB while for 1 million IPv4 580 routes it grows linearly to 76-95 MB. It is not possible to 581 reach a general conclusion as to whether this condition is negligible or 582 a show stopper for a full mesh deployment without direct 583 reference to a given network. 585 To summarize, a full mesh IBGP peering can offer natural 586 dissemination of multiple external paths among BGP speakers. When 587 realized with the help of IBGP Auto Discovery peering automation this 588 seems like a viable deployment especially in medium and small scale 589 networks. 591 5.2. Confederations 593 For the purpose of this document let's observe that confederations 594 [RFC5065] can be viewed as a hierarchical full mesh model. 596 Within each sub-AS BGP speakers are fully meshed and as discussed in 597 section 5.1 all full mesh characteristics (number of TCP sessions, 598 provisioning and the potential concern over the number of paths) still apply 599 at the sub-AS scale. 601 In addition to the direct peering of all BGP speakers within each 602 sub-AS, all sub-AS border routers must also be fully meshed with each 603 other. Sub-AS border routers configured with best-external 604 functionality can inject additional exit paths within a sub-AS. 606 To summarize, it is technically sound to use confederations with the 607 combination of best-external to achieve distribution of more than a 608 single best path per route in large autonomous systems. 610 In topologies where route reflectors are deployed within the 611 confederation sub-ASes the technique described here does apply. 613 5.3. Route reflectors 615 The main motivation behind the use of route reflectors [RFC4456] is 616 the avoidance of the full mesh session management problem described 617 above. Route reflectors, for good or for bad, are the most common 618 solution today for interconnecting BGP speakers within an internal 619 routing domain. 621 Route reflector peerings follow the advertisement rules defined by 622 the BGP4 protocol. As a result only a single best path per prefix is 623 sent to client BGP peers. That is the main reason why many current 624 networks are exposed to a phenomenon called BGP path starvation which 625 essentially results in the inability to deliver a number of applications 626 discussed later. 628 The route reflection equivalent when interconnecting BGP speakers 629 between domains is popularly called the Route Server and is globally 630 deployed today in many Internet exchange points. 632 6. Deployment considerations 634 The diverse BGP path dissemination proposal allows the distribution 635 of more paths than just the best-path to route reflector or route 636 server clients of today's BGP4 implementations.
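As an illustration of what a route reflector client can do with the extra path, the sketch below installs the path learned from the best path plane as the active route and the path learned from the 2nd plane as a pre-resolved backup, promoting the backup locally when the active next hop fails. This anticipates the backup path behaviour and the fast connectivity restoration application discussed later in this document; all names and addresses are hypothetical.

   from dataclasses import dataclass
   from typing import Dict, List, Optional, Tuple

   @dataclass
   class LearnedPath:
       prefix: str
       nexthop: str
       plane: int    # 1 = learned from the best path RR, 2 = from RR', ...

   def install(rib: Dict[str, Tuple[str, Optional[str]]],
               paths: List[LearnedPath]) -> None:
       """Per prefix, install the plane-1 path as active and the plane-2 path
       (if present) as a pre-resolved backup; assumes plane 1 always exists."""
       by_prefix: Dict[str, Dict[int, str]] = {}
       for p in paths:
           by_prefix.setdefault(p.prefix, {})[p.plane] = p.nexthop
       for prefix, planes in by_prefix.items():
           rib[prefix] = (planes[1], planes.get(2))    # (active, backup)

   def on_nexthop_failure(rib: Dict[str, Tuple[str, Optional[str]]],
                          failed_nexthop: str) -> None:
       """Local repair: promote the pre-installed backup when the active next
       hop becomes unreachable, without waiting for BGP to reconverge."""
       for prefix, (active, backup) in rib.items():
           if active == failed_nexthop and backup is not None:
               rib[prefix] = (backup, None)

   rib: Dict[str, Tuple[str, Optional[str]]] = {}
   install(rib, [LearnedPath("203.0.113.0/24", "192.0.2.1", plane=1),
                 LearnedPath("203.0.113.0/24", "192.0.2.2", plane=2)])
   on_nexthop_failure(rib, "192.0.2.1")
   assert rib["203.0.113.0/24"] == ("192.0.2.2", None)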
638 From the client's point of view receiving additional paths via 639 separate IBGP sessions terminated at the new route reflector plane 640 is functionally equivalent to constructing a full mesh peering, 641 without the set of problems that such a full mesh would come with, 642 as discussed in the earlier section. 644 By precisely defining the number of reflector planes, network 645 operators have full control over the number of redundant paths in the 646 network. This number can be defined to address the needs of the 647 service(s) being deployed. 649 The Nth plane route reflectors should be acting as control plane 650 network entities. While they can be provisioned on the current 651 production routers, selected Nth best BGP paths should not be used 652 directly in the data plane, with the exception of such paths being BGP 653 multipath eligible and such functionality being enabled. On RRs that are 654 in the data plane, unless multipath is enabled, the 2nd best path is 655 expected to be a backup path and should be installed as such into the 656 local RIB/FIB. 658 The proposed architecture deployed along with the BGP best-external 659 functionality covers all three cases where the classic BGP route 660 reflection paradigm would fail to distribute alternate exit point 661 paths. 663 1. ASBRs advertising their single best external paths with no local- 664 preference or multi-exit-discriminator present. 666 2. ASBRs advertising their single best external paths with local- 667 preference or multi-exit-discriminator present and with BGP best- 668 external functionality enabled. 670 3. ASBRs with multiple external paths. 672 Let's discuss the 3rd case above in more detail. This describes the 673 scenario of a single ASBR connected to multiple EBGP peers. In 674 practice this peering scenario is quite common. It is mostly due to 675 the geographic location of EBGP peers and the diversity of those 676 peers (for example peering to multiple tier 1 ISPs etc.). It is 677 not designed for failure recovery scenarios as a single failure of the 678 ASBR would simultaneously result in loss of connectivity to all of 679 the peers. In most medium and large geographically distributed 680 networks there is always another ASBR or multiple ASBRs providing 681 peering backups, typically in other geographically diverse locations 682 in the network. 684 When an operator uses ASBRs with multiple peerings, setting next hop 685 self will effectively allow local repair of the atomic failure of 686 any external peer without any compromise to the data plane. The most 687 common reason for not setting next hop self is traditionally the 688 associated drawback of losing the ability to signal the external 689 failures of peering ASBRs or links to those ASBRs by fast IGP 690 flooding. Such a potential drawback can be easily avoided by using a 691 different peering address from the address used for next hop mapping, 692 as well as removing such a next hop from the IGP upon the failure of the last 693 BGP path using it. 695 Herein one may correctly observe that in the case of setting next hop 696 self on an ASBR, the attributes of the other external paths such an ASBR is 697 peering with may be different from the attributes of its best 698 external path. Therefore, not injecting all of those external paths 699 with their corresponding attributes can not be compared to the equivalent 700 case of paths for the same prefix coming from different ASBRs.
702 While such an observation is in principle correct, one should put things 703 in the perspective of the overall goal, which is to provide data plane 704 connectivity upon a single failure with minimal interruption/packet 705 loss. During such transient conditions, using even potentially 706 suboptimal exit points is reasonable, so long as forwarding 707 information loops are not introduced. In the meantime the BGP control 708 plane will on its own re-advertise the newly elected best external path, and the 709 route reflector planes will calculate their Nth best paths and 710 propagate them to their clients. The result is that even if 711 potential sub-optimality were encountered it will be quickly and 712 naturally healed after seconds. 714 7. Summary of benefits 716 The diverse BGP path dissemination proposal provides the following 717 benefits when compared to the alternatives: 719 1. No modifications to the BGP4 protocol. 721 2. No requirement for upgrades to edge and core routers. Backward 722 compatible with the existing BGP deployments. 724 3. Can be easily enabled by introduction of a new route reflector or 725 route server plane dedicated to the selection and distribution of the 726 Nth best-path or just by new configuration of the upgraded 727 current route reflector(s). 729 4. Does not require major modification to BGP implementations in the 730 entire network, which would result in an unnecessary increase of 731 memory and CPU consumption due to the shift from today's per- 732 prefix to per-path advertisement state tracking. 734 5. Can be safely deployed gradually on an RR cluster basis. 736 6. The proposed solution is equally applicable to any BGP address 737 family as described in Multiprotocol Extensions for BGP-4 738 [RFC4760]. In particular it can be used "as is" without any 739 modifications for both IPv4 and IPv6 address families. 741 8. Applications 743 This section lists the most common applications which require the 744 presence of redundant BGP paths: 746 1. Fast connectivity restoration where backup paths with alternate 747 exit points would be pre-installed as well as pre-resolved in the 748 FIB of routers. That would allow for a local action upon 749 reception of a critical event notification of network / node 750 failure. This failure recovery mechanism based on the presence of 751 backup paths is also suitable for gracefully addressing scheduled 752 maintenance requirements as described in 753 [I-D.decraene-bgp-graceful-shutdown-requirements]. 755 2. Multi-path load balancing for both IBGP and EBGP. 757 3. BGP control plane churn reduction both intra-domain and inter- 758 domain. 760 An important point to observe is that all of the above intra-domain 761 applications are based on the use of reflector planes, but they are also 762 applicable in the inter-domain Internet exchange point examples. As 763 discussed in section 4.3 an Internet exchange can conceptually deploy 764 shadow route server planes, each responsible for distribution of an 765 Nth best path to its EBGP peers. In practice it may just amount to a 766 short new configuration and the establishment of new BGP sessions to IX 767 peers. 769 9. Security considerations 771 The new mechanism for diverse BGP path dissemination proposed in this 772 document does not introduce any new security concerns as compared to the 773 base BGP4 specification [RFC4271]. 775 10. IANA Considerations 777 The new mechanism for diverse BGP path dissemination does not require 778 any new allocations from IANA. 780 11.
Contributors 782 The following people contributed significantly to the content of the 783 document: 785 Selma Yilmaz 786 Cisco Systems 787 170 West Tasman Drive 788 San Jose, CA 95134 789 US 790 Email: seyilmaz@cisco.com 792 Satish Mynam 793 Cisco Systems 794 170 West Tasman Drive 795 San Jose, CA 95134 796 US 797 Email: mynam@cisco.com 799 Isidor Kouvelas 800 Cisco Systems 801 170 West Tasman Drive 802 San Jose, CA 95134 803 US 804 Email: kouvelas@cisco.com 806 12. Acknowledgments 808 The authors would like to thank Bruno Decraene, Bart Peirens, Eric 809 Rosen, Jim Uttaro, Renwei Li and Wes George for their valuable input. 811 The authors would also like to express a special thank you to a number of 812 operators who helped to optimize the provided solution to be as close 813 as possible to their daily operational practices. Special thanks 814 go to Ted Seely, Shane Amante, Benson Schliesser and Seiichi 815 Kawamura. 817 13. References 819 13.1. Normative References 821 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 822 Requirement Levels", BCP 14, RFC 2119, March 1997. 824 [RFC4271] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway 825 Protocol 4 (BGP-4)", RFC 4271, January 2006. 827 [RFC4760] Bates, T., Chandra, R., Katz, D., and Y. Rekhter, 828 "Multiprotocol Extensions for BGP-4", RFC 4760, 829 January 2007. 831 [RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an 832 IANA Considerations Section in RFCs", BCP 26, RFC 5226, 833 May 2008. 835 13.2. Informative References 837 [I-D.decraene-bgp-graceful-shutdown-requirements] 838 Decraene, B., Francois, P., pelsser, c., Ahmad, Z., and A. 839 Armengol, "Requirements for the graceful shutdown of BGP 840 sessions", 841 draft-decraene-bgp-graceful-shutdown-requirements-01 (work 842 in progress), March 2009. 844 [I-D.ietf-idr-add-paths] 845 Walton, D., Retana, A., Chen, E., and J. Scudder, 846 "Advertisement of Multiple Paths in BGP", 847 draft-ietf-idr-add-paths-04 (work in progress), 848 August 2010. 850 [I-D.ietf-idr-best-external] 851 Marques, P., Fernando, R., Chen, E., and P. Mohapatra, 852 "Advertisement of the best external route in BGP", 853 draft-ietf-idr-best-external-02 (work in progress), 854 August 2010. 856 [I-D.ietf-idr-route-oscillation] 857 McPherson, D., "BGP Persistent Route Oscillation 858 Condition", draft-ietf-idr-route-oscillation-01 (work in 859 progress), February 2002. 861 [I-D.pmohapat-idr-fast-conn-restore] 862 Mohapatra, P., Fernando, R., Filsfils, C., and R. Raszuk, 863 "Fast Connectivity Restoration Using BGP Add-path", 864 draft-pmohapat-idr-fast-conn-restore-00 (work in 865 progress), September 2008. 867 [I-D.raszuk-idr-ibgp-auto-mesh] 868 Raszuk, R., "IBGP Auto Mesh", 869 draft-raszuk-idr-ibgp-auto-mesh-00 (work in progress), 870 June 2003. 872 [RFC4456] Bates, T., Chen, E., and R. Chandra, "BGP Route 873 Reflection: An Alternative to Full Mesh Internal BGP 874 (IBGP)", RFC 4456, April 2006. 876 [RFC5065] Traina, P., McPherson, D., and J. Scudder, "Autonomous 877 System Confederations for BGP", RFC 5065, August 2007.
879 Authors' Addresses 881 Robert Raszuk (editor) 882 Cisco Systems 883 170 West Tasman Drive 884 San Jose, CA 95134 885 US 887 Email: raszuk@cisco.com 889 Rex Fernando 890 Cisco Systems 891 170 West Tasman Drive 892 San Jose, CA 95134 893 US 895 Email: rex@cisco.com 897 Keyur Patel 898 Cisco Systems 899 170 West Tasman Drive 900 San Jose, CA 95134 901 US 903 Email: keyupate@cisco.com 905 Danny McPherson 906 Verisign 907 21345 Ridgetop Circle 908 Dulles, VA 20166 909 US 911 Email: dmcpherson@verisign.com 912 Kenji Kumaki 913 KDDI Corporation 914 Garden Air Tower 915 Iidabashi, Chiyoda-ku, Tokyo 102-8460 916 Japan 918 Email: ke-kumaki@kddi.com