Routing Area Working Group                                    P. Lapukhov
Internet-Draft                                                   Facebook
Intended status: Informational                                  A. Premji
Expires: January 22, 2016                                 Arista Networks
                                                         J. Mitchell, Ed.
                                                            July 21, 2015

         Use of BGP for routing in large-scale data centers
              draft-ietf-rtgwg-bgp-routing-large-dc-04

Abstract

Some network operators build and operate data centers that support over one hundred thousand servers. In this document, such data centers are referred to as "large-scale" to differentiate them from smaller infrastructures. Environments of this scale have a unique set of network requirements with an emphasis on operational simplicity and network stability. This document summarizes operational experience in designing and operating large-scale data centers using BGP as the only routing protocol. The intent is to report on a proven and stable routing design that could be leveraged by others in the industry.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 22, 2016.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.
Code Components extracted from this document must 53 include Simplified BSD License text as described in Section 4.e of 54 the Trust Legal Provisions and are provided without warranty as 55 described in the Simplified BSD License. 57 Table of Contents 59 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 60 2. Network Design Requirements . . . . . . . . . . . . . . . . . 4 61 2.1. Bandwidth and Traffic Patterns . . . . . . . . . . . . . 4 62 2.2. CAPEX Minimization . . . . . . . . . . . . . . . . . . . 4 63 2.3. OPEX Minimization . . . . . . . . . . . . . . . . . . . . 5 64 2.4. Traffic Engineering . . . . . . . . . . . . . . . . . . . 5 65 2.5. Summarized Requirements . . . . . . . . . . . . . . . . . 5 66 3. Data Center Topologies Overview . . . . . . . . . . . . . . . 6 67 3.1. Traditional DC Topology . . . . . . . . . . . . . . . . . 6 68 3.2. Clos Network topology . . . . . . . . . . . . . . . . . . 7 69 3.2.1. Overview . . . . . . . . . . . . . . . . . . . . . . 7 70 3.2.2. Clos Topology Properties . . . . . . . . . . . . . . 8 71 3.2.3. Scaling the Clos topology . . . . . . . . . . . . . . 9 72 3.2.4. Managing the Size of Clos Topology Tiers . . . . . . 10 73 4. Data Center Routing Overview . . . . . . . . . . . . . . . . 11 74 4.1. Layer 2 Only Designs . . . . . . . . . . . . . . . . . . 11 75 4.2. Hybrid L2/L3 Designs . . . . . . . . . . . . . . . . . . 12 76 4.3. Layer 3 Only Designs . . . . . . . . . . . . . . . . . . 12 77 5. Routing Protocol Selection and Design . . . . . . . . . . . . 13 78 5.1. Choosing EBGP as the Routing Protocol . . . . . . . . . . 13 79 5.2. EBGP Configuration for Clos topology . . . . . . . . . . 14 80 5.2.1. EBGP Configuration Guidelines and Example ASN Scheme 15 81 5.2.2. Private Use ASNs . . . . . . . . . . . . . . . . . . 16 82 5.2.3. Prefix Advertisement . . . . . . . . . . . . . . . . 17 83 5.2.4. External Connectivity . . . . . . . . . . . . . . . . 18 84 5.2.5. Route Summarization at the Edge . . . . . . . . . . . 19 85 6. ECMP Considerations . . . . . . . . . . . . . . . . . . . . . 19 86 6.1. Basic ECMP . . . . . . . . . . . . . . . . . . . . . . . 20 87 6.2. BGP ECMP over Multiple ASNs . . . . . . . . . . . . . . . 21 88 6.3. Weighted ECMP . . . . . . . . . . . . . . . . . . . . . . 21 89 6.4. Consistent Hashing . . . . . . . . . . . . . . . . . . . 22 90 7. Routing Convergence Properties . . . . . . . . . . . . . . . 22 91 7.1. Fault Detection Timing . . . . . . . . . . . . . . . . . 22 92 7.2. Event Propagation Timing . . . . . . . . . . . . . . . . 23 93 7.3. Impact of Clos Topology Fan-outs . . . . . . . . . . . . 23 94 7.4. Failure Impact Scope . . . . . . . . . . . . . . . . . . 24 95 7.5. Routing Micro-Loops . . . . . . . . . . . . . . . . . . . 25 96 8. Additional Options for Design . . . . . . . . . . . . . . . . 26 97 8.1. Third-party Route Injection . . . . . . . . . . . . . . . 26 98 8.2. Route Summarization within Clos Topology . . . . . . . . 26 99 8.2.1. Collapsing Tier-1 Devices Layer . . . . . . . . . . . 27 100 8.2.2. Simple Virtual Aggregation . . . . . . . . . . . . . 28 101 8.3. ICMP Unreachable Message Masquerading . . . . . . . . . . 29 102 9. Security Considerations . . . . . . . . . . . . . . . . . . . 29 103 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 29 104 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 30 105 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 30 106 12.1. Normative References . . . . . . . . . . . . . . . . . . 30 107 12.2. 
Informative References . . . . . . . . . . . . . . . . . 30 108 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 33 110 1. Introduction 112 This document describes a practical routing design that can be used 113 in a large-scale data center ("DC") design. Such data centers, also 114 known as hyper-scale or warehouse-scale data centers, have a unique 115 attribute of supporting over a hundred thousand servers. In order to 116 accommodate networks of this scale, operators are revisiting 117 networking designs and platforms to address this need. 119 The design presented in this document is based on operational 120 experience with data centers built to support large-scale distributed 121 software infrastructure, such as a Web search engine. The primary 122 requirements in such an environment are operational simplicity and 123 network stability so that a small group of people can effectively 124 support a significantly sized network. 126 After experimentation and extensive testing, the authors and their 127 colleagues chose to use an end-to-end routed network infrastructure 128 with External BGP (EBGP) [RFC4271] as the only routing protocol for 129 some of its DC deployments. This is in contrast with more 130 traditional DC designs, which may use simple tree topologies and rely 131 on extending Layer 2 domains across multiple network devices. This 132 document elaborates on the requirements that led to this design 133 choice and presents details of the EBGP routing design as well as 134 explores ideas for further enhancements. 136 This document first presents an overview of network design 137 requirements and considerations for large-scale data centers. Then 138 traditional hierarchical data center network topologies are 139 contrasted with Clos networks [CLOS1953] that are horizontally scaled 140 out. This is followed by arguments for selecting EBGP with a Clos 141 topology as the most appropriate routing protocol to meet the 142 requirements and the proposed design is described in detail. 143 Finally, the document reviews some additional considerations and 144 design options. 146 2. Network Design Requirements 148 This section describes and summarizes network design requirements for 149 large-scale data centers. 151 2.1. Bandwidth and Traffic Patterns 153 The primary requirement when building an interconnection network for 154 large number of servers is to accommodate application bandwidth and 155 latency requirements. Until recently it was quite common to see the 156 majority of traffic entering and leaving the data center, commonly 157 referred to as "north-south" traffic. As a result, traditional 158 "tree" topologies were sufficient to accommodate such flows, even 159 with high oversubscription ratios between the layers of the network. 160 If more bandwidth was required, it was added by "scaling up" the 161 network elements, e.g. by upgrading the device's linecards or fabrics 162 or replacing the device with one with higher port density. 164 Today many large-scale data centers host applications generating 165 significant amounts of server-to-server traffic, which does not 166 egress the DC, commonly referred to as "east-west" traffic. Examples 167 of such applications could be compute clusters such as Hadoop 168 [HADOOP], massive data replication between clusters needed by certain 169 applications, or virtual machine migrations. 
Scaling traditional tree topologies to match these bandwidth demands becomes either too expensive or impossible due to physical limitations, e.g. port density in a switch.

2.2. CAPEX Minimization

The Capital Expenditures (CAPEX) associated with the network infrastructure alone constitutes about 10-15% of total data center expenditure (see [GREENBERG2009]). However, the absolute cost is significant, and hence there is a need to constantly drive down the cost of individual network elements. This can be accomplished in two ways:

o Unifying all network elements, preferably using the same hardware type or even the same device. This allows for volume pricing on bulk purchases.

o Driving costs down using competitive pressures, by introducing multiple network equipment vendors.

In order to allow for good vendor diversity it is important to minimize the software feature requirements for the network elements. This strategy provides maximum flexibility of vendor equipment choices while enforcing interoperability using open standards.

2.3. OPEX Minimization

Operating large-scale infrastructure can be expensive, since a larger number of elements will statistically fail more often. Having a simpler design and operating using a limited software feature set minimizes software issue-related failures.

An important aspect of Operational Expenditure (OPEX) minimization is reducing the size of failure domains in the network. Ethernet networks are known to be susceptible to broadcast or unicast traffic storms that can have a dramatic impact on network performance and availability. The use of a fully routed design significantly reduces the size of the data plane failure domains - i.e. limits them to the lowest level in the network hierarchy. However, such designs introduce the problem of distributed control plane failures. This observation calls for simpler control plane protocols that are expected to have a lower chance of network meltdown. Minimizing software feature requirements as described in the CAPEX section above also reduces testing and training requirements.

2.4. Traffic Engineering

In any data center, application load balancing is a critical function performed by network devices. Traditionally, load balancers are deployed as dedicated devices in the traffic forwarding path. The problem arises in scaling load balancers under growing traffic demand. A preferable solution would be able to scale the load balancing layer horizontally, by adding more of the uniform nodes and distributing incoming traffic across these nodes. In situations like this, an ideal choice would be to use the network infrastructure itself to distribute traffic across a group of load balancers. The combination of Anycast prefix advertisement [RFC4786] and Equal Cost Multipath (ECMP) functionality can be used to accomplish this goal. To allow for more granular load distribution, it is beneficial for the network to support the ability to perform controlled per-hop traffic engineering. For example, it is beneficial to directly control the ECMP next-hop set for Anycast prefixes at every level of the network hierarchy.

2.5.
Summarized Requirements 236 This section summarizes the list of requirements outlined in the 237 previous sections: 239 o REQ1: Select a topology that can be scaled "horizontally" by 240 adding more links and network devices of the same type without 241 requiring upgrades to the network elements themselves. 243 o REQ2: Define a narrow set of software features/protocols supported 244 by a multitude of networking equipment vendors. 246 o REQ3: Choose a routing protocol that has a simple implementation 247 in terms of programming code complexity and ease of operational 248 support. 250 o REQ4: Minimize the failure domain of equipment or protocol issues 251 as much as possible. 253 o REQ5: Allow for some traffic engineering, preferably via explicit 254 control of the routing prefix next-hop using built-in protocol 255 mechanics. 257 3. Data Center Topologies Overview 259 This section provides an overview of two general types of data center 260 designs - hierarchical (also known as tree based) and Clos based 261 network designs. 263 3.1. Traditional DC Topology 265 In the networking industry, a common design choice for data centers 266 typically look like a (upside down) tree with redundant uplinks and 267 three layers of hierarchy namely; core, aggregation/distribution and 268 access layers (see Figure 1). To accommodate bandwidth demands, each 269 higher layer, from server towards DC egress or WAN, has higher port 270 density and bandwidth capacity where the core functions as the 271 "trunk" of the tree based design. To keep terminology uniform and 272 for comparison with other designs, in this document these layers will 273 be referred to as Tier-1, Tier-2 and Tier-3 "tiers", instead of Core, 274 Aggregation or Access layers. 276 +------+ +------+ 277 | | | | 278 | |--| | Tier-1 279 | | | | 280 +------+ +------+ 281 | | | | 282 +---------+ | | +----------+ 283 | +-------+--+------+--+-------+ | 284 | | | | | | | | 285 +----+ +----+ +----+ +----+ 286 | | | | | | | | 287 | |-----| | | |-----| | Tier-2 288 | | | | | | | | 289 +----+ +----+ +----+ +----+ 290 | | | | 291 | | | | 292 | +-----+ | | +-----+ | 293 +-| |-+ +-| |-+ Tier-3 294 +-----+ +-----+ 295 | | | | | | 296 <- Servers -> <- Servers -> 298 Figure 1: Typical DC network topology 300 3.2. Clos Network topology 302 This section describes a common design for horizontally scalable 303 topology in large-scale data centers in order to meet REQ1. 305 3.2.1. Overview 307 A common choice for a horizontally scalable topology is a folded Clos 308 topology, sometimes called "fat-tree" (see, for example, [INTERCON] 309 and [ALFARES2008]). This topology features an odd number of stages 310 (sometimes known as dimensions) and is commonly made of uniform 311 elements, e.g. network switches with the same port count. Therefore, 312 the choice of folded Clos topology satisfies REQ1 and facilitates 313 REQ2. 
See Figure 2 below for an example of a folded 3-stage Clos 314 topology (3 stages counting Tier-2 stage twice, when tracing a packet 315 flow): 317 +-------+ 318 | |----------------------------+ 319 | |------------------+ | 320 | |--------+ | | 321 +-------+ | | | 322 +-------+ | | | 323 | |--------+---------+-------+ | 324 | |--------+-------+ | | | 325 | |------+ | | | | | 326 +-------+ | | | | | | 327 +-------+ | | | | | | 328 | |------+-+-------+-+-----+ | | 329 | |------+-+-----+ | | | | | 330 | |----+ | | | | | | | | 331 +-------+ | | | | | | ---------> M links 332 Tier-1 | | | | | | | | | 333 +-------+ +-------+ +-------+ 334 | | | | | | 335 | | | | | | Tier-2 336 | | | | | | 337 +-------+ +-------+ +-------+ 338 | | | | | | | | | 339 | | | | | | ---------> N Links 340 | | | | | | | | | 341 O O O O O O O O O Servers 343 Figure 2: 3-Stage Folded Clos topology 345 This topology is often also referred to as a "Leaf and Spine" 346 network, where "Spine" is the name given to the middle stage of the 347 Clos topology (Tier-1) and "Leaf" is the name of input/output stage 348 (Tier-2). For uniformity, this document will refer to these layers 349 using the "Tier-n" notation. 351 3.2.2. Clos Topology Properties 353 The following are some key properties of the Clos topology: 355 o The topology is fully non-blocking (or more accurately: non- 356 interfering) if M >= N and oversubscribed by a factor of N/M 357 otherwise. Here M and N is the uplink and downlink port count 358 respectively, for a Tier-2 switch as shown in Figure 2. 360 o Utilizing this topology requires control and data plane support 361 for ECMP with a fan-out of M or more. 363 o Tier-1 switches have exactly one path to every server in this 364 topology. This is an important property that makes route 365 summarization impossible in this topology (see Section 8.2 below). 367 o Traffic flowing from server to server is load balanced over all 368 available paths using ECMP. 370 3.2.3. Scaling the Clos topology 372 A Clos topology can be scaled either by increasing network element 373 port density or adding more stages, e.g. moving to a 5-stage Clos, as 374 illustrated in Figure 3 below: 376 Tier-1 377 +-----+ 378 Cluster | | 379 +----------------------------+ +--| |--+ 380 | | | +-----+ | 381 | Tier-2 | | | Tier-2 382 | +-----+ | | +-----+ | +-----+ 383 | +-------------| DEV |------+--| |--+--| |-------------+ 384 | | +-----| C |------+ | | +--| |-----+ | 385 | | | +-----+ | +-----+ +-----+ | | 386 | | | | | | 387 | | | +-----+ | +-----+ +-----+ | | 388 | | +-----------| DEV |------+ | | +--| |-----------+ | 389 | | | | +---| D |------+--| |--+--| |---+ | | | 390 | | | | | +-----+ | | +-----+ | +-----+ | | | | 391 | | | | | | | | | | | | 392 | +-----+ +-----+ | | +-----+ | +-----+ +-----+ 393 | | DEV | | DEV | | +--| |--+ | | | | 394 | | A | | B | Tier-3 | | | Tier-3 | | | | 395 | +-----+ +-----+ | +-----+ +-----+ +-----+ 396 | | | | | | | | | | 397 | O O O O | O O O O 398 | Servers | Servers 399 +----------------------------+ 401 Figure 3: 5-Stage Clos topology 403 The small example topology on Figure 3 is built from devices with a 404 port count of 4 and provides full bisectional bandwidth to all 405 connected servers. In this document, one set of directly connected 406 Tier-2 and Tier-3 devices along with their attached servers will be 407 referred to as a "cluster". For example, DEV A, B, C, D, and the 408 servers that connect to DEV A and B, on Figure 3 form a cluster. 
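To give a sense of how the example in Figure 3 generalizes, the short sketch below computes device and server counts for a 5-stage topology as a function of a uniform device port count. It is illustrative only: it assumes the fully non-blocking arrangement described in [ALFARES2008], with half of the ports of every Tier-2 and Tier-3 device facing up and half facing down, and the function name and output format are arbitrary.

   def clos_5stage_capacity(k):
       """Device and server counts for a fully non-blocking 5-stage folded
       Clos built from uniform k-port devices (k even), with half of the
       ports on every Tier-2 and Tier-3 device facing up and half down."""
       assert k % 2 == 0, "port count must be even"
       tier3_per_cluster = k // 2   # ToRs in one cluster
       tier2_per_cluster = k // 2   # Tier-2 devices in one cluster
       clusters = k                 # each Tier-1 device uses one port per cluster
       tier1 = (k // 2) ** 2        # middle-stage (Tier-1) devices
       servers = clusters * tier3_per_cluster * (k // 2)   # k^3/4 in total
       return {"clusters": clusters,
               "tier1_devices": tier1,
               "tier2_devices": clusters * tier2_per_cluster,
               "tier3_devices": clusters * tier3_per_cluster,
               "servers": servers}

   print(clos_5stage_capacity(4))    # 4-port devices: 4 clusters, 16 servers
   print(clos_5stage_capacity(64))   # 64-port devices: 65536 servers

With 4-port devices this yields four clusters of the kind shown in Figure 3; moving to commonly available higher port counts is what lets the same structure support very large server counts without any change to the topology.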
The 409 concept of a cluster may also be a useful concept as a single 410 deployment or maintenance unit which can be operated on at a 411 different frequency than the entire topology. 413 In practice, the Tier-3 layer of the network, which are typically top 414 of rack switches (ToRs), is where oversubscription is introduced to 415 allow for packaging of more servers in the data center while meeting 416 the bandwidth requirements for different types of applications. The 417 main reason to limit oversubscription at a single layer of the 418 network is to simplify application development that would otherwise 419 need to account for multiple bandwidth pools: within rack (Tier-3), 420 between racks (Tier-2), and between clusters (Tier-1). Since 421 oversubscription does not have a direct relationship to the routing 422 design it is not discussed further in this document. 424 3.2.4. Managing the Size of Clos Topology Tiers 426 If a data center network size is small, it is possible to reduce the 427 number of switches in Tier-1 or Tier-2 of Clos topology by a power of 428 two. To understand how this could be done, take Tier-1 as an 429 example. Every Tier-2 device connects to a single group of Tier-1 430 devices. If half of the ports on each of the Tier-1 devices are not 431 being used then it is possible to reduce the number of Tier-1 devices 432 by half and simply map two uplinks from a Tier-2 device to the same 433 Tier-1 device that were previously mapped to different Tier-1 434 devices. This technique maintains the same bisectional bandwidth 435 while reducing the number of elements in the Tier-1 layer, thus 436 saving on CAPEX. The tradeoff, in this example, is the reduction of 437 maximum DC size in terms of overall server count by half. 439 In this example, Tier-2 devices will be using two parallel links to 440 connect to each Tier-1 device. If one of these links fails, the 441 other will pick up all traffic of the failed link, possible resulting 442 in heavy congestion and quality of service degradation if the path 443 determination procedure does not take bandwidth amount into account. 444 To avoid this situation, parallel links can be grouped in link 445 aggregation groups (LAGs, such as [IEEE8023AD]) with widely available 446 implementation settings that take the whole "bundle" down upon a 447 single link failure. Equivalent techniques that enforce "fate 448 sharing" on the parallel links can be used in place of LAGs to 449 achieve the same effect. As a result of such fate-sharing, traffic 450 from two or more failed links will be re-balanced over the multitude 451 of remaining paths that equals the number of Tier-1 devices. This 452 example is using two links for simplicity, having more links in a 453 bundle will have less impact on capacity upon a member-link failure. 455 4. Data Center Routing Overview 457 This section provides an overview of three general types of data 458 center protocol designs - Layer 2 only, Hybrid L2/L3 and Layer 3 459 only. 461 4.1. Layer 2 Only Designs 463 Originally most data center designs used Spanning-Tree Protocol (STP) 464 originally defined in [IEEE8021D-1990] for loop free topology 465 creation, typically utilizing variants of the traditional DC topology 466 described in Section 3.1. At the time, many DC switches either did 467 not support Layer 3 routed protocols or supported it with additional 468 licensing fees, which played a part in the design choice. 
Although many enhancements have been made through the introduction of the Rapid Spanning Tree Protocol (RSTP) in the latest revision of [IEEE8021D-2004] and the Multiple Spanning Tree Protocol (MST) specified in [IEEE8021Q], which improve convergence, stability and load balancing in larger topologies, many of the fundamentals of the protocol limit its applicability in large-scale DCs. STP and its newer variants use an active/standby approach to path selection and are therefore hard to deploy in horizontally-scaled topologies as described in Section 3.2. Further, operators have had many experiences with large failures due to issues caused by improper cabling, misconfiguration, or flawed software on a single device. These failures regularly affected the entire spanning-tree domain and were very hard to troubleshoot due to the nature of the protocol. For these reasons, and since almost all DC traffic is now IP, therefore requiring a Layer 3 routing protocol at the network edge for external connectivity, designs utilizing STP usually fail all of the requirements of large-scale DC operators. Various enhancements to link-aggregation protocols such as [IEEE8023AD], generally known as Multi-Chassis Link-Aggregation (M-LAG), made it possible to use Layer 2 designs with active-active network paths while relying on STP as the backup for loop prevention. The major downside of this approach is the proprietary nature of such extensions.

It should be noted that building large, horizontally scalable, Layer 2 only networks without STP has recently become possible through the introduction of the TRILL protocol in [RFC6325]. TRILL resolves many of the issues STP has for large-scale DC design; however, the currently limited maturity of the protocol, the small number of implementations, and the requirement for new equipment that supports it have limited its applicability and increased the cost of such designs.

Finally, neither TRILL nor the M-LAG approach eliminates the fundamental problem of the shared broadcast domain, which is so detrimental to the operation of any Layer 2, Ethernet based solution.

4.2. Hybrid L2/L3 Designs

Operators have sought to limit the impact of data plane faults and build large-scale topologies by implementing routing protocols in either the Tier-1 or Tier-2 parts of the network and dividing the Layer 2 domain into numerous, smaller domains. This design has allowed data centers to scale up, but at the cost of the complexity of managing multiple protocols in the network. For the following reasons, operators have retained Layer 2 in either the access (Tier-3) or both the access and aggregation (Tier-3 and Tier-2) parts of the network:

o Supporting legacy applications that may require direct Layer 2 adjacency or use non-IP protocols.

o Seamless mobility for virtual machines that require the preservation of IP addresses when a virtual machine moves to a different Tier-3 switch.

o Simplified IP addressing, i.e. fewer IP subnets are required for the data center.

o Application load balancing may require direct Layer 2 reachability to perform certain functions such as Layer 2 Direct Server Return (DSR).

o Continued CAPEX differences between Layer 2 and Layer 3 capable switches.

4.3. Layer 3 Only Designs

Network designs that leverage IP routing down to Tier-3 of the network have gained popularity as well. The main benefit of these
The main benefit of these 537 designs is improved network stability and scalability, as a result of 538 confining L2 broadcast domains. Commonly an Interior Gateway 539 Protocol (IGP) such as OSPF [RFC2328] is used as the primary routing 540 protocol in such a design. As data centers grow in scale, and server 541 count exceeds tens of thousands, such fully routed designs have 542 become more attractive. 544 Choosing a Layer 3 only design greatly simplifies the network, 545 facilitating the meeting of REQ1 and REQ2, and has widespread 546 adoption in networks where large Layer 2 adjacency and larger size 547 Layer 3 subnets are not as critical compared to network scalability 548 and stability. Application providers and network operators continue 549 to also develop new solutions to meet some of the requirements that 550 previously have driven large Layer 2 domains. 552 5. Routing Protocol Selection and Design 554 In this section the motivations for using External BGP (EBGP) as the 555 single routing protocol for data center networks having a Layer 3 556 protocol design and Clos topology are reviewed. Then, a practical 557 approach for designing an EBGP based network is provided. 559 5.1. Choosing EBGP as the Routing Protocol 561 REQ2 would give preference to the selection of a single routing 562 protocol to reduce complexity and interdependencies. While it is 563 common to rely on an IGP in this situation, sometimes with either the 564 addition of EBGP at the device bordering the WAN or Internal BGP 565 (IBGP) throughout, this document proposes the use of an EBGP only 566 design. 568 Although EBGP is the protocol used for almost all inter-domain 569 routing on the Internet and has wide support from both vendor and 570 service provider communities, it is not generally deployed as the 571 primary routing protocol within the data center for a number of 572 reasons (some of which are interrelated): 574 o BGP is perceived as a "WAN only protocol only" and not often 575 considered for enterprise or data center applications. 577 o BGP is believed to have a "much slower" routing convergence 578 compared to IGPs. 580 o BGP deployment within an Autonomous System typically assumes the 581 presence of an IGP for next-hop resolution. 583 o BGP is perceived to require significant configuration overhead and 584 does not support neighbor auto-discovery. 586 This document discusses some of these perceptions, especially as 587 applicable to the proposed design, and highlights some of the 588 advantages of using the protocol such as: 590 o BGP has less complexity in parts of its protocol design - internal 591 data structures and state machine are simpler as compared to most 592 link-state IGPs such as OSPF. For example, instead of 593 implementing adjacency formation, adjacency maintenance and/or 594 flow-control, BGP simply relies on TCP as the underlying 595 transport. This fulfills REQ2 and REQ3. 597 o BGP information flooding overhead is less when compared to link- 598 state IGPs. Since every BGP router calculates and propagates only 599 the best-path selected, a network failure is masked as soon as the 600 BGP speaker finds an alternate path, which exists when highly 601 symmetric topologies, such as Clos, are coupled with EBGP only 602 design. In contrast, the event propagation scope of a link-state 603 IGP is an entire area, regardless of the failure type. This meets 604 REQ3 and REQ4. 
It is worth mentioning that all widely deployed link-state IGPs also feature periodic refreshes of routing information, while BGP does not expire routing state, even if this rarely causes significant impact to modern router control planes.

o BGP supports third-party (recursively resolved) next-hops. This allows for manipulating multipath to be non-ECMP based, or forwarding based on application-defined paths, through establishment of a peering session with an application "controller" which can inject routing information into the system, satisfying REQ5. OSPF provides similar functionality using concepts such as "Forwarding Address", but with more difficulty in implementation and far less control of information propagation scope.

o Using a well-defined ASN allocation scheme and standard AS_PATH loop detection, "BGP path hunting" (see [JAKMA2008]) can be controlled and complex unwanted paths will be ignored. See Section 5.2 for an example of a working ASN allocation scheme. In a link-state IGP, accomplishing the same goal would require multi-(instance/topology/process) support, typically not available in all DC devices and quite complex to configure and troubleshoot. Using a traditional single flooding domain, which most DC designs utilize, under certain failure conditions may pick up unwanted lengthy paths, e.g. traversing multiple Tier-2 devices.

o EBGP configuration that is implemented with minimal routing policy is easier to troubleshoot for network reachability issues. In most implementations, it is straightforward to view the contents of the BGP Loc-RIB and compare it to the router's RIB. Also, in most implementations an operator can view every BGP neighbor's Adj-RIB-In and Adj-RIB-Out structures, and therefore incoming and outgoing NLRI information can be easily correlated on both sides of a BGP session. Thus, BGP satisfies REQ3.

5.2. EBGP Configuration for Clos topology

Clos topologies that have more than 5 stages are very uncommon due to the large numbers of interconnects required by such a design. Therefore, the examples below are made with reference to the 5-stage Clos topology (in its unfolded state).

5.2.1. EBGP Configuration Guidelines and Example ASN Scheme

The diagram below illustrates an example ASN allocation scheme. The following is a list of guidelines that can be used:

o EBGP single-hop sessions are established over direct point-to-point links interconnecting the network nodes; no multi-hop or loopback sessions are used, even in the case of multiple links between the same pair of nodes.

o Private Use ASNs from the range 64512-65534 are used so as to avoid ASN conflicts.

o A single ASN is allocated to all of the Clos topology's Tier-1 devices.

o A unique ASN is allocated to each group of Tier-2 devices.

o A unique ASN is allocated to every Tier-3 device (e.g. ToR) in this topology.
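Figure 4 below shows the resulting ASN layout for the 5-stage topology. As a purely illustrative sketch of the rules above (the device names, cluster sizes and exact ASN values are arbitrary and merely mirror the 65534/646XX/65YYY convention used in the figure), the allocation could be expressed as:

   def assign_asns(num_clusters, tors_per_cluster,
                   tier1_asn=65534, tier2_base=64600, tier3_base=65000):
       """Illustrative ASN assignment following the guidelines above: a
       single ASN shared by all Tier-1 devices, one ASN per Tier-2 group
       (i.e. per cluster), and a unique ASN per Tier-3 device, all drawn
       from the 16-bit Private Use range 64512-65534."""
       asns = {"tier1": tier1_asn}
       for c in range(num_clusters):
           asns["cluster%d/tier2-group" % c] = tier2_base + c
           for t in range(tors_per_cluster):
               asns["cluster%d/tier3-%d" % (c, t)] = tier3_base + c * tors_per_cluster + t
       assert all(64512 <= a <= 65534 for a in asns.values()), "ASN outside Private Use range"
       return asns

   # A small two-cluster example:
   for device, asn in sorted(assign_asns(num_clusters=2, tors_per_cluster=2).items()):
       print(device, asn)

Note that the scheme consumes only one Private Use ASN per cluster, plus one per ToR and one for the whole Tier-1 layer; the limits of the 16-bit Private Use range are discussed in Section 5.2.2.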
667 ASN 65534 668 +---------+ 669 | +-----+ | 670 | | | | 671 +-|-| |-|-+ 672 | | +-----+ | | 673 ASN 646XX | | | | ASN 646XX 674 +---------+ | | | | +---------+ 675 | +-----+ | | | +-----+ | | | +-----+ | 676 +-----------|-| |-|-+-|-| |-|-+-|-| |-|-----------+ 677 | +---|-| |-|-+ | | | | +-|-| |-|---+ | 678 | | | +-----+ | | +-----+ | | +-----+ | | | 679 | | | | | | | | | | 680 | | | | | | | | | | 681 | | | +-----+ | | +-----+ | | +-----+ | | | 682 | +-----+---|-| |-|-+ | | | | +-|-| |-|---+-----+ | 683 | | | +-|-| |-|-+-|-| |-|-+-|-| |-|-+ | | | 684 | | | | | +-----+ | | | +-----+ | | | +-----+ | | | | | 685 | | | | +---------+ | | | | +---------+ | | | | 686 | | | | | | | | | | | | 687 +-----+ +-----+ | | +-----+ | | +-----+ +-----+ 688 | ASN | | | +-|-| |-|-+ | | | | 689 |65YYY| | ... | | | | | | ... | | ... | 690 +-----+ +-----+ | +-----+ | +-----+ +-----+ 691 | | | | +---------+ | | | | 692 O O O O <- Servers -> O O O O 694 Figure 4: BGP ASN layout for 5-stage Clos 696 5.2.2. Private Use ASNs 698 The original range of Private Use ASNs [RFC6996] limited operators to 699 1023 unique ASNs. Since it is quite likely that the number of 700 network devices may exceed this number, a workaround is required. 701 One approach is to re-use the ASNs assigned to the Tier-3 devices 702 across different clusters. For example, Private Use ASNs 65001, 703 65002 ... 65032 could be used within every individual cluster and 704 assigned to Tier-3 devices. 706 To avoid route suppression due to the AS_PATH loop detection 707 mechanism in BGP, upstream EBGP sessions on Tier-3 devices must be 708 configured with the "AllowAS In" feature [ALLOWASIN] that allows 709 accepting a device's own ASN in received route advertisements. 710 Introducing this feature does not make it likely for routing loops in 711 the design since the AS_PATH is being added to by routers at each of 712 the topology tiers and AS_PATH length is an early tie breaker in the 713 BGP path selection process. Further loop protection is still in 714 place at the Tier-1 device, which will not accept routes with a path 715 including its own ASN and Tier-2 devices do not have direct 716 connectivity with each other. 718 Another solution to this problem would be using Four-Octet ASNs 719 ([RFC6793]), where there are additional Private Use ASNs available, 720 see [IANA.AS]. Use of Four-Octet ASNs put additional protocol 721 complexity in the BGP implementation so should be considered against 722 the complexity of re-use when considering REQ3 and REQ4. Perhaps 723 more importantly, they are not yet supported by all BGP 724 implementations, which may limit vendor selection of DC equipment. 725 When supported, ensure that implementations in use are able to remove 726 the Private Use ASNs if required for external connectivity 727 (Section 5.2.4). 729 5.2.3. Prefix Advertisement 731 A Clos topology features a large number of point-to-point links and 732 associated prefixes. Advertising all of these routes into BGP may 733 create FIB overload conditions in the network devices. Advertising 734 these links also puts additional path computation stress on the BGP 735 control plane for little benefit. There are two possible solutions: 737 o Do not advertise any of the point-to-point links into BGP. Since 738 the EBGP-based design changes the next-hop address at every 739 device, distant networks will automatically be reachable via the 740 advertising EBGP peer and do not require reachability to these 741 prefixes. 
However, this may complicate operations or monitoring: e.g. using the popular "traceroute" tool will display IP addresses that are not reachable.

o Advertise point-to-point links, but summarize them on every device. This requires an address allocation scheme such as allocating a consecutive block of IP addresses per Tier-1 and Tier-2 device to be used for point-to-point interface addressing to the lower layers (Tier-2 uplinks will be numbered out of Tier-1 addressing and so forth).

Server subnets on Tier-3 devices must be announced into BGP without using route summarization on Tier-2 and Tier-1 devices. Summarizing subnets in a Clos topology results in route black-holing under a single link failure (e.g. between Tier-2 and Tier-3 devices) and hence must be avoided. The use of peer links within the same tier to resolve the black-holing problem by providing "bypass paths" is undesirable due to the O(N^2) complexity of the peering mesh and the waste of ports on the devices. An alternative to a full mesh of peer links would be using a simpler bypass topology, e.g. a "ring" as described in [FB4POST], but such a topology adds extra hops and has very limited bisection bandwidth, in addition to requiring special tweaks to make BGP routing work - such as possibly splitting every device into an ASN on its own. Section 8.2 introduces another, less intrusive, method for performing a limited form of route summarization in Clos networks and discusses the associated trade-offs.

5.2.4. External Connectivity

A dedicated cluster (or clusters) in the Clos topology could be used for the purpose of connecting to the Wide Area Network (WAN) edge devices, or WAN Routers. Tier-3 devices in such a cluster would be replaced with WAN routers, and EBGP peering would be used again, though WAN routers are likely to belong to a public ASN if Internet connectivity is required in the design. The Tier-2 devices in such a dedicated cluster will be referred to as "Border Routers" in this document. These devices have to perform a few special functions:

o Hide network topology information when advertising paths to WAN routers, i.e. remove Private Use ASNs [RFC6996] from the AS_PATH attribute. This is typically done to avoid ASN collisions between different data centers and also to provide a uniform AS_PATH length to the WAN for purposes of WAN ECMP to Anycast prefixes originated in the topology. An implementation-specific BGP feature typically called "Remove Private AS" is commonly used to accomplish this. Depending on the implementation, the feature should strip a contiguous sequence of Private Use ASNs found in the AS_PATH attribute prior to advertising the path to a neighbor. This assumes that all ASNs used for intra data center numbering are from the Private Use ranges. The process for stripping the Private Use ASNs is not currently standardized, but most implementations commonly follow the logic described in this vendor's document [REMOVE-PRIVATE-AS] (a sketch of this stripping logic is shown after this list).

o Originate a default route to the data center devices. This is the only place where a default route can be originated, as route summarization is risky for the "scale-out" topology. Alternatively, Border Routers may simply relay the default route learned from the WAN routers. Advertising the default route from Border Routers requires that all Border Routers be fully connected to the WAN Routers upstream, to provide resistance to a single-link failure causing the black-holing of traffic. To prevent black-holing in the situation when all of the EBGP sessions to the WAN routers fail simultaneously on a given device, it is more desirable to take the "relaying" approach rather than introducing the default route via complicated conditional route origination schemes provided by some implementations [CONDITIONALROUTE].
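The "Remove Private AS" behavior referenced in the first function above can be sketched as follows. This is not a description of any particular implementation (as noted, the exact behavior varies by vendor and is not standardized); the sketch simply drops every ASN that falls within the Private Use ranges of [RFC6996], and the example AS_PATH values are made up:

   # Private Use ASN ranges per [RFC6996]: 64512-65534 (16-bit) and
   # 4200000000-4294967294 (32-bit).
   PRIVATE_RANGES = ((64512, 65534), (4200000000, 4294967294))

   def is_private_asn(asn):
       return any(lo <= asn <= hi for lo, hi in PRIVATE_RANGES)

   def remove_private_as(as_path):
       """Strip Private Use ASNs from an AS_PATH before advertising it to a
       WAN neighbor.  Real implementations differ in detail (e.g. some only
       strip a leading contiguous sequence, or replace rather than remove);
       this sketch drops every Private Use ASN it finds."""
       return [asn for asn in as_path if not is_private_asn(asn)]

   # An AS_PATH a WAN router would otherwise receive: the Border Router
   # (64700), Tier-1 (65534), Tier-2 (64601) and ToR (65001) ASNs are all
   # Private Use, so the intra-DC portion disappears and the AS_PATH length
   # seen by the WAN stays uniform.
   print(remove_private_as([64700, 65534, 64601, 65001]))         # -> []
   print(remove_private_as([64700, 65534, 64601, 65001, 64496]))  # keeps 64496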
5.2.5. Route Summarization at the Edge

It is often desirable to summarize network reachability information prior to advertising it to the WAN network, due to the high number of IP prefixes originated from within the data center in a fully routed network design. For example, a network with 2000 Tier-3 devices will have at least 2000 server subnets advertised into BGP, along with the infrastructure or other prefixes. However, as discussed before, the proposed network design does not allow for route summarization due to the lack of peer links inside every tier.

However, it is possible to lift this restriction for the Border Routers by devising a different connectivity model for these devices. There are two possible options:

o Interconnect the Border Routers using a full mesh of physical links or using any other "peer-mesh" topology, such as ring or hub-and-spoke. Configure BGP accordingly on all Border Leafs to exchange network reachability information, e.g. by adding a mesh of iBGP sessions. The interconnecting peer links need to be appropriately sized for the traffic that will be present in the case of a device or link failure underneath the Border Routers.

o Tier-1 devices may have additional physical links provisioned toward the Border Routers (which are Tier-2 devices from the perspective of Tier-1). Specifically, if protection from a single link or node failure is desired, each Tier-1 device would have to connect to at least two Border Routers. This puts additional requirements on the port count for Tier-1 devices and Border Routers, potentially making them non-uniform, larger port count devices compared with the other devices in the Clos. This also reduces the number of ports available to "regular" Tier-2 switches and hence the number of clusters that could be interconnected via the Tier-1 layer.

If any of the above options are implemented, it is possible to perform route summarization at the Border Routers toward the WAN network core without risking a routing black-hole condition under a single link failure. Both of the options would result in a non-uniform topology, as additional links have to be provisioned on some network devices.

6. ECMP Considerations

This section covers the Equal Cost Multipath (ECMP) functionality for a Clos topology and discusses a few special requirements.

6.1. Basic ECMP

ECMP is the fundamental load sharing mechanism used by a Clos topology. Effectively, every lower-tier device will use all of its directly attached upper-tier devices to load share traffic destined to the same IP prefix. The number of ECMP paths between any two Tier-3 devices in a Clos topology equals the number of devices in the middle stage (Tier-1).
For example, Figure 5 illustrates the 864 topology where Tier-3 device A has four paths to reach servers X and 865 Y, via Tier-2 devices B and C and then Tier-1 devices 1, 2, 3, and 4 866 respectively. 868 Tier-1 869 +-----+ 870 | DEV | 871 +->| 1 |--+ 872 | +-----+ | 873 Tier-2 | | Tier-2 874 +-----+ | +-----+ | +-----+ 875 +------------>| DEV |--+->| DEV |--+--| |-------------+ 876 | +-----| B |--+ | 2 | +--| |-----+ | 877 | | +-----+ +-----+ +-----+ | | 878 | | | | 879 | | +-----+ +-----+ +-----+ | | 880 | +-----+---->| DEV |--+ | DEV | +--| |-----+-----+ | 881 | | | +---| C |--+->| 3 |--+--| |---+ | | | 882 | | | | +-----+ | +-----+ | +-----+ | | | | 883 | | | | | | | | | | 884 +-----+ +-----+ | +-----+ | +-----+ +-----+ 885 | DEV | | | Tier-3 +->| DEV |--+ Tier-3 | | | | 886 | A | | | | 4 | | | | | 887 +-----+ +-----+ +-----+ +-----+ +-----+ 888 | | | | | | | | 889 O O O O <- Servers -> X Y O O 891 Figure 5: ECMP fan-out tree from A to X and Y 893 The ECMP requirement implies that the BGP implementation must support 894 multipath fan-out for up to the maximum number of devices directly 895 attached at any point in the topology in upstream or downstream 896 direction. Normally, this number does not exceed half of the ports 897 found on a device in the topology. For example, an ECMP fan-out of 898 32 would be required when building a Clos network using 64-port 899 devices. The Border Routers may need to have wider fan-out to be 900 able to connect to multitude of Tier-1 devices if route summarization 901 at Border Router level is implemented as described in Section 5.2.5. 902 If a device's hardware does not support wider ECMP, logical link- 903 grouping (link-aggregation at layer 2) could be used to provide 904 "hierarchical" ECMP (Layer 3 ECMP followed by Layer 2 ECMP) to 905 compensate for fan-out limitations. Such approach, however, 906 increases the risk of flow polarization, as less entropy will be 907 available to the second stage of ECMP. 909 Most BGP implementations declare paths to be equal from ECMP 910 perspective if they match up to and including step (e) 911 Section 9.1.2.2 of [RFC4271]. In the proposed network design there 912 is no underlying IGP, so all IGP costs are assumed to be zero or 913 otherwise the same value across all paths and policies may be applied 914 as necessary to equalize BGP attributes that vary in vendor defaults, 915 such as MED and origin code. For historical reasons it is also 916 useful to not use 0 as the equalized MED value; this and some other 917 useful BGP information is available in [RFC4277] . Routing loops are 918 unlikely due to the BGP best-path selection process which prefers 919 shorter AS_PATH length, and longer paths through the Tier-1 devices 920 which don't allow their own ASN in the path and have the same ASN are 921 also not possible. 923 6.2. BGP ECMP over Multiple ASNs 925 For application load balancing purposes it is desirable to have the 926 same prefix advertised from multiple Tier-3 devices. From the 927 perspective of other devices, such a prefix would have BGP paths with 928 different AS_PATH attribute values, while having the same AS_PATH 929 attribute lengths. Therefore, BGP implementations must support load 930 sharing over above-mentioned paths. This feature is sometimes known 931 as "multipath relax" and effectively allows for ECMP to be done 932 across different neighboring ASNs if all other attributes are equal 933 as described in the previous section. 935 6.3. 
Weighted ECMP

It may be desirable for the network devices to implement "weighted" ECMP, to be able to send more traffic over some paths in the ECMP fan-out. This could be helpful to compensate for failures in the network and send more traffic over paths that have more capacity. The prefixes that require weighted ECMP would have to be injected using a remote BGP speaker (central agent) over a multihop session, as described further in Section 8.1. If support in implementations is available, weight distribution for multiple BGP paths could be signaled using the technique described in [I-D.ietf-idr-link-bandwidth].

6.4. Consistent Hashing

It is often desirable to have the hashing function used for ECMP be consistent (see [CONS-HASH]), in order to minimize the impact on flow-to-next-hop affinity when a next-hop is added to or removed from an ECMP group. This could be used if the network device serves as a load balancer, mapping flows toward multiple destinations - in this case, losing or adding a destination will not have a detrimental effect on currently established flows. One particular recommendation on implementing consistent hashing is provided in [RFC2992], though other implementations are possible. This functionality could be naturally combined with weighted ECMP, with the impact of next-hop changes being proportional to the weight of the given next-hop. The downside of consistent hashing is increased hardware resource utilization, as typically more space is required to implement a consistent-hashing region.

7. Routing Convergence Properties

This section reviews routing convergence properties in the proposed design. A case is made that sub-second convergence is achievable if the implementation supports fast EBGP peering session deactivation and timely RIB and FIB updates upon failure of the associated link.

7.1. Fault Detection Timing

BGP typically relies on an IGP to route around link/node failures inside an AS, and implements either a polling-based or an event-driven mechanism to obtain updates on IGP state changes. The proposed routing design does not use an IGP, so the remaining mechanisms that could be used for fault detection are the BGP keep-alive process (or any other type of keep-alive mechanism) and link-failure triggers.

Relying solely on BGP keep-alive packets may result in high convergence delays, on the order of multiple seconds (on many BGP implementations the minimum configurable BGP hold timer value is three seconds). However, many BGP implementations can shut down local EBGP peering sessions in response to the "link down" event for the outgoing interface used for BGP peering. This feature is sometimes called "fast fallover". Since links in modern data centers are predominantly point-to-point fiber connections, a physical interface failure is often detected in milliseconds and subsequently triggers a BGP re-convergence.

Ethernet technologies may support failure signaling or detection standards such as Connectivity Fault Management (CFM) as described in [IEEE8021Q], which may make failure detection more robust. Alternatively, some platforms may support Bidirectional Forwarding Detection (BFD) [RFC5880] to allow for sub-second failure detection and fault signaling to the BGP process.
However, the use of either of these presents additional requirements to vendor software and possibly hardware, and may contradict REQ1. Until recently with [RFC7130], BFD also did not allow detection of a single member link failure on a LAG, which would have limited its usefulness in some designs.

7.2. Event Propagation Timing

In the proposed design, the impact of the BGP Minimum Route Advertisement Interval (MRAI) timer (see Section 9.2.1.1 of [RFC4271]) should be considered. Per the standard, BGP implementations are required to space out consecutive BGP UPDATE messages by at least MRAI seconds, which is often a configurable value. The initial BGP UPDATE messages after an event carrying withdrawn routes are commonly not affected by this timer. The MRAI timer may present significant convergence delays when a BGP speaker "waits" for the new path to be learned from its peers and has no local backup path information.

In a Clos topology each EBGP speaker has either one path or N paths for the same prefix, where N is a significantly large number, e.g. N=32 (the ECMP fan-out). Therefore, if a path fails there is either no backup path at all (e.g. from the perspective of a Tier-2 switch losing its link to a Tier-3 device), or the backup is readily available in the BGP Loc-RIB (e.g. from the perspective of a Tier-2 device losing its link to a Tier-1 switch). In the former case, the BGP withdrawal announcement will propagate without delay and trigger re-convergence on the affected devices. In the latter case, the best-path will be re-evaluated and the local ECMP group corresponding to the new next-hop set changed. If the BGP path was the best-path selected previously, an "implicit withdraw" will be sent via a BGP UPDATE message, as described as Option b in Section 3.1 of [RFC4271], due to the BGP AS_PATH attribute changing.

7.3. Impact of Clos Topology Fan-outs

A Clos topology has large fan-outs, which may impact the "Up->Down" convergence in some cases, as described in this section. In a situation when a link between a Tier-3 and a Tier-2 device fails, the Tier-2 device will send BGP UPDATE messages to all upstream Tier-1 devices, withdrawing the affected prefixes. The Tier-1 devices, in turn, will relay those messages to all downstream Tier-2 devices (except for the originator). Tier-2 devices other than the one originating the UPDATE should then wait for ALL upstream Tier-1 devices to send an UPDATE message before removing the affected prefixes and sending the corresponding UPDATEs downstream to connected Tier-3 devices. If the original Tier-2 device or the relaying Tier-1 devices introduce some delay into their UPDATE message announcements, the result could be UPDATE message "dispersion" that could last as long as multiple seconds. In order to avoid such behavior, BGP implementations must support "update groups". The "update group" is defined as a collection of neighbors sharing the same outbound policy - the local speaker will send BGP updates to the members of the group synchronously.

The impact of such "dispersion" grows with the size of the topology fan-out and could also grow under network convergence churn. Some operators may be tempted to introduce "route flap dampening" type features that vendors include to reduce the control plane impact of rapidly flapping prefixes.
However, due to the issues with false positives in these implementations, especially under such "dispersion" events, it is not recommended to turn this feature on in this design. More background on the issues with "route flap dampening", and possible implementation changes that could affect this, are well described in [RFC7196].

7.4. Failure Impact Scope

A network is declared to converge in response to a failure once all devices within the failure impact scope are notified of the event and have re-calculated their RIBs and consequently updated their FIBs. A larger failure impact scope typically means slower convergence, since more devices have to be notified, and additionally results in a less stable network. In this section we describe BGP's advantages over link-state routing protocols in reducing failure impact scope for a Clos topology.

BGP behaves like a distance-vector protocol in the sense that only the best path from the point of view of the local router is sent to neighbors. As such, some failures are masked if the local node can immediately find a backup path and does not have to send any updates further. Notice that in the worst case ALL devices in a data center topology have to either withdraw a prefix completely or update the ECMP groups in the FIB. However, many failures will not result in such a wide impact. There are two main failure types where impact scope is reduced:

o Failure of a link between Tier-2 and Tier-1 devices: In this case, a Tier-2 device will update the affected ECMP groups, removing the failed link. There is no need to send new information to downstream Tier-3 devices, unless the path was selected as best by the BGP process, in which case only an "implicit withdraw" needs to be sent, which should not affect forwarding. The affected Tier-1 device will lose the only path available to reach a particular cluster and will have to withdraw the associated prefixes. Such a prefix withdrawal process will only affect the Tier-2 devices directly connected to the affected Tier-1 device. The Tier-2 devices receiving the BGP UPDATE messages withdrawing prefixes will simply have to update their ECMP groups. The Tier-3 devices are not involved in the re-convergence process.

o Failure of a Tier-1 device: In this case, all Tier-2 devices directly attached to the failed node will have to update their ECMP groups for all IP prefixes from non-local clusters. The Tier-3 devices are once again not involved in the re-convergence process, but may receive "implicit withdraws" as described above.

Even though in the case of such failures multiple IP prefixes will have to be reprogrammed in the FIB, it is worth noting that ALL of these prefixes share a single ECMP group on the Tier-2 device. Therefore, in the case of implementations with a hierarchical FIB, only a single change has to be made to the FIB. Hierarchical FIB here means a FIB structure where the next-hop forwarding information is stored separately from the prefix lookup table, and the latter only stores pointers to the respective forwarding information.
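The benefit of such a hierarchical FIB organization is easy to illustrate. The toy model below is a deliberate simplification (real hardware FIB structures are vendor specific, and the class and names here are invented for illustration): many prefixes point at one shared next-hop group object, so removing a failed next-hop is a single group update rather than a per-prefix rewrite.

   class HierarchicalFib:
       """Toy model of a hierarchical FIB: the prefix table stores pointers
       to shared next-hop (ECMP) group objects kept separately."""
       def __init__(self):
           self.groups = {}     # group id -> list of next-hops
           self.prefixes = {}   # prefix   -> group id

       def add_group(self, gid, nexthops):
           self.groups[gid] = list(nexthops)

       def add_prefix(self, prefix, gid):
           self.prefixes[prefix] = gid

       def remove_nexthop(self, nexthop):
           # One update per group, regardless of how many prefixes use it.
           for nexthops in self.groups.values():
               if nexthop in nexthops:
                   nexthops.remove(nexthop)

   fib = HierarchicalFib()
   fib.add_group("ecmp-up", ["tier1-1", "tier1-2", "tier1-3", "tier1-4"])
   for i in range(10000):                    # many prefixes, one shared group
       fib.add_prefix("10.%d.%d.0/24" % (i // 256, i % 256), "ecmp-up")

   fib.remove_nexthop("tier1-3")             # one change repairs all 10000 prefixes
   print(len(fib.groups["ecmp-up"]))         # -> 3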
Even though BGP offers a reduced failure scope for some cases,
further reduction of the fault domain using summarization is not
always possible with the proposed design, since using this technique
may create routing black-holes as mentioned previously.  Therefore,
the worst-case control plane failure impact scope is the network as a
whole, for instance in the case of a link failure between Tier-2 and
Tier-3 devices.  The number of impacted prefixes in this case would
be much smaller than in the case of a failure in the upper layers of
a Clos network topology.  The property of having such a large failure
scope is not a result of choosing EBGP in the design, but rather a
result of using the "scale-out" Clos topology.

7.5. Routing Micro-Loops

When a downstream device, e.g., a Tier-2 device, loses all paths for
a prefix, it normally has the default route pointing toward the
upstream device, in this case the Tier-1 device.  As a result, it is
possible to get into a situation where a Tier-2 switch loses a
prefix, but a Tier-1 switch still has a path pointing to the Tier-2
device.  This results in a transient micro-loop, since the Tier-1
switch will keep passing packets destined to the affected prefix back
to the Tier-2 device, and the Tier-2 device will bounce them back
again using the default route.  This micro-loop will last for the
time it takes the upstream device to fully update its forwarding
tables.

To minimize the impact of such micro-loops, Tier-2 and Tier-1
switches can be configured with static "discard" or "null" routes
that are more specific than the default route for prefixes missing
during network convergence.  For Tier-2 switches, the discard route
should be a summary route covering all server subnets of the
underlying Tier-3 devices.  For Tier-1 devices, the discard route
should be a summary covering the server IP address subnet allocated
for the whole data center.  Those discard routes will only take
precedence for the duration of network convergence, until the device
learns a more specific prefix via a new path.
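The longest-prefix-match interaction between the default route, the
static discard summary, and the more specific BGP-learned prefixes
can be sketched as follows; this hypothetical Python fragment only
illustrates the lookup precedence, and all addresses are examples:

   # Sketch: precedence of a static "discard" summary during convergence.
   import ipaddress

   def lookup(fib, destination):
       # Return the action of the longest matching prefix.
       dest = ipaddress.ip_address(destination)
       matches = [p for p in fib if dest in ipaddress.ip_network(p)]
       best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
       return fib[best]

   tier2_fib = {
       "0.0.0.0/0":   "forward via Tier-1",  # default toward upstream tier
       "10.1.0.0/16": "discard",             # static summary of local servers
       "10.1.3.0/24": "forward via Tier-3",  # specific prefix learned via BGP
   }

   print(lookup(tier2_fib, "10.1.3.10"))  # forward via Tier-3 (steady state)

   # During convergence the specific prefix may be temporarily missing;
   # the discard summary then beats the default route, so traffic is
   # dropped locally instead of looping via the Tier-1 device.
   del tier2_fib["10.1.3.0/24"]
   print(lookup(tier2_fib, "10.1.3.10"))  # discard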
8. Additional Options for Design

8.1. Third-party Route Injection

BGP allows for a "third-party", i.e., directly attached, BGP speaker
to inject routes anywhere in the network topology, meeting REQ5.
This can be achieved by peering via a multihop BGP session with some
or even all devices in the topology.  Furthermore, BGP diverse path
distribution [RFC6774] could be used to inject multiple BGP next-hops
for the same prefix to facilitate load balancing, or the BGP ADD-PATH
capability [I-D.ietf-idr-add-paths] could be used if supported by the
implementation.  Unfortunately, in many implementations ADD-PATH has
been found to only support IBGP properly, due to the use cases it was
originally optimized for, which limits the "third-party" peering to
IBGP only if this feature is used.

To implement route injection in the proposed design, a third-party
BGP speaker may peer with Tier-3 and Tier-1 switches, injecting the
same prefix but using a special set of BGP next-hops for the Tier-1
devices.  Those next-hops are assumed to resolve recursively via BGP
and could be, for example, IP addresses on Tier-3 devices.  The
resulting forwarding table programming could provide the desired
traffic proportion distribution among different clusters.
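As a hedged sketch of the above, not tied to any particular BGP
implementation or injector product, the following Python fragment
shows one way such a controller might compute what to announce: the
prefix is advertised toward Tier-1 devices with the union of per-
cluster next-hops (assumed to resolve recursively via BGP), so that
ECMP splits traffic roughly in proportion to the number of next-hops
each cluster contributes.  All names and addresses are hypothetical:

   # Sketch of a hypothetical third-party route injector computing its
   # announcements; it does not itself speak BGP.
   ANYCAST_PREFIX = "192.0.2.0/24"

   # Addresses on Tier-3 devices, grouped per cluster; the number of
   # next-hops per cluster approximates the desired traffic share.
   cluster_next_hops = {
       "cluster-1": ["10.1.0.1", "10.1.0.2"],   # ~2/3 of the traffic
       "cluster-2": ["10.2.0.1"],               # ~1/3 of the traffic
   }

   def tier1_announcement(cluster_next_hops):
       # Union of next-hops injected toward Tier-1 devices; with ECMP
       # hashing, each next-hop attracts roughly an equal share of flows.
       nhs = sorted(nh for hops in cluster_next_hops.values() for nh in hops)
       return {ANYCAST_PREFIX: nhs}

   def tier3_announcement(local_next_hop):
       # Toward a Tier-3 device the same prefix is injected with a
       # next-hop local to that device's cluster.
       return {ANYCAST_PREFIX: [local_next_hop]}

   print(tier1_announcement(cluster_next_hops))
   print(tier3_announcement("10.1.0.1"))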
8.2. Route Summarization within Clos Topology

As mentioned previously, route summarization is not possible within
the proposed Clos topology, since it makes the network susceptible to
route black-holing under single link failures.  The main problem is
the limited number of redundant paths between network elements, e.g.,
there is only a single path between any pair of Tier-1 and Tier-3
devices.  However, some operators may find route aggregation
desirable to improve control plane stability.

If any technique for summarization within the topology is planned,
modeling of the routing behavior and the potential for black-holing
should be done not only for single or multiple link failures, but
also for fiber pathway failures or optical domain failures, if the
topology extends beyond a single physical location.  Simple modeling
can be done by checking reachability from the devices performing
summarization under the condition of a link or pathway failure
between a set of devices in every tier, as well as to the WAN routers
if external connectivity is present (a sketch of such a check appears
at the end of Section 8.2.1).

Route summarization would be possible with a small modification to
the network topology, though the trade-off would be a reduction of
the total size of the network as well as network congestion under
specific failures.  This approach is very similar to the technique
described in Section 5.2.5 above, which allows Border Routers to
summarize the entire data center address space.

8.2.1. Collapsing Tier-1 Devices Layer

In order to add more paths between Tier-1 and Tier-3 devices, group
Tier-2 devices into pairs, and then connect the pairs to the same
group of Tier-1 devices.  This is logically equivalent to
"collapsing" Tier-1 devices into a group of half the size, merging
the links on the "collapsed" devices.  The result is illustrated in
Figure 6.  For example, in this topology DEV C and DEV D connect to
the same set of Tier-1 devices (DEV 1 and DEV 2), whereas before they
were connecting to different groups of Tier-1 devices.

                 Tier-2       Tier-1       Tier-2
                 +-----+      +-----+      +-----+
   +-------------| DEV |------| DEV |------|     |-------------+
   |       +-----|  C  |--++--|  1  |--++--|     |-----+       |
   |       |     +-----+  ||  +-----+  ||  +-----+     |       |
   |       |              ||           ||              |       |
   |       |     +-----+  ||  +-----+  ||  +-----+     |       |
   | +-----+-----| DEV |--++--| DEV |--++--|     |-----+-----+ |
   | |     | +---|  D  |------|  2  |------|     |---+ |     | |
   | |     | |   +-----+      +-----+      +-----+   | |     | |
   | |     | |                                       | |     | |
   +-----+ +-----+                               +-----+ +-----+
   | DEV | | DEV |                               |     | |     |
   |  A  | |  B  |      Tier-3       Tier-3      |     | |     |
   +-----+ +-----+                               +-----+ +-----+
     | |     | |                                   | |     | |
     O O     O O           <- Servers ->           O O     O O

                 Figure 6: 5-Stage Clos topology

With this design in place, Tier-2 devices may be configured to
advertise only a default route down to Tier-3 devices.  If a link
between a Tier-2 and a Tier-3 device fails, the traffic will be
re-routed via the second available path known to the Tier-2 switch.
It is still not possible to advertise a summary route covering
prefixes for a single cluster from Tier-2 devices, since each of them
has only a single path down to any given prefix; accomplishing that
would require dual-homed servers.  Also note that this design is only
resilient to a single link failure.  It is possible for a double link
failure to isolate a Tier-2 device from all paths toward a specific
Tier-3 device, thus causing a routing black-hole.

A result of the proposed topology modification would be a reduction
of Tier-1 device port capacity.  This limits the maximum number of
attached Tier-2 devices and therefore will limit the maximum DC
network size.  A larger network would require different Tier-1
devices that have higher port density to implement this change.

Another problem is traffic re-balancing under link failures.  Since
there are only two paths from Tier-1 to Tier-3, a failure of the link
between a Tier-1 and a Tier-2 switch would result in all traffic that
was taking the failed link switching to the remaining path.  This
results in a doubling of the link utilization on the remaining link.
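The kind of modeling suggested in Section 8.2 can be approximated
with a simple reachability check over the topology graph.  The
hypothetical Python sketch below encodes the adjacency of Figure 6
and confirms that any single link failure leaves DEV C with a path to
DEV A, while some double link failures do not; device names follow
the figure, and the code is illustrative only:

   # Sketch: reachability modeling for the topology of Figure 6.
   from itertools import combinations

   links = {frozenset(p) for p in [
       ("DEV-A", "DEV-C"), ("DEV-A", "DEV-D"),
       ("DEV-B", "DEV-C"), ("DEV-B", "DEV-D"),
       ("DEV-C", "DEV-1"), ("DEV-C", "DEV-2"),
       ("DEV-D", "DEV-1"), ("DEV-D", "DEV-2"),
   ]}

   def reachable(links, src, dst):
       # Simple iterative graph search over the remaining links.
       frontier, seen = [src], {src}
       while frontier:
           node = frontier.pop()
           if node == dst:
               return True
           for link in links:
               if node in link:
                   (neighbor,) = link - {node}
                   if neighbor not in seen:
                       seen.add(neighbor)
                       frontier.append(neighbor)
       return False

   # Any single link failure leaves DEV C with a path to DEV A ...
   assert all(reachable(links - {f}, "DEV-C", "DEV-A") for f in links)

   # ... but some double link failures isolate DEV A completely, which
   # is where a summary advertised by DEV C would black-hole traffic.
   broken = [pair for pair in combinations(links, 2)
             if not reachable(links - set(pair), "DEV-C", "DEV-A")]
   print("isolating double failures:", broken)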
8.2.2. Simple Virtual Aggregation

A completely different approach to route summarization is possible,
provided that the main goal is to reduce the FIB pressure, while
allowing the control plane to disseminate full routing information.
Firstly, it could easily be noted that in many cases multiple
prefixes, some of which are less specific, share the same set of
next-hops (the same ECMP group).  For example, looking from the
perspective of a Tier-3 device, all routes learned from the upstream
Tier-2 devices, including the default route, will share the same set
of BGP next-hops, provided that there are no failures in the network.
This makes it possible to use a technique similar to the one
described in [RFC6769] and only install the least specific route in
the FIB, ignoring more specific routes if they share the same
next-hop set.  For example, under normal network conditions, only the
default route needs to be programmed into the FIB.

Furthermore, if the Tier-2 devices are configured with summary
prefixes covering all of their attached Tier-3 devices' prefixes, the
same logic could be applied on Tier-1 devices as well and, by
induction, on Tier-2/Tier-3 switches in different clusters.  These
summary routes should still allow more specific prefixes to leak to
Tier-1 devices, to enable detection of mismatches in the next-hop
sets if a particular link fails and changes the next-hop set for a
specific prefix.

Re-stating once again, this technique does not reduce the amount of
control plane state (i.e., BGP UPDATE messages and BGP Loc-RIB size),
but only allows for more efficient FIB utilization, by spotting more
specific prefixes that share their next-hops with a less specific
prefix.
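To make the installation rule concrete, the following hypothetical
Python sketch installs a prefix in the FIB only when its next-hop set
differs from that of its closest covering route; prefixes and
next-hops are examples, and the logic is a simplification of the
approach described in [RFC6769]:

   # Sketch: install a more specific prefix only when its next-hop set
   # differs from the one of its closest covering route.
   import ipaddress

   def compress(rib):
       # Return the prefixes that actually need FIB entries, walking
       # from least specific to most specific.
       fib = {}
       for prefix, nhops in sorted(
               rib.items(),
               key=lambda item: ipaddress.ip_network(item[0]).prefixlen):
           net = ipaddress.ip_network(prefix)
           covers = [p for p in rib
                     if p != prefix and net.subnet_of(ipaddress.ip_network(p))]
           if covers:
               parent = max(covers,
                            key=lambda p: ipaddress.ip_network(p).prefixlen)
               if rib[parent] == nhops:
                   continue   # same ECMP group: the covering route suffices
           fib[prefix] = nhops
       return fib

   uplinks = frozenset({"203.0.113.1", "203.0.113.2"})
   rib = {
       "0.0.0.0/0":   uplinks,                     # default via both Tier-2s
       "10.1.0.0/16": uplinks,                     # same ECMP group as default
       "10.2.3.0/24": frozenset({"203.0.113.1"}),  # changed by a link failure
   }
   print(sorted(compress(rib)))   # ['0.0.0.0/0', '10.2.3.0/24']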
8.3. ICMP Unreachable Message Masquerading

This section discusses some operational aspects of not advertising
point-to-point link subnets into BGP, as previously outlined as an
option in Section 5.2.3.  The operational impact of this decision can
be seen when using the well-known "traceroute" tool.  Specifically,
the IP addresses displayed by the tool will be the point-to-point
addresses of the links, and hence will be unreachable for management
connectivity.  This makes some troubleshooting more complicated.

One way to overcome this limitation is to use the DNS subsystem to
create "reverse" entries for all of the IP addresses of the same
device, pointing to a common name.  Connectivity can then be
established by resolving this name to the "primary" IP address of the
device, e.g., its loopback interface address, which is always
advertised into BGP.  However, this creates a dependency on the DNS
subsystem, which may be unavailable during an outage.

Another option is to make the network device perform IP address
masquerading, that is, rewriting the source IP addresses of the
appropriate ICMP messages sent by the device with the "primary" IP
address of the device.  Specifically, this applies to the ICMP
Destination Unreachable message (type 3), code 3 (port unreachable),
and the ICMP Time Exceeded message (type 11), code 0, which are
involved in the proper operation of the "traceroute" tool.  With this
modification, the "traceroute" probes sent to the devices will always
be sent back with the "primary" IP address as the source, allowing
the operator to discover the "reachable" IP address of the box.  This
has the downside of hiding the address of the "entry point" into the
device.

9. Security Considerations

The design does not introduce any additional security concerns.
General BGP security considerations are discussed in [RFC4271] and
[RFC4272].  Furthermore, the Generalized TTL Security Mechanism
[RFC5082] could be used to reduce the risk of BGP session spoofing.

10. IANA Considerations

This document includes no request to IANA.

11. Acknowledgements

This publication summarizes the work of many people who participated
in developing, testing, and deploying the proposed network design,
some of whom were George Chen, Parantap Lahiri, Dave Maltz, Edet
Nkposong, Robert Toomey, and Lihua Yuan.  The authors would also like
to thank Linda Dunbar, Susan Hares, Danny McPherson, Russ White, and
Robert Raszuk for reviewing the document and providing valuable
feedback, and Mary Mitchell for grammar and style suggestions.

12. References

12.1. Normative References

[RFC4271]  Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
           Border Gateway Protocol 4 (BGP-4)", RFC 4271,
           DOI 10.17487/RFC4271, January 2006.

[RFC6996]  Mitchell, J., "Autonomous System (AS) Reservation for
           Private Use", BCP 6, RFC 6996, DOI 10.17487/RFC6996,
           July 2013.

12.2. Informative References

[RFC2328]  Moy, J., "OSPF Version 2", STD 54, RFC 2328,
           DOI 10.17487/RFC2328, April 1998.

[RFC2992]  Hopps, C., "Analysis of an Equal-Cost Multi-Path
           Algorithm", RFC 2992, DOI 10.17487/RFC2992, November 2000.

[RFC4272]  Murphy, S., "BGP Security Vulnerabilities Analysis",
           RFC 4272, DOI 10.17487/RFC4272, January 2006.

[RFC4277]  McPherson, D. and K. Patel, "Experience with the BGP-4
           Protocol", RFC 4277, DOI 10.17487/RFC4277, January 2006.

[RFC4786]  Abley, J. and K. Lindqvist, "Operation of Anycast
           Services", BCP 126, RFC 4786, DOI 10.17487/RFC4786,
           December 2006.

[RFC5082]  Gill, V., Heasley, J., Meyer, D., Savola, P., Ed., and C.
           Pignataro, "The Generalized TTL Security Mechanism
           (GTSM)", RFC 5082, DOI 10.17487/RFC5082, October 2007.

[RFC5880]  Katz, D. and D. Ward, "Bidirectional Forwarding Detection
           (BFD)", RFC 5880, DOI 10.17487/RFC5880, June 2010.

[RFC6325]  Perlman, R., Eastlake 3rd, D., Dutt, D., Gai, S., and A.
           Ghanwani, "Routing Bridges (RBridges): Base Protocol
           Specification", RFC 6325, DOI 10.17487/RFC6325, July 2011.

[RFC6769]  Raszuk, R., Heitz, J., Lo, A., Zhang, L., and X. Xu,
           "Simple Virtual Aggregation (S-VA)", RFC 6769,
           DOI 10.17487/RFC6769, October 2012.

[RFC6774]  Raszuk, R., Ed., Fernando, R., Patel, K., McPherson, D.,
           and K. Kumaki, "Distribution of Diverse BGP Paths",
           RFC 6774, DOI 10.17487/RFC6774, November 2012.

[RFC6793]  Vohra, Q. and E. Chen, "BGP Support for Four-Octet
           Autonomous System (AS) Number Space", RFC 6793,
           DOI 10.17487/RFC6793, December 2012.

[RFC7130]  Bhatia, M., Ed., Chen, M., Ed., Boutros, S., Ed.,
           Binderberger, M., Ed., and J. Haas, Ed., "Bidirectional
           Forwarding Detection (BFD) on Link Aggregation Group (LAG)
           Interfaces", RFC 7130, DOI 10.17487/RFC7130, February
           2014.

[RFC7196]  Pelsser, C., Bush, R., Patel, K., Mohapatra, P., and O.
           Maennel, "Making Route Flap Damping Usable", RFC 7196,
           DOI 10.17487/RFC7196, May 2014.

[I-D.ietf-idr-add-paths]
           Walton, D., Retana, A., Chen, E., and J. Scudder,
           "Advertisement of Multiple Paths in BGP", draft-ietf-idr-
           add-paths-10 (work in progress), October 2014.

[I-D.ietf-idr-link-bandwidth]
           Mohapatra, P. and R. Fernando, "BGP Link Bandwidth
           Extended Community", draft-ietf-idr-link-bandwidth-06
           (work in progress), January 2013.

[CLOS1953] Clos, C., "A Study of Non-Blocking Switching Networks",
           Bell System Technical Journal, Vol. 32(2), March 1953.

[HADOOP]   Apache, "Apache Hadoop", July 2015.

[GREENBERG2009]
           Greenberg, A., Hamilton, J., and D. Maltz, "The Cost of a
           Cloud: Research Problems in Data Center Networks", January
           2009.

[IEEE8021D-1990]
           IEEE 802.1D, "IEEE Standard for Local and Metropolitan
           Area Networks--Media Access Control (MAC) Bridges", May
           1990.

[IEEE8021D-2004]
           IEEE 802.1D, "IEEE Standard for Local and Metropolitan
           Area Networks--Media Access Control (MAC) Bridges",
           February 2004.

[IEEE8021Q]
           IEEE 802.1Q, "IEEE Standard for Local and metropolitan
           area networks--Bridges and Bridged Networks", December
           2014.

[INTERCON] Dally, W. and B. Towles, "Principles and Practices of
           Interconnection Networks", ISBN 978-0122007514, January
           2004.

[ALFARES2008]
           Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable,
           Commodity Data Center Network Architecture", August 2008.

[IANA.AS]  IANA, "Autonomous System (AS) Numbers", July 2015.

[IEEE8023AD]
           IEEE 802.3ad, "IEEE Standard for Link aggregation for
           parallel links", October 2000.

[ALLOWASIN]
           Cisco Systems, "Allowas-in Feature in BGP Configuration
           Example", February 2015.

[REMOVE-PRIVATE-AS]
           Cisco Systems, "Removing Private Autonomous System
           Numbers in BGP", August 2005.

[CONDITIONALROUTE]
           Cisco Systems, "Configuring and Verifying the BGP
           Conditional Advertisement Feature", August 2005.

[FB4POST]  Farrington, N. and A. Andreyev, "Facebook's Data Center
           Network Architecture", May 2013.

[JAKMA2008]
           Jakma, P., "BGP Path Hunting", 2008.

[CONS-HASH]
           Wikipedia, "Consistent Hashing".

Authors' Addresses

Petr Lapukhov
Facebook
1 Hacker Way
Menlo Park, CA  94025
US

Email: petr@fb.com

Ariff Premji
Arista Networks
5453 Great America Parkway
Santa Clara, CA  95054
US

Email: ariff@arista.com
URI:   http://arista.com/

Jon Mitchell (editor)

Email: jrmitche@puck.nether.net