2 Routing Area Working Group P. Lapukhov
3 Internet-Draft Facebook
4 Intended status: Informational A. Premji
5 Expires: October 22, 2015 Arista Networks
6 J. Mitchell, Ed.
7 April 20, 2015
9 Use of BGP for routing in large-scale data centers
10 draft-ietf-rtgwg-bgp-routing-large-dc-02
12 Abstract
14 Some network operators build and operate data centers that support
15 over one hundred thousand servers. In this document, such data
16 centers are referred to as "large-scale" to differentiate them from
17 smaller infrastructures. Environments of this scale have a unique
18 set of network requirements with an emphasis on operational
19 simplicity and network stability. This document summarizes
20 operational experience in designing and operating large-scale data
21 centers using BGP as the only routing protocol. The intent is to
22 report on a proven and stable routing design that could be leveraged
23 by others in the industry.
25 Status of This Memo
27 This Internet-Draft is submitted in full conformance with the
28 provisions of BCP 78 and BCP 79.
30 Internet-Drafts are working documents of the Internet Engineering
31 Task Force (IETF). Note that other groups may also distribute
32 working documents as Internet-Drafts. The list of current Internet-
33 Drafts is at http://datatracker.ietf.org/drafts/current/.
35 Internet-Drafts are draft documents valid for a maximum of six months
36 and may be updated, replaced, or obsoleted by other documents at any
37 time. It is inappropriate to use Internet-Drafts as reference
38 material or to cite them other than as "work in progress."
40 This Internet-Draft will expire on October 22, 2015.
42 Copyright Notice
44 Copyright (c) 2015 IETF Trust and the persons identified as the
45 document authors. All rights reserved.
47 This document is subject to BCP 78 and the IETF Trust's Legal
48 Provisions Relating to IETF Documents
49 (http://trustee.ietf.org/license-info) in effect on the date of
50 publication of this document. Please review these documents
51 carefully, as they describe your rights and restrictions with respect
52 to this document. Code Components extracted from this document must
53 include Simplified BSD License text as described in Section 4.e of
54 the Trust Legal Provisions and are provided without warranty as
55 described in the Simplified BSD License.
57 Table of Contents
59 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
60 2. Network Design Requirements . . . . . . . . . . . . . . . . . 4
61 2.1. Bandwidth and Traffic Patterns . . . . . . . . . . . . . 4
62 2.2. CAPEX Minimization . . . . . . . . . . . . . . . . . . . 4
63 2.3. OPEX Minimization . . . . . . . . . . . . . . . . . . . . 5
64 2.4. Traffic Engineering . . . . . . . . . . . . . . . . . . . 5
65 2.5. Summarized Requirements . . . . . . . . . . . . . . . . . 5
66 3. Data Center Topologies Overview . . . . . . . . . . . . . . . 6
67 3.1. Traditional DC Topology . . . . . . . . . . . . . . . . . 6
68 3.2. Clos Network topology . . . . . . . . . . . . . . . . . . 7
69 3.2.1. Overview . . . . . . . . . . . . . . . . . . . . . . 7
70 3.2.2. Clos Topology Properties . . . . . . . . . . . . . . 8
71 3.2.3. Scaling the Clos topology . . . . . . . . . . . . . . 9
72 3.2.4. Managing the Size of Clos Topology Tiers . . . . . . 10
73 4. Data Center Routing Overview . . . . . . . . . . . . . . . . 10
74 4.1. Layer 2 Only Designs . . . . . . . . . . . . . . . . . . 11
75 4.2. Hybrid L2/L3 Designs . . . . . . . . . . . . . . . . . . 11
76 4.3. Layer 3 Only Designs . . . . . . . . . . . . . . . . . . 12
77 5. Routing Protocol Selection and Design . . . . . . . . . . . . 12
78 5.1. Choosing EBGP as the Routing Protocol . . . . . . . . . . 13
79 5.2. EBGP Configuration for Clos topology . . . . . . . . . . 14
80 5.2.1. Example ASN Scheme . . . . . . . . . . . . . . . . . 14
81 5.2.2. Private Use BGP ASNs . . . . . . . . . . . . . . . . 15
82 5.2.3. Prefix Advertisement . . . . . . . . . . . . . . . . 16
83 5.2.4. External Connectivity . . . . . . . . . . . . . . . . 17
84 5.2.5. Route Summarization at the Edge . . . . . . . . . . . 18
85 6. ECMP Considerations . . . . . . . . . . . . . . . . . . . . . 19
86 6.1. Basic ECMP . . . . . . . . . . . . . . . . . . . . . . . 19
87 6.2. BGP ECMP over Multiple ASNs . . . . . . . . . . . . . . . 20
88 6.3. Weighted ECMP . . . . . . . . . . . . . . . . . . . . . . 20
89 6.4. Consistent Hashing . . . . . . . . . . . . . . . . . . . 21
90 7. Routing Convergence Properties . . . . . . . . . . . . . . . 21
91 7.1. Fault Detection Timing . . . . . . . . . . . . . . . . . 21
92 7.2. Event Propagation Timing . . . . . . . . . . . . . . . . 22
93 7.3. Impact of Clos Topology Fan-outs . . . . . . . . . . . . 22
94 7.4. Failure Impact Scope . . . . . . . . . . . . . . . . . . 23
95 7.5. Routing Micro-Loops . . . . . . . . . . . . . . . . . . . 24
96 8. Additional Options for Design . . . . . . . . . . . . . . . . 25
97 8.1. Third-party Route Injection . . . . . . . . . . . . . . . 25
98 8.2. Route Summarization within Clos Topology . . . . . . . . 25
99 8.2.1. Collapsing Tier-1 Devices Layer . . . . . . . . . . . 26
100 8.2.2. Simple Virtual Aggregation . . . . . . . . . . . . . 27
101 8.3. ICMP Unreachable Message Masquerading . . . . . . . . . . 27
102 9. Security Considerations . . . . . . . . . . . . . . . . . . . 28
103 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28
104 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 28
105 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 29
106 12.1. Normative References . . . . . . . . . . . . . . . . . . 29
107 12.2. Informative References . . . . . . . . . . . . . . . . . 29
108 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 31
110 1. Introduction
112 This document describes a practical routing design that can be used
113 in a large-scale data center ("DC") design. Such data centers, also
114 known as hyper-scale or warehouse-scale data-centers, have a unique
115 attribute of supporting over a hundred thousand servers. In order to
116 accommodate networks of this scale, operators are revisiting
117 networking designs and platforms to address this need.
119 The design presented in this document is based on operational
120 experience with data centers built to support large scale distributed
121 software infrastructure, such as a Web search engine. The primary
122 requirements in such an environment are operational simplicity and
123 network stability so that a small group of people can effectively
124 support a significantly sized network.
126 After experimentation and extensive testing, Microsoft chose to use
127 an end-to-end routed network infrastructure with External BGP (EBGP)
128 [RFC4271] as the only routing protocol for some of its DC
129 deployments. This is in contrast with more traditional DC designs,
130 which may use simple tree topologies and rely on extending Layer 2
131 domains across multiple network devices. This document elaborates on
132 the requirements that led to this design choice and presents details
133 of the EBGP routing design as well as explores ideas for further
134 enhancements.
136 This document first presents an overview of network design
137 requirements and considerations for large-scale data centers. Then
138 traditional hierarchical data center network topologies are
139 contrasted with Clos networks that are horizontally scaled out. This
140 is followed by arguments for selecting EBGP with a Clos topology as
141 the most appropriate routing protocol to meet the requirements and
142 the proposed design is described in detail. Finally, the document
143 reviews some additional considerations and design options.
145 2. Network Design Requirements
147 This section describes and summarizes network design requirements for
148 large-scale data centers.
150 2.1. Bandwidth and Traffic Patterns
152 The primary requirement when building an interconnection network for
153 a large number of servers is to accommodate application bandwidth and
154 latency requirements. Until recently it was quite common to see the
155 majority of traffic entering and leaving the data center, commonly
156 referred to as "north-south" traffic. As a result, traditional
157 "tree" topologies were sufficient to accommodate such flows, even
158 with high oversubscription ratios between the layers of the network.
159 If more bandwidth was required, it was added by "scaling up" the
160 network elements, e.g. by upgrading the device's line-cards or
161 fabrics or replacing the device with one with higher port density.
163 Today many large-scale data centers host applications generating
164 significant amounts of server-to-server traffic, which does not
165 egress the DC, commonly referred to as "east-west" traffic. Examples
166 of such applications could be compute clusters such as Hadoop,
167 massive data replication between clusters needed by certain
168 applications, or virtual machine migrations. Scaling traditional
169 tree topologies to match these bandwidth demands becomes either too
170 expensive or impossible due to physical limitations, e.g. port
171 density in a switch.
173 2.2. CAPEX Minimization
175 The cost of the network infrastructure alone (CAPEX) constitutes
176 about 10-15% of total data center expenditure (see [GREENBERG2009]).
177 However, the absolute cost is significant, and hence there is a need
178 to constantly drive down the cost of individual network elements.
179 This can be accomplished in two ways:
181 o Unifying all network elements, preferably using the same hardware
182 type or even the same device. This allows for volume pricing on
183 bulk purchases.
185 o Driving costs down using competitive pressures, by introducing
186 multiple network equipment vendors.
188 In order to allow for good vendor diversity it is important to
189 minimize the software feature requirements for the network elements.
190 This strategy provides maximum flexibility of vendor equipment
191 choices while enforcing interoperability using open standards.
193 2.3. OPEX Minimization
195 Operating large-scale infrastructure can be expensive, given that a
196 larger amount of elements will statistically fail more often.
197 Having a simpler design and operating using a limited software
198 feature-set minimizes software issue-related failures.
200 An important aspect of OPEX minimization is reducing size of failure
201 domains in the network. Ethernet networks are known to be
202 susceptible to broadcast or unicast traffic storms that have dramatic
203 impact on network performance and availability. The use of a fully
204 routed design significantly reduces the size of the data-plane
205 failure domains - i.e. limits them to the lowest level in the network
206 hierarchy. However, such designs introduce the problem of
207 distributed control-plane failures. This observation calls for
208 simpler control-plane protocols that are expected to have less
209 chances of network meltdown. Minimizing software feature
210 requirements as described in the CAPEX section above also reduces
211 testing and training requirements.
213 2.4. Traffic Engineering
215 In any data center, application load-balancing is a critical function
216 performed by network devices. Traditionally, load-balancers are
217 deployed as dedicated devices in the traffic forwarding path. The
218 problem arises in scaling load-balancers under growing traffic
219 demand. A preferable solution would be able to scale the load-
220 balancing layer horizontally, by adding more uniform nodes and
221 distributing incoming traffic across these nodes. In a situation
222 like this, an ideal choice would be to use the network infrastructure itself
223 to distribute traffic across a group of load-balancers. The
224 combination of Anycast prefix advertisement [RFC4786] and Equal Cost
225 Multipath (ECMP) functionality can be used to accomplish this goal.
226 To allow for more granular load-distribution, it is beneficial for
227 the network to support the ability to perform controlled per-hop
228 traffic engineering. For example, it is beneficial to directly
229 control the ECMP next-hop set for Anycast prefixes at every level of
230 network hierarchy.
232 2.5. Summarized Requirements
234 This section summarizes the list of requirements outlined in the
235 previous sections:
237 o REQ1: Select a topology that can be scaled "horizontally" by
238 adding more links and network devices of the same type without
239 requiring upgrades to the network elements themselves.
241 o REQ2: Define a narrow set of software features/protocols supported
242 by a multitude of networking equipment vendors.
244 o REQ3: Choose a routing protocol that has a simple implementation
245 in terms of programming code complexity and ease of operational
246 support.
248 o REQ4: Minimize the failure domain of equipment or protocol issues
249 as much as possible.
251 o REQ5: Allow for traffic engineering, preferably via explicit
252 control of the routing prefix next-hop using built-in protocol
253 mechanics.
255 3. Data Center Topologies Overview
257 This section provides an overview of two general types of data center
258 designs - hierarchical (also known as tree based) and Clos based
259 network designs.
261 3.1. Traditional DC Topology
263 In the networking industry, a common design choice for data centers
264 typically looks like an (upside-down) tree with redundant uplinks and
265 three layers of hierarchy, namely core, aggregation/distribution, and
266 access layers (see Figure 1). To accommodate bandwidth demands, each
267 higher layer, from server towards DC egress or WAN, has higher port
268 density and bandwidth capacity where the core functions as the
269 "trunk" of the tree based design. To keep terminology uniform and
270 for comparison with other designs, in this document these layers will
271 be referred to as Tier-1, Tier-2 and Tier-3 "tiers" instead of Core,
272 Aggregation or Access layers.
274 +------+ +------+
275 | | | |
276 | |--| | Tier-1
277 | | | |
278 +------+ +------+
279 | | | |
280 +---------+ | | +----------+
281 | +-------+--+------+--+-------+ |
282 | | | | | | | |
283 +----+ +----+ +----+ +----+
284 | | | | | | | |
285 | |-----| | | |-----| | Tier-2
286 | | | | | | | |
287 +----+ +----+ +----+ +----+
288 | | | |
289 | | | |
290 | +-----+ | | +-----+ |
291 +-| |-+ +-| |-+ Tier-3
292 +-----+ +-----+
293 | | | | | |
294 <- Servers -> <- Servers ->
296 Figure 1: Typical DC network topology
298 3.2. Clos Network topology
300 This section describes a common design for horizontally scalable
301 topology in large scale data centers in order to meet REQ1.
303 3.2.1. Overview
305 A common choice for a horizontally scalable topology is a folded Clos
306 topology, sometimes called "fat-tree" (see, for example, [INTERCON]
307 and [ALFARES2008]). This topology features an odd number of stages
308 (sometimes known as dimensions) and is commonly made of uniform
309 elements, e.g. network switches with the same port count. Therefore,
310 the choice of folded Clos topology satisfies REQ1 and facilitates
311 REQ2. See Figure 2 below for an example of a folded 3-stage Clos
312 topology (3 stages counting Tier-2 stage twice, when tracing a packet
313 flow):
315 +-------+
316 | |----------------------------+
317 | |------------------+ |
318 | |--------+ | |
319 +-------+ | | |
320 +-------+ | | |
321 | |--------+---------+-------+ |
322 | |--------+-------+ | | |
323 | |------+ | | | | |
324 +-------+ | | | | | |
325 +-------+ | | | | | |
326 | |------+-+-------+-+-----+ | |
327 | |------+-+-----+ | | | | |
328 | |----+ | | | | | | | |
329 +-------+ | | | | | | ---------> M links
330 Tier-1 | | | | | | | | |
331 +-------+ +-------+ +-------+
332 | | | | | |
333 | | | | | | Tier-2
334 | | | | | |
335 +-------+ +-------+ +-------+
336 | | | | | | | | |
337 | | | | | | ---------> N Links
338 | | | | | | | | |
339 O O O O O O O O O Servers
341 Figure 2: 3-Stage Folded Clos topology
343 This topology is often also referred to as a "Leaf and Spine"
344 network, where "Spine" is the name given to the middle stage of the
345 Clos topology (Tier-1) and "Leaf" is the name of the input/output stage
346 (Tier-2). For uniformity, this document will refer to these layers
347 using the "Tier-n" notation.
349 3.2.2. Clos Topology Properties
351 The following are some key properties of the Clos topology:
353 o The topology is fully non-blocking (or more accurately: non-
354 interfering) if M >= N and oversubscribed by a factor of N/M
355 otherwise. Here M and N are the uplink and downlink port counts,
356 respectively, for a Tier-2 switch as shown in Figure 2 (see the
sketch after this list).
358 o Utilizing this topology requires control and data planes supporting
359 ECMP with a fan-out of M or more.
361 o Tier-1 switches have exactly one path to every server in this
362 topology. This is an important property that makes route
363 summarization impossible in this topology (see Section 8.2 below).
365 o Traffic flowing from server to server is load-balanced over all
366 available paths using ECMP.
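As an illustration of the non-blocking property above, the short Python
sketch below (a hypothetical helper, not part of any router implementation)
computes the oversubscription factor of a Tier-2 stage from its uplink and
downlink port counts:

      def tier2_oversubscription(m_uplinks, n_downlinks):
          """Return the oversubscription factor of a Tier-2 stage.

          A value of 1.0 or less means the stage is non-blocking
          (non-interfering); a value above 1.0 means downlink capacity
          exceeds uplink capacity by that ratio.
          """
          return n_downlinks / float(m_uplinks)

      # 48 server-facing ports and 16 uplinks -> 3:1 oversubscription
      assert tier2_oversubscription(16, 48) == 3.0
      # Equal uplink and downlink counts -> non-blocking (factor 1.0)
      assert tier2_oversubscription(32, 32) == 1.0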
368 3.2.3. Scaling the Clos topology
370 A Clos topology can be scaled either by increasing network element
371 port density or adding more stages, e.g. moving to a 5-stage Clos, as
372 illustrated in Figure 3 below:
374 Tier-1
375 +-----+
376 | |
377 +--| |--+
378 | +-----+ |
379 Tier-2 | | Tier-2
380 +-----+ | +-----+ | +-----+
381 +-------------| DEV |--+--| |--+--| |-------------+
382 | +-----| C |--+ | | +--| |-----+ |
383 | | +-----+ +-----+ +-----+ | |
384 | | | |
385 | | +-----+ +-----+ +-----+ | |
386 | +-----+-----| DEV |--+ | | +--| |-----+-----+ |
387 | | | +---| D |--+--| |--+--| |---+ | | |
388 | | | | +-----+ | +-----+ | +-----+ | | | |
389 | | | | | | | | | |
390 +-----+ +-----+ | +-----+ | +-----+ +-----+
391 | DEV | | DEV | +--| |--+ | | | |
392 | A | | B | Tier-3 | | Tier-3 | | | |
393 +-----+ +-----+ +-----+ +-----+ +-----+
394 | | | | | | | |
395 O O O O O O O O
396 Servers Servers
398 Figure 3: 5-Stage Clos topology
400 The small example topology on Figure 3 is built from devices with a
401 port count of 4 and provides full bisectional bandwidth to all
402 connected servers. In this document, one set of directly connected
403 Tier-2 and Tier-3 devices along with their attached servers will be
404 referred to as a "cluster". For example, DEV A, B, C, D, and the
405 servers that connect to DEV A and B, on Figure 3 form a cluster.
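The relationship between device port counts, cluster size and total server
count can be expressed with simple arithmetic. The Python sketch below is
only an illustrative model (the parameter names are hypothetical); it
assumes every Tier-2 device connects exactly once to every Tier-1 device
and ignores the parallel-link variation discussed in Section 3.2.4:

      def clos_capacity(t3_down, t3_up, t2_down, t2_up, t1_ports):
          """Model a 5-stage folded Clos built from the given port splits.

          t3_down, t3_up -- server-facing and uplink ports on a Tier-3 device
          t2_down, t2_up -- Tier-3-facing and Tier-1-facing ports on a Tier-2 device
          t1_ports       -- total ports on a Tier-1 device
          """
          tier2_per_cluster = t3_up      # each Tier-3 connects to all Tier-2 in its cluster
          tier3_per_cluster = t2_down    # each Tier-2 connects to all Tier-3 in its cluster
          servers_per_cluster = tier3_per_cluster * t3_down
          tier1_devices = t2_up          # each Tier-2 connects once to every Tier-1
          max_clusters = t1_ports // tier2_per_cluster
          return (tier1_devices, max_clusters,
                  servers_per_cluster, max_clusters * servers_per_cluster)

      # Figure 3 example: 4-port devices, ports split evenly up/down
      print(clos_capacity(t3_down=2, t3_up=2, t2_down=2, t2_up=2, t1_ports=4))
      # -> (2, 2, 4, 8): two Tier-1 devices, two clusters of four servers each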
407 In practice, the Tier-3 layer of the network, which typically consists
408 of top of rack switches (ToRs), is where oversubscription is introduced to
409 allow for packaging of more servers in the data center while meeting
410 the bandwidth requirements for different types of applications. The
411 main reason to limit oversubscription at a single layer of the
412 network is to simplify application development that would otherwise
413 need to account for multiple bandwidth pools: within rack (Tier-3),
414 between racks (Tier-2), and between clusters (Tier-1). Since
415 oversubscription does not have a direct relationship to the routing
416 design it is not discussed further in this document.
418 3.2.4. Managing the Size of Clos Topology Tiers
420 If a data-center network size is small, it is possible to reduce the
421 number of switches in Tier-1 or Tier-2 of Clos topology by a power of
422 two. To understand how this could be done, take Tier-1 as an
423 example. Every Tier-2 device connects to a single group of Tier-1
424 devices. If half of the ports on each of the Tier-1 devices are not
425 being used then it is possible to reduce the number of Tier-1 devices
426 by half and simply map two uplinks from a Tier-2 device to the same
427 Tier-1 device that were previously mapped to different Tier-1
428 devices. This technique maintains the same bisectional bandwidth
429 while reducing the number of elements in the Tier-1 layer, thus
430 saving on CAPEX. The tradeoff, in this example, is the reduction of
431 maximum DC size in terms of overall server count by half.
433 In this example, Tier-2 devices will be using two parallel links to
434 connect to each Tier-1 device. If one of these links fails, the
435 other will pick up all traffic of the failed link, possibly resulting
436 in heavy congestion and quality of service degradation if the path
437 determination procedure does not take bandwidth amounts into account.
438 To avoid this situation, parallel links can be grouped in link
439 aggregation groups (LAGs, such as [IEEE8023AD]) with widely available
440 implementation settings that take the whole "bundle" down upon a
441 single link failure. Equivalent techniques that enforce "fate
442 sharing" on the parallel links can be used in place of LAGs to
443 achieve the same effect. As a result of such fate-sharing, traffic
444 from two or more failed links will be re-balanced over the multitude
445 of remaining paths that equals the number of Tier-1 devices. This
446 example uses two links for simplicity; it should be noted that
447 having more links in a bundle will lessen the impact on capacity upon
448 a member-link failure.
450 4. Data Center Routing Overview
452 This section provides an overview of three general types of data
453 center protocol designs - Layer 2 only, Hybrid L2/L3 and Layer 3
454 only.
456 4.1. Layer 2 Only Designs
458 Originally most data center designs used Spanning-Tree Protocol (STP)
459 for loop free topology creation, typically utilizing variants of the
460 traditional DC topology described in Section 3.1. At the time, many
461 DC switches either did not support Layer 3 routed protocols or
462 supported them with additional licensing fees, which played a part in
463 the design choice. Although many enhancements have been made through
464 the introduction of the Rapid Spanning Tree Protocol and the Multiple
465 Spanning Tree Protocol that improve convergence, stability and load
466 balancing in larger topologies, many of the fundamentals of the
467 protocol limit its applicability in large-scale DCs.
468 newer variants use an active/standby approach to path selection and
469 are therefore hard to deploy in horizontally scaled topologies
470 described in Section 3.2. Further, operators have had many
471 experiences with large failures due to issues caused by improper
472 cabling, misconfiguration, or flawed software on a single device.
473 These failures regularly affected the entire spanning-tree domain and
474 were very hard to troubleshoot due to the nature of the protocol.
475 For these reasons, and since almost all DC traffic is now IP,
476 therefore requiring a Layer 3 routing protocol at the network edge
477 for external connectivity, designs utilizing STP usually fail all of
478 the requirements of large scale DC operators. Various enhancements
479 to link-aggregation protocols such as [IEEE8023AD], generally known
480 as Multi-Chassis Link-Aggregation (M-LAG) made it possible to use
481 Layer 2 designs with active-active network paths while relying on STP
482 as the backup for loop prevention. The major downside of this
483 approach is proprietary nature of such extensions.
485 It should be noted that building large, horizontally scalable, Layer
486 2 only networks without STP has recently become possible through the
487 introduction of TRILL [RFC6325]. TRILL resolves many of the issues
488 STP has for large-scale DC design; however, the current maturity of
489 the protocol, the limited number of implementations, and the
490 requirement for new equipment that supports it have limited its
491 applicability and increased the cost of such designs.
493 Finally, neither the TRILL nor the M-LAG approach eliminates the
494 fundamental problem of the shared broadcast domain that is so
495 detrimental to the operation of any Layer 2, Ethernet-based solution.
497 4.2. Hybrid L2/L3 Designs
499 Operators have sought to limit the impact of data-plane faults and
500 build larger scale topologies through implementing routing protocols
501 in either the Tier-1 or Tier-2 parts of the network and dividing the
502 Layer-2 domain into numerous, smaller domains. This design has
503 allowed data centers to scale up, but at the cost of the complexity
504 of managing multiple protocols in the network. For the following reasons,
505 operators have retained Layer 2 in either the access (Tier-3) or both
506 access and aggregation (Tier-3 and Tier-2) parts of the network:
508 o Supporting legacy applications that may require direct Layer 2
509 adjacency or use non-IP protocols.
511 o Seamless mobility for virtual machines that require the
512 preservation of IP addresses when a virtual machine moves to a
513 different Tier-3 switch.
515 o Simplified IP addressing, i.e. fewer IP subnets are required for
516 the data center.
518 o Application load-balancing may require direct Layer 2 reachability
519 to perform certain functions such as Layer 2 Direct Server Return
520 (DSR).
522 o Continued CAPEX differences between Layer-2 and Layer-3 capable
523 switches.
525 4.3. Layer 3 Only Designs
527 Network designs that leverage IP routing down to Tier-3 of the
528 network have gained popularity as well. The main benefit of these
529 designs is improved network stability and scalability, as a result of
530 confining L2 broadcast domains. Commonly an IGP such as OSPF
531 [RFC2328] is used as the primary routing protocol in such a design.
532 As data centers grow in scale, and server count exceeds tens of
533 thousands, such fully routed designs have become more attractive.
535 Choosing a Layer 3 only design greatly simplifies the network,
536 facilitating the meeting of REQ1 and REQ2, and has widespread
537 adoption in networks where large Layer 2 adjacency and larger size
538 Layer 3 subnets are not as critical compared to network scalability
539 and stability. Application providers and network operators continue
540 to also develop new solutions to meet some of the requirements that
541 previously have driven large Layer 2 domains.
543 5. Routing Protocol Selection and Design
545 In this section the motivations for using External BGP (EBGP) as the
546 single routing protocol for data center networks having a Layer 3
547 protocol design and Clos topology are reviewed. Then, a practical
548 approach for designing an EBGP based network is provided.
550 5.1. Choosing EBGP as the Routing Protocol
552 REQ2 would give preference to the selection of a single routing
553 protocol to reduce complexity and interdependencies. While it is
554 common to rely on an IGP in this situation, sometimes with either the
555 addition of EBGP at the device bordering the WAN or Internal BGP
556 (IBGP) throughout, this document proposes the use of an EBGP only
557 design.
559 Although EBGP is the protocol used for almost all inter-provider
560 routing on the Internet and has wide support from both vendor and
561 service provider communities, it is not generally deployed as the
562 primary routing protocol within the data center for a number of
563 reasons (some of which are interrelated):
565 o BGP is perceived as a "WAN-only protocol" and is not often
566 considered for enterprise or data center applications.
568 o BGP is believed to have a "much slower" routing convergence
569 compared to IGPs.
571 o BGP deployment within an Autonomous System typically assumes the
572 presence of an IGP for next-hop resolution.
574 o BGP is perceived to require significant configuration overhead and
575 does not support neighbor auto-discovery.
577 This document discusses some of these perceptions, especially as
578 applicable to the proposed design, and highlights some of the
579 advantages of using the protocol such as:
581 o BGP has less complexity within its protocol design - internal data
582 structures and state-machines are simpler when compared to a link-
583 state IGP such as OSPF. For example, instead of implementing
584 adjacency formation, adjacency maintenance and/or flow-control,
585 BGP simply relies on TCP as the underlying transport. This
586 fulfills REQ2 and REQ3.
588 o BGP information flooding overhead is less when compared to link-
589 state IGPs. Since every BGP router calculates and propagates only
590 the best-path selected, a network failure is masked as soon as the
591 BGP speaker finds an alternate path, which exists when highly
592 symmetric topologies, such as Clos, are coupled with an EBGP-only
593 design. In contrast, the event propagation scope of a link-state
594 IGP is an entire area, regardless of the failure type. This meets
595 REQ3 and REQ4. It is worth mentioning that all widely deployed
596 link-state IGPs also feature periodic refreshes of routing
597 information, while BGP does not expire routing state, even if this
598 rarely causes significant impact to modern router control planes.
600 o BGP supports third-party (recursively resolved) next-hops. This
601 allows for manipulating multipath to be non-ECMP based, or for
602 forwarding based on application-defined paths, through the
603 establishment of a peering session with an application
604 "controller" which can inject routing information into the system,
605 satisfying REQ5. OSPF provides similar functionality using
606 concepts such as "Forwarding Address", but with more difficulty in
607 implementation and lack of protocol simplicity.
609 o Using a well-defined BGP ASN allocation scheme and standard
610 AS_PATH loop detection, "BGP path hunting" (see [JAKMA2008]) can
611 be controlled and complex unwanted paths will be ignored. See
612 Section 5.2 for an example of a working BGP ASN allocation scheme.
613 In a link-state IGP accomplishing the same goal would require
614 multi-(instance/topology/processes) support, typically not
615 available in all DC devices and quite complex to configure and
616 troubleshoot. Using a traditional single flooding domain, which
617 most DC designs utilize, under certain failure conditions may pick
618 up unwanted lengthy paths, e.g. traversing multiple Tier-2
619 devices.
621 o EBGP configuration that is implemented with minimal routing policy
622 is easier to troubleshoot for network reachability issues. In
623 most implementations, it is straightforward to view contents of
624 BGP Loc-RIB and compare it to the router's RIB. Also every BGP
625 neighbor has corresponding Adj-RIB-In and Adj-RIB-Out structures
626 with incoming and outgoing NLRI information that can be easily
627 correlated on both sides of a BGP session. Thus, BGP satisfies
628 REQ3.
630 5.2. EBGP Configuration for Clos topology
632 Clos topologies that have more than 5 stages are very uncommon due to
633 the large numbers of interconnects required by such a design.
634 Therefore, the examples below are made with reference to the 5-stage
635 Clos topology (5 stages in unfolded state).
637 5.2.1. Example ASN Scheme
639 The diagram below illustrates an example ASN allocation scheme. The
640 following is a list of guidelines that can be used:
642 o Only EBGP sessions are established, over direct point-to-point
643 links interconnecting the network nodes.
645 o 16-bit (two octet) BGP ASNs are used, since these are widely
646 supported and have better vendor interoperability.
648 o Private BGP ASNs from the range 64512-65534 are used so as to
649 avoid ASN conflicts.
651 o A single BGP ASN is allocated to all of the Clos topology's Tier-1
652 devices.
654 o A unique BGP ASN is allocated to each group of Tier-2 devices.
656 o A unique BGP ASN is allocated to every Tier-3 device (e.g. ToR) in
657 this topology.
659 ASN 65534
660 +---------+
661 | +-----+ |
662 | | | |
663 +-|-| |-|-+
664 | | +-----+ | |
665 ASN 646XX | | | | ASN 646XX
666 +---------+ | | | | +---------+
667 | +-----+ | | | +-----+ | | | +-----+ |
668 +-----------|-| |-|-+-|-| |-|-+-|-| |-|-----------+
669 | +---|-| |-|-+ | | | | +-|-| |-|---+ |
670 | | | +-----+ | | +-----+ | | +-----+ | | |
671 | | | | | | | | | |
672 | | | | | | | | | |
673 | | | +-----+ | | +-----+ | | +-----+ | | |
674 | +-----+---|-| |-|-+ | | | | +-|-| |-|---+-----+ |
675 | | | +-|-| |-|-+-|-| |-|-+-|-| |-|-+ | | |
676 | | | | | +-----+ | | | +-----+ | | | +-----+ | | | | |
677 | | | | +---------+ | | | | +---------+ | | | |
678 | | | | | | | | | | | |
679 +-----+ +-----+ | | +-----+ | | +-----+ +-----+
680 | ASN | | | +-|-| |-|-+ | | | |
681 |65YYY| | ... | | | | | | ... | | ... |
682 +-----+ +-----+ | +-----+ | +-----+ +-----+
683 | | | | +---------+ | | | |
684 O O O O <- Servers -> O O O O
686 Figure 4: BGP ASN layout for 5-stage Clos
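The allocation logic of the guidelines above can be sketched as follows;
the specific numeric ranges are merely an example matching Figure 4, and an
operator would pick their own:

      def allocate_asns(num_tier2_groups, num_tier3_devices):
          """Assign Private Use 16-bit ASNs per the example scheme: one ASN
          shared by all Tier-1 devices, one ASN per Tier-2 group, and one
          ASN per Tier-3 device."""
          tier1_asn = 65534                                          # whole Tier-1 layer
          tier2_asns = [64600 + i for i in range(num_tier2_groups)]  # "646XX" in Figure 4
          tier3_asns = [65001 + i for i in range(num_tier3_devices)] # "65YYY" in Figure 4

          for asn in [tier1_asn] + tier2_asns + tier3_asns:
              assert 64512 <= asn <= 65534, "outside the Private Use range"
          return tier1_asn, tier2_asns, tier3_asns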
688 5.2.2. Private Use BGP ASNs
690 The original range of Private Use BGP ASNs [RFC6996] limited
691 operators to 1023 unique ASNs. Since it is quite likely that the
692 number of network devices may exceed this number, a workaround is
693 required. One approach is to re-use the ASNs assigned to the Tier-3
694 devices across different clusters. For example, Private Use BGP ASNs
695 65001, 65002 ... 65032 could be used within every individual cluster
696 and assigned to Tier-3 devices.
698 To avoid route suppression due to the AS_PATH loop detection
699 mechanism in BGP, upstream EBGP sessions on Tier-3 devices must be
700 configured with the "AllowAS In" feature that allows accepting a
701 device's own ASN in received route advertisements. Introducing this
702 feature does not create an opportunity for routing loops under
703 misconfiguration since the AS_PATH is always incremented when routes
704 are propagated between topology tiers. Loop protection is also in
705 place at the Tier-1 device, which does not accept routes with a path
706 including its own ASN.
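The interplay between the standard AS_PATH check and the "AllowAS In" knob
can be summarized with the following simplified model of the receive-side
check (not an actual implementation):

      def accept_route(as_path, my_asn, allowas_in=0):
          """Return True if a received route passes AS_PATH loop detection.

          By default (allowas_in=0) the route is rejected if the receiver's
          own ASN appears in the AS_PATH at all; with "AllowAS In" it is
          accepted as long as the own ASN appears at most allowas_in times.
          """
          return as_path.count(my_asn) <= allowas_in

      # Tier-3 device in ASN 65001 receiving a path that traversed another
      # cluster re-using the same ASN (Tier-2, Tier-1, remote Tier-2/Tier-3):
      path = [65002, 65534, 64601, 65001]
      assert accept_route(path, 65001) is False                 # default: dropped
      assert accept_route(path, 65001, allowas_in=1) is True    # AllowAS In: kept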
708 Another solution to this problem would be using four-octet BGP ASNs
709 ([RFC6793]), where additional Private Use ASNs are available, see
710 [IANA.AS]. Use of four-octet BGP ASNs puts additional protocol
711 complexity in the BGP implementation and so should be weighed against
712 the complexity of ASN re-use when considering REQ3 and REQ4. Perhaps
713 more importantly, four-octet ASNs are not yet supported by all BGP
714 implementations, which may limit vendor selection of DC equipment.
716 5.2.3. Prefix Advertisement
718 A Clos topology features a large number of point-to-point links and
719 associated prefixes. Advertising all of these routes into BGP may
720 create FIB overload conditions in the network devices. Advertising
721 these links also puts additional path computation stress on the BGP
722 control plane for little benefit. There are two possible solutions:
724 o Do not advertise any of the point-to-point links into BGP. Since
725 the EBGP based design changes the next-hop address at every
726 device, distant networks will automatically be reachable via the
727 advertising EBGP peer and do not require reachability to these
728 prefixes. However, this may complicate operational
729 troubleshooting or monitoring systems if the addresses are not
730 reachable: e.g. using the popular "traceroute" tool will display
731 IP addresses that are not reachable.
733 o Advertise point-to-point links, but summarize them on every
734 device. This requires an address allocation scheme such as
735 allocating a consecutive block of IP addresses per Tier-1 and
736 Tier-2 device to be used for point-to-point interface addressing
737 to the lower layers (Tier-2 uplinks will be numbered out of Tier-1
738 addressing and so forth).
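A brief sketch of the second option above: if every Tier-2 device numbers
its point-to-point links out of one dedicated block, that block can be
advertised as a single summary. The addresses below are purely
hypothetical:

      import ipaddress

      # Hypothetical per-device allocation: one contiguous /26 per Tier-2
      # device, out of which its downlink point-to-point links are numbered.
      tier2_block = ipaddress.ip_network("10.254.0.0/26")

      # Carve /31s for the individual Tier-2 -> Tier-3 links.
      p2p_links = list(tier2_block.subnets(new_prefix=31))   # 32 subnets

      # Rather than advertising 32 /31s into BGP, the device advertises
      # only the covering block.
      print(list(ipaddress.collapse_addresses(p2p_links)))
      # -> [IPv4Network('10.254.0.0/26')]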
740 Server subnets on Tier-3 devices must be announced into BGP without
741 using route summarization on Tier-2 and Tier-1 devices. Summarizing
742 subnets in a Clos topology results in route black-holing under a
743 single link failure (e.g. between Tier-2 and Tier-3 devices) and
744 hence must be avoided. The use of peer links within the same tier to
745 resolve the black-holing problem by providing "bypass paths" is
746 undesirable due to O(N^2) complexity of the peering mesh and waste of
747 ports on the devices. An alternative to the full-mesh of peer-links
748 would be using a simpler bypass topology, e.g. a "ring" as described
749 in [FB4POST], but such a topology adds extra hops and has very
750 limited bisection bandwidth, in addition requiring special tweaks to
751 make BGP routing work - such as possibly splitting every device into
752 an ASN on its own. In Section 8.2 another, less intrusive, method
753 for performing a limited form of route summarization in Clos networks
754 and the associated trade-offs are described.
756 5.2.4. External Connectivity
758 A dedicated cluster (or clusters) in the Clos topology could be used
759 for the purpose of connecting to the Wide Area Network (WAN) edge
760 devices, or WAN Routers. Tier-3 devices in such cluster would be
761 replaced with WAN routers, and EBGP peering would be used again,
762 though WAN routers are likely to belong to a public ASN if Internet
763 connectivity is required in the design. The Tier-2 devices in such a
764 dedicated cluster will be referred to as "Border Routers" in this
765 document. These devices have to perform a few special functions:
767 o Hide network topology information when advertising paths to WAN
768 routers, i.e. remove Private BGP ASNs from the AS_PATH attribute.
769 This is typically done to avoid ASN collisions between
770 different data centers and also to provide a uniform AS_PATH
771 length to the WAN for purposes of WAN ECMP to Anycast prefixes
772 originated in the topology. An implementation specific BGP
773 feature typically called "Remove Private AS" is commonly used to
774 accomplish this. Depending on implementation, the feature should
775 strip a contiguous sequence of private ASNs found in AS_PATH
776 attribute prior to advertising the path to a neighbor. This
777 assumes that all BGP ASNs used for intra data center numbering
778 are from the private ASN range. The process for stripping the
779 private ASNs is not currently standardized, but most implementations
780 commonly follow the logic described in the vendor document
781 [REMOVE-PRIVATE-AS]; a sketch of this stripping behavior follows
this list.
783 o Originate a default route to the data center devices. This is the
784 only place where a default route can be originated, as route
785 summarization is risky for the "scale-out" topology.
786 Alternatively, Border Routers may simply relay the default route
787 learned from WAN routers. Advertising the default route from
788 Border Routers requires all Border Routers to be fully
789 connected to the WAN Routers upstream, to provide resistance to a
790 single-link failure causing the black-holing of traffic. To
791 prevent the chance of an operator or implementation error that may
792 impact EBGP sessions to the WAN routers simultaneously (although these
793 scenarios are not planned for by many operators since they
794 represent a multiple failure), it is more desirable to take this
795 approach rather than introducing complicated conditional default
796 origination schemes provided by some implementations.
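The stripping behavior referred to in the first bullet above can be modeled
roughly as below. This is only an approximation of what various
implementations do, not a normative definition:

      PRIVATE_2OCTET = range(64512, 65535)     # RFC 6996 Private Use ASNs

      def remove_private_as(as_path):
          """Strip the leading contiguous run of Private Use ASNs from an
          AS_PATH before it is advertised to an external (WAN) peer."""
          stripped = list(as_path)
          while stripped and stripped[0] in PRIVATE_2OCTET:
              stripped.pop(0)
          return stripped

      # AS_PATH of a server subnet as held on a Border Router, using the
      # example scheme (Tier-1, remote Tier-2, remote Tier-3):
      print(remove_private_as([65534, 64601, 65001]))   # -> []
      # Nothing DC-internal is exposed; the WAN router only sees whatever
      # ASN(s) the border prepends on the way out.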
798 5.2.5. Route Summarization at the Edge
800 It is often desirable to summarize network reachability information
801 prior to advertising it to the WAN network due to the high amount of IP
802 prefixes originated from within the data center in a fully routed
803 network design. For example, a network with 2000 Tier-3 devices will
804 have at least 2000 server subnets advertised into BGP, along with
805 the infrastructure or other prefixes. However, as discussed before,
806 the proposed network design does not allow for route summarization
807 due to the lack of peer links inside every tier.
809 However, it is possible to lift this restriction for the Border
810 Routers, by devising a different connectivity model for these
811 devices. There are two options possible:
813 o Interconnect the Border Routers using a full-mesh of physical
814 links or using any other "peer-mesh" topology, such as ring or
815 hub-and-spoke. Configure BGP accordingly on all Border Routers to
816 exchange network reachability information - e.g. by adding a mesh
817 of iBGP sessions. The interconnecting peer links need to be
818 appropriately sized for traffic that will be present in the case
819 of a device or link failure underneath the Border Routers.
821 o Tier-1 devices may have additional physical links provisioned
822 toward the Border Routers (which are Tier-2 devices from the
823 perspective of Tier-1). Specifically, if protection from a single
824 link or node failure is desired, each Tier-1 devices would have to
825 connect to at least two Border Routers. This puts additional
826 requirements on the port count for Tier-1 devices and Border
827 Routers, potentially making it a non-uniform, larger port count,
828 device with the other devices in the Clos. This also reduces the
829 number of ports available to "regular" Tier-2 switches and hence
830 the number of clusters that could be interconnected via Tier-1
831 layer.
833 If any of the above options are implemented, it is possible to
834 perform route summarization at the Border Routers toward the WAN
835 network core without risking a routing black-hole condition under a
836 single link failure. Both of the options would result in non-uniform
837 topology as additional links have to be provisioned on some network
838 devices.
840 6. ECMP Considerations
842 This section covers the Equal Cost Multipath (ECMP) functionality for
843 Clos topology and discusses a few special requirements.
845 6.1. Basic ECMP
847 ECMP is the fundamental load-sharing mechanism used by a Clos
848 topology. Effectively, every lower-tier device will use all of its
849 directly attached upper-tier devices to load-share traffic destined
850 to the same IP prefix. The number of ECMP paths between any two
851 Tier-3 devices in a Clos topology equals the number of the devices in
852 the middle stage (Tier-1). For example, Figure 5 illustrates the
853 topology where Tier-3 device A has four paths to reach servers X and
854 Y, via Tier-2 devices B and C and then Tier-1 devices 1, 2, 3, and 4
855 respectively.
857 Tier-1
858 +-----+
859 | DEV |
860 +->| 1 |--+
861 | +-----+ |
862 Tier-2 | | Tier-2
863 +-----+ | +-----+ | +-----+
864 +------------>| DEV |--+->| DEV |--+--| |-------------+
865 | +-----| B |--+ | 2 | +--| |-----+ |
866 | | +-----+ +-----+ +-----+ | |
867 | | | |
868 | | +-----+ +-----+ +-----+ | |
869 | +-----+---->| DEV |--+ | DEV | +--| |-----+-----+ |
870 | | | +---| C |--+->| 3 |--+--| |---+ | | |
871 | | | | +-----+ | +-----+ | +-----+ | | | |
872 | | | | | | | | | |
873 +-----+ +-----+ | +-----+ | +-----+ +-----+
874 | DEV | | | Tier-3 +->| DEV |--+ Tier-3 | | | |
875 | A | | | | 4 | | | | |
876 +-----+ +-----+ +-----+ +-----+ +-----+
877 | | | | | | | |
878 O O O O <- Servers -> X Y O O
880 Figure 5: ECMP fan-out tree from A to X and Y
882 The ECMP requirement implies that the BGP implementation must support
883 multi-path fan-out for up to the maximum number of devices directly
884 attached at any point in the topology in upstream or downstream
885 direction. Normally, this number does not exceed half of the ports
886 found on a device in the topology. For example, an ECMP fan-out of
887 32 would be required when building a Clos network using 64-port
888 devices. The Border Routers may need to have wider fan-out to be
889 able to connect to multitude of Tier-1 devices if route summarization
890 at Border Router level is implemented as described in Section 5.2.5.
891 If a device's hardware does not support wider ECMP, logical link-
892 grouping (link-aggregation at layer 2) could be used to provide
893 "hierarchical" ECMP (Layer 3 ECMP followed by Layer 2 ECMP) to
894 compensate for fan-out limitations. Such an approach, however,
895 increases the risk of flow polarization, as less entropy will be
896 available to the second stage of ECMP.
898 Most BGP implementations declare paths to be equal from an ECMP
899 perspective if they match up to and including step (e) of
900 Section 9.1.2.2 of [RFC4271]. In the proposed network design there
901 is no underlying IGP, so all IGP costs are assumed to be zero or
902 otherwise the same value across all paths and policies may be applied
903 as necessary to equalize BGP attributes that vary in vendor defaults,
904 as has been seen occasionally with MED and origin code. Routing
905 loops are unlikely due to the BGP best-path selection process, which
906 prefers paths with shorter AS_PATH length; longer paths through the
907 Tier-1 devices are also not possible, since those devices share one
908 ASN and do not accept their own ASN in the path.
910 6.2. BGP ECMP over Multiple ASNs
912 For application load-balancing purposes it is desirable to have the
913 same prefix advertised from multiple Tier-3 devices. From the
914 perspective of other devices, such a prefix would have BGP paths with
915 different AS_PATH attribute values, while having the same AS_PATH
916 attribute lengths. Therefore, BGP implementations must support load-
917 sharing over above-mentioned paths. This feature is sometimes known
918 as "multipath relax" and effectively allows for ECMP to be done
919 across different neighboring ASNs if all other attributes are equal
920 as described in the previous section.
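A simplified model of the comparison involved is sketched below, assuming
attributes earlier in the decision process already compare equal;
"multipath relax" then amounts to comparing only AS_PATH lengths rather
than (as is common by default) also requiring the same neighboring ASN:

      def ecmp_eligible(path_a, path_b, multipath_relax=False):
          """Decide whether two BGP paths (given here only as AS_PATH lists)
          may be installed together as ECMP next-hops, assuming local-pref,
          origin, MED, etc. already compare equal."""
          if len(path_a) != len(path_b):
              return False
          if multipath_relax:
              return True                    # equal AS_PATH length is enough
          return path_a[0] == path_b[0]      # commonly: same neighboring ASN required

      # The same anycast prefix seen on a Tier-1 device via two Tier-2
      # neighbors in different ASNs, originated by two Tier-3 devices:
      a = [64601, 65001]
      b = [64602, 65011]
      assert ecmp_eligible(a, b) is False
      assert ecmp_eligible(a, b, multipath_relax=True) is True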
922 6.3. Weighted ECMP
924 It may be desirable for the network devices to implement weighted
925 ECMP, to be able to send more traffic over some paths in ECMP fan-
926 out. This could be helpful to compensate for failures in the network
927 and send more traffic over paths that have more capacity. The
928 prefixes that require weighted ECMP would have to be injected using a
929 remote BGP speaker (central agent) over a multihop session as
930 described further in Section 8.1. If support in implementations is
931 available, weight-distribution for multiple BGP paths could be
932 signaled using the technique described in
933 [I-D.ietf-idr-link-bandwidth].
935 6.4. Consistent Hashing
937 It is often desirable to have the hashing function used for ECMP be
938 consistent (see [CONS-HASH]), to minimize the impact on flow-to-
939 next-hop affinity when a next-hop is added to or removed from an ECMP
940 group. This could be used if the network device is used as a load-
941 balancer, mapping flows toward multiple destinations - in this case,
942 losing or adding a destination will not have a detrimental effect on
943 currently established flows.
944 implementing consistent hashing is provided in [RFC2992], though
945 other implementations are possible. This functionality could be
946 naturally combined with weighted ECMP, with the impact of the next-
947 hop changes being proportional to the weight of the given next-hop.
948 Notice that the usual downside of consistent hashing is increased
949 load on hardware resource utilization, as typically more space is
950 required to implement a consistent-hashing region.
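One simple form of consistent hashing over a set of next-hops is a hash
ring with virtual nodes, sketched below; this is only illustrative of the
behavior described above and is not the specific scheme of [RFC2992]:

      import hashlib
      from bisect import bisect

      def _h(value):
          return int(hashlib.md5(value.encode()).hexdigest(), 16)

      def build_ring(next_hops, vnodes=100):
          """Map every next-hop onto many points of a hash ring."""
          return sorted((_h("%s#%d" % (nh, i)), nh)
                        for nh in next_hops for i in range(vnodes))

      def pick_next_hop(ring, flow_key):
          """Pick the owner of the first ring point at or after the flow's hash."""
          idx = bisect([point for point, _ in ring], _h(flow_key)) % len(ring)
          return ring[idx][1]

      ring = build_ring(["nh1", "nh2", "nh3", "nh4"])
      flow = "10.0.0.1:49152->10.8.0.5:80"
      before = pick_next_hop(ring, flow)
      # Removing one next-hop only remaps the flows that hashed to it; most
      # flows keep their next-hop, unlike a simple modulo-N hash.
      after = pick_next_hop(build_ring(["nh1", "nh2", "nh4"]), flow)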
952 7. Routing Convergence Properties
954 This section reviews routing convergence properties in the proposed
955 design. A case is made that sub-second convergence is achievable if
956 the implementation supports fast EBGP peering session deactivation
957 and timely RIB and FIB update upon failure of the associated link.
959 7.1. Fault Detection Timing
961 BGP typically relies on an IGP to route around link/node failures
962 inside an AS, and implements either a polling based or an event-
963 driven mechanism to obtain updates on IGP state changes. The
964 proposed routing design does not use an IGP, so the only mechanisms
965 that could be used for fault detection are BGP keep-alive process (or
966 any other type of keep-alive mechanism) and link-failure triggers.
968 Relying solely on BGP keep-alive packets may result in high
969 convergence delays, in the order of multiple seconds (on many BGP
970 implementations the minimum configurable BGP hold timer value is
971 three seconds). However, many BGP implementations can shut down
972 local EBGP peering sessions in response to the "link down" event for
973 the outgoing interface used for BGP peering. This feature is
974 sometimes called "fast fallover". Since links in modern data
975 centers are often point-to-point fiber connections, a physical
976 interface failure is often detected in milliseconds and subsequently
977 triggers a BGP re-convergence.
979 Ethernet technologies may support failure signaling or detection
980 standards such as [IEEE8021AG] and [IEEE8023AH], which may make
981 failure detection more robust. Alternatively, some platforms may
982 support Bidirectional Forwarding Detection (BFD) [RFC5880] to allow
983 for sub-second failure detection and fault signaling to the BGP
984 process. However, use of either of these presents additional
985 requirements to vendor software and possibly hardware, and may
986 contradict REQ1. Until recently, with [RFC7130], BFD also did not
987 allow detection of a single member link failure on a LAG, which would
988 limit its usefulness in some designs.
990 7.2. Event Propagation Timing
992 In this design the impact of BGP Minimum Route Advertisement Interval
993 (MRAI) timer (See section 9.2.1.1 of [RFC4271]) should be considered.
994 Per the standard, BGP implementations are required to space out
995 consecutive BGP UPDATE messages by at least MRAI seconds, which is
996 often a configurable value. The initial BGP UPDATE messages after an
997 event carrying withdrawn routes are commonly not affected by this
998 timer. The MRAI timer may present significant convergence delays
999 when a BGP speaker "waits" for the new path to be learned from its
1000 peers and has no local backup path information.
1002 In a Clos topology each EBGP speaker has either one path or N paths
1003 for the same prefix, where N is a significantly large number, e.g.
1004 N=32 (the ECMP fan-out). Therefore, if a path fails there is either
1005 no backup path at all, or the backup is readily available in BGP Loc-
1006 RIB. In the former case, the BGP withdrawal announcement will
1007 propagate un-delayed and trigger re-convergence on affected devices.
1008 In the latter case, the best-path will be re-evaluated and the local
1009 ECMP group corresponding to the new next-hop set changed. If the BGP
1010 path was the best-path selected previously, an "implicit withdraw"
1011 will be sent via a BGP UPDATE message as described as option b in
1012 Section 3.1 of [RFC4271] due to the BGP AS_PATH attribute changing.
1014 7.3. Impact of Clos Topology Fan-outs
1016 A Clos topology has large fan-outs, which may impact the "Up->Down"
1017 convergence in some cases, as described in this section. In a
1018 situation when a link between a Tier-3 and a Tier-2 device fails, the
1019 Tier-2 device will send a BGP WITHDRAW message to all upstream Tier-1
1020 devices, and Tier-1 devices will relay this message to all downstream
1021 Tier-2 devices (except for the originator). Tier-2 devices other
1022 than the one originating the WITHDRAW should then wait for ALL
1023 adjacent Tier-1 devices to send a WITHDRAW message before it removes
1024 the affected prefixes and sends corresponding WITHDRAW downstream to
1025 connected Tier-3 devices. If the original Tier-2 device or the
1026 relaying Tier-1 devices introduce some delay into their
1027 announcements, the result could be WITHDRAW message "dispersion",
1028 that could be as long as multiple seconds. In order to avoid such
1029 behavior, BGP implementations must support "update groups". The
1030 "update group" is defined as a collection of neighbors sharing the
1031 same outbound policy - the local speaker will send BGP updates to the
1032 members of the group synchronously.
1034 The impact of such "dispersion" grows with the size of topology fan-
1035 out and could also grow under network convergence churn.
1037 7.4. Failure Impact Scope
1039 A network is declared to converge in response to a failure once all
1040 devices within the failure impact scope are notified of the event and
1041 have re-calculated their RIBs and consequently updated their FIBs.
1042 Larger failure impact scope typically means slower convergence since
1043 more devices have to be notified, and additionally results in a less
1044 stable network. In this section we describe BGP's advantages over
1045 link-state routing protocols in reducing failure impact scope for a
1046 Clos topology.
1048 BGP behaves like a distance-vector protocol in the sense that only
1049 the best path from the point of view of the local router is sent to
1050 neighbors. As such, some failures are masked if the local node can
1051 immediately find a backup path and does not have to send any updates
1052 further. Notice that in the worst case ALL devices in a data center
1053 topology have to either withdraw a prefix completely or update the
1054 ECMP groups in the FIB. However, many failures will not result in
1055 such a wide impact. There are two main failure types where impact
1056 scope is reduced:
1058 o Failure of a link between Tier-2 and Tier-1 devices: In this case,
1059 a Tier-2 device will update the affected ECMP groups, removing the
1060 failed link. There is no need to send new information to
1061 downstream Tier-3 devices, unless the path was selected as best by
1062 the BGP process, in which case only an "implicit withdraw" needs
1063 to be sent, which should not affect forwarding. The affected
1064 Tier-1 device will lose the only path available to reach a
1065 particular cluster and will have to withdraw the associated
1066 prefixes. Such a prefix withdrawal process will only affect Tier-2
1067 devices directly connected to the affected Tier-1 device. The
1068 Tier-2 devices receiving the BGP UPDATE messages withdrawing
1069 prefixes will simply have to update their ECMP groups. The Tier-3
1070 devices are not involved in the re-convergence process.
1072 o Failure of a Tier-1 device: In this case, all Tier-2 devices
1073 directly attached to the failed node will have to update their
1074 ECMP groups for all IP prefixes from non-local clusters. The
1075 Tier-3 devices are once again not involved in the re-convergence
1076 process, but may receive "implicit withdraws" as described above.
1078 Even though in the case of such failures multiple IP prefixes will
1079 have to be reprogrammed in the FIB, it is worth noting that ALL of
1080 these prefixes share a single ECMP group on a Tier-2 device.
1081 Therefore, in the case of implementations with a hierarchical FIB,
1082 only a single change has to be made to the FIB. Hierarchical FIB here
1083 means a FIB structure where the next-hop forwarding information is
1084 stored separately from the prefix lookup table, and the latter only
1085 stores pointers to the respective forwarding information.
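A hedged sketch of that structure: prefixes reference a shared ECMP group
object, so removing one member next-hop is a single change that takes
effect for every prefix using the group:

      class EcmpGroup:
          """Shared next-hop set referenced by many prefixes."""
          def __init__(self, next_hops):
              self.next_hops = set(next_hops)

      # Hierarchical FIB: the prefix table stores only references to the
      # ECMP group, not per-prefix copies of the next-hop list.
      group = EcmpGroup({"tier1-1", "tier1-2", "tier1-3", "tier1-4"})
      fib = {"10.1.%d.0/24" % i: group for i in range(256)}

      # A Tier-2 to Tier-1 link failure touches the group exactly once ...
      group.next_hops.discard("tier1-3")

      # ... and every prefix immediately resolves via the reduced set.
      assert all(entry.next_hops == {"tier1-1", "tier1-2", "tier1-4"}
                 for entry in fib.values())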
1087 Even though BGP offers some failure scope reduction, reduction of the
1088 fault domain using summarization is not always possible with the
1089 proposed design, since using this technique may create routing black-
1090 holes as mentioned previously. Therefore, the worst control-plane
1091 failure impact scope is the network as a whole, for instance in the
1092 case of a link failure between Tier-2 and Tier-3 devices.
1093 of impacted prefixes in this case would be much less than in the case
1094 of a failure in the upper layers of a Clos network topology. The
1095 property of having such large failure scope is not a result of
1096 choosing EBGP in the design but rather a result of using the "scale-
1097 out" Clos topology.
1099 7.5. Routing Micro-Loops
1101 When a downstream device, e.g. a Tier-2 device, loses all paths for
1102 a prefix, it normally still has the default route pointing toward the
1103 upstream device, in this case the Tier-1 device. As a result, it is
1104 possible to get into a situation where a Tier-2 switch loses a
1105 prefix, but a Tier-1 switch still has a path pointing to that Tier-2
1106 device, which results in a transient micro-loop, since the Tier-1
1107 switch will keep passing packets for the affected prefix back to the
1108 Tier-2 device, and the Tier-2 device will bounce them back again
1109 using the default route. This micro-loop will last for the time it
1110 takes the upstream device to fully update its forwarding tables.
1112 To minimize the impact of such micro-loops, Tier-2 and Tier-1 switches can
1113 be configured with static "discard" or "null" routes that will be
1114 more specific than the default route for specific prefixes missing
1115 during network convergence. For Tier-2 switches, the discard route
1116 should be a summary route, covering all server subnets of the
1117 underlying Tier-3 devices. For Tier-1 devices, the discard route
1118 should be a summary covering the server IP address subnet allocated
1119 for the whole data-center. Those discard routes will only take
1120 precedence for the duration of network convergence, until the device
1121 learns a more specific prefix via a new path.
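
The following Python sketch (a simplified longest-prefix-match lookup
with hypothetical addresses) illustrates why such a discard route
breaks the micro-loop: traffic for a prefix that is temporarily
missing during convergence matches the more specific discard summary
and is dropped locally, instead of following the default route back
upstream:

   import ipaddress

   # Hypothetical Tier-2 routing state during convergence: the default
   # route points up, the cluster summary is a static discard route,
   # and one Tier-3 subnet (10.0.3.0/24) is temporarily withdrawn.
   routes = {
       ipaddress.ip_network("0.0.0.0/0"):   "via-tier1-uplinks",
       ipaddress.ip_network("10.0.0.0/16"): "discard",
   }

   def lookup(destination):
       dst = ipaddress.ip_address(destination)
       matches = [net for net in routes if dst in net]
       best = max(matches, key=lambda net: net.prefixlen)  # longest match
       return routes[best]

   print(lookup("10.0.3.10"))   # "discard": dropped locally, no loop
   print(lookup("192.0.2.1"))   # "via-tier1-uplinks": normal forwarding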
1123 8. Additional Options for Design
1125 8.1. Third-party Route Injection
1127 BGP allows for a "third-party", i.e. directly attached, BGP speaker
1128 to inject routes anywhere in the network topology, meeting REQ5.
1129 This can be achieved by peering via a multihop BGP session with some
1130 or even all devices in the topology. Furthermore, BGP diverse path
1131 distribution [RFC6774] could be used to inject multiple BGP next hops
1132 for the same prefix to facilitate load-balancing, or using the BGP
1133 ADD-PATH capability [I-D.ietf-idr-add-paths] if supported by the
1134 implementation. Unfortunately, in many implementations ADD-PATH has
1135 been found to only support IBGP properly due to the use cases it was
1136 originally optimized for, which limits the "third-party" peering to
1137 IBGP only, if the feature is used.
1139 To implement route injection in the proposed design, a third-party BGP
1140 speaker may peer with Tier-3 and Tier-1 switches, injecting the same
1141 prefix, but using a special set of BGP next-hops for Tier-1 devices.
1142 Those next-hops are assumed to resolve recursively via BGP, and could
1143 be, for example, IP addresses on Tier-3 devices. The resulting
1144 forwarding table programming could provide the desired proportional
1145 traffic distribution among different clusters.
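
The recursive resolution step can be sketched as follows (a hedged
Python example; all prefixes, loopback addresses, and next-hops are
hypothetical): the injected prefix carries next-hops that are Tier-3
loopbacks, and each of those resolves against routes already present
in the local RIB:

   # Routes already learned via EBGP: Tier-3 loopback -> set of
   # directly connected next-hops over which that loopback is reachable.
   rib = {
       "192.0.2.11/32": {"198.51.100.1"},
       "192.0.2.12/32": {"198.51.100.5", "198.51.100.9"},
   }

   # Prefix injected by the third-party speaker, with BGP next-hops set
   # to the Tier-3 loopbacks that should attract the traffic.
   injected = {
       "203.0.113.0/24": ["192.0.2.11", "192.0.2.12"],
   }

   def resolve(prefix):
       """Recursively resolve the injected next-hops via the RIB."""
       resolved = set()
       for next_hop in injected[prefix]:
           resolved |= rib.get(next_hop + "/32", set())
       return resolved

   print(resolve("203.0.113.0/24"))
   # {'198.51.100.1', '198.51.100.5', '198.51.100.9'}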
1147 8.2. Route Summarization within Clos Topology
1149 As mentioned previously, route summarization is not possible within
1150 the proposed Clos topology since it makes the network susceptible to
1151 route black-holing under single link failures. The main problem is
1152 the limited number of parallel paths between network elements, e.g.
1153 there is only a single path between any pair of Tier-1 and Tier-3
1154 devices. However, some operators may find route aggregation
1155 desirable to improve control plane stability.
1157 If planning on using any technique to summarize within the topology,
1158 modeling of the routing behavior and the potential for black-holing
1159 should be done not only for single or multiple link failures, but
1160 also for fiber pathway failures or optical domain failures if the
1161 topology extends beyond a physical location. Simple modeling can be
1162 done by checking the reachability on devices doing summarization
1163 under the condition of a link or pathway failure between a set of
1164 devices in every Tier as well as to the WAN routers if external
1165 connectivity is present.
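
A small Python sketch of such modeling is shown below, assuming a
hypothetical fragment of the topology; it fails one link and checks
whether a summarizing device still reaches a given Tier-3 device over
the downward links that traffic toward the summary would actually
use:

   # Downward adjacencies of a hypothetical topology fragment: traffic
   # for a summarized prefix can only flow down from the summarizing
   # device, so only "down" links are considered.
   down = {
       "tier1-1": {"tier2-a", "tier2-c"},
       "tier1-2": {"tier2-b", "tier2-d"},
       "tier2-a": {"tier3-x"}, "tier2-b": {"tier3-x"},
       "tier2-c": {"tier3-y"}, "tier2-d": {"tier3-y"},
   }

   def reachable(src, dst, failed_link):
       """Check reachability from src to dst with one link removed."""
       seen, stack = set(), [src]
       while stack:
           node = stack.pop()
           if node == dst:
               return True
           for nxt in down.get(node, ()):
               if (node, nxt) == failed_link or nxt in seen:
                   continue
               seen.add(nxt)
               stack.append(nxt)
       return False

   # Losing the tier1-1 to tier2-a link black-holes tier3-x behind a
   # summary advertised by tier1-1, while tier3-y stays reachable.
   print(reachable("tier1-1", "tier3-x", ("tier1-1", "tier2-a")))  # False
   print(reachable("tier1-1", "tier3-y", ("tier1-1", "tier2-a")))  # True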
1167 Route summarization would be possible with a small modification to
1168 the network topology, though the trade-off would be a reduction of
1169 the total size of the network as well as increased network congestion
1170 under specific failures.
1171 described above, which allows Border Routers to summarize the entire
1172 data-center address space.
1174 8.2.1. Collapsing Tier-1 Devices Layer
1176 In order to add more paths between Tier-1 and Tier-3 devices, the
1177 Tier-2 devices can be grouped into pairs, and each pair connected to
1178 the same group of Tier-1 devices. This is logically equivalent to
1179 "collapsing" Tier-1 devices into a group of half the size, merging
1180 the links on the "collapsed" devices. The result is illustrated in
1181 Figure 6. For example, in this topology DEV C and DEV D connect to
1182 the same set of Tier-1 devices (DEV 1 and DEV 2), whereas before they
1183 were connecting to different groups of Tier-1 devices.
1185 Tier-2 Tier-1 Tier-2
1186 +-----+ +-----+ +-----+
1187 +-------------| DEV |------| DEV |------| |-------------+
1188 | +-----| C |--++--| 1 |--++--| |-----+ |
1189 | | +-----+ || +-----+ || +-----+ | |
1190 | | || || | |
1191 | | +-----+ || +-----+ || +-----+ | |
1192 | +-----+-----| DEV |--++--| DEV |--++--| |-----+-----+ |
1193 | | | +---| D |------| 2 |------| |---+ | | |
1194 | | | | +-----+ +-----+ +-----+ | | | |
1195 | | | | | | | |
1196 +-----+ +-----+ +-----+ +-----+
1197 | DEV | | DEV | | | | |
1198 | A | | B | Tier-3 Tier-3 | | | |
1199 +-----+ +-----+ +-----+ +-----+
1200 | | | | | | | |
1201 O O O O <- Servers -> O O O O
1203 Figure 6: 5-Stage Clos topology
1205 Having this design in place, Tier-2 devices may be configured to
1206 advertise only a default route down to Tier-3 devices. If a link
1207 between Tier-2 and Tier-3 fails, the traffic will be re-routed via
1208 the second available path known to a Tier-2 switch. It is not
1209 possible to advertise a summary route covering prefixes for a single
1210 cluster from Tier-2 devices since each of them has only a single path
1211 down to any given prefix. It would require dual-homed servers to
1212 accomplish that. Also note that this design is only resilient to a
1213 single link failure. It is possible for a double link failure to
1214 isolate a Tier-2 device from all paths toward a specific Tier-3
1215 device, thus causing a routing black-hole.
1217 A result of the proposed topology modification would be a reduction
1218 of Tier-1 device port capacity. This limits the maximum number of
1219 attached Tier-2 devices and therefore will limit the maximum DC
1220 network size. A larger network would require different Tier-1
1221 devices that have higher port density to implement this change.
1223 Another problem is traffic re-balancing under link failures. Since
1224 there are two paths from Tier-1 to Tier-3, a failure of the link
1225 between a Tier-1 and a Tier-2 switch would result in all traffic that
1226 was taking the failed link being switched to the remaining path.
1227 This will result in doubling the utilization of the remaining link.
1229 8.2.2. Simple Virtual Aggregation
1231 A completely different approach to route summarization is possible,
1232 provided that the main goal is to reduce the FIB pressure, while
1233 allowing the control plane to disseminate full routing information.
1234 Firstly, it could be easily noted that in many cases multiple
1235 prefixes, some of which are less specific, share the same set of
1236 next-hops (same ECMP group). For example, looking from the
1237 perspective of a Tier-3 device, all routes learned from the upstream
1238 Tier-2 devices, including the default route, will share the same set
1239 of BGP next-hops, provided that there are no failures in the network.
1240 This makes it possible to use a technique similar to the one
1241 described in [RFC6769] and only install the least specific route in
1242 the FIB, ignoring more specific routes if they share the same
1243 next-hop set. For example, under normal network conditions, only the
1244 default route needs to be programmed into the FIB.
1246 Furthermore, if the Tier-2 devices are configured with summary
1247 prefixes covering all of their attached Tier-3 devices' prefixes, the
1248 same logic could be applied to Tier-1 devices as well and, by
1249 induction, to Tier-2/Tier-3 switches in different clusters. These
1250 summary routes should still allow more specific prefixes to leak to
1251 Tier-1 devices, to enable detection of mismatches in the next-hop
1252 sets if a particular link fails, changing the next-hop set for a
1253 specific prefix.
1255 To restate, this technique does not reduce the amount of control
1256 plane state (i.e. BGP UPDATEs/BGP Loc-RIB sizing), but only allows
1257 for more efficient FIB utilization, by omitting more specific
1258 prefixes that share their next-hop set with a less specific prefix.
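
A minimal Python sketch of this FIB compression logic is shown below
(illustrative only; prefixes and next-hop names are hypothetical). A
more specific prefix is suppressed whenever its next-hop set matches
that of its closest covering route, and installed only when a failure
makes the sets differ:

   import ipaddress

   def compress_fib(rib):
       """Install only prefixes whose next-hop set differs from that
       of their closest covering (less specific) route."""
       nets = {ipaddress.ip_network(p): nhs for p, nhs in rib.items()}
       fib = {}
       for net, nhs in nets.items():
           covering = [c for c in nets if c != net and c.supernet_of(net)]
           if covering:
               closest = max(covering, key=lambda c: c.prefixlen)
               if nets[closest] == nhs:
                   continue        # same ECMP group as covering route
           fib[net] = nhs
       return fib

   uplinks = frozenset({"tier2-a", "tier2-b", "tier2-c", "tier2-d"})
   rib = {
       "0.0.0.0/0":   uplinks,           # default route via all Tier-2s
       "10.1.0.0/24": uplinks,           # healthy remote prefix
       "10.2.0.0/24": frozenset({"tier2-a", "tier2-b", "tier2-c"}),
   }

   print(compress_fib(rib))
   # Only 0.0.0.0/0 and 10.2.0.0/24 are installed; 10.1.0.0/24 is
   # suppressed because it shares the default route's next-hop set.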
1260 8.3. ICMP Unreachable Message Masquerading
1262 This section discusses some operational aspects of not advertising
1263 point-to-point link subnets into BGP, as previously outlined as an
1264 option in Section 5.2.3. The operational impact of this decision
1265 could be seen when using the well-known "traceroute" tool.
1266 Specifically, IP addresses displayed by the tool will be the link's
1267 point-to-point addresses, and hence will be unreachable for
1268 management connectivity. This makes some troubleshooting more
1269 complicated.
1271 One way to overcome this limitation is by using the DNS subsystem to
1272 create the "reverse" entries for the IP addresses of the same device
1273 pointing to the same name. Connectivity can then be established by
1274 resolving this name to the "primary" IP address of the device, e.g.
1275 its loopback interface, which is always advertised into BGP.
1276 However, this creates a dependency on the DNS subsystem, which may
1277 be unavailable during an outage.
1279 Another option is to make the network device perform IP address
1280 masquerading, that is, rewriting the source IP addresses of the
1281 appropriate ICMP messages sent by the device with the "primary" IP
1282 address of the device. Specifically, this applies to the ICMP
1283 Destination Unreachable message (type 3) code 3 (port unreachable)
1284 and the ICMP Time Exceeded message (type 11) code 0, which are
1285 involved in the proper working of the "traceroute" tool. With this
1286 modification, the "traceroute" probes sent to the devices will always
1287 be sent back with the "primary" IP address as the source, allowing
1288 the operator to discover the "reachable" IP address of the box.
1290 9. Security Considerations
1292 The design does not introduce any additional security concerns.
1293 General BGP security considerations are discussed in [RFC4271] and
1294 [RFC4272]. Furthermore, the Generalized TTL Security Mechanism
1295 [RFC5082] could be used to reduce the risk of BGP session spoofing.
1297 10. IANA Considerations
1299 This document includes no request to IANA.
1301 11. Acknowledgements
1303 This publication summarizes work of many people who participated in
1304 developing, testing and deploying the proposed network design, some
1305 of whom were George Chen, Parantap Lahiri, Dave Maltz, Edet Nkposong,
1306 Robert Toomey, and Lihua Yuan. The authors would also like to thank
1307 Linda Dunbar, Susan Hares, Russ White and Robert Raszuk for reviewing
1308 the document and providing valuable feedback and Mary Mitchell for
1309 grammar and style suggestions.
1311 12. References
1313 12.1. Normative References
1315 [RFC4271] Rekhter, Y., Li, T., and S. Hares, "A Border Gateway
1316 Protocol 4 (BGP-4)", RFC 4271, January 2006.
1318 [RFC6996] Mitchell, J., "Autonomous System (AS) Reservation for
1319 Private Use", BCP 6, RFC 6996, July 2013.
1321 12.2. Informative References
1323 [RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998.
1325 [RFC4272] Murphy, S., "BGP Security Vulnerabilities Analysis", RFC
1326 4272, January 2006.
1328 [RFC4786] Abley, J. and K. Lindqvist, "Operation of Anycast
1329 Services", BCP 126, RFC 4786, December 2006.
1331 [RFC5082] Gill, V., Heasley, J., Meyer, D., Savola, P., and C.
1332 Pignataro, "The Generalized TTL Security Mechanism
1333 (GTSM)", RFC 5082, October 2007.
1335 [RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection
1336 (BFD)", RFC 5880, June 2010.
1338 [RFC6325] Perlman, R., Eastlake, D., Dutt, D., Gai, S., and A.
1339 Ghanwani, "Routing Bridges (RBridges): Base Protocol
1340 Specification", RFC 6325, July 2011.
1342 [RFC6774] Raszuk, R., Fernando, R., Patel, K., McPherson, D., and K.
1343 Kumaki, "Distribution of Diverse BGP Paths", RFC 6774,
1344 November 2012.
1346 [RFC6793] Vohra, Q. and E. Chen, "BGP Support for Four-Octet
1347 Autonomous System (AS) Number Space", RFC 6793, December
1348 2012.
1350 [RFC2992] Hopps, C., "Analysis of an Equal-Cost Multi-Path
1351 Algorithm", RFC 2992, November 2000.
1353 [RFC6769] Raszuk, R., Heitz, J., Lo, A., Zhang, L., and X. Xu,
1354 "Simple Virtual Aggregation (S-VA)", RFC 6769, October
1355 2012.
1357 [RFC7130] Bhatia, M., Chen, M., Boutros, S., Binderberger, M., and
1358 J. Haas, "Bidirectional Forwarding Detection (BFD) on Link
1359 Aggregation Group (LAG) Interfaces", RFC 7130, February
1360 2014.
1362 [I-D.ietf-idr-add-paths]
1363 Walton, D., Retana, A., Chen, E., and J. Scudder,
1364 "Advertisement of Multiple Paths in BGP", draft-ietf-idr-
1365 add-paths-10 (work in progress), October 2014.
1367 [I-D.ietf-idr-link-bandwidth]
1368 Mohapatra, P. and R. Fernando, "BGP Link Bandwidth
1369 Extended Community", draft-ietf-idr-link-bandwidth-06
1370 (work in progress), January 2013.
1372 [GREENBERG2009]
1373 Greenberg, A., Hamilton, J., and D. Maltz, "The Cost of a
1374 Cloud: Research Problems in Data Center Networks", January
1375 2009.
1377 [IEEE8021AG]
1378 IEEE 802.1Q, "IEEE Standard for Local and metropolitan
1379 area networks - Media Access Control (MAC) Bridges and
1380 Virtual Bridged Local Area Networks", October 2012.
1382 [IEEE8023AH]
1383 IEEE 802.3, "IEEE Standard for Information technology -
1384 Local and metropolitan area networks - Carrier sense
1385 multiple access with collision detection (CSMA/CD) access
1386 method and physical layer specifications", December 2008.
1388 [INTERCON]
1389 Dally, W. and B. Towles, "Principles and Practices of
1390 Interconnection Networks", ISBN 978-0122007514, January
1391 2004.
1393 [ALFARES2008]
1394 Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable,
1395 Commodity Data Center Network Architecture", August 2008.
1397 [IANA.AS] IANA, "Autonomous System (AS) Numbers", April 2015,
1398 .
1400 [IEEE8023AD]
1401 IEEE 802.3ad, "IEEE Standard for Link aggregation for
1402 parallel links", October 2000.
1404 [REMOVE-PRIVATE-AS]
1405 Cisco Systems, "Removing Private Autonomous System
1406 Numbers in BGP", August 2005,
1407 .
1410 [FB4POST] Farrington, N. and A. Andreyev, "Facebook's Data Center
1411 Network Architecture", May 2013,
1412 .
1414 [JAKMA2008]
1415 Jakma, P., "BGP Path Hunting", 2008,
1416 .
1418 [CONS-HASH]
1419 Wikipedia, "Consistent Hashing",
1420 .
1422 Authors' Addresses
1424 Petr Lapukhov
1425 Facebook
1426 1 Hacker Way
1427 Menlo Park, CA 94025
1428 US
1430 Email: petr@fb.com
1432 Ariff Premji
1433 Arista Networks
1434 5453 Great America Parkway
1435 Santa Clara, CA 95054
1436 US
1438 Email: ariff@arista.com
1439 URI: http://arista.com/
1441 Jon Mitchell (editor)
1443 Email: jrmitche@puck.nether.net