2 Networking Working Group T. Przygienda 3 Internet-Draft J. Drake 4 Intended status: Standards Track A.
Atlas 5 Expires: July 15, 2017 Juniper Networks 6 January 11, 2017 8 RIFT: Routing in Fat Trees 9 draft-przygienda-rift-00 11 Abstract 13 This document outlines a specialized, dynamic routing protocol for 14 Clos and fat-tree network topologies. The protocol (1) deals with 15 automatic construction of fat-tree topologies based on detection of 16 links, (2) minimizes the amount of routing state held at each level, 17 (3) automatically prunes the topology distribution exchanges to a 18 sufficient subset of links, (4) supports automatic disaggregation of 19 prefixes on link and node failures to prevent blackholing and 20 suboptimal routing, (5) allows traffic steering and re-routing 21 policies and ultimately (6) provides mechanisms to synchronize a 22 limited key-value data-store that can be used after protocol 23 convergence to e.g. bootstrap higher levels of functionality on 24 nodes. 26 Status of This Memo 28 This Internet-Draft is submitted in full conformance with the 29 provisions of BCP 78 and BCP 79. 31 Internet-Drafts are working documents of the Internet Engineering 32 Task Force (IETF). Note that other groups may also distribute 33 working documents as Internet-Drafts. The list of current Internet- 34 Drafts is at http://datatracker.ietf.org/drafts/current/. 36 Internet-Drafts are draft documents valid for a maximum of six months 37 and may be updated, replaced, or obsoleted by other documents at any 38 time. It is inappropriate to use Internet-Drafts as reference 39 material or to cite them other than as "work in progress." 41 This Internet-Draft will expire on July 15, 2017. 43 Copyright Notice 45 Copyright (c) 2017 IETF Trust and the persons identified as the 46 document authors. All rights reserved. 48 This document is subject to BCP 78 and the IETF Trust's Legal 49 Provisions Relating to IETF Documents 50 (http://trustee.ietf.org/license-info) in effect on the date of 51 publication of this document. Please review these documents 52 carefully, as they describe your rights and restrictions with respect 53 to this document. Code Components extracted from this document must 54 include Simplified BSD License text as described in Section 4.e of 55 the Trust Legal Provisions and are provided without warranty as 56 described in the Simplified BSD License. 58 Table of Contents 60 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 61 1.1. Requirements Language . . . . . . . . . . . . . . . . . . 3 62 2. Reference Frame . . . . . . . . . . . . . . . . . . . . . . . 3 63 2.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 4 64 2.2. Topology . . . . . . . . . . . . . . . . . . . . . . . . 6 65 3. Requirement Considerations . . . . . . . . . . . . . . . . . 8 66 4. RIFT: Routing in Fat Trees . . . . . . . . . . . . . . . . . 9 67 4.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 10 68 4.2. Specification . . . . . . . . . . . . . . . . . . . . . . 10 69 4.2.1. Transport . . . . . . . . . . . . . . . . . . . . . . 10 70 4.2.2. Link (Neighbor) Discovery (LIE Exchange) . . . . . . 10 71 4.2.3. Topology Exchange (TIE Exchange) . . . . . . . . . . 11 72 4.2.3.1. Topology Information Elements . . . . . . . . . . 11 73 4.2.3.2. South- and Northbound Representation . . . . . . 11 74 4.2.3.3. Flooding . . . . . . . . . . . . . . . . . . . . 13 75 4.2.3.4. TIE Flooding Scopes . . . . . . . . . . . . . . . 13 76 4.2.3.5. Initial and Periodic Database Synchronization . . 14 77 4.2.3.6. Purging . . . . . . . . . . . . . . . . . . . . . 14 78 4.2.3.7. 
Optional Automatic Flooding Reduction and 79 Partitioning . . . . . . . . . . . . . . . . . . 15 80 4.2.4. Automatic Disaggregation on Link & Node Failures . . 16 81 4.2.5. Policy-Guided Prefixes . . . . . . . . . . . . . . . 19 82 4.2.5.1. Ingress Filtering . . . . . . . . . . . . . . . . 20 83 4.2.5.2. Applying Policy . . . . . . . . . . . . . . . . . 21 84 4.2.5.3. Store Policy-Guided Prefix for Route Computation 85 and Regeneration . . . . . . . . . . . . . . . . 21 86 4.2.5.4. Regeneration . . . . . . . . . . . . . . . . . . 22 87 4.2.5.5. Overlap with Disaggregated Prefixes . . . . . . . 22 88 4.2.6. Reachability Computation . . . . . . . . . . . . . . 22 89 4.2.6.1. Specification . . . . . . . . . . . . . . . . . . 23 90 4.2.6.2. Further Mechanisms . . . . . . . . . . . . . . . 25 91 4.2.7. Key/Value Store . . . . . . . . . . . . . . . . . . . 26 92 5. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 26 93 5.1. Normal Operation . . . . . . . . . . . . . . . . . . . . 26 94 5.2. Leaf Link Failure . . . . . . . . . . . . . . . . . . . . 27 95 5.3. Partitioned Fabric . . . . . . . . . . . . . . . . . . . 28 97 6. Implementation and Operation: Further Details . . . . . . . . 30 98 6.1. Leaf to Leaf connection . . . . . . . . . . . . . . . . . 30 99 6.2. Other End-to-End Services . . . . . . . . . . . . . . . . 30 100 6.3. Address Family and Topology . . . . . . . . . . . . . . . 31 101 7. Information Elements Schema . . . . . . . . . . . . . . . . . 31 102 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 36 103 9. Security Considerations . . . . . . . . . . . . . . . . . . . 36 104 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 36 105 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 36 106 11.1. Normative References . . . . . . . . . . . . . . . . . . 36 107 11.2. Informative References . . . . . . . . . . . . . . . . . 38 108 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 38 110 1. Introduction 112 Clos [CLOS] and Fat-Tree [FATTREE] have gained prominence in today's 113 networking, primarily as a result of a the paradigm shift towards a 114 centralized data-center based architecture that is poised to deliver 115 a majority of computation and storage services in the future. The 116 existing set of dynamic routing protocols was geared originally 117 towards a network with an irregular topology and low degree of 118 connectivity and consequently several attempts to adapt those have 119 been made. Most succesfully BGP [RFC4271] [RFC7938] has been 120 extended to this purpose, not as much due to its inherent suitability 121 to solve the problem but rather because the perceived capability to 122 modify it "quicker" and the immanent difficulties with link-state 123 [DIJKSTRA] based protocols to fulfill certain of the resulting 124 requirements. 126 In looking at the problem through the very lens of its requirements 127 an optimal approach does not seem to be a simple modification of 128 either a link-state (distributed computation) or distance-vector 129 (diffused computation) approach but rather a mixture of both, 130 colloquially best described as 'link-state towards the spine' and 131 'distance vector towards the leafs'. The balance of this document 132 details the resulting protocol. 134 1.1. 
Requirements Language 136 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 137 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 138 document are to be interpreted as described in RFC 2119 [RFC2119]. 140 2. Reference Frame 141 2.1. Terminology 143 This section presents the terminology used in this document. It is 144 assumed that the reader is thoroughly familiar with the terms and 145 concepts used in OSPF [RFC2328] and IS-IS [RFC1142] as well as the 146 according graph theoretical concepts of shortest path first (SPF) 147 [DIJKSTRA] computation and directed acyclic graphs (DAG). 149 Level: Clos and Fat Tree networks are trees and 'level' denotes the 150 set of nodes at the same height in such a network, where the 151 bottom level is level 0. A node has links to nodes one level down 152 and/or one level up. Under some circumstances, a node may have 153 links to nodes at the same level. As footnote: Clos terminology 154 uses often the concept of "stage" but due to the folded nature of 155 the Fat Tree we do not use it to prevent misunderstandings. 157 Spine/Aggregation/Edge Levels: Traditional names for Level 2, 1 and 158 0 respectively. Level 0 is often called leaf as well. 160 Point of Delivery (PoD): A self-contained vertical slice of a Clos 161 or Fat Tree network containing normally only level 0 and level 1 162 nodes. It communicates with nodes in other PoDs via the spine. 164 Spine: The set of nodes that provide inter-PoD communication. These 165 nodes are also organized into levels (typically one, three, or 166 five levels). Spine nodes do not belong to any PoD and are 167 assigned the PoD value 0 to indicate this. 169 Leaf: A node at level 0. 171 Connected Spine: In case a spine level represents a connected graph 172 (discounting links terminating at different levels), we call it a 173 "connected spine", in case a spine level consists of multiple 174 partitions, we call it a "disconnected" or "partitioned spine". 175 In other terms, a spine without east-west links is disconnected 176 and is the typical configuration for Clos and Fat Tree networks. 178 South/Southbound and North/Northbound (Direction): When describing 179 protocol elements and procedures, we will be using in different 180 situations the directionality of the compass. I.e., 'south' or 181 'southbound' mean moving towards the bottom of the Clos or Fat 182 Tree network and 'north' and 'northbound' mean moving towards the 183 top of the Clos or Fat Tree network. 185 Northbound Link: A link to a node one level up or in other words, 186 one level further north. 188 Southbound Link: A link to a node one level down or in other words, 189 one level further south. 191 East-West Link: A link between two nodes at the same level. East- 192 west links are not common in "fat-trees". 194 Leaf shortcuts (L2L): East-west links at leaf level will need to be 195 differentiated from East-west links at other levels. 197 Southbound representation: Information sent towards a lower level 198 representing only limited amount of information. 200 TIE: This is an acronym for a "Topology Information Element". TIEs 201 are exchanged between RIFT nodes to describe parts of a network 202 such as links and address prefixes. It can be thought of as 203 largely equivalent to ISIS LSPs or OSPF LSA. We will talk about 204 N-TIEs when talking about TIEs in the northbound representation 205 and S-TIEs for the southbound equivalent. 
207 Node TIE: This is an acronym for a "Node Topology Information 208 Element", largely equivalent to OSPF Node LSA, i.e. it contains 209 all neighbors the node discovered and information about node 210 itself. 212 Prefix TIE: This is an acronym for a "Prefix Topology Information 213 Element" and it contains all prefixes directly attached to this 214 node in case of a N-TIE and in case of S-TIE the necesssary 215 default and de-aggregated prefixes the node passes southbound. 217 Policy-Guided Information: Information that is passed in either 218 southbound direction or north-bound direction by the means of 219 diffusion and can be filtered via policies. Policy-Guided 220 Prefixes and KV Ties are examples of Policy-Guided Information. 222 Key Value TIE: A S-TIE that is carrying a set of key value pairs 223 [DYNAMO]. It can be used to distribute information in the 224 southbound direction within the protocol. 226 TIDE: Topology Information Description Element, equivalent to CSNP 227 in ISIS. 229 TIRE: Topology Information Request Element, equivalent to PSNP in 230 ISIS. It can both confirm received and request missing TIEs. 232 PGP: Policy-Guided Prefixes allow to support traffic engineering 233 that cannot be achieved by the means of SPF computation or normal 234 node and prefix S-TIE origination. S-PGPs are propagated in south 235 direction only and N-PGPs follow northern direction strictly. 237 Deaggregation/Disaggregation Process in which a node decides to 238 advertise certain prefixes it received in N-TIEs to prevent 239 blackholing and suboptimal routing upon link failures. 241 LIE: This is an acronym for a "Link Information Element", largely 242 equivalent to HELLOs in IGPs. 244 FL: Flooding Leader for a specific system has a dedicated role to 245 flood TIEs of that system. 247 2.2. Topology 248 . +--------+ +--------+ 249 . | | | | ^ N 250 . |Spine 21| |Spine 22| | 251 .Level 2 ++-+--+-++ ++-+--+-++ <-*-> E/W 252 . | | | | | | | | | 253 . P111/2| |P121 | | | | S v 254 . ^ ^ ^ ^ | | | | 255 . | | | | | | | | 256 . +--------------+ | +-----------+ | | | +---------------+ 257 . | | | | | | | | 258 . South +-----------------------------+ | | ^ 259 . | | | | | | | All TIEs 260 . 0/0 0/0 0/0 +-----------------------------+ | 261 . v v v | | | | | 262 . | | +-+ +<-0/0----------+ | | 263 . | | | | | | | | 264 .+-+----++ optional +-+----++ ++----+-+ ++-----++ 265 .| | E/W link | | | | | | 266 .|Node111+----------+Node112| |Node121| |Node122| 267 .+-+---+-+ ++----+-+ +-+---+-+ ++---+--+ 268 . | | | South | | | | 269 . | +---0/0--->-----+ 0/0 | +----------------+ | 270 . 0/0 | | | | | | | 271 . | +---<-0/0-----+ | v | +--------------+ | | 272 . v | | | | | | | 273 .+-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+ 274 .| | (L2L) | | | | Level 0 | | 275 .|Leaf111~~~~~~~~~~~~Leaf112| |Leaf121| |Leaf122| 276 .+-+-----+ +-+---+-+ +--+--+-+ +-+-----+ 277 . + + \ / + + 278 . Prefix111 Prefix112 \ / Prefix121 Prefix122 279 . multihomed 280 . Prefix 281 .+---------- Pod 1 ---------+ +---------- Pod 2 ---------+ 283 Figure 1: A two level spine-and-leaf topology 285 We will use this topology (called commonly a fat tree/network in 286 modern DC considerations [VAHDAT08] as homonym to the original 287 definition of the term [FATTREE]) in all further considerations. It 288 depicts a generic "fat-tree" and the concepts explained in three 289 levels here carry by induction for further levels and higher degrees 290 of connectivity. 292 3. 
Requirement Considerations 294 [RFC7938] gives the original set of requirements augmented here based 295 upon recent experience in the operation of fat-tree networks. 297 REQ1: The solution should allow for minimum size routing 298 information base and forwarding tables at leaf level for 299 speed, cost and simplicity reasons. Holding excessive 300 amount of information away from leaf nodes simplifies 301 operation of the underlay when addresses are moving in the 302 topology. 304 REQ2: High degree of ECMP (and ideally non equal cost) must be 305 supported. 307 REQ3: Traffic engineering should be allowed by modification of 308 prefixes and/or their next-hops. 310 REQ4: The control protocol must discover the physical links 311 automatically and be able to detect cabling that violates 312 fat-tree topology constraints. It must react accordingly to 313 such miscabling attempts, at a minimum preventing 314 adjacencies between nodes from being formed and traffic from 315 being forwarded on those miscabled links. E.g. connecting 316 a leaf to a spine at level 2 should be detected and ideally 317 prevented. 319 REQ5: The solution should allow for access to link states of the 320 whole topology to allow efficient support for modern control 321 architectures like SPRING [RFC7855] or PCE [RFC4655]. 323 REQ6: The solution should easily accomodate opaque data to be 324 carried throughout the topology to subsets of nodes. This 325 can be used for many purposes, one of them being a key-value 326 store that allows bootstrapping of nodes based right at the 327 time of topology discovery. 329 REQ7: Nodes should be taken out and introduced into production 330 with minimum wait-times and minimum of "shaking" of the 331 network, i.e. radius of propagation of necessary 332 information should be as small as viable. 334 REQ8: The protocol should allow for maximum aggregation of carried 335 routing information while at the same time automatically 336 deaggregating the prefixes to prevent blackholing in case of 337 failures. The deaggregation should support maximum possible 338 ECMP/N-ECMP remaining after failure. 340 REQ9: A node without any configuration beside default values 341 should come up as leaf in any PoD it is introduced into. 342 Optionally, it must be possible to configure nodes to 343 restrict their participation to the PoD(s) targeted at any 344 level. 346 REQ10: Reducing the scope of communication needed throughout the 347 network on link and state failure, as well as reducing 348 advertisements of repeating, idiomatic or policy-guided 349 information in stable state is highly desirable since it 350 leads to better stability and faster convergence behavior. 352 REQ11: Once a packet traverses a link in a "southbound" direction, 353 it must not take any further "northbound" steps along its 354 path to delivery to its destination. Taking a path through 355 the spine in cases where a shorter path is available is 356 highly undesirable. 358 Following list represents possible requirements and requirements 359 under discussion: 361 PEND1: Supporting anything but point-to-point links is a non- 362 requirement. Questions remain: for connecting to the 363 leaves, is there a case where multipoint is desirable? One 364 could still model it as point-to-point links; it seems there 365 is no need for anything more than a NBMA-type construct. 367 PEND2: We carrry parallel links with unique identifer carried in 368 node TIEs. Link bundles (i.e. 
parallel links between the same 369 set of nodes) must be distinguishable for SPF and traffic 370 engineering purposes. But further, do we rely on coalesced 371 links from lower layers and BFD/m-BFD detection, or do we run hellos on all 372 links? 374 PEND3: BFD will obviously play a big role in fast detection of 375 failures and the interactions will need to be worked out. 377 PEND4: What is the maximum scale of the number of leaf prefixes we need to 378 carry? Is 0.5E6 enough? 380 4. RIFT: Routing in Fat Trees 382 Derived from the above requirements, we present a detailed outline of 383 a protocol optimized for Routing in Fat Trees (RIFT) that in the most 384 abstract terms has many properties of a modified link-state protocol 385 [RFC2328][RFC1142] when "pointing north" and of a path-vector [RFC4271] 386 protocol when "pointing south". Albeit an unusual combination, it 387 does quite naturally exhibit the desirable properties we seek. 389 4.1. Overview 391 The novel property of RIFT is that it floods northbound "flat" link- 392 state information so that each level understands the full topology of 393 levels south of it. In contrast, in the southbound direction the 394 protocol operates like a path vector protocol, or rather a distance 395 vector with implicit split horizon, since the topology constraints 396 make a diffused computation front propagating in all directions 397 unnecessary. 399 4.2. Specification 401 4.2.1. Transport 403 All protocol elements are carried over UDP. LIE exchange happens 404 over a well-known multicast address with a TTL of 1. The TIE exchange 405 mechanism uses the address and port indicated by each node in the LIE 406 exchange, with a TTL of 1 as well. 408 All packet formats are defined in Thrift or protobuf models. 410 4.2.2. Link (Neighbor) Discovery (LIE Exchange) 412 Each node is provisioned with the level at which it is operating and 413 its PoD. A default level and PoD of zero are assumed, meaning that 414 leafs do not need to be configured with a level (or even a PoD). Nodes 415 in the spine are configured with a PoD of zero. This information is 416 propagated in the LIEs exchanged. Adjacencies are formed if and only 417 if 419 a. the node is in the same PoD or either the node or the neighbor 420 advertises the 'any PoD' membership (PoD# = 0) AND 422 b. the neighboring node is at most one level away AND 424 c. the neighboring node is running the same MAJOR schema version AND 426 d. the neighbor is not a member of some PoD while the node has a 427 northbound adjacency already joining another PoD. 429 A node configured with the 'any PoD' membership MUST, after building its first 430 northbound adjacency making it a participant in a PoD, advertise that 431 PoD as part of its LIEs. 433 LIEs arriving with a TTL larger than 1 MUST be ignored. 435 LIE exchange uses a three-way handshake mechanism [RFC5303]. LIE 436 packets contain nonces and may contain a SHA-1 hash [RFC6234] over the nonces 437 and some of the LIE data, which prevents corruption and replay 438 attacks. TIE flooding reuses those nonces to prevent mismatches and 439 can use them for security purposes in case it is using QUIC [QUIC]. 441 4.2.3. Topology Exchange (TIE Exchange) 443 4.2.3.1. Topology Information Elements 445 Topology and reachability information in RIFT is conveyed by the 446 means of TIEs, which have a good amount of commonality with LSAs in 447 OSPF. They contain sequence numbers, lifetimes and a type.
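To make the structure concrete, a minimal, non-normative sketch in Python of how a TIE could be identified and keyed in a database follows; the field names are illustrative assumptions, the normative model is the schema in Section 7.

   # Illustrative only; the normative encoding is the schema in Section 7.
   from dataclasses import dataclass
   from enum import Enum

   class Direction(Enum):
       SOUTH = 1            # S-TIE
       NORTH = 2            # N-TIE

   class TIEType(Enum):
       NODE = 1
       PREFIX = 2
       POLICY_GUIDED_PREFIX = 3
       KEY_VALUE = 4

   @dataclass(frozen=True)
   class TIEID:             # database key of a TIE
       direction: Direction
       originator: int      # system ID of the originating node
       tie_type: TIEType
       tie_number: int      # position in the type's number space

   @dataclass
   class TIEHeader:
       tie_id: TIEID
       seq_nr: int          # increases on every re-origination
       lifetime: int        # remaining lifetime in seconds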
Each 448 type has a large identifying number space and information is spread 449 across possibly many TIEs of a certain type by the means of a hash 450 function that a node or deployment can individually determine. One 451 extreme side of the scale is a prefix per TIE which leads to BGP-like 452 behavior vs. dense packing into few TIEs leading to more traditional 453 IGP trade-off with fewer TIEs. An implementation may even rehash at 454 the cost of significant amount of readvertisements of TIEs. 456 More information about the TIE structure can be found in the schema 457 in Section 7. 459 4.2.3.2. South- and Northbound Representation 461 As a central concept to RIFT, each node represents itself differently 462 depending on the direction in which it is advertising information. 463 More precisely, a spine node represents two different databases to 464 its neighbors depending whether it advertises TIEs to the south or to 465 the north/sideways. We call those differing TIE databases either 466 south- or northbound (S-TIEs and N-TIEs) depending on the direction 467 of distribution. 469 The N-TIEs hold all of the node's adjacencies, local prefixes and 470 northbound policy-guided prefixes while the S-TIEs hold only all of 471 the node's neighbors and the default prefix with necessary 472 disaggregated prefixes and southbound policy-guided prefixes. We 473 will explain this in detail further in Section 4.2.4 and 474 Section 4.2.5. 476 As an example to illustrate databases holding both representations, 477 consider the topology in Figure 1 with the optional link between 478 Node111 and Node 112 (so that the flooding on an east-west link can 479 be shown). This example assumes unnumbered interfaces. First, here 480 are the TIEs generated by some nodes. For simplicity, the 481 KeyValueElements and the PolicyGuidedPrefixesElements which may be 482 included in an S-TIE or an N-TIE are not shown. 
484 Spine21 S-TIE: 485 NodeElement(layer=2, neighbors((Node111, layer 1, cost 1), 486 (Node112, layer 1, cost 1), (Node121, layer 1, cost 1), 487 (Node122, layer 1, cost 1))) 488 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1)) 490 Node111 S-TIE: 491 NodeElement(layer=1, neighbors((Spine21,layer 2,cost 1), 492 (Spine22, layer 2, cost 1), (Node112, layer 1, cost 1), 493 (Leaf111, layer 0, cost 1), (Leaf112, layer 0, cost 1))) 494 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1)) 496 Node111 N-TIE: 497 NodeLinkElement(layer=1, 498 neighbors((Spine21, layer 2, cost 1, links(...)), 499 (Spine22, layer 2, cost 1, links(...)), 500 (Node112, layer 1, cost 1, links(...)), 501 (Leaf111, layer 0, cost 1, links(...)), 502 (Leaf112, layer 0, cost 1, links(...)))) 503 NorthPrefixesElement(prefixes(Node111.loopback) 505 Node121 S-TIE: 506 NodeElement(layer=1, neighbors((Spine21,layer 2,cost 1), 507 (Spine22, layer 2, cost 1), (Leaf121, layer 0, cost 1), 508 (Leaf122, layer 0, cost 1))) 509 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1)) 511 Node121 N-TIE: NodeLinkElement(layer=1, 512 neighbors((Spine21, layer 2, cost 1, links(...)), 513 (Spine22, layer 2, cost 1, links(...)), 514 (Leaf121, layer 0, cost 1, links(...)), 515 (Leaf122, layer 0, cost 1, links(...)))) 516 NorthPrefixesElement(prefixes(Node121.loopback) 518 Leaf112 N-TIE: 519 NodeLinkElement(layer=0, 520 neighbors((Node111, layer 1, cost 1, links(...)), 521 (Node112, layer 1, cost 1, links(...)))) 522 NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112, 523 Prefix_MH)) 525 Figure 2: example TIES generated in a 2 level spine-and-leaf topology 527 4.2.3.3. Flooding 529 The mechanism used to distribute TIEs is the well-known (albeit 530 modified in several respects to address fat tree requirements) 531 flooding mechanism used by today's link-state protocols. Albeit 532 initially more demanding to implement it avoids many problems with 533 diffused computation update style used by path vector. TIEs 534 themselves are transported over UDP with the ports indicates in the 535 LIE exchanges. 537 Once QUIC [QUIC] achieves the desired stability in deployments it may 538 prove a valuable candidate for TIE transport. 540 4.2.3.4. TIE Flooding Scopes 542 Every N-TIE is flooded northbound, providing a node at a given level 543 with the complete topology of the Clos or Fat Tree network underneath 544 it, including all specific prefixes. This means that a packet 545 received from a node at the same or lower level whose destination is 546 covered by one of those specific prefixes may be routed directly 547 towards the node advertising that prefix rather than sending the 548 packet to a node at a higher level. 550 It should be noted that east-west links are included in N-TIE 551 flooding; they need to be flooded in case the level above the current 552 level is disconnected from one or more nodes in the current level and 553 southbound SPF desires to use those links as backup in case of some 554 switches in the spine being partitioned in respect to some PoDs. 556 A node's S-TIEs, consisting of a node's adjacencies and a default IP 557 prefix, are flooded southbound in order to allow the nodes one level 558 down to see connectivity of the higher level as well as reachability 559 to the rest of the fabric. In order to allow a disconnected node in 560 a given level to receive the S-TIEs of other nodes at its level, 561 every *Node* S-TIE is "reflected" northbound to level from which it 562 was received. 
A node does not send an S-TIE northbound if it is from 563 the same or lower level. No S-TIEs are propagated southbound. 565 Node S-TIE "reflection" allows supporting the disaggregation on failures 566 described in Section 4.2.4 and the flooding reduction in Section 4.2.3.7. 568 Observe that a node does not reflood S-TIEs received from the lower 569 level towards other southbound nodes, which has implications on the 570 way TIDEs are generated in the southbound direction. 572 As an example to illustrate these rules, consider using the topology 573 in Figure 1, with the optional link between Node111 and Node 112, and 574 the associated TIEs given in Figure 2. The flooding from particular 575 nodes of the TIEs is given in Table 1. 577 Router Neighbor TIEs 578 floods to 579 --------- -------- -------------------------------------------------- 580 Leaf111 Node112 Leaf111 N-TIE, Node111 S-TIE 581 Leaf111 Node111 Leaf111 N-TIE, Node112 S-TIE 583 Node111 Leaf111 Node111 S-TIE 584 Node111 Leaf112 Node111 S-TIE 585 Node111 Node112 Node111 S-TIE, Node111 N-TIE, Leaf111 N-TIE, 586 Leaf112 N-TIE, Spine21 S-TIE, Spine22 S-TIE 587 Node111 Spine21 Node111 N-TIE, Node112 N-TIE, Leaf111 N-TIE, 588 Leaf112 N-TIE, Spine22 S-TIE 589 Node111 Spine22 Node111 N-TIE, Node112 N-TIE, Leaf111 N-TIE, 590 Leaf112 N-TIE, Spine21 S-TIE 592 Node121 Leaf121 Node121 S-TIE 593 Node121 Leaf122 Node121 S-TIE 594 Node121 Spine21 Node121 N-TIE, Leaf121 N-TIE, Leaf122 N-TIE, 595 Spine22 S-TIE 596 Node121 Spine22 Node121 N-TIE, Leaf121 N-TIE, Leaf122 N-TIE, 597 Spine21 S-TIE 599 Spine21 Node111 Spine21 S-TIE 600 Spine21 Node112 Spine21 S-TIE 601 Spine21 Node121 Spine21 S-TIE 602 Spine21 Node122 Spine21 S-TIE 603 Spine22 Node111 Spine22 S-TIE 604 Spine22 Node112 Spine22 S-TIE 605 Spine22 Node121 Spine22 S-TIE 606 Spine22 Node122 Spine22 S-TIE 608 Table 1: Flooding some TIEs from the example topology 610 4.2.3.5. Initial and Periodic Database Synchronization 612 The initial exchange of RIFT is modelled after ISIS, with TIDE being 613 equivalent to CSNP and TIRE playing the role of PSNP. The content of 614 TIDEs in the north and south direction will obviously contain just the 615 according database variant and reflect the flooding scopes defined. 617 4.2.3.6. Purging 619 RIFT does not purge information that has been distributed by the 620 protocol. Purging mechanisms in other routing protocols have proven 621 through many years of experience to be complex and fragile. Abundant 622 amounts of memory are available today even on low-end platforms. The 623 information will age out and all computations will deliver correct 624 results if a node leaves the network, due to the new information 625 distributed by its adjacent nodes. 627 Once a RIFT node issues a TIE with an ID, it MUST preserve the ID in 628 its database until it restarts, even if the TIE loses all content. 629 The re-advertisement of an empty TIE fulfills the purpose of purging 630 any information advertised in previous versions. The originator is 631 free to not re-originate the according empty TIE again or to originate 632 an empty TIE with a relatively short lifetime to prevent a large number 633 of long-lived empty stubs polluting the network. Each node will 634 time out and clean up the according empty TIEs independently. 636 4.2.3.7. Optional Automatic Flooding Reduction and Partitioning 638 Two nodes can, but strictly only under conditions defined below, run 639 a hashing function based on the TIE originator value and partition 640 flooding between them.
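As an illustration of the election performed in the steps below, a minimal, non-normative sketch of computing the preference and picking a flooding leader and backup per TIE originator might look as follows (64-bit IDs assumed, candidate filtering per step 1 below already done):

   # Non-normative sketch; assumes 64-bit system IDs and that the
   # candidate set was already filtered as described in step 1 below.
   MASK64 = (1 << 64) - 1

   def preference(tie_originator_id, candidate_system_id):
       # Higher is better: XOR(TIE-ORIGINATOR-ID<<1, ~OWN-SYSTEM-ID)
       return ((tie_originator_id << 1) ^
               (~candidate_system_id & MASK64)) & MASK64

   def elect(tie_originator_id, candidate_system_ids):
       ranked = sorted(candidate_system_ids,
                       key=lambda sid: preference(tie_originator_id, sid),
                       reverse=True)
       leader = ranked[0]
       backup = ranked[1] if len(ranked) > 1 else None
       return leader, backup

   # A node re-floods a received TIE only if its own system ID is the
   # elected leader or backup for that TIE's originator, subject to the
   # additional rules below (e.g. self-originated TIEs always flood).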
642 Steps for flooding reduction and partitioning: 644 1. select all nodes in the same level for which node S-TIEs have 645 been received and which have precisely the same set of north and 646 south neighbor adjacencies and support flooding reduction and 647 then 649 2. run on the chosen set a hash algorithm using nodes flood 650 priorities and IDs to select flooding leader and backup per TIE 651 originator ID, i.e. each node floods immediately through to all 652 its necessary neighbors TIEs that it received with an originator 653 ID that makes it the flooding leader or backup for this 654 originator. The preference (higher is better) is computed as 655 XOR(TIE-ORIGINATOR-ID<<1,~OWN-SYSTEM-ID)). 657 Additional rules for flooding reduction and partitioning: 659 a. A node always floods its own TIEs 661 b. A node generates TIDEs as usual but when receiving TIREs with 662 requests for TIEs for a node for which it is not a flooding 663 leader or backup it ignores such TIDEs on first request only. 664 Normally, the flooding leader should satisfy the requestor and 665 with that no further TIREs for such TIEs will be generated. 666 Otherwise, the next set of TIDEs and TIREs will lead to flooding 667 independent of the flooding leader status. 669 c. A node receiving a TIE originated by a node for which it is not a 670 flooding leader floods such TIEs only when receiving an out-of- 671 date TIDE for them, except for the first one. 673 The mechanism can be implemented optionally in each node. The 674 capability is carried in the node N-TIE. 676 Obviously flooding reduction does NOT apply to self originated TIEs. 677 Observe further that all policy-guided information consists of self- 678 originated TIEs. 680 4.2.4. Automatic Disaggregation on Link & Node Failures 682 Under normal circumstances, a node S-TIEs contain just its 683 adjacencies, a default route and policy-guided prefixes. However, if 684 a node detects that its default IP prefix covers one or more prefixes 685 that are reachable through it but not through one or more other nodes 686 at the same level, then it must explicitly advertise those prefixes 687 in an S-TIE. Otherwise, some percentage of the northbound traffic 688 for those prefixes would be sent to nodes without according 689 reachability, causing it to be blackholed. Even when not 690 blackholing, the resulting forwarding could 'backhaul' packets 691 through the higher level spines, clearly an undesirable condition 692 affecting the blocking probabilities of the fabric. 694 We refer to the process of advertising additional prefixes as 'de- 695 aggregation'. 697 A node determines the set of prefixes needing de-aggregation using 698 the following steps: 700 a. A DAG computation in the southern direction is performed first, 701 i.e. the N-TIEs are used to find all of prefixes it can reach and 702 the set of next-hops in the lower level for each. Such a 703 computation can be easily performed on a fat tree by e.g. setting 704 all link costs in the southern direction to 1 and all northern 705 directions to infinity. We term set of those prefixes |R, and 706 for each prefix, r, in |R, we define its set of next-hops to 707 be |H(r). Observe that policy-guided prefixes are NOT affected 708 since their scope is controlled by configuration. Overload bits 709 as introduced in Section 4.2.6.2.1 have to be respected during 710 the computation. 712 b. The node uses reflected S-TIEs to find all nodes at the same 713 level in the same PoD and the set of southbound adjacencies for 714 each. 
The set of nodes at the same level is termed |N and for 715 each node, n, in |N, we define its set of southbound adjacencies 716 to be |A(n). 718 c. For a given r, if the intersection of |H(r) and |A(n), for any n, 719 is null then that prefix r must be explicitly advertised by the 720 node in an S-TIE. 722 d. Identical set of de-aggregated prefixes is flooded on each of the 723 node's southbound adjacencies. In accordance with the normal 724 flooding rules for an S-TIE, a node at the lower level that 725 receives this S-TIE will not propagate it south-bound. Neither 726 is it necessary for the receiving node to reflect the 727 disaggregated prefixes back over its adjacencies to nodes at the 728 level from which it was received. 730 To summarize the above in simplest terms: if a node detects that its 731 default route encompasses prefixes for which one of the other nodes 732 in its level has no possible next-hops in the level below, it has to 733 disaggregate it to prevent blackholing or suboptimal routing. Hence 734 a node X needs to determine if it can reach a different set of south 735 neighbors than other nodes at the same level, which are connected via 736 at least one south or east-west neighbor. If it can, then prefix 737 disaggregation may be required. If it can't, then no prefix 738 disaggregation is needed. An example of disaggregation is provided 739 in Section 5.3. 741 A possible algorithm is described last: 743 1. Create partial_neighbors = (empty), a set of neighbors with 744 partial connectivity to the node X's layer from X's perspective. 745 Each entry is a list of south neighbor of X and a list of nodes 746 of X.layer that can't reach that neighbor. 748 2. A node X determines its set of southbound neighbors 749 X.south_neighbors. 751 3. For each S-TIE originated from a node Y that X has which is at 752 X.layer, if Y.south_neighbors is not the same as 753 X.south_neighbors, for each neighbor N in X.south_neighbors but 754 not in Y.south_neighbors, add (N, (Y))to partial_neighbors if N 755 isn't there or add Y to the list for N. 757 4. If partial_neighbors is empty, then node X does not to 758 disaggregate any prefixes. If node X is advertising 759 disaggregated prefixes in its S-TIE, X SHOULD remove them and 760 readvertise its according S-TIEs. 762 A node X computes its SPF based upon the received N-TIEs. This 763 results in a set of routes, each categorized by (prefix, 764 path_distance, next-hop-set). Alternately, for clarity in the 765 following procedure, these can be organized by next-hop-set as ( 766 (next-hops), {(prefix, path_distance)}). If partial_neighbors isn't 767 empty, then the following procedure describes how to identify 768 prefixes to disaggregate. 770 disaggregated_prefixes = {empty } 771 nodes_same_layer = { empty } 772 for each S-TIE 773 if S-TIE.layer == X.layer 774 add S-TIE.originator to nodes_same_layer 775 end if 776 end for 778 for each next-hop-set NHS 779 isolated_nodes = nodes_same_layer 780 for each NH in NHS 781 if NH in partial_neighbors 782 isolated_nodes = intersection(isolated_nodes, 783 partial_neighbors[NH].nodes) 784 end if 785 end for 787 if isolated_nodes is not empty 788 for each prefix using NHS 789 add (prefix, distance) to disaggregated_prefixes 790 end for 791 end if 792 end for 794 copy disaggregated_prefixes to X's S-TIE 795 if X's S-TIE is different 796 schedule S-TIE for flooding 797 end if 799 Figure 3: Computation to Disaggregate Prefixes 801 Each disaggregated prefix is sent with the accurate path_distance. 
802 This allows a node to send the same S-TIE to each south neighbor. 803 The south neighbor which is connected to that prefix will thus have a 804 shorter path. 806 Finally, to summarize the less obvious points: 808 a. all the lower level nodes are flooded the same disaggregated prefixes 809 since we do not want to build an S-TIE per node, which would complicate 810 things unnecessarily. The PoD containing the prefix will prefer the 811 southbound path anyway. 813 b. disaggregated prefixes do NOT have to propagate to lower levels. 814 With that, the disturbance in terms of new flooding is contained 815 to a single level experiencing failures only. 817 c. disaggregated S-TIEs are not "reflected" by the lower layer, i.e. 818 nodes within the same level do NOT need to be aware which node 819 computed the need for disaggregation. 821 d. The fabric still supports maximum load balancing properties 822 while not trying to send traffic northbound unless necessary. 824 4.2.5. Policy-Guided Prefixes 826 In a fat tree, it can sometimes be desirable to guide traffic to 827 particular destinations or keep specific flows to certain paths. In 828 RIFT, this is done by using policy-guided prefixes with their 829 associated communities. Each community is an abstract value whose 830 meaning is determined by configuration. It is assumed that the 831 fabric is under a single administrative control so that the meaning 832 and intent of the communities is understood by all the nodes in the 833 fabric. Any node can originate a policy-guided prefix. 835 Since RIFT uses distance vector concepts in a southbound direction, 836 it is straightforward to add a policy-guided prefix to an S-TIE. For 837 easier troubleshooting, the approach taken in RIFT is that a node's 838 southbound policy-guided prefixes are sent in its S-TIE and the 839 receiver does inbound filtering based on the associated communities 840 (an egress policy is imaginable but would possibly lead to different S-TIEs 841 per neighbor, which is not considered in RIFT protocol 842 procedures). A southbound policy-guided prefix can only use links in 843 the south direction. If a PGP S-TIE is received on an east-west or 844 northbound link, it must be discarded by ingress filtering. 846 Conceptually, a southbound policy-guided prefix guides traffic from 847 the leaves up to at most the northmost layer. It is also necessary 848 to have northbound policy-guided prefixes to guide traffic from 849 the northmost layer down to the appropriate leaves. Therefore, RIFT 850 includes northbound policy-guided prefixes in its N PGP-TIE and the 851 receiver does inbound filtering based on the associated communities. 852 A northbound policy-guided prefix can only use links in the northern 853 direction. If an N PGP TIE is received on an east-west or southbound 854 link, it must be discarded by ingress filtering. 856 By separating southbound and northbound policy-guided prefixes and 857 requiring that the cost associated with a PGP is strictly 858 monotonically increasing at each hop, the path cannot loop. Because 859 the costs are strictly increasing, it is not possible to have a loop 860 between a northbound PGP and a southbound PGP. If east-west links 861 were to be allowed, then looping could occur and issues such as 862 counting to infinity would have to be solved. If complete 863 generality of path is desired - such as including east-west links and using both 864 north and south links in arbitrary sequence - then a Path Vector 865 protocol or a similar solution must be considered.
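A minimal, non-normative sketch of the direction checks and the monotonic cost handling just described could look as follows (the link classification and the PGP structure are assumed, not taken from the schema):

   # Non-normative sketch of PGP ingress direction filtering and the
   # strictly increasing cost on re-advertisement; structures assumed.
   def accept_pgp_tie(pgp_kind, receiving_link):
       # pgp_kind: "S-PGP" or "N-PGP"; receiving_link: "north", "south"
       # or "east-west" as classified by the receiving node.
       if pgp_kind == "S-PGP" and receiving_link in ("east-west", "north"):
           return False          # discarded by ingress filtering
       if pgp_kind == "N-PGP" and receiving_link in ("east-west", "south"):
           return False
       return True

   def stored_cost(received_cost, link_cost):
       # Section 4.2.5.3: the stored cost MUST be at least 1 greater than
       # the received cost; adding a positive link cost satisfies this.
       return received_cost + max(1, link_cost)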
867 If a node has received the same prefix, after ingress filtering, as a 868 PGP in an S-TIE and in an N-TIE, then the node determines which 869 policy-guided prefix to use based upon the advertised cost. 871 A policy-guided prefix is always preferred to a regular prefix, even 872 if the policy-guided prefix has a larger cost. 874 The set of policy-guided prefixes received in a TIE is subject to 875 ingress filtering and then regenerated to be sent out in the 876 receiver's appropriate TIE. Both the ingress filtering and the 877 regeneration use the communities associated with the policy-guided 878 prefixes to determine the correct behavior. The cost on re- 879 advertisement MUST increase in a strictly monotonic fashion. 881 4.2.5.1. Ingress Filtering 883 When a node X receives a PGP S-TIE or N-TIE that is originated from a 884 node Y which does not have an adjacency with X, such a TIE MUST be 885 discarded. Similarly, if node Y is at the same layer as node X, then 886 X MUST discard PGP S- and N-TIEs. 888 Next, policy can be applied to determine which policy-guided prefixes 889 to accept. Since ingress filtering is chosen rather than egress 890 filtering and per-neighbor PGPs, policy that applies to links is done 891 at the receiver. Because the RIFT adjacency is between nodes and 892 there may be parallel links between the two nodes, the policy-guided 893 prefix is considered to start with the next-hop set that has all 894 links to the originating node Y. 896 A policy-guided prefix has or is assigned the following attributes: 898 cost: This is initialized to the cost received. 900 community_list: This is initialized to the list of the communities 901 received. 903 next_hop_set: This is initialized to the set of links to the 904 originating node Y. 906 4.2.5.2. Applying Policy 908 The specific action to apply based upon a community is deployment 909 specific. Here are some examples of things that can be done with 910 communities. A community is a 64-bit number and it 911 can be written as a single field M or as a multi-field (S = M[0-31], 912 T = M[32-63]) in these examples. For simplicity, the policy-guided 913 prefix is referred to as P, the processing node as X and the 914 originator as Y. A short, non-normative sketch of two of these actions is shown after the list. 916 Prune Next-Hops: Community Required: For each next-hop in 917 P.next_hop_set, if the next-hop does not have the community, prune 918 that next-hop from P.next_hop_set. 920 Prune Next-Hops: Avoid Community: For each next-hop in 921 P.next_hop_set, if the next-hop has the community, prune that 922 next-hop from P.next_hop_set. 924 Drop if Community: If node X has community M, discard P. 926 Drop if not Community: If node X does not have the community M, 927 discard P. 929 Prune to ifIndex T: For each next-hop in P.next_hop_set, if the 930 next-hop's ifIndex is not the value T specified in the community 931 (S,T), then prune that next-hop from P.next_hop_set. 933 Add Cost T: For each appearance of community S in P.community_list, 934 if the node X has community S, then add T to P.cost. 936 Accumulate Min-BW T: Let bw be the sum of the bandwidth for 937 P.next_hop_set. If that sum is less than T, then replace (S,T) 938 with (S, bw). 940 Add Community T if Node matches S: If the node X has community S, 941 then add community T to P.community_list.
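As a non-normative illustration of two of the actions above, using the (S, T) split of the 64-bit community value (all structures and names are assumptions):

   # Illustrative sketch of applying community-driven actions; the PGP
   # and next-hop representations are assumptions, not the schema's.
   def split_community(m):
       # S = M[0-31] (low half), T = M[32-63] (high half)
       return m & 0xFFFFFFFF, (m >> 32) & 0xFFFFFFFF

   def drop_if_community(node_communities, m):
       # "Drop if Community": discard P if node X has community M
       return m in node_communities     # True means P is discarded

   def prune_to_ifindex(pgp, community):
       # "Prune to ifIndex T": keep only next-hops whose ifIndex equals T
       s, t = split_community(community)
       pgp.next_hop_set = {nh for nh in pgp.next_hop_set
                           if nh.ifindex == t}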
943 4.2.5.3. Store Policy-Guided Prefix for Route Computation and 944 Regeneration 946 Once a policy-guided prefix has completed ingress filtering and 947 policy, it is almost ready to store and use. It is still necessary 948 to adjust the cost of the prefix to account for the link from the 949 computing node X to the originating neighbor node Y. 951 There are three different policies that can be used: 953 Minimum Equal-Cost: Find the lowest cost C next-hops in 954 P.next_hop_set and prune to those. Add C to P.cost. 956 Minimum Unequal-Cost: Find the lowest cost C next-hop in 957 P.next_hop_set. Add C to P.cost. 959 Maximum Unequal-Cost: Find the highest cost C next-hop in 960 P.next_hop_set. Add C to P.cost. 962 The default policy is Minimum Unequal-Cost but well-known communities 963 can be defined to get the other behaviors. 965 Regardless of the policy used, a node MUST store a PGP cost that is 966 at least 1 greater than the PGP cost received. This enforces the 967 strictly monotonically increasing condition that avoids loops. 969 Two databases of PGPs - from N-TIEs and from S-TIEs - are stored. When 970 a PGP is inserted into the appropriate database, the usual 971 tiebreaking on cost is performed. Observe that the node retains all 972 PGP TIEs due to normal flooding behavior and hence loss of the best 973 prefix will lead to re-evaluation of the TIEs present and readvertisement 974 of a new best PGP. 976 4.2.5.4. Regeneration 978 A node must regenerate policy-guided prefixes and retransmit them. 979 The node has its database of southbound policy-guided prefixes to 980 send in its S-TIE and its database of northbound policy-guided 981 prefixes to send in its N-TIE. 983 Of course, a leaf does not need to regenerate southbound policy- 984 guided prefixes. 986 4.2.5.5. Overlap with Disaggregated Prefixes 988 PGPs may overlap with prefixes introduced by automatic de- 989 aggregation. The topic is under further discussion. The break in 990 connectivity that leads to infeasibility of a PGP is mirrored in 991 adjacency tear-down and the according removal of such PGPs. 992 Nevertheless, the underlying link-state flooding will likely 993 react significantly faster than a hop-by-hop redistribution and 994 with that the preference for PGPs may cause intermittent blackholes. 996 4.2.6. Reachability Computation 998 A node has three sources of relevant information. A node knows the 999 full topology south of it from the received N-TIEs. A node has the set of 1000 prefixes with associated distances and bandwidths from received 1001 S-TIEs. A node can also have a set of PGPs. 1003 4.2.6.1. Specification 1005 A node uses the N-TIEs to build a network graph with unidirectional 1006 links. As in IS-IS or OSPF, unidirectional links are associated 1007 together to confirm bidirectional connectivity. Because of the 1008 requirement that a packet traversing in a southbound direction must 1009 not take any northbound links, a node has topological visibility 1010 only south of itself. There are no links at the computing node's 1011 level that go to a northbound level. Therefore, all paths computed 1012 must contain only east-west and southbound links. To enforce this, 1013 the network graph MUST have either its northbound unidirectional 1014 links removed or set to have a cost of COST_INFINITY. 1016 A node runs a standard shortest path first (SPF) algorithm on this 1017 network graph. If a node is minimized to have a cost of 1018 COST_INFINITY, then it is not reachable. 1020 4.2.6.1.1. Attaching Prefixes 1022 After the SPF is run, it is necessary to attach prefixes.
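A non-normative sketch of the graph pruning and SPF just described, together with the prefix attachment detailed below, might look as follows (graph and TIE representations are assumptions, and the equal-cost handling is simplified):

   # Non-normative sketch; graph and TIE representations are assumed.
   import heapq

   def southbound_spf(links, self_id):
       # links: (node_a, node_b, cost, direction) with direction "south",
       # "east-west" or "north" as seen from node_a; only bidirectionally
       # confirmed links are included. Northbound links are never added,
       # which is equivalent to pruning them or costing them COST_INFINITY.
       adj = {}
       for a, b, cost, direction in links:
           if direction in ("south", "east-west"):
               adj.setdefault(a, []).append((b, cost))
       dist, nhops = {self_id: 0}, {self_id: set()}
       heap = [(0, self_id)]
       while heap:
           d, u = heapq.heappop(heap)
           if d > dist[u]:
               continue
           for v, cost in adj.get(u, ()):
               nd = d + cost
               first_hops = {v} if u == self_id else nhops[u]
               if v not in dist or nd < dist[v]:
                   dist[v] = nd
                   nhops[v] = set(first_hops)
                   heapq.heappush(heap, (nd, v))
               elif nd == dist[v]:
                   nhops[v] |= first_hops   # equal-cost multipath (simplified)
       return dist, nhops

   def attach_n_tie_prefixes(route_db, n_ties, dist, nhops):
       # Prefix distance = prefix cost + originator's minimized path
       # distance; next-hops are the originator's next-hop set.
       for tie in n_ties:
           if tie.originator in dist:
               for prefix, cost in tie.prefixes:
                   route_db[prefix] = ("spf", dist[tie.originator] + cost,
                                       nhops[tie.originator])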
Prefixes 1023 from an N-TIE are attached to the originating node with that node's 1024 next-hop set and a distance equal to the prefix's cost plus the 1025 node's minimized path distance. The RIFT route database, a set of 1026 (prefix, type=spf, path_distance, next-hop set), accumulates these 1027 results. 1029 Prefixes from each S-TIE need to also be added to the RIFT route 1030 database. There is no SPF to be run. Instead, the computing node 1031 needs to determine, for each prefix in an S-TIE that originated from 1032 adjacent node, what next-hops to use to reach that node. Since there 1033 may be parallel links, the next-hops to use can be a set; presence of 1034 the computing node in the associated Node S-TIE is sufficient to 1035 verify that at least one link has bidirectional connectivity. The 1036 set of minimum cost next-hops from the computing node X to the 1037 originating adjacent node is determined. 1039 Each prefix has its cost adjusted before being added into the RIFT 1040 route database. The cost of the prefix is set to the cost received 1041 plus the cost of the minimum cost next-hop to that neighbor. Then 1042 each prefix can be added into the RIFT route database with the 1043 next_hop_set; ties are broken based upon distance and type. 1045 An exemplary implementation for node X follows: 1047 for each S-TIE 1048 if S-TIE.layer > X.layer 1049 next_hop_set = set of minimum cost links to the S-TIE.originator 1050 next_hop_cost = minimum cost link to S-TIE.originator 1051 end if 1052 for each prefix P in the S-TIE 1053 P.cost = P.cost + next_hop_cost 1054 if P not in route_database: 1055 add (P, type=DistVector, P.cost, next_hop_set) to route_database 1056 end if 1057 if (P in route_database) and 1058 (route_database[P].type is not PolicyGuided): 1059 if route_database[P].cost > P.cost): 1060 update route_database[P] with (P, DistVector, P.cost, next_hop_set) 1061 else if route_database[P].cost == P.cost 1062 update route_database[P] with (P, DistVector, P.cost, 1063 merge(next_hop_set, route_database[P].next_hop_set)) 1064 else 1065 // Not prefered route so ignore 1066 end if 1067 end if 1068 end for 1069 end for 1071 Figure 4: Adding Routes from S-TIE Prefixes 1073 4.2.6.1.2. Attaching Policy-Guided Prefixes 1075 Each policy-guided prefix P has its cost and next_hop_set already 1076 stored in the associated database, as specified in Section 4.2.5.3; 1077 the cost stored for the PGP is already updated to considering the 1078 cost of the link to the advertising neighbor. By definition, a 1079 policy-guided prefix is preferred to a regular prefix. 1081 for each policy-guided prefix P: 1082 if P not in route_database: 1083 add (P, type=PolicyGuided, P.cost, next_hop_set) 1084 end if 1085 if P in route_database : 1086 if (route_database[P].type is not PolicyGuided) or 1087 (route_database[P].cost > P.cost): 1088 update route_database[P] with (P, PolicyGuided, P.cost, next_hop_set) 1089 else if route_database[P].cost == P.cost 1090 update route_database[P] with (P, PolicyGuided, P.cost, 1091 merge(next_hop_set, route_database[P].next_hop_set)) 1092 else 1093 // Not prefered route so ignore 1094 end if 1095 end if 1096 end for 1098 Figure 5: Adding Routes from Policy-Guided Prefixes 1100 4.2.6.2. Further Mechanisms 1102 4.2.6.2.1. 
Overload Bit 1104 The leaf node SHOULD set the 'overload' bit on its N-TIE, since if 1105 the spine nodes were to forward traffic not meant for the local node, 1106 the leaf node does not have the topology information to prevent a 1107 routing/forwarding loop. 1109 The Overload Bit MUST be respected in all according reachability 1110 computations. A node with the overload bit set MUST NOT advertise any 1111 reachability prefixes southbound. 1113 4.2.6.2.2. Optimized Route Computation on Leafs 1115 Since the leafs see only "one hop away", they do not need to run a 1116 full SPF but can simply gather prefix candidates from their neighbors 1117 and build the according routing table. 1119 A leaf will have no N-TIEs except optionally from its east-west 1120 neighbors. A leaf will have S-TIEs from its neighbors. 1122 Instead of creating a network graph from its N-TIEs and running an 1123 SPF, a leaf node can simply compute the minimum cost and next_hop_set 1124 to each leaf neighbor by examining its local interfaces, determining 1125 bi-directionality from the associated N-TIE, and specifying the 1126 neighbor's next_hop_set and cost from the minimum cost local 1127 interfaces to that neighbor. 1129 Then a leaf attaches prefixes as in Section 4.2.6.1.1 as well as the 1130 policy-guided prefixes as in Section 4.2.6.1.2. 1132 4.2.7. Key/Value Store 1134 The protocol supports a southbound distribution of key-value pairs 1135 that can be used to e.g. distribute configuration information during 1136 topology bringup. The KV TIEs (which are always S-TIEs) can arrive 1137 from multiple nodes and need tie-breaking per key, using the following 1138 rules: 1140 a. Only KV TIEs originated by a node to which the receiver has an 1141 adjacency are considered. 1143 b. Within all valid KV S-TIEs containing the key, the value of the 1144 S-TIE with the highest level and, within the same level, the highest 1145 originator ID is preferred. 1147 Observe that if a node goes down, the node south of it loses 1148 adjacencies to it and with that the KVs will be disregarded and, on 1149 tie-break changes, new KVs readvertised to prevent stale information 1150 being used by nodes further south. KV information is not the result of an 1151 independent computation by every node but of a diffused computation. 1153 5. Examples 1155 5.1. Normal Operation 1157 This section describes RIFT deployment in the example topology 1158 without any node or link failures. We disregard flooding reduction 1159 for simplicity's sake. 1161 As a first step, the following bi-directional adjacencies will be 1162 created (and any other links that do not fulfill the LIE rules in 1163 Section 4.2.2 disregarded): 1165 o Spine 21 (PoD 0) to Node 111, Node 112, Node 121, and Node 122 1167 o Spine 22 (PoD 0) to Node 111, Node 112, Node 121, and Node 122 1169 o Node 111 to Leaf 111, Leaf 112 1171 o Node 112 to Leaf 111, Leaf 112 1173 o Node 121 to Leaf 121, Leaf 122 1175 o Node 122 to Leaf 121, Leaf 122 1176 Consequently, N-TIEs would be originated by Node 111 and Node 112 and 1177 each set would be sent to both Spine 21 and Spine 22. N-TIEs also 1178 would be originated by Leaf 111 (w/ Prefix 111) and Leaf 112 (w/ 1179 Prefix 112 and the multihomed prefix) and each set would be sent to 1180 Node 111 and Node 112. Node 111 and Node 112 would then flood these 1181 N-TIEs to Spine 21 and Spine 22. 1183 Similarly, N-TIEs would be originated by Node 121 and Node 122 and 1184 each set would be sent to both Spine 21 and Spine 22.
5. Examples

5.1. Normal Operation

This section describes RIFT deployment in the example topology
without any node or link failures.  We disregard flooding reduction
for simplicity's sake.

As a first step, the following bi-directional adjacencies will be
created (and any other links that do not fulfill LIE rules in
Section 4.2.2 disregarded):

o  Spine 21 (PoD 0) to Node 111, Node 112, Node 121, and Node 122

o  Spine 22 (PoD 0) to Node 111, Node 112, Node 121, and Node 122

o  Node 111 to Leaf 111, Leaf 112

o  Node 112 to Leaf 111, Leaf 112

o  Node 121 to Leaf 121, Leaf 122

o  Node 122 to Leaf 121, Leaf 122

Consequently, N-TIEs would be originated by Node 111 and Node 112 and
each set would be sent to both Spine 21 and Spine 22.  N-TIEs also
would be originated by Leaf 111 (w/ Prefix 111) and Leaf 112 (w/
Prefix 112 and the multihomed prefix) and each set would be sent to
Node 111 and Node 112.  Node 111 and Node 112 would then flood these
N-TIEs to Spine 21 and Spine 22.

Similarly, N-TIEs would be originated by Node 121 and Node 122 and
each set would be sent to both Spine 21 and Spine 22.  N-TIEs also
would be originated by Leaf 121 (w/ Prefix 121 and the multihomed
prefix) and Leaf 122 (w/ Prefix 122) and each set would be sent to
Node 121 and Node 122.  Node 121 and Node 122 would then flood these
N-TIEs to Spine 21 and Spine 22.

At this point both Spine 21 and Spine 22, as well as any controller
to which they are connected, would have the complete network
topology.  At the same time, Node 111/112/121/122 hold only the
N-TIEs of level 0 of their respective PoD.  Leafs hold only their own
N-TIEs.

S-TIEs with adjacencies and a default IP prefix would then be
originated by Spine 21 and Spine 22 and each would be flooded to Node
111, Node 112, Node 121, and Node 122.  Node 111, Node 112, Node 121,
and Node 122 would each send the S-TIE from Spine 21 to Spine 22 and
the S-TIE from Spine 22 to Spine 21.  (S-TIEs are reflected up to the
level from which they are received but they are NOT propagated
southbound.)

An S-TIE with a default IP prefix would be originated by Node 111 and
Node 112 and each would be sent to Leaf 111 and Leaf 112.  Leaf 111
and Leaf 112 would each send the S-TIE from Node 111 to Node 112 and
the S-TIE from Node 112 to Node 111.

Similarly, an S-TIE with a default IP prefix would be originated by
Node 121 and Node 122 and each would be sent to Leaf 121 and Leaf
122.  Leaf 121 and Leaf 122 would each send the S-TIE from Node 121
to Node 122 and the S-TIE from Node 122 to Node 121.  At this point
IP connectivity with maximum possible ECMP has been established
between the Leafs while constraining the amount of information held
by each node to the minimum necessary for normal operation and
dealing with failures.

5.2. Leaf Link Failure

. | | | |
.+-+---+-+ +-+---+-+
.| | | |
.|Node111| |Node112|
.+-+---+-+ ++----+-+
. | | | |
. | +---------------+ X
. | | | X Failure
. | +-------------+ | X
. | | | |
.+-+---+-+ +--+--+-+
.| | | |
.|Leaf111| |Leaf112|
.+-------+ +-------+
. + +
. Prefix111 Prefix112

              Figure 6: Single Leaf link failure

In case of a failing leaf link between node 112 and leaf 112, the
link-state information will cause recomputation of the necessary SPF
and the higher levels will stop forwarding towards prefix 112 through
node 112.  Only nodes 111 and 112, as well as both spines, will see
control traffic.  Leaf 111 will receive a new S-TIE from node 112 and
reflect it back to node 111.  Node 111 will disaggregate Prefix 111
and Prefix 112 but we will not describe that further here since
disaggregation is emphasized in the next example.  It is worth
observing in this example, however, that if Leaf 111 kept forwarding
traffic towards Prefix 112 using the advertised south-bound default
of Node 112, the traffic would end up on Spine 21 and Spine 22 and
cross back into PoD 1 via Node 111.  This is arguably not as bad as
the blackholing present in the next example, but it is clearly
undesirable.  Fortunately, disaggregation prevents this type of
behavior except for a transitory period of time.
5.3. Partitioned Fabric

. +--------+ +--------+ S-TIE of Spine21
. | | | | received by
. |Spine 21| |Spine 22| reflection of
. ++-+--+-++ ++-+--+-++ Nodes 112 and 111
. | | | | | | | |
. | | | | | | | 0/0
. | | | | | | | |
. | | | | | | | |
. +--------------+ | +--- XXXXXX + | | | +---------------+
. | | | | | | | |
. | +-----------------------------+ | | |
. 0/0 | | | | | | |
. | 0/0 0/0 +- XXXXXXXXXXXXXXXXXXXXXXXXX -+ |
. | 1.1/16 | | | | | |
. | | +-+ +-0/0-----------+ | |
. | | | 1.1/16 | | | |
.+-+----++ +-+-----+ ++-----0/0 ++----0/0
.| | | | | 1.1/16 | 1.1/16
.|Node111| |Node112| |Node121| |Node122|
.+-+---+-+ ++----+-+ +-+---+-+ ++---+--+
. | | | | | | | |
. | +---------------+ | | +----------------+ |
. | | | | | | | |
. | +-------------+ | | | +--------------+ | |
. | | | | | | | |
.+-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+
.| | | | | | | |
.|Leaf111| |Leaf112| |Leaf121| |Leaf122|
.+-+-----+ ++------+ +-----+-+ +-+-----+
. + + + +
. Prefix111 Prefix112 Prefix121 Prefix122
. 1.1/16

                   Figure 7: Fabric partition

Figure 7 shows the arguably most catastrophic, but also the most
interesting, case.  Spine 21 is completely severed from access to
Prefix 121 (we use 1.1/16 as an example in the figure) by a double
link failure.  However unlikely, if left unresolved, forwarding from
leaf 111 and leaf 112 towards Prefix 121 would suffer 50% blackholing
based on pure default route advertisements by spine 21 and spine 22.

The mechanism used to resolve this scenario hinges on the
distribution of the southbound representation by spine 21, which is
reflected by node 111 and node 112 to spine 22.  Spine 22, having
computed reachability to all prefixes in the network, advertises
alongside the default route those prefixes that are reachable only
via lower level neighbors to which Spine 21 does not show an
adjacency.  That results in node 111 and node 112 obtaining a
longest-prefix match to Prefix 121 which leads through Spine 22 and
prevents blackholing through Spine 21, which still advertises the 0/0
aggregate only.

The Prefix 121 advertised by spine 22 does not have to be propagated
further towards the leafs since they do not benefit from this
information.  Hence the amount of flooding is restricted to spine 21
reissuing its S-TIEs and the reflection of those by node 111 and node
112.  The resulting SPF in Spine 22 leads to new S-TIEs containing
1.1/16, which are again reflected by node 111 and node 112.  None of
the leafs become aware of the changes and the failure is constrained
strictly to the level that became partitioned.

To finish with an example of the resulting sets computed using the
notation introduced in Section 4.2.4, Spine 22 constructs the
following sets:

   |R = Prefix 111, Prefix 112, Prefix 121, Prefix 122

   |H (for r=Prefix 111) = Node 111, Node 112

   |H (for r=Prefix 112) = Node 111, Node 112

   |H (for r=Prefix 121) = Node 121, Node 122

   |H (for r=Prefix 122) = Node 121, Node 122

   |A (for Spine 21) = Node 111, Node 112

With that, and |H (for r=Prefix 121) and |H (for r=Prefix 122) being
disjoint from |A (for Spine 21), Spine 22 will originate an S-TIE
with Prefix 121 and Prefix 122, which is flooded to Nodes 111, 112,
121 and 122.
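The disaggregation decision just described can be expressed
compactly.  The following is a minimal, non-normative Python sketch,
assuming the sets above are available as collections of node
identifiers; the function name prefixes_to_disaggregate and the
variable names are illustrative only.

   # Non-normative sketch of the southbound disaggregation decision.
   # Names are illustrative and not part of the specification.
   from typing import Dict, Iterable, Set

   def prefixes_to_disaggregate(
           reachable_prefixes: Iterable[str],         # |R
           south_neighbors_for: Dict[str, Set[str]],  # |H per prefix r
           peer_adjacencies: Dict[str, Set[str]]      # |A per peer
   ) -> Set[str]:
       """Return the prefixes that must be advertised southbound in
       addition to the default because some same-level peer has no
       adjacency to any node through which the prefix is reachable."""
       result: Set[str] = set()
       for r in reachable_prefixes:
           for adjacent in peer_adjacencies.values():
               if south_neighbors_for[r].isdisjoint(adjacent):
                   result.add(r)   # peer cannot reach r: disaggregate
                   break
       return result

   # Spine 22's view in the example of Figure 7:
   H = {"Prefix 111": {"Node 111", "Node 112"},
        "Prefix 112": {"Node 111", "Node 112"},
        "Prefix 121": {"Node 121", "Node 122"},
        "Prefix 122": {"Node 121", "Node 122"}}
   A = {"Spine 21": {"Node 111", "Node 112"}}
   assert prefixes_to_disaggregate(H.keys(), H, A) == \
       {"Prefix 121", "Prefix 122"}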
6. Implementation and Operation: Further Details

6.1. Leaf to Leaf Connection

[QUESTION] This would imply that the leaves have to understand the
N-TIE format and pull out the prefixes to figure out the next-hop...
Do we want this complexity?[/QUESTION]

6.2. Other End-to-End Services

Losing full, flat topology information at every node will have an
impact on some of the end-to-end network services.  This is the price
paid for minimal disturbance in case of failures and for reduced
flooding and memory requirements on nodes further south in the level
hierarchy.

6.3. Address Family and Topology

Multi-Topology (MT) [RFC5120] and Multi-Instance (MI) [RFC6822] are
used today in link-state routing protocols to provide the option of
several instances on the same physical topology.  RIFT supports this
capability by carrying transport ports in the LIE protocol exchanges.
Multiplexing of LIEs can be achieved by either choosing varying
multicast addresses or ports on the same address.

7. Information Elements Schema

This section introduces the schema for information elements.

On schema changes that

a. change field numbers or

b. add new required fields or

c. change lists into sets, unions into structures or

d. change multiplicity of fields or

e. change datatypes of any field or

f. change the default value of any field

the major version of the schema MUST increase.  All other changes
MUST increase the minor version within the same major version.

A Thrift serializer/deserializer MUST NOT discard optional, unknown
fields but MUST preserve and serialize them again when re-flooding.

//! Thrift file for RIFT, flooding for fat trees
//! @note: all numbers are implementation coerced to unsigned versions
//!        using the highest bit

/// represents protocol major version
const i32 CURRENT_MAJOR_VERSION = 1
const i32 CURRENT_MINOR_VERSION = 0

typedef i64 SystemID
typedef i32 IPv4Address
/// this has to be of length long enough to accommodate a prefix
typedef binary IPv6Address
typedef i16 UDPPortType
typedef i16 TIENrType
typedef i16 MTUSizeType
typedef i32 SeqNrType
typedef i32 LifeTimeType
typedef i16 LevelType
typedef i16 PodType
typedef i16 VersionType
typedef i32 MetricType
typedef i64 KeyIDType
typedef i32 LinkIDType
typedef string KeyNameType
typedef bool TieDirectionType

const LevelType DEFAULT_LEVEL = 0
const PodType DEFAULT_POD = 0
const LinkIDType UNDEFINED_LINKID = 0

/// RIFT packet header
struct PacketHeader {
    1: required VersionType major_version = CURRENT_MAJOR_VERSION;
    2: required VersionType minor_version = CURRENT_MINOR_VERSION;
    3: required SystemID sender;
    4: optional LevelType level = DEFAULT_LEVEL;
}

struct ProtocolPacket {
    1: required PacketHeader header;
    2: required Content content;
}

union Content {
    1: optional LIE hello;
    2: optional TIDEPacket tide;
    3: optional TIREPacket tire;
    4: optional TIEPacket tie;
}

// serves as community for PGP
struct Community {
    1: required i32 top;
    2: required i32 bottom;
}

// content per N-S direction
union TIEElement {
    1: optional NorthTIEElement north_element;
    2: optional SouthTIEElement south_element;
}

// @todo: flood header separately in UDP ?
//        to allow caching of TIEs while changing lifetime?
struct TIEPacket {
    1: required TIEHeader header;
    // North and South TIEs need the correct union
    // member to be sent, otherwise content is ignored
    2: required TIEElement element;
}

enum TIETypeType {
    Illegal = 0,
    TIETypeMinValue = 1,
    NodeTIEType = 2,
    NorthPrefixTIEType = 3,
    SouthPrefixTIEType = 4,
    KeyValueTIEType = 5,
    NorthPGPrefixTIEType = 6,
    SouthPGPrefixTIEType = 7,
    TIETypeMaxValue = 8,
}

/// RIFT LIE packet
struct LIE {
    2: optional string name;
    3: required SystemID originator;
    // UDP port to which we can flood TIEs, same address
    // as the hello TX this hello has been received on
    4: required UDPPortType flood_port;
    5: optional Neighbor neighbor;
    6: optional PodType pod = DEFAULT_POD;
    // level is already included in the packet header
}

struct LinkID {
    1: required LinkIDType local_id;
    2: required LinkIDType remote_id;
    // more properties of the link can go in here
}

struct Neighbor {
    1: required SystemID originator;
    2: required UDPPortType flood_port;
    // ignored on LIE
    // can carry description of multiple
    // parallel links in a TIE
    3: optional set<LinkID> link_ids;
}

/// ID of a TIE
/// @note: TIEID space is a total order achieved by comparing the
///        elements in the sequence defined
struct TIEID {
    /// indicates whether N or S-TIE, True > False
    1: required TieDirectionType northbound;
    2: required SystemID originator;
    3: required TIETypeType tietype;
    4: required TIENrType tie_nr;
}

struct TIEHeader {
    2: required TIEID tieid;
    3: required SeqNrType seq_nr;
    // in seconds
    4: required LifeTimeType lifetime;
}

// sorted, otherwise protocol doesn't work properly
struct TIDEPacket {
    /// all 00s marks start
    1: required TIEID start_range;
    /// all FFs mark end
    2: required TIEID end_range;
    /// sorted list of headers
    3: required list<TIEHeader> headers;
}

struct TIREPacket {
    1: required set<TIEHeader> headers;
}

struct NodeNeighborsTIEElement {
    /// if a neighbor systemID repeats in the set or in TIEs
    /// the behavior is undefined
    1: required SystemID neighbor;
    2: required LevelType level;
    3: optional MetricType cost = 1;
}

/// capabilities the node supports
struct NodeCapabilities {
    1: required bool flood_reduction = true;
}

/// flags the node sets
struct NodeFlags {
    1: required bool overload = false;
}

struct NodeTIEElement {
    1: required LevelType level;
    2: optional NodeCapabilities capabilities;
    3: optional NodeFlags flags;
    4: required set<NodeNeighborsTIEElement> neighbors;
}

struct IPv4PrefixType {
    1: required IPv4Address address;
    2: required byte prefixlen;
}

struct IPv6PrefixType {
    1: required IPv6Address address;
    2: required byte prefixlen;
}

union IPPrefixType {
    1: optional IPv4PrefixType ipv4prefix;
    2: optional IPv6PrefixType ipv6prefix;
}

struct PrefixWithMetric {
    1: required IPPrefixType prefix;
    2: optional MetricType cost = 1;
}

struct PrefixTIEElement {
    /// if the same prefix repeats in multiple TIEs
    /// or with different metrics, behavior is unspecified
    1: required set<PrefixWithMetric> prefixes;
}

struct KeyValue {
    1: required KeyIDType keyid;
    2: optional KeyNameType key;
    3: optional string value = "";
}
struct KeyValueTIEElement { 1581 1: required set keyvalues; 1582 } 1584 /// single element in a N-TIE 1585 union NorthTIEElement { 1586 /// hinges of enum TIETypeType::NodeTIEType 1587 1: optional NodeTIEElement node; 1588 /// hinges of enum TIETypeType::NorthPrefixTIEType 1589 2: optional PrefixTIEElement prefixes; 1590 /// @todo: policy guided prefixes 1591 } 1593 union SouthTIEElement { 1594 /// hinges of enum TIETypeType::NodeTIEType 1595 1: optional NodeTIEElement node; 1596 2: optional KeyValueTIEElement keyvalues; 1597 /// hinges of enum TIETypeType::SouthPrefixTIEType 1598 3: optional PrefixTIEElement prefixes; 1599 /// @todo: policy guided prefixes 1600 } 1602 8. IANA Considerations 1604 9. Security Considerations 1606 10. Acknowledgments 1608 Many thanks to Naiming Shen for some of the early discussions around 1609 the topic of using IGPs for routing in topologies related to Clos. 1610 Adrian Farrel and Jeffrey Zhang provided thoughtful comments that 1611 improved the readability of the document and found good amount of 1612 corners where the light failed to shine. 1614 11. References 1616 11.1. Normative References 1618 [ISO10589] 1619 ISO "International Organization for Standardization", 1620 "Intermediate system to Intermediate system intra-domain 1621 routeing information exchange protocol for use in 1622 conjunction with the protocol for providing the 1623 connectionless-mode Network Service (ISO 8473), ISO/IEC 1624 10589:2002, Second Edition.", Nov 2002. 1626 [RFC1142] Oran, D., Ed., "OSI IS-IS Intra-domain Routing Protocol", 1627 RFC 1142, DOI 10.17487/RFC1142, February 1990, 1628 . 1630 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1631 Requirement Levels", BCP 14, RFC 2119, 1632 DOI 10.17487/RFC2119, March 1997, 1633 . 1635 [RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, 1636 DOI 10.17487/RFC2328, April 1998, 1637 . 1639 [RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A 1640 Border Gateway Protocol 4 (BGP-4)", RFC 4271, 1641 DOI 10.17487/RFC4271, January 2006, 1642 . 1644 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 1645 Element (PCE)-Based Architecture", RFC 4655, 1646 DOI 10.17487/RFC4655, August 2006, 1647 . 1649 [RFC5120] Przygienda, T., Shen, N., and N. Sheth, "M-ISIS: Multi 1650 Topology (MT) Routing in Intermediate System to 1651 Intermediate Systems (IS-ISs)", RFC 5120, 1652 DOI 10.17487/RFC5120, February 2008, 1653 . 1655 [RFC5303] Katz, D., Saluja, R., and D. Eastlake 3rd, "Three-Way 1656 Handshake for IS-IS Point-to-Point Adjacencies", RFC 5303, 1657 DOI 10.17487/RFC5303, October 2008, 1658 . 1660 [RFC5309] Shen, N., Ed. and A. Zinin, Ed., "Point-to-Point Operation 1661 over LAN in Link State Routing Protocols", RFC 5309, 1662 DOI 10.17487/RFC5309, October 2008, 1663 . 1665 [RFC6234] Eastlake 3rd, D. and T. Hansen, "US Secure Hash Algorithms 1666 (SHA and SHA-based HMAC and HKDF)", RFC 6234, 1667 DOI 10.17487/RFC6234, May 2011, 1668 . 1670 [RFC6822] Previdi, S., Ed., Ginsberg, L., Shand, M., Roy, A., and D. 1671 Ward, "IS-IS Multi-Instance", RFC 6822, 1672 DOI 10.17487/RFC6822, December 2012, 1673 . 1675 [RFC7855] Previdi, S., Ed., Filsfils, C., Ed., Decraene, B., 1676 Litkowski, S., Horneffer, M., and R. Shakir, "Source 1677 Packet Routing in Networking (SPRING) Problem Statement 1678 and Requirements", RFC 7855, DOI 10.17487/RFC7855, May 1679 2016, . 1681 [RFC7938] Lapukhov, P., Premji, A., and J. 
Mitchell, Ed., "Use of 1682 BGP for Routing in Large-Scale Data Centers", RFC 7938, 1683 DOI 10.17487/RFC7938, August 2016, 1684 . 1686 11.2. Informative References 1688 [CLOS] Yuan, X., "On Nonblocking Folded-Clos Networks in Computer 1689 Communication Environments", IEEE International Parallel & 1690 Distributed Processing Symposium, 2011. 1692 [DIJKSTRA] 1693 Dijkstra, E., "A Note on Two Problems in Connexion with 1694 Graphs", Journal Numer. Math. , 1959. 1696 [DYNAMO] De Candia et al., G., "Dynamo: amazon's highly available 1697 key-value store", ACM SIGOPS symposium on Operating 1698 systems principles (SOSP '07), 2007. 1700 [FATTREE] Leiserson, C., "Fat-Trees: Universal Networks for 1701 Hardware-Efficient Supercomputing", 1985. 1703 [QUIC] Iyengar et al., J., "QUIC: A UDP-Based Multiplexed and 1704 Secure Transport", 2016. 1706 [VAHDAT08] 1707 Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable, 1708 Commodity Data Center Network Architecture", SIGCOMM , 1709 2008. 1711 Authors' Addresses 1713 Tony Przygienda 1714 Juniper Networks 1715 1194 N. Mathilda Ave 1716 Sunnyvale, CA 94089 1717 US 1719 Email: prz@juniper.net 1721 John Drake 1722 Juniper Networks 1723 1194 N. Mathilda Ave 1724 Sunnyvale, CA 94089 1725 US 1727 Email: jdrake@juniper.net 1728 Alia Atlas 1729 Juniper Networks 1730 10 Technology Park Drive 1731 Westford, MA 01886 1732 US 1734 Email: akatlas@juniper.net