RIFT Working Group                                     T. Przygienda, Ed.
Internet-Draft                                           Juniper Networks
Intended status: Standards Track                                A. Sharma
Expires: October 28, 2018                                         Comcast
                                                               P. Thubert
                                                                    Cisco
                                                                 A. Atlas
                                                                 J. Drake
                                                         Juniper Networks
                                                             Apr 26, 2018

                       RIFT: Routing in Fat Trees
                         draft-ietf-rift-rift-01

Abstract

   This document outlines a specialized, dynamic routing protocol for
   Clos and fat-tree network topologies.  The protocol (1) deals with
   automatic construction of fat-tree topologies based on detection of
   links, (2) minimizes the amount of routing state held at each level,
   (3) automatically prunes the topology distribution exchanges to a
   sufficient subset of links, (4) supports automatic disaggregation of
   prefixes on link and node failures to prevent black-holing and
   suboptimal routing, (5) allows traffic steering and re-routing
   policies, (6) allows non-ECMP forwarding, (7) automatically re-
   balances traffic towards the spines based on bandwidth available and
   ultimately (8) provides mechanisms to synchronize a limited key-value
   data-store that can be used after protocol convergence to e.g.
   bootstrap higher levels of functionality on nodes.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on October 28, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  Requirements Language
   2.  Reference Frame
     2.1.  Terminology
     2.2.  Topology
   3.  Requirement Considerations
   4.  RIFT: Routing in Fat Trees
     4.1.  Overview
     4.2.  Specification
       4.2.1.  Transport
       4.2.2.  Link (Neighbor) Discovery (LIE Exchange)
       4.2.3.  Topology Exchange (TIE Exchange)
         4.2.3.1.  Topology Information Elements
         4.2.3.2.  South- and Northbound Representation
         4.2.3.3.  Flooding
         4.2.3.4.  TIE Flooding Scopes
         4.2.3.5.  Initial and Periodic Database Synchronization
         4.2.3.6.  Purging
         4.2.3.7.  Southbound Default Route Origination
         4.2.3.8.  Northbound TIE Flooding Reduction
       4.2.4.  Policy-Guided Prefixes
         4.2.4.1.  Ingress Filtering
         4.2.4.2.  Applying Policy
         4.2.4.3.  Store Policy-Guided Prefix for Route Computation
                   and Regeneration
         4.2.4.4.  Re-origination
         4.2.4.5.  Overlap with Disaggregated Prefixes
       4.2.5.  Reachability Computation
         4.2.5.1.  Northbound SPF
         4.2.5.2.  Southbound SPF
         4.2.5.3.  East-West Forwarding Within a Level
       4.2.6.  Attaching Prefixes
       4.2.7.  Attaching Policy-Guided Prefixes
       4.2.8.  Automatic Disaggregation on Link & Node Failures
       4.2.9.  Optional Autoconfiguration
         4.2.9.1.  Terminology
         4.2.9.2.  Automatic SystemID Selection
         4.2.9.3.  Generic Fabric Example
         4.2.9.4.  Level Determination Procedure
         4.2.9.5.  Resulting Topologies
       4.2.10. Stability Considerations
     4.3.  Further Mechanisms
       4.3.1.  Overload Bit
       4.3.2.  Optimized Route Computation on Leafs
       4.3.3.  Mobility
         4.3.3.1.  Clock Comparison
         4.3.3.2.  Interaction between Time Stamps and Sequence
                   Counters
         4.3.3.3.  Anycast vs. Unicast
         4.3.3.4.  Overlays and Signaling
       4.3.4.  Key/Value Store
         4.3.4.1.  Southbound
         4.3.4.2.  Northbound
       4.3.5.  Interactions with BFD
       4.3.6.  Fabric Bandwidth Balancing
         4.3.6.1.  Northbound Direction
         4.3.6.2.  Southbound Direction
       4.3.7.  Label Binding
       4.3.8.  Segment Routing Support with RIFT
         4.3.8.1.  Global Segment Identifiers Assignment
         4.3.8.2.  Distribution of Topology Information
       4.3.9.  Leaf to Leaf Procedures
       4.3.10. Other End-to-End Services
       4.3.11. Address Family and Multi Topology Considerations
       4.3.12. Reachability of Internal Nodes in the Fabric
       4.3.13. One-Hop Healing of Levels with East-West Links
   5.  Examples
     5.1.  Normal Operation
     5.2.  Leaf Link Failure
     5.3.  Partitioned Fabric
     5.4.  Northbound Partitioned Router and Optional East-West Links
   6.  Implementation and Operation: Further Details
     6.1.  Considerations for Leaf-Only Implementation
     6.2.  Adaptations to Other Proposed Data Center Topologies
     6.3.  Originating Non-Default Route Southbound
   7.  Security Considerations
   8.  Information Elements Schema
     8.1.  common.thrift
     8.2.  encoding.thrift
   9.  IANA Considerations
   10. Acknowledgments
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   Clos [CLOS] and Fat-Tree [FATTREE] topologies have gained prominence
   in today's networking, primarily as a result of the paradigm shift
   towards a centralized data-center based architecture that is poised
   to deliver a majority of computation and storage services in the
   future.  Today's routing protocols were originally geared towards
   networks with irregular topologies and a low degree of connectivity,
   but since they were the only available mechanisms, several attempts
   have been made to apply them to Clos topologies.  Most successfully,
   BGP [RFC4271] [RFC7938] has been extended to this purpose, not so
   much due to its inherent suitability for the problem but rather
   because of the perceived capability to modify it "quicker" and the
   immanent difficulties of link-state [DIJKSTRA] based protocols in
   large-scale, densely meshed topologies.

   Looking at the problem through the lens of its requirements, an
   optimal approach does not seem to be a simple modification of either
   a link-state (distributed computation) or distance-vector (diffused
   computation) approach, but rather a mixture of both, colloquially
   best described as "link-state towards the spine" and "distance
   vector towards the leafs".  In other words, "bottom" levels flood
   their link-state information in the "northern" direction while each
   switch, under normal conditions, generates a default route and
   floods it in the "southern" direction.
   Obviously, such aggregation can blackhole traffic in cases of
   misconfiguration or failures and this has to be addressed somehow.

   For the visually oriented reader, Figure 1 presents a first
   simplified view of the resulting information and routes on a RIFT
   fabric.  The top of the fabric holds in its link-state database the
   nodes below it and the routes to them.  The second row of the
   database indicates that partial information about other nodes at
   the same level is available as well; the details of how this is
   achieved are postponed for the moment.  When we look at the
   "bottom" of the fabric, we see that the topology of the leafs is
   basically empty and that they only hold a load-balanced default
   route to the next level.

   The balance of this document details the resulting protocol and
   fills in the missing details.

   .             [A,B,C,D]
   .             [E]
   .            +-----+       +-----+
   .            |  E  |       |  F  |   A/32 @ [C,D]
   .            +-+-+-+       +-+-+-+   B/32 @ [C,D]
   .              | |           | |     C/32 @ C
   .              | |    +------+ |     D/32 @ D
   .              | |    |        |
   .              | +-------+     |
   .              |      |  |     |
   . [A,B]      +-+---+  |  |  +---+-+   [A,B]
   . [D]        | C   +--+  +-+  D  |    [C]
   .            +-+-+-+        +-+-+-+
   . 0/0 @ [E,F]  | |           | |      0/0 @ [E,F]
   . A/32 @ A     | |    +------+ |      A/32 @ A
   . B/32 @ B     | |    |        |      B/32 @ B
   .              | +-------+     |
   .              |      |  |     |
   .            +-+---+  |  |  +---+-+
   .            | A   +--+  +-+  B  |
   .0/0 @ [C,D] +-----+        +-----+   0/0 @ [C,D]

               Figure 1: RIFT information distribution

1.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

2.  Reference Frame

2.1.  Terminology

   This section presents the terminology used in this document.  It is
   assumed that the reader is thoroughly familiar with the terms and
   concepts used in OSPF [RFC2328] and IS-IS [ISO10589-Second-Edition]
   [ISO10589], as well as the corresponding graph theoretical concepts
   of shortest path first (SPF) [DIJKSTRA] computation and directed
   acyclic graphs (DAG).

   Level:  Clos and Fat Tree networks are trees and 'level' denotes
      the set of nodes at the same height in such a network, where the
      bottom level is level 0.  A node has links to nodes one level
      down and/or one level up.  Under some circumstances, a node may
      have links to nodes at the same level.  As a footnote: Clos
      terminology often uses the concept of "stage" but, due to the
      folded nature of the Fat Tree, we do not use it to prevent
      misunderstandings.

   Spine/Aggregation/Edge Levels:  Traditional names for Levels 2, 1
      and 0 respectively.  Level 0 is often called the leaf level as
      well.

   Point of Delivery (PoD):  A self-contained vertical slice of a Clos
      or Fat Tree network, normally containing only level 0 and level
      1 nodes.  It communicates with nodes in other PoDs via the
      spine.  We number PoDs to distinguish them and use PoD #0 to
      denote the "undefined" PoD.

   Spine:  The set of nodes that provide inter-PoD communication.
      These nodes are also organized into levels (typically one, three
      or five levels).  Spine nodes do not belong to any PoD and are
      assigned the "undefined" PoD value to indicate the equivalent of
      "any" PoD.

   Leaf:  A node without southbound adjacencies.  Its level is 0
      (except in cases where it is deriving its level via ZTP and is
      running without LEAF_ONLY, which will be explained in
      Section 4.2.9).
   Connected Spine:  In case a spine level represents a connected
      graph (discounting links terminating at different levels), we
      call it a "connected spine"; in case a spine level consists of
      multiple partitions, we call it a "disconnected" or "partitioned
      spine".  In other terms, a spine without East-West links is
      disconnected and is the typical configuration for Clos and Fat
      Tree networks.

   South/Southbound and North/Northbound (Direction):  When describing
      protocol elements and procedures, we will be using in different
      situations the directionality of the compass.  I.e., 'south' or
      'southbound' means moving towards the bottom of the Clos or Fat
      Tree network and 'north' or 'northbound' means moving towards
      the top of the Clos or Fat Tree network.

   Northbound Link:  A link to a node one level up or, in other words,
      one level further north.

   Southbound Link:  A link to a node one level down or, in other
      words, one level further south.

   East-West Link:  A link between two nodes at the same level.
      East-West links are normally not part of Clos or "fat-tree"
      topologies.

   Leaf shortcuts (L2L):  East-West links at the leaf level, which
      will need to be differentiated from East-West links at other
      levels.

   Southbound representation:  Information sent towards a lower level,
      representing only a limited amount of information.

   TIE:  This is an acronym for a "Topology Information Element".
      TIEs are exchanged between RIFT nodes to describe parts of a
      network such as links and address prefixes.  A TIE can be
      thought of as largely equivalent to ISIS LSPs or OSPF LSAs.  We
      will talk about N-TIEs when referring to TIEs in the northbound
      representation and S-TIEs for the southbound equivalent.

   Node TIE:  This is an acronym for a "Node Topology Information
      Element", largely equivalent to an OSPF Node LSA, i.e. it
      contains all neighbors the node discovered and information about
      the node itself.

   Prefix TIE:  This is an acronym for a "Prefix Topology Information
      Element"; it contains all prefixes directly attached to this
      node in case of an N-TIE, and in case of an S-TIE the necessary
      default and de-aggregated prefixes the node passes southbound.

   Policy-Guided Information:  Information that is passed in either
      the southbound direction or the northbound direction by the
      means of diffusion and can be filtered via policies.
      Policy-Guided Prefixes and KV TIEs are examples of Policy-Guided
      Information.

   Key Value TIE:  An S-TIE that is carrying a set of key value pairs
      [DYNAMO].  It can be used to distribute information in the
      southbound direction within the protocol.

   TIDE:  Topology Information Description Element, equivalent to CSNP
      in ISIS.

   TIRE:  Topology Information Request Element, equivalent to PSNP in
      ISIS.  It can both confirm received TIEs and request missing
      ones.

   PGP:  Policy-Guided Prefixes allow the support of traffic
      engineering that cannot be achieved by means of SPF computation
      or normal node and prefix S-TIE origination.  S-PGPs are
      propagated in the south direction only and N-PGPs follow the
      northern direction strictly.

   De-aggregation/Disaggregation:  The process in which a node decides
      to advertise certain prefixes it received in N-TIEs to prevent
      black-holing and suboptimal routing upon link failures.
   LIE:  This is an acronym for a "Link Information Element", largely
      equivalent to HELLOs in IGPs and exchanged over all the links
      between systems running RIFT to form adjacencies.

   FL:  Flooding Leader for a specific system has a dedicated role to
      flood TIEs of that system.

   FR:  Flooding Repeater for a specific system has a dedicated role
      to flood TIEs of that system northbound.  Similar to MPR in
      OLSR.

   BAD:  This is an acronym for Bandwidth Adjusted Distance.  RIFT
      calculates the amount of northbound bandwidth available towards
      a node compared to other nodes at the same level and adjusts the
      default route distance accordingly to allow for the lower level
      to adjust their load balancing.

   Overloaded:  Applies to a node advertising the `overload` attribute
      as set.  The semantics closely follow the meaning of the same
      attribute in [ISO10589-Second-Edition].

2.2.  Topology

   .               +--------+          +--------+
   .               |        |          |        |          ^ N
   .               |Spine 21|          |Spine 22|          |
   .Level 2        ++-+--+-++          ++-+--+-++        <-*-> E/W
   .                | |  | |            | |  | |           |
   .          P111/2| |P121|            | |  | |         S v
   .                ^ ^  ^ ^            | |  | |
   .                | |  | |            | |  | |
   .  +--------------+ |  | |           | |  | |             ^
   .  | South          |  | |           | |  | |             |
   .  |    +-----------------------------+ |  | |        All TIEs
   .  |    |          |  | +-------------------------+       |
   .0/0  0/0          |  |               | |  +-----------+
   .  v    v          |  |   +------------+  |        |   |
   .  |    |          |  |   |         +-----+        |   |
   .  |    |          +-++--0/0->---+  |              |   |
   .+-+----++ optional +-+----++  ++----+-+          ++-----++
   .|       | E/W link |       |  |       |          |       |
   .|Node111+----------+Node112|  |Node121|          |Node122|
   .+-+---+-+          ++----+-+  +-+---+-+          ++--+---+
   .  |   |  South      |    |      | |                |  |
   .  |   +---0/0--->-----+  |      |  +------------------+
   .0/0   +---<-0/0-----+ |  |      |                  |  |
   .  |   |             | |  |      +-------------+    |  |
   .  v   |             | |  |      |             |    |  |
   .+-+---+-+          +--+--+-+  +-+---+-+          +---+-+-+
   .|       | (L2L)    |       |  |       | Level 0  |       |
   .|Leaf111~~~~~~~~~~~~Leaf112|  |Leaf121|          |Leaf122|
   .+-+-----+          +-+---+-+  +--+--+-+          +-+-----+
   .  +                 +    \      /  +              +
   .Prefix111        Prefix112\     /Prefix121     Prefix122
   .                       multi-homed
   .                         Prefix
   .+---------- Pod 1 ---------+  +---------- Pod 2 ---------+

             Figure 2: A two level spine-and-leaf topology

   We will use this topology (commonly called a fat tree/network in
   modern DC considerations [VAHDAT08] as a homonym to the original
   definition of the term [FATTREE]) in all further considerations.
   It depicts a generic "fat-tree" and the concepts explained here in
   three levels carry by induction to further levels and higher
   degrees of connectivity.  However, this document will deal with
   designs that provide only sparser connectivity as well.

3.  Requirement Considerations

   [RFC7938] gives the original set of requirements, augmented here
   based upon recent experience in the operation of fat-tree networks.

   REQ1:    The control protocol should discover the physical links
            automatically and be able to detect cabling that violates
            fat-tree topology constraints.  It must react accordingly
            to such mis-cabling attempts, at a minimum preventing
            adjacencies between nodes from being formed and traffic
            from being forwarded on those mis-cabled links.  E.g.
            connecting a leaf to a spine at level 2 should be detected
            and ideally prevented.

   REQ2:    A node without any configuration beside default values
            should come up at the correct level in any PoD it is
            introduced into.
            Optionally, it must be possible to configure nodes to
            restrict their participation to the PoD(s) targeted at any
            level.

   REQ3:    Optionally, the protocol should allow provisioning of data
            centers where the individual switches carry no
            configuration information and all derive their level from
            a "seed".  Observe that this requirement may collide with
            the desire to detect cabling misconfiguration; with that,
            only one of the two requirements can be fully met in a
            chosen configuration mode.

   REQ4:    The solution should allow for a minimum size routing
            information base and forwarding tables at the leaf level
            for speed, cost and simplicity reasons.  Holding excessive
            amounts of information away from leaf nodes simplifies
            operation and lowers the cost of the underlay.

   REQ5:    A very high degree of ECMP must be supported.  Maximum
            ECMP is currently understood as the most efficient routing
            approach to maximize the throughput of switching fabrics
            [MAKSIC2013].

   REQ6:    Non-equal-cost anycast must be supported to allow for easy
            and robust multi-homing of services without regressing to
            careful balancing of link costs.

   REQ7:    Traffic engineering should be allowed by modification of
            prefixes and/or their next-hops.

   REQ8:    The solution should allow for access to the link states of
            the whole topology to enable efficient support for modern
            control architectures like SPRING [RFC7855] or PCE
            [RFC4655].

   REQ9:    The solution should easily accommodate opaque data to be
            carried throughout the topology to subsets of nodes.  This
            can be used for many purposes, one of them being a
            key-value store that allows bootstrapping of nodes right
            at the time of topology discovery.

   REQ10:   Nodes should be taken out of and introduced into
            production with minimum wait-times and a minimum of
            "shaking" of the network, i.e. the radius of propagation
            (often called "blast radius") of changed information
            should be as small as feasible.

   REQ11:   The protocol should allow for maximum aggregation of
            carried routing information while at the same time
            automatically de-aggregating the prefixes to prevent
            black-holing in case of failures.  The de-aggregation
            should support the maximum possible ECMP/N-ECMP remaining
            after failure.

   REQ12:   Reducing the scope of communication needed throughout the
            network on link and state failure, as well as reducing
            advertisements of repeating, idiomatic or policy-guided
            information in stable state, is highly desirable since it
            leads to better stability and faster convergence behavior.

   REQ13:   Once a packet traverses a link in a "southbound"
            direction, it must not take any further "northbound" steps
            along its path to delivery to its destination under normal
            conditions.  Taking a path through the spine in cases
            where a shorter path is available is highly undesirable.

   REQ14:   Parallel links between the same set of nodes must be
            distinguishable for SPF, failure and traffic engineering
            purposes.

   REQ15:   The protocol must not rely on interfaces having
            discernible unique addresses, i.e. it must operate in the
            presence of unnumbered links (even parallel ones) or links
            of a single node having the same addresses.
   REQ16:   It would be desirable to achieve fast re-balancing of
            flows when links, especially towards the spines, are lost
            or provisioned, without regressing to per-flow traffic
            engineering which introduces a significant amount of
            complexity while possibly not being reactive enough to
            account for short-lived flows.

   REQ17:   The control plane should be able to unambiguously
            determine the current point of attachment (which port on
            which leaf node) of a prefix, even in a context of fast
            mobility, e.g., when the prefix is a host address on a
            wireless node that 1) may associate to any of multiple
            access points (APs) that are attached to different ports
            on a same leaf node or to different leaf nodes, and 2) may
            move and reassociate several times to a different AP
            within a sub-second period.

   The following list represents possible requirements and
   requirements under discussion:

   PEND1:   Supporting anything but point-to-point links is a
            non-requirement.  Questions remain: for connecting to the
            leaves, is there a case where multipoint is desirable?
            One could still model it as point-to-point links; it seems
            there is no need for anything more than an NBMA-type
            construct.

   PEND2:   What is the maximum number of leaf prefixes we need to
            carry?  Is 500'000 enough?

   Finally, the following are the non-requirements:

   NONREQ1:  Broadcast media support is unnecessary.

   NONREQ2:  Purging is unnecessary given its fragility and complexity
             and today's large memory size on even modest switches and
             routers.

   NONREQ3:  Special support for layer 3 multi-hop adjacencies is not
             part of the protocol specification.  Such support can be
             easily provided by using tunneling technologies the same
             way IGPs today are solving the problem.

4.  RIFT: Routing in Fat Trees

   Derived from the above requirements, we present a detailed outline
   of a protocol optimized for Routing in Fat Trees (RIFT) that in
   most abstract terms has many properties of a modified link-state
   protocol [RFC2328][ISO10589-Second-Edition] when "pointing north"
   and of a path-vector [RFC4271] protocol when "pointing south".
   Albeit an unusual combination, it does quite naturally exhibit the
   desirable properties we seek.

4.1.  Overview

   The singular property of RIFT is that it floods only northbound
   "flat" link-state information so that each level understands the
   full topology of the levels south of it.  That information is never
   flooded East-West or back South again.  In the southbound direction
   the protocol operates like a "unidirectional" path vector protocol,
   or rather a distance vector with implicit split horizon, where the
   information only propagates one hop south and is 're-advertised' by
   nodes at the next lower level.  However, we use flooding in the
   southern direction as well to avoid the necessity to build an
   update per neighbor.  We leave the East-West direction out for the
   moment.

   Those information flow constraints create a "smooth" information
   propagation where nodes do not receive the same information from
   multiple fronts, which would force them to perform a diffused
   computation to tie-break the same reachability information arriving
   on arbitrary links and ultimately force hop-by-hop forwarding on
   shortest-paths only.
   To account for the "northern" and the "southern" information split,
   the link state database is partitioned into "north representation"
   and "south representation" TIEs, whereas in simplest terms the
   N-TIEs contain a link state topology description of lower levels
   and S-TIEs simply carry default routes.  This oversimplified view
   will be refined gradually in the following sections while
   introducing protocol procedures aimed at fulfilling the described
   requirements.

4.2.  Specification

4.2.1.  Transport

   All protocol elements are carried over UDP.  Once QUIC [QUIC]
   achieves the desired stability in deployments it may prove a
   valuable candidate for TIE transport.

   All packet formats are defined in Thrift models in Section 8.

   Future versions may include a [PROTOBUF] schema.

4.2.2.  Link (Neighbor) Discovery (LIE Exchange)

   LIE exchange happens over a well-known administratively locally
   scoped IPv4 multicast address [RFC2365] or link-local multicast
   scope [RFC4291] for IPv6 [RFC8200] and SHOULD be sent with a TTL of
   1 to prevent RIFT information reaching beyond a single L3 next-hop
   in the topology.  LIEs are exchanged over all links running RIFT.

   Unless Section 4.2.9 is used, each node is provisioned with the
   level at which it is operating and its PoD (or otherwise a default
   level and "undefined" PoD are assumed; meaning that leafs do not
   need to be configured at all if initial configuration values are
   all left at 0).  Nodes in the spine are configured with "any" PoD
   which has the same value as "undefined" PoD, hence we will talk
   about "undefined/any" PoD.  This information is propagated in the
   LIEs exchanged.

   A node tries to form a three way adjacency if and only if
   (definitions of LEAF_ONLY are found in Section 4.2.9)

   1.  the node is in the same PoD or either the node or the neighbor
       advertises "undefined/any" PoD membership (PoD# = 0) AND

   2.  the neighboring node is running the same MAJOR schema version
       AND

   3.  the neighbor is not member of some PoD while the node has a
       northbound adjacency already joining another PoD AND

   4.  the neighboring node uses a valid System ID AND

   5.  the neighboring node uses a different System ID than the node
       itself AND

   6.  the advertised MTUs match on both sides AND

   7.  both nodes advertise defined level values AND

   8.  [

       i)    the node is at level 0 and has no three way adjacencies
             already to nodes with level higher than the neighboring
             node OR

       ii)   the node is not at level 0 and the neighboring node is
             at level 0 OR

       iii)  both nodes are at level 0 AND both indicate support for
             Section 4.3.9 OR

       iv)   neither node is at level 0 and the neighboring node is
             at most one level away

       ].

   The rule in Paragraph 3 MAY be optionally disregarded by a node if
   PoD detection is undesirable or has to be disregarded.

   A node configured with "undefined" PoD membership MUST, after
   building its first northbound three way adjacency to a node being
   in a defined PoD, advertise that PoD as part of its LIEs.  In case
   that adjacency is lost, from all available northbound three way
   adjacencies the node with the highest System ID and defined PoD is
   chosen.  That way the northmost defined PoD value (normally the top
   spines in a PoD) can diffuse southbound towards the leafs "forcing"
   the PoD value on any node with "undefined" PoD.

   LIEs arriving with a TTL larger than 1 MUST be ignored.

   A node SHOULD NOT send out LIEs without a defined level in the
   header but in certain scenarios it may be beneficial to do so for
   trouble-shooting purposes.

   LIE exchange uses a three way handshake mechanism [RFC5303].
   Precise finite state machines will be provided in later versions of
   this specification.  LIE packets contain nonces and may contain a
   SHA-1 hash [RFC6234] over the nonces and some of the LIE data which
   prevents corruption and replay attacks.  TIE flooding reuses those
   nonces to prevent mismatches and can use those for security
   purposes in case it is using QUIC [QUIC].  Section 7 will address
   the precise security mechanisms in the future.
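   As a non-normative illustration, the adjacency formation rules
   above can be condensed into a single predicate.  The following
   Python sketch makes several assumptions: the type and field names
   (`LieInfo`, `LocalNode`, `supports_leaf_2_leaf`) are inventions of
   this example rather than the schema of Section 8, System ID 0 is
   taken to be the invalid value, and the optional rule in Paragraph 3
   is omitted for brevity:

   from dataclasses import dataclass, field
   from typing import Optional, Set

   LEAF_LEVEL = 0
   UNDEFINED_POD = 0          # "undefined/any" PoD

   @dataclass
   class LieInfo:                      # hypothetical view of a LIE
       system_id: int
       pod: int
       level: Optional[int]
       mtu: int
       major_version: int
       supports_leaf_2_leaf: bool = False

   @dataclass
   class LocalNode(LieInfo):
       # levels of neighbors we already have three way adjacencies with
       three_way_neighbor_levels: Set[int] = field(default_factory=set)

   def accepts_adjacency(node: LocalNode, lie: LieInfo) -> bool:
       """Sketch of the three way adjacency predicate of Section 4.2.2."""
       if not (node.pod == lie.pod or UNDEFINED_POD in (node.pod, lie.pod)):
           return False                 # rule 1: PoD compatibility
       if node.major_version != lie.major_version:
           return False                 # rule 2: same MAJOR schema version
       if lie.system_id == 0 or lie.system_id == node.system_id:
           return False                 # rules 4 and 5: System ID checks
       if node.mtu != lie.mtu:
           return False                 # rule 6: MTUs match
       if node.level is None or lie.level is None:
           return False                 # rule 7: both levels defined
       # rule 8: level compatibility, an OR of the four sub-conditions
       ok_i = (node.level == LEAF_LEVEL and
               not any(l > lie.level for l in node.three_way_neighbor_levels))
       ok_ii = node.level != LEAF_LEVEL and lie.level == LEAF_LEVEL
       ok_iii = (node.level == lie.level == LEAF_LEVEL and
                 node.supports_leaf_2_leaf and lie.supports_leaf_2_leaf)
       ok_iv = (LEAF_LEVEL not in (node.level, lie.level) and
                abs(node.level - lie.level) <= 1)
       return ok_i or ok_ii or ok_iii or ok_iv

   Structuring rule 8 as four explicit booleans mirrors the OR list in
   the text, which keeps the sketch auditable against the normative
   numbered rules.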
4.2.3.  Topology Exchange (TIE Exchange)

4.2.3.1.  Topology Information Elements

   Topology and reachability information in RIFT is conveyed by means
   of TIEs, which have a good amount of commonalities with LSAs in
   OSPF.

   The TIE exchange mechanism uses the port indicated by each node in
   the LIE exchange and, as destination, the interface on which the
   adjacency has been formed.  It SHOULD use a TTL of 1 as well.

   TIEs contain sequence numbers, lifetimes and a type.  Each type has
   a large identifying number space and information is spread across
   possibly many TIEs of a certain type by means of a hash function
   that a node or deployment can individually determine.  One extreme
   point of the design space is a single prefix per TIE, which leads
   to BGP-like behavior; dense packing into few TIEs leads to the more
   traditional IGP trade-off with fewer TIEs.  An implementation may
   even rehash at the cost of a significant amount of
   re-advertisements of TIEs.

   More information about the TIE structure can be found in the schema
   in Section 8.
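   As a non-normative sketch of such spreading, the placement below
   hashes each prefix into one of a configured number of Prefix TIEs;
   the function name and the modulo-over-SHA-256 construction are
   assumptions of this example, not mandated by the schema:

   import hashlib

   def tie_number_for_prefix(prefix: str, num_ties: int) -> int:
       """Pick which Prefix TIE (1..num_ties) carries a given prefix.
          Any stable hash works; a node or deployment may choose its
          own function, at the cost of re-advertising TIEs whenever
          the function (or num_ties) changes."""
       digest = hashlib.sha256(prefix.encode()).digest()
       return 1 + int.from_bytes(digest[:4], "big") % num_ties

   # num_ties = 1 packs everything densely (IGP-like trade-off); a
   # large num_ties approaches one prefix per TIE (BGP-like behavior).
   print(tie_number_for_prefix("10.1.0.0/16", num_ties=8))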
4.2.3.2.  South- and Northbound Representation

   As a central concept to RIFT, each node represents itself
   differently depending on the direction in which it is advertising
   information.  More precisely, a spine node represents two different
   databases to its neighbors depending whether it advertises TIEs to
   the north or to the south/sideways.  We call those differing TIE
   databases either south- or northbound (S-TIEs and N-TIEs) depending
   on the direction of distribution.

   The N-TIEs hold all of the node's adjacencies, local prefixes and
   northbound policy-guided prefixes while the S-TIEs hold only all of
   the node's adjacencies and the default prefix with necessary
   disaggregated prefixes and southbound policy-guided prefixes.  We
   will explain this in detail further in Section 4.2.8 and
   Section 4.2.4.

   The TIE types are symmetric in both directions and Table 1 provides
   a quick reference to the different TIE types including direction
   and their function.

   +----------+------------------------------------------------------+
   | TIE-Type | Content                                              |
   +----------+------------------------------------------------------+
   | node     | node properties, adjacencies and information helping |
   | N-TIE    | in complex disaggregation scenarios                  |
   +----------+------------------------------------------------------+
   | node     | same content as node N-TIE except the information to |
   | S-TIE    | help disaggregation                                  |
   +----------+------------------------------------------------------+
   | Prefix   | contains the node's directly reachable prefixes      |
   | N-TIE    |                                                      |
   +----------+------------------------------------------------------+
   | Prefix   | contains originated defaults and de-aggregated       |
   | S-TIE    | prefixes                                             |
   +----------+------------------------------------------------------+
   | PGP      | contains the node's north PGPs                       |
   | N-TIE    |                                                      |
   +----------+------------------------------------------------------+
   | PGP      | contains the node's south PGPs                       |
   | S-TIE    |                                                      |
   +----------+------------------------------------------------------+
   | KV       | contains the node's northbound KVs                   |
   | N-TIE    |                                                      |
   +----------+------------------------------------------------------+
   | KV       | contains the node's southbound KVs                   |
   | S-TIE    |                                                      |
   +----------+------------------------------------------------------+

                          Table 1: TIE Types

   As an example illustrating databases holding both representations,
   consider the topology in Figure 2 with the optional link between
   node 111 and node 112 (so that the flooding on an East-West link
   can be shown).  This example assumes unnumbered interfaces.  First,
   here are the TIEs generated by some nodes.  For simplicity, the key
   value elements and the PGP elements which may be included in their
   S-TIEs or N-TIEs are not shown.
   Spine21 S-TIEs:
   Node S-TIE:
     NodeElement(layer=2, neighbors((Node111, layer 1, cost 1),
       (Node112, layer 1, cost 1), (Node121, layer 1, cost 1),
       (Node122, layer 1, cost 1)))
   Prefix S-TIE:
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

   Node111 S-TIEs:
   Node S-TIE:
     NodeElement(layer=1, neighbors((Spine21, layer 2, cost 1, links(...)),
       (Spine22, layer 2, cost 1, links(...)),
       (Node112, layer 1, cost 1, links(...)),
       (Leaf111, layer 0, cost 1, links(...)),
       (Leaf112, layer 0, cost 1, links(...))))
   Prefix S-TIE:
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

   Node111 N-TIEs:
   Node N-TIE:
     NodeElement(layer=1,
       neighbors((Spine21, layer 2, cost 1, links(...)),
       (Spine22, layer 2, cost 1, links(...)),
       (Node112, layer 1, cost 1, links(...)),
       (Leaf111, layer 0, cost 1, links(...)),
       (Leaf112, layer 0, cost 1, links(...))))
   Prefix N-TIE:
     NorthPrefixesElement(prefixes(Node111.loopback))

   Node121 S-TIEs:
   Node S-TIE:
     NodeElement(layer=1, neighbors((Spine21, layer 2, cost 1),
       (Spine22, layer 2, cost 1), (Leaf121, layer 0, cost 1),
       (Leaf122, layer 0, cost 1)))
   Prefix S-TIE:
     SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

   Node121 N-TIEs:
   Node N-TIE:
     NodeLinkElement(layer=1,
       neighbors((Spine21, layer 2, cost 1, links(...)),
       (Spine22, layer 2, cost 1, links(...)),
       (Leaf121, layer 0, cost 1, links(...)),
       (Leaf122, layer 0, cost 1, links(...))))
   Prefix N-TIE:
     NorthPrefixesElement(prefixes(Node121.loopback))

   Leaf112 N-TIEs:
   Node N-TIE:
     NodeLinkElement(layer=0,
       neighbors((Node111, layer 1, cost 1, links(...)),
       (Node112, layer 1, cost 1, links(...))))
   Prefix N-TIE:
     NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112,
       Prefix_MH))

    Figure 3: Example TIEs generated in a 2 level spine-and-leaf
                               topology

4.2.3.3.  Flooding

   The mechanism used to distribute TIEs is the well-known (albeit
   modified in several respects to address fat tree requirements)
   flooding mechanism used by today's link-state protocols.  Although
   flooding is initially more demanding to implement, it avoids many
   problems with the update style used in diffused computation such as
   path vector protocols.  Since flooding tends to present an
   unscalable burden in large, densely meshed topologies (fat trees
   being unfortunately such a topology), we provide as a solution a
   close to optimal global flood reduction and load balancing
   optimization in Section 4.2.3.8.

   As described before, TIEs themselves are transported over UDP with
   the ports indicated in the LIE exchanges and using the destination
   address on which the LIE adjacency has been formed (for unnumbered
   IPv4 interfaces the same considerations apply as in the equivalent
   OSPF case).

   On reception of a TIE with an undefined level value in the packet
   header the node SHOULD issue a warning and indiscriminately discard
   the packet.

   Precise finite state machines and procedures will be provided in
   later versions of this specification.

4.2.3.4.  TIE Flooding Scopes

   In a somewhat analogous fashion to link-local, area and domain
   flooding scopes, RIFT defines several complex "flooding scopes"
   depending on the direction and type of TIE propagated.
   Every N-TIE is flooded northbound, providing a node at a given
   level with the complete topology of the Clos or Fat Tree network
   underneath it, including all specific prefixes.  This means that a
   packet received from a node at the same or lower level whose
   destination is covered by one of those specific prefixes may be
   routed directly towards the node advertising that prefix rather
   than sending the packet to a node at a higher level.

   A node's Node S-TIEs, consisting of all of the node's adjacencies
   and prefix S-TIEs limited to those related to the default IP prefix
   and disaggregated prefixes, are flooded southbound in order to
   allow the nodes one level down to see the connectivity of the
   higher level as well as reachability to the rest of the fabric.  In
   order to allow an E-W disconnected node at a given level to receive
   the S-TIEs of other nodes at its level, every *NODE* S-TIE is
   "reflected" northbound to the level from which it was received.  It
   should be noted that East-West links are included in South TIE
   flooding; those TIEs need to be flooded to satisfy the algorithms
   in Section 4.2.5.  In that way nodes at the same level can learn
   about each other without a lower level, e.g. in case of the leaf
   level.  The precise flooding scopes are given in Table 2.  Those
   rules govern as well what SHOULD be included in TIDEs towards
   neighbors.  East-West flooding scopes are identical to South
   flooding scopes.

   Node S-TIE "reflection" makes it possible to support the
   disaggregation on failures described in Section 4.2.8 and the
   flooding reduction in Section 4.2.3.8.

   +--------------+----------------------------+-----------------------+
   | Packet Type  | South                      | North                 |
   | vs. Peer     |                            |                       |
   | Direction    |                            |                       |
   +--------------+----------------------------+-----------------------+
   | node S-TIE   | flood self-originated only | flood if TIE          |
   |              |                            | originator's level is |
   |              |                            | higher than own level |
   +--------------+----------------------------+-----------------------+
   | non-node     | flood self-originated only | flood only if TIE     |
   | S-TIE        |                            | originator is equal   |
   |              |                            | peer                  |
   +--------------+----------------------------+-----------------------+
   | all N-TIEs   | never flood                | flood always          |
   +--------------+----------------------------+-----------------------+
   | TIDE         | include TIEs in flooding   | include TIEs in       |
   |              | scope                      | flooding scope        |
   +--------------+----------------------------+-----------------------+
   | TIRE         | include all N-TIEs and all | include only if TIE   |
   |              | peer's self-originated     | originator is equal   |
   |              | TIEs and all node S-TIEs   | peer                  |
   +--------------+----------------------------+-----------------------+

                        Table 2: Flooding Scopes
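   The TIE rows of Table 2 can also be rendered as a small decision
   function.  The sketch below is non-normative: the parameter names
   are simplifications of the schema in Section 8, the TIDE/TIRE rows
   are not covered, and East-West peers are treated identically to
   South peers as the text above states:

   def flood_to_peer(my_system_id: int, my_level: int,
                     peer_system_id: int, peer_level: int,
                     tie_originator: int, tie_originator_level: int,
                     is_node_tie: bool, is_south_tie: bool) -> bool:
       """Sketch of the TIE flooding scopes of Table 2."""
       if peer_level <= my_level:                 # South or East-West peer
           if is_south_tie:
               # node and non-node S-TIEs: flood self-originated only
               return tie_originator == my_system_id
           return False                           # N-TIEs never flood south
       # northbound peer
       if is_south_tie and is_node_tie:
           # includes the "reflection" of node S-TIEs received from above
           return tie_originator_level > my_level
       if is_south_tie:
           return tie_originator == peer_system_id  # non-node S-TIE
       return True                                # all N-TIEs flood north

   Note how the "reflection" rule falls out naturally: a node S-TIE
   received from a higher level carries an originator level above the
   local one and is therefore flooded back north.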
   As an example to illustrate these rules, consider using the
   topology in Figure 2, with the optional link between node 111 and
   node 112, and the associated TIEs given in Figure 3.  The flooding
   from particular nodes of the TIEs is given in Table 3.

   +------------+----------+-------------------------------------------+
   | Router     | Neighbor | TIEs                                      |
   | floods to  |          |                                           |
   +------------+----------+-------------------------------------------+
   | Leaf111    | Node112  | Leaf111 N-TIEs, Node111 node S-TIE        |
   | Leaf111    | Node111  | Leaf111 N-TIEs, Node112 node S-TIE        |
   |            |          |                                           |
   | Node111    | Leaf111  | Node111 S-TIEs                            |
   | Node111    | Leaf112  | Node111 S-TIEs                            |
   | Node111    | Node112  | Node111 S-TIEs                            |
   | Node111    | Spine21  | Node111 N-TIEs, Leaf111 N-TIEs, Leaf112   |
   |            |          | N-TIEs, Spine22 node S-TIE                |
   | Node111    | Spine22  | Node111 N-TIEs, Leaf111 N-TIEs, Leaf112   |
   |            |          | N-TIEs, Spine21 node S-TIE                |
   |            |          |                                           |
   | ...        | ...      | ...                                       |
   | Spine21    | Node111  | Spine21 S-TIEs                            |
   | Spine21    | Node112  | Spine21 S-TIEs                            |
   | Spine21    | Node121  | Spine21 S-TIEs                            |
   | Spine21    | Node122  | Spine21 S-TIEs                            |
   | ...        | ...      | ...                                       |
   +------------+----------+-------------------------------------------+

           Table 3: Flooding some TIEs from example topology

4.2.3.5.  Initial and Periodic Database Synchronization

   The initial exchange of RIFT is modeled after ISIS with TIDE being
   equivalent to CSNP and TIRE playing the role of PSNP.  The content
   of TIDEs and TIREs is governed by Table 2.

4.2.3.6.  Purging

   RIFT does not purge information that has been distributed by the
   protocol.  Purging mechanisms in other routing protocols have
   proven to be complex and fragile over many years of experience.
   Abundant amounts of memory are available today even on low-end
   platforms.  The information will age out and all computations will
   deliver correct results if a node leaves the network, due to the
   new information distributed by its adjacent nodes.

   Once a RIFT node issues a TIE with an ID, it MUST preserve the ID
   as long as feasible (also when the protocol restarts), even if the
   TIE loses all content.  The re-advertisement of the empty TIE
   fulfills the purpose of purging any information advertised in
   previous versions.  The originator is free to not re-originate the
   according empty TIE again or to originate an empty TIE with a
   relatively short lifetime to prevent a large number of long-lived
   empty stubs polluting the network.  Each node will timeout and
   clean up the according empty TIEs independently.

   Upon restart a node MUST, as any link-state implementation, be
   prepared to receive TIEs with its own system ID and supersede them
   with equivalent, newly generated, empty TIEs with a higher sequence
   number.  As above, the lifetime can be relatively short since it
   only needs to exceed the necessary propagation and processing delay
   by all the nodes that are within the TIE's flooding scope.

4.2.3.7.  Southbound Default Route Origination

   Under certain conditions nodes issue a default route in their South
   Prefix TIEs with metrics as computed in Section 4.3.6.1.

   A node X that

   1.  is NOT overloaded AND

   2.  has southbound or East-West adjacencies

   originates in its south prefix TIE such a default route IIF

   1.  all other nodes at X's level are overloaded OR

   2.  all other nodes at X's level have NO northbound adjacencies OR

   3.  X has computed reachability to a default route during N-SPF.

   The term "all other nodes at X's level" describes obviously just
   the nodes at the same level in the PoD with a viable lower layer
   (otherwise the node S-TIEs cannot be reflected and the nodes in
   e.g. PoD 1 and PoD 2 are "invisible" to each other).

   A node originating a southbound default route MUST install a
   default discard route if it did not compute a default route during
   N-SPF.
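   The origination condition above can be written compactly.  This
   non-normative sketch assumes simple per-node attributes
   (`overloaded`, adjacency counts and an N-SPF result) that merely
   mirror the prose; none of the names stem from the schema:

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class NodeState:
       overloaded: bool = False
       north_adjacencies: int = 0
       south_adjacencies: int = 0
       east_west_adjacencies: int = 0
       default_reachable_via_nspf: bool = False

   def originates_southbound_default(node: NodeState,
                                     same_level_nodes: List[NodeState]) -> bool:
       """Sketch of Section 4.2.3.7; 'same_level_nodes' are the other
          nodes at X's level in the PoD, learned via reflected node
          S-TIEs."""
       if node.overloaded:                                  # precondition 1
           return False
       if not (node.south_adjacencies or node.east_west_adjacencies):
           return False                                     # precondition 2
       return (all(n.overloaded for n in same_level_nodes)  # condition 1
               or all(n.north_adjacencies == 0
                      for n in same_level_nodes)            # condition 2
               or node.default_reachable_via_nspf)          # condition 3

   A node for which this predicate holds without condition 3 being
   true is exactly the node that must also install the default discard
   route mentioned above.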
4.2.3.8.  Northbound TIE Flooding Reduction

   Section 1.4 of the Optimized Link State Routing Protocol [RFC3626]
   (OLSR) introduces the concept of a "multipoint relay" (MPR) that
   minimizes the overhead of flooding messages in the network by
   reducing redundant retransmissions in the same region.

   A similar technique is applied to RIFT to control northbound
   flooding.  Important observations first:

   1.  a node MUST flood self-originated N-TIEs to all the reachable
       nodes at the level above, which we call the node's "parents";

   2.  it is typically not necessary that all parents reflood the
       N-TIEs to achieve a complete flooding of all the reachable
       nodes two levels above, which we choose to call the node's
       "grandparents";

   3.  to control the volume of its flooding two hops North and yet
       keep it robust enough, it is advantageous for a node to select
       a subset of its parents as "Flood Repeaters" (FRs), which
       combined together deliver two or more copies of its flooding
       to all of its parents, i.e. the originating node's
       grandparents;

   4.  nodes at the same level do NOT have to agree on a specific
       algorithm to select the FRs, but overall load balancing should
       be achieved so that different nodes at the same level should
       tend to select different parents as FRs;

   5.  there are usually many solutions to the problem of finding a
       set of FRs for a given node; the problem of finding the
       minimal set is (similar to) an NP-Complete problem and a
       globally optimal set may not be the minimal one if
       load-balancing with other nodes is an important consideration;

   6.  it is expected that there will often be sets of equivalent
       nodes at a level L, defined as having a common set of parents
       at L+1.  Applying this observation at both L and L+1, an
       algorithm may attempt to split the larger problem into a sum
       of smaller separate problems;

   7.  it is another expectation that there will be from time to time
       a broken link between a parent and a grandparent, and in that
       case the parent is probably a poor FR due to its lower
       reliability.  An algorithm may attempt to eliminate parents
       with broken northbound adjacencies first in order to reduce
       the number of FRs.  Albeit it could be argued that relying on
       higher fanout FRs will slow flooding due to higher replication
       load, the reliability of an FR's links seems to be the more
       pressing concern.

   In a fully connected Clos Network, this means that a node selects
   one arbitrary parent as FR and then a second one for redundancy.
   The computation can be kept relatively simple and completely
   distributed without any need for synchronization amongst nodes.  In
   a "PoD" structure, where the level L+2 is partitioned in silos of
   equivalent grandparents that are only reachable from respective
   parents, this means treating each silo as a fully connected Clos
   Network and solving the problem within the silo.

   In terms of signaling, a node has enough information to select its
   set of FRs; this information is derived from the node's parents'
   Node S-TIEs, which indicate the parent's reachable northbound
   adjacencies to its own parents, i.e. the node's grandparents.
   An optional boolean information `you_are_not_flood_repeater` in a
   LIE packet to a parent is set to indicate that the parent is not an
   FR and that it SHOULD NOT reflood N-TIEs.

   This specification proposes a simple default algorithm that SHOULD
   be implemented and used by default on every RIFT node.

   o  let |NA(Node) be the set of Northbound adjacencies of node Node
      and CN(Node) be the cardinality of |NA(Node);

   o  let |SA(Node) be the set of Southbound adjacencies of node Node
      and CS(Node) be the cardinality of |SA(Node);

   o  let |P(Node) be the set of node Node's parents;

   o  let |G(Node) be the set of node Node's grandparents.  Observe
      that |G(Node) = |P(|P(Node));

   o  let N be the child node at level L computing a set of FRs;

   o  let P be a node at level L+1 and a parent node of N, i.e.
      bi-directionally reachable over adjacency A(N, P);

   o  let G be a grandparent node of N, reachable transitively via a
      parent P over adjacencies ADJ(N, P) and ADJ(P, G).  Observe
      that N does not have enough information to check bidirectional
      reachability of A(P, G);

   o  let R be a redundancy constant integer; a value of 2 or higher
      for R is RECOMMENDED;

   o  let S be a similarity constant integer; a value in range 0 .. 2
      for S is RECOMMENDED, the value of 1 SHOULD be used.  Two
      cardinalities are considered as equivalent if their absolute
      difference is less than or equal to S, i.e. |a-b|<=S.

   The algorithm consists of the following steps:

   1.  derive a 16-bit pseudo-random unsigned integer PR(N) from N's
       system ID by splitting it in 16-bit-long words W1, W2, ..., Wn
       and then XOR'ing the circularly shifted resulting words
       together, and casting the resulting representation:

       1.  (unsigned integer) (W1<<1 xor (W2<<2) xor ... xor
           (Wn<<n));

   2.  sort the parents by decreasing number of northbound
       adjacencies (using decreasing System ID of the parent as
       tie-breaker): sort |P(N) by decreasing CN(P) as an ordered
       array |A(N);

   3.  partition |A(N) into subarrays |A_k(N) of parents with
       equivalent cardinality of northbound adjacencies (in other
       words with an equivalent number of grandparents they can
       reach), where two cardinalities fall into the same subarray if
       they differ by at most S;

   4.  shuffle each subarray |A_k(N) of cardinality C_k(N)
       individually within |A(N) using the Durstenfeld variation of
       the Fisher-Yates algorithm that depends on N's System ID:

       1.  while k > 0 do

           1.  for i from C_k(N)-1 to 1 decrementing by 1 do

               1.  set j to PR(N) modulo i;

               2.  exchange |A_k[j] and |A_k[i];

           2.  set k=k-1;

   5.  for each grandparent, initialize a counter with the number of
       its Southbound adjacencies:

       1.  for each G in |G(N) set c(G) = CS(G);

   6.  finally keep as FRs only parents that are needed to maintain
       the number of adjacencies between the FRs and any grandparent
       G equal or above the redundancy constant R:

       1.  for each P in reshuffled |A(N);

           1.  if there exists an adjacency ADJ(P, G) in |NA(P) such
               that c(G) <= R then

               1.  place P in FR set;

           2.  else

               1.  for all adjacencies ADJ(P, G) in |NA(P)

                   1.  decrement c(G);

   The algorithm MUST be re-evaluated by a node on every change of
   local adjacencies or reception of a parent S-TIE with changed
   adjacencies.  A node MAY apply a hysteresis to prevent an excessive
   amount of computation during periods of network instability, just
   like in the case of reachability computation.
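   A non-normative Python rendering of this default algorithm may make
   the steps easier to follow.  The `Parent` and `Grandparent`
   structures are assumptions of this sketch (a parent's grandparent
   set stands in for |NA(P) as learned from its node S-TIE); the
   numbered list above remains the normative description:

   from dataclasses import dataclass, field
   from typing import List, Set

   @dataclass(frozen=True)
   class Grandparent:
       system_id: int
       south_adjacency_count: int    # CS(G), from the parent's node S-TIE

   @dataclass
   class Parent:
       system_id: int
       grandparents: Set[Grandparent] = field(default_factory=set)  # |NA(P)

   def pseudo_random(system_id: int) -> int:
       """Step 1: XOR the circularly shifted 16-bit words of the ID."""
       words, i = [], 0
       while system_id >> (16 * i):
           words.append((system_id >> (16 * i)) & 0xFFFF)
           i += 1
       pr = 0
       for n, w in enumerate(words, start=1):
           s = n % 16
           pr ^= ((w << s) | (w >> (16 - s))) & 0xFFFF  # circular shift by n
       return pr

   def select_flood_repeaters(system_id: int, parents: List[Parent],
                              R: int = 2, S: int = 1) -> List[Parent]:
       """Steps 2-6: sort, partition by similarity S, shuffle, then keep
          only parents needed for >= R adjacencies to each grandparent."""
       pr = pseudo_random(system_id)
       # step 2: decreasing CN(P), tie-broken on decreasing System ID
       ordered = sorted(parents,
                        key=lambda p: (-len(p.grandparents), -p.system_id))
       # step 3: partition into subarrays of "equivalent" cardinality
       groups: List[List[Parent]] = []
       for p in ordered:
           if groups and (len(groups[-1][0].grandparents)
                          - len(p.grandparents)) <= S:
               groups[-1].append(p)
           else:
               groups.append([p])
       # step 4: Durstenfeld/Fisher-Yates shuffle per subarray, seeded
       # deterministically by PR(N)
       for g in groups:
           for i in range(len(g) - 1, 0, -1):
               j = pr % i
               g[j], g[i] = g[i], g[j]
       # step 5: counter per grandparent = its southbound adjacency count
       c = {gp: gp.south_adjacency_count
            for grp in groups for p in grp for gp in p.grandparents}
       # step 6: keep a parent as FR only if some grandparent would
       # otherwise drop to R or fewer remaining adjacencies
       frs = []
       for p in (p for grp in groups for p in grp):
           if any(c[gp] <= R for gp in p.grandparents):
               frs.append(p)
           else:
               for gp in p.grandparents:
                   c[gp] -= 1
       return frs

   Because the shuffle is seeded only by the local System ID, two runs
   on the same node are stable while different nodes tend to pick
   different parents, which provides the load balancing that
   observation 4 asks for.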
4.2.4.  Policy-Guided Prefixes

   In a fat tree, it can be sometimes desirable to guide traffic to
   particular destinations or keep specific flows to certain paths.
   In RIFT, this is done by using policy-guided prefixes with their
   associated communities.  Each community is an abstract value whose
   meaning is determined by configuration.  It is assumed that the
   fabric is under a single administrative control so that the meaning
   and intent of the communities is understood by all the nodes in the
   fabric.  Any node can originate a policy-guided prefix.

   Since RIFT uses distance vector concepts in a southbound direction,
   it is straightforward to add a policy-guided prefix to an S-TIE.
   For easier troubleshooting, the approach taken in RIFT is that a
   node's southbound policy-guided prefixes are sent in its S-TIE and
   the receiver does inbound filtering based on the associated
   communities (an egress policy is imaginable but would possibly lead
   to different S-TIEs per neighbor, which is not considered in RIFT
   protocol procedures).  A southbound policy-guided prefix can only
   use links in the south direction.  If a PGP S-TIE is received on an
   East-West or northbound link, it must be discarded by ingress
   filtering.

   Conceptually, a southbound policy-guided prefix guides traffic from
   the leaves up to at most the north-most level.  It is also
   necessary to have northbound policy-guided prefixes to guide
   traffic from the north-most level down to the appropriate leaves.
   Therefore, RIFT includes northbound policy-guided prefixes in its N
   PGP-TIE and the receiver does inbound filtering based on the
   associated communities.  A northbound policy-guided prefix can only
   use links in the northern direction.  If an N PGP TIE is received
   on an East-West or southbound link, it must be discarded by ingress
   filtering.

   By separating southbound and northbound policy-guided prefixes and
   requiring that the cost associated with a PGP is strictly
   monotonically increasing at each hop, the path cannot loop.
   Because the costs are strictly increasing, it is not possible to
   have a loop between a northbound PGP and a southbound PGP.  If
   East-West links were to be allowed, then looping could occur and
   issues such as counting to infinity would become an issue to be
   solved.  If complete generality of path is desired - such as
   including East-West links and using both north and south links in
   arbitrary sequence - then a Path Vector protocol or a similar
   solution must be considered.

   If a node has received the same prefix, after ingress filtering, as
   a PGP in an S-TIE and in an N-TIE, then the node determines which
   policy-guided prefix to use based upon the advertised cost.

   A policy-guided prefix is always preferred to a regular prefix,
   even if the policy-guided prefix has a larger cost.  Section 8
   provides the normative indication of prefix preferences.

   The set of policy-guided prefixes received in a TIE is subject to
   ingress filtering and then re-originated to be sent out in the
   receiver's appropriate TIE.  Both the ingress filtering and the
   re-origination use the communities associated with the
   policy-guided prefixes to determine the correct behavior.  The cost
   on re-advertisement MUST increase in a strictly monotonic fashion.

4.2.4.1.  Ingress Filtering

   When a node X receives a PGP S-TIE or a PGP N-TIE that is
   originated from a node Y which does not have an adjacency with X,
   all PGPs in such a TIE MUST be filtered.  Similarly, if node Y is
   at the same level as node X, then X MUST filter out PGPs in such S-
   and N-TIEs to prevent loops.

   Next, policy can be applied to determine which policy-guided
   prefixes to accept.  Since ingress filtering is chosen rather than
   egress filtering and per-neighbor PGPs, policy that applies to
   links is done at the receiver.
   Because the RIFT adjacency is between nodes and there may be
   parallel links between the two nodes, the policy-guided prefix is
   considered to start with the next-hop set that has all links to the
   originating node Y.

   A policy-guided prefix has or is assigned the following attributes:

   cost:  This is initialized to the cost received.

   community_list:  This is initialized to the list of the communities
      received.

   next_hop_set:  This is initialized to the set of links to the
      originating node Y.

4.2.4.2.  Applying Policy

   The specific action to apply based upon a community is
   deployment-specific.  Here are some examples of things that can be
   done with communities.  A community is a 64-bit number; in these
   examples it can be written as a single field M or as a multi-field
   (S = M[0-31], T = M[32-63]).  For simplicity, the policy-guided
   prefix is referred to as P, the processing node as X and the
   originator as Y.

   Prune Next-Hops: Community Required:  For each next-hop in
      P.next_hop_set, if the next-hop does not have the community,
      prune that next-hop from P.next_hop_set.

   Prune Next-Hops: Avoid Community:  For each next-hop in
      P.next_hop_set, if the next-hop has the community, prune that
      next-hop from P.next_hop_set.

   Drop if Community:  If node X has community M, discard P.

   Drop if not Community:  If node X does not have the community M,
      discard P.

   Prune to ifIndex T:  For each next-hop in P.next_hop_set, if the
      next-hop's ifIndex is not the value T specified in the community
      (S,T), then prune that next-hop from P.next_hop_set.

   Add Cost T:  For each community (S,T) in P.community_list, if node
      X has community S, then add T to P.cost.

   Accumulate Min-BW T:  Let bw be the sum of the bandwidths of
      P.next_hop_set.  If that sum is less than T, then replace (S,T)
      with (S, bw).

   Add Community T if Node matches S:  If node X has community S, then
      add community T to P.community_list.

4.2.4.3.  Store Policy-Guided Prefix for Route Computation and
          Regeneration

   Once a policy-guided prefix has completed ingress filtering and
   policy application, it is almost ready to store and use.  It is
   still necessary to adjust the cost of the prefix to account for the
   link from the computing node X to the originating neighbor node Y.

   There are three different policies that can be used:

   Minimum Equal-Cost:  Find the lowest cost C next-hops in
      P.next_hop_set and prune to those.  Add C to P.cost.

   Minimum Unequal-Cost:  Find the lowest cost C next-hop in
      P.next_hop_set.  Add C to P.cost.

   Maximum Unequal-Cost:  Find the highest cost C next-hop in
      P.next_hop_set.  Add C to P.cost.

   The default policy is Minimum Unequal-Cost but well-known
   communities can be defined to get the other behaviors.

   Regardless of the policy used, a node MUST store a PGP cost that is
   at least 1 greater than the PGP cost received.  This enforces the
   strictly monotonically increasing condition that avoids loops.

   Two databases of PGPs - one from N-TIEs and one from S-TIEs - are
   stored.  When a PGP is inserted into the appropriate database, the
   usual tie-breaking on cost is performed.  Observe that the node
   retains all PGP TIEs due to normal flooding behavior and hence the
   loss of the best prefix will lead to re-evaluation of the TIEs
   present and re-advertisement of a new best PGP.
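   The default cost adjustment can be illustrated with a minimal,
   non-normative Python sketch; the function name and parameters are
   illustrative only, and the `max` with received cost plus one
   enforces the strictly monotonic increase described above:

      def adjusted_pgp_cost(received_cost: int, link_costs: list) -> int:
          # Minimum Unequal-Cost (the default policy): add the minimum
          # cost link towards the originating neighbor Y, but never
          # store less than received cost + 1 (loop prevention).
          return max(received_cost + min(link_costs), received_cost + 1)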
4.2.4.4.  Re-origination

   A node must re-originate policy-guided prefixes and retransmit
   them.  The node has its database of southbound policy-guided
   prefixes to send in its S-TIE and its database of northbound
   policy-guided prefixes to send in its N-TIE.

   Of course, a leaf does not need to re-originate southbound
   policy-guided prefixes.

4.2.4.5.  Overlap with Disaggregated Prefixes

   PGPs may overlap with prefixes introduced by automatic
   de-aggregation.  The topic is under further discussion.  The break
   in connectivity that leads to infeasibility of a PGP is mirrored in
   adjacency tear-down and the according removal of such PGPs.
   Nevertheless, the underlying link-state flooding will likely react
   significantly faster than a hop-by-hop redistribution and with that
   the preference for PGPs may cause intermittent black-holes.

4.2.5.  Reachability Computation

   A node has three sources of relevant information.  A node knows the
   full topology south of it from the received N-TIEs.  A node has the
   set of prefixes with associated distances and bandwidths from
   received S-TIEs.  A node can also have a set of PGPs.

   To compute reachability, a node conceptually runs a northbound and
   a southbound SPF.  We call those N-SPF and S-SPF.

   Since neither computation can "loop" (with due considerations given
   to PGPs), it is possible to compute non-equal-cost or even
   k-shortest paths [EPPSTEIN] and "saturate" the fabric to the extent
   desired.

4.2.5.1.  Northbound SPF

   N-SPF uses northbound and East-West adjacencies in North Node TIEs
   when progressing Dijkstra.  Observe that this is really just a
   one-hop variety since South Node TIEs are not re-flooded southbound
   beyond a single level (or East-West) and with that the computation
   cannot progress beyond adjacent nodes.

   A default route found when crossing an E-W link is used IFF

   1.  the node itself does NOT have any northbound adjacencies AND

   2.  the adjacent node has one or more northbound adjacencies

   This rule forms a "one-hop default route split-horizon" and
   prevents looping over default routes while allowing for "one-hop
   protection" of nodes that lost all northbound adjacencies.

   Other south prefixes found when crossing an E-W link MAY be used
   IFF

   1.  no north neighbors are advertising the same or a supersuming
       non-default prefix AND

   2.  the node does not originate a non-default supersuming prefix
       itself.

   I.e. the E-W link can be used as the gateway of last resort for a
   specific prefix only.  Using south prefixes across an E-W link can
   be beneficial e.g. with automatic de-aggregation in pathological
   fabric partitioning scenarios.

   A detailed example can be found in Section 5.4.

   For N-SPF we are using the South Node TIEs to find the according
   adjacencies to verify backlink connectivity.  Just as in the case
   of IS-IS or OSPF, two unidirectional links are associated together
   to confirm bidirectional connectivity.
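   The default route split-horizon rule above can be expressed as a
   minimal, non-normative Python sketch; the attribute names are
   illustrative only:

      def use_ew_default(own_north_adjacencies, neighbor_north_adjacencies):
          # "One-hop default route split-horizon": a default route
          # learned across an E-W link is usable only if the node
          # itself has no northbound adjacencies while the E-W
          # neighbor has at least one.
          return (len(own_north_adjacencies) == 0 and
                  len(neighbor_north_adjacencies) > 0)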
4.2.5.2.  Southbound SPF

   S-SPF uses only the southbound adjacencies in the south node TIEs,
   i.e. it progresses towards nodes at lower levels.  Observe that E-W
   adjacencies are NEVER used in this computation.  This enforces the
   requirement that a packet traversing in a southbound direction must
   never change its direction.

   S-SPF uses northbound adjacencies in north node TIEs to verify
   backlink connectivity.

4.2.5.3.  East-West Forwarding Within a Level

   Ultimately, it should be observed that in the presence of a "ring"
   of E-W links in a level, neither SPF will provide a "ring
   protection" scheme since such a computation would necessarily have
   to deal with breaking of "loops" in the generic Dijkstra sense; an
   application for which RIFT is not intended.  It is outside the
   scope of this document how an underlay can be used to provide
   full-mesh connectivity between nodes in the same layer that would
   allow N-SPF to provide protection for a single node losing all its
   northbound adjacencies (as long as any of the other nodes in the
   level are northbound connected).

   Using south prefixes over horizontal links is optional and can
   protect against pathological fabric partitioning cases that leave
   only paths to destinations that would necessitate multiple changes
   of forwarding direction between north and south.

4.2.6.  Attaching Prefixes

   After the SPF is run, it is necessary to attach the according
   prefixes.  For S-SPF, prefixes from an N-TIE are attached to the
   originating node with that node's next-hop set and a distance equal
   to the prefix's cost plus the node's minimized path distance.  The
   RIFT route database, a set of (prefix, type=spf, path_distance,
   next-hop set), accumulates these results.  Obviously, a prefix
   retains its type, which is used to tie-break between the same
   prefix advertised with different types.

   In the case of N-SPF, prefixes from each S-TIE also need to be
   added to the RIFT route database.  The N-SPF is really just a stub
   computation, so the computing node simply needs to determine, for
   each prefix in an S-TIE that originated from an adjacent node, what
   next-hops to use to reach that node.  Since there may be parallel
   links, the next-hops to use can be a set; the presence of the
   computing node in the associated Node S-TIE is sufficient to verify
   that at least one link has bidirectional connectivity.  The set of
   minimum cost next-hops from the computing node X to the originating
   adjacent node is determined.

   Each prefix has its cost adjusted before being added into the RIFT
   route database.  The cost of the prefix is set to the cost received
   plus the cost of the minimum cost next-hop to that neighbor.  Then
   each prefix can be added into the RIFT route database with the
   next_hop_set; ties are broken based upon type first and then
   distance.  RIFT route preferences are normalized by the according
   thrift model type.
   An exemplary implementation for node X follows:

     for each S-TIE
       if S-TIE.layer > X.layer
          next_hop_set = set of minimum cost links to the S-TIE.originator
          next_hop_cost = minimum cost link to S-TIE.originator
       end if
       for each prefix P in the S-TIE
          P.cost = P.cost + next_hop_cost
          if P not in route_database:
            add (P, type=DistVector, P.cost, next_hop_set) to route_database
          end if
          if (P in route_database) and
             (route_database[P].type is not PolicyGuided):
            if route_database[P].cost > P.cost:
               update route_database[P] with (P, DistVector, P.cost,
                         next_hop_set)
            else if route_database[P].cost == P.cost
               update route_database[P] with (P, DistVector, P.cost,
                         merge(next_hop_set, route_database[P].next_hop_set))
            else
               // Not preferred route so ignore
            end if
          end if
       end for
     end for

             Figure 4: Adding Routes from S-TIE Prefixes

4.2.7.  Attaching Policy-Guided Prefixes

   Each policy-guided prefix P has its cost and next_hop_set already
   stored in the associated database, as specified in Section 4.2.4.3;
   the cost stored for the PGP has already been updated to consider
   the cost of the link to the advertising neighbor.  By definition, a
   policy-guided prefix is preferred to a regular prefix.

     for each policy-guided prefix P:
       if P not in route_database:
         add (P, type=PolicyGuided, P.cost, next_hop_set)
       end if
       if P in route_database:
         if (route_database[P].type is not PolicyGuided) or
            (route_database[P].cost > P.cost):
           update route_database[P] with (P, PolicyGuided, P.cost,
                     next_hop_set)
         else if route_database[P].cost == P.cost
           update route_database[P] with (P, PolicyGuided, P.cost,
                     merge(next_hop_set, route_database[P].next_hop_set))
         else
           // Not preferred route so ignore
         end if
       end if
     end for

          Figure 5: Adding Routes from Policy-Guided Prefixes

4.2.8.  Automatic Disaggregation on Link & Node Failures

   Under normal circumstances, a node's S-TIEs contain just the
   adjacencies, a default route and policy-guided prefixes.  However,
   if a node detects that its default IP prefix covers one or more
   prefixes that are reachable through it but not through one or more
   other nodes at the same level, then it MUST explicitly advertise
   those prefixes in an S-TIE.  Otherwise, some percentage of the
   northbound traffic for those prefixes would be sent to nodes
   without according reachability, causing it to be black-holed.  Even
   when not black-holing, the resulting forwarding could 'backhaul'
   packets through the higher level spines, clearly an undesirable
   condition affecting the blocking probabilities of the fabric.

   We refer to the process of advertising additional prefixes as
   'de-aggregation' or 'dis-aggregation'.

   A node determines the set of prefixes needing de-aggregation using
   the following steps:

   1.  A DAG computation in the southern direction is performed first,
       i.e. the N-TIEs are used to find all prefixes it can reach and
       the set of next-hops in the lower level for each of them.  Such
       a computation can be easily performed on a fat tree by e.g.
       setting all link costs in the southern direction to 1 and all
       northern directions to infinity.  We term the set of those
       prefixes |R, and for each prefix, r, in |R, we define its set
       of next-hops to be |H(r).
       Observe that policy-guided prefixes are NOT affected since
       their scope is controlled by configuration.

   2.  The node uses reflected S-TIEs to find all nodes at the same
       level in the same PoD and the set of southbound adjacencies for
       each.  The set of nodes at the same level is termed |N and for
       each node, n, in |N, we define its set of southbound
       adjacencies to be |A(n).

   3.  For a given r, if the intersection of |H(r) and |A(n), for any
       n, is null then that prefix r must be explicitly advertised by
       the node in an S-TIE.

   4.  An identical set of de-aggregated prefixes is flooded on each
       of the node's southbound adjacencies.  In accordance with the
       normal flooding rules for an S-TIE, a node at the lower level
       that receives this S-TIE will not propagate it south-bound.
       Neither is it necessary for the receiving node to reflect the
       disaggregated prefixes back over its adjacencies to nodes at
       the level from which it was received.

   To summarize the above in simplest terms: if a node detects that
   its default route encompasses prefixes for which one of the other
   nodes in its level has no possible next-hops in the level below, it
   has to disaggregate those prefixes to prevent black-holing or
   suboptimal routing.  Hence a node X needs to determine if it can
   reach a different set of south neighbors than other nodes at the
   same level, which are connected to it via at least one common south
   or East-West neighbor.  If it can, then prefix disaggregation may
   be required.  If it can't, then no prefix disaggregation is needed.
   An example of disaggregation is provided in Section 5.3.
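   The intersection check in step 3 above can be illustrated with a
   minimal, non-normative Python sketch (the names are illustrative
   only):

      def needs_disaggregation(h_r: set, a: dict) -> bool:
          # h_r: |H(r), the southern next-hop set for prefix r;
          # a:   |A(n) per same-level node n, from reflected S-TIEs.
          # Prefix r must be advertised explicitly if any node at the
          # same level has no southbound adjacency overlapping |H(r).
          return any(not (h_r & a_n) for a_n in a.values())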
   A possible algorithm is described last:

   1.  Create partial_neighbors = (empty), a set of neighbors with
       partial connectivity to node X's layer from X's perspective.
       Each entry is a south neighbor of X together with a list of
       nodes at X.layer that can't reach that neighbor.

   2.  A node X determines its set of southbound neighbors
       X.south_neighbors.

   3.  For each S-TIE originated from a node Y that X has which is at
       X.layer, if Y.south_neighbors is not the same as
       X.south_neighbors but the nodes share at least one southern
       neighbor, for each neighbor N in X.south_neighbors but not in
       Y.south_neighbors, add (N, (Y)) to partial_neighbors if N isn't
       there or add Y to the list for N.

   4.  If partial_neighbors is empty, then node X does not need to
       disaggregate any prefixes.  If node X is advertising
       disaggregated prefixes in its S-TIE, X SHOULD remove them and
       re-advertise its according S-TIEs.

   A node X computes its SPF based upon the received N-TIEs.  This
   results in a set of routes, each categorized by (prefix,
   path_distance, next-hop-set).  Alternately, for clarity in the
   following procedure, these can be organized by next-hop-set as
   ((next-hops), {(prefix, path_distance)}).  If partial_neighbors
   isn't empty, then the following procedure describes how to identify
   prefixes to disaggregate.

     disaggregated_prefixes = { empty }
     nodes_same_layer = { empty }
     for each S-TIE
       if (S-TIE.layer == X.layer and
           S-TIE.originator shares at least one S-neighbor with X)
         add S-TIE.originator to nodes_same_layer
       end if
     end for

     for each next-hop-set NHS
       isolated_nodes = nodes_same_layer
       for each NH in NHS
         if NH in partial_neighbors
           isolated_nodes = intersection(isolated_nodes,
                                         partial_neighbors[NH].nodes)
         end if
       end for

       if isolated_nodes is not empty
         for each prefix using NHS
           add (prefix, distance) to disaggregated_prefixes
         end for
       end if
     end for

     copy disaggregated_prefixes to X's S-TIE
     if X's S-TIE is different
       schedule S-TIE for flooding
     end if

           Figure 6: Computation to Disaggregate Prefixes

   Each disaggregated prefix is sent with the accurate path_distance.
   This allows a node to send the same S-TIE to each south neighbor.
   The south neighbor which is connected to that prefix will thus have
   a shorter path.

   Finally, to summarize the less obvious points partially omitted in
   the algorithms to keep them more tractable:

   1.  all neighbor relationships MUST perform backlink checks.

   2.  overload bits as introduced in Section 4.3.1 have to be
       respected during the computation.

   3.  all the lower level nodes are flooded the same disaggregated
       prefixes since we don't want to build an S-TIE per node and
       complicate things unnecessarily.  The PoD containing the prefix
       will prefer southbound anyway.

   4.  disaggregated prefixes do NOT have to propagate to lower
       levels.  With that the disturbance in terms of new flooding is
       contained to a single level experiencing failures only.

   5.  disaggregated prefix S-TIEs are not "reflected" by the lower
       layer, i.e. nodes within the same level do NOT need to be aware
       which node computed the need for disaggregation.

   6.  the fabric still supports maximum load balancing properties
       while not trying to send traffic northbound unless necessary.

   Ultimately, complex partitions of the superspine on sparsely
   connected fabrics can lead to the necessity of transitive
   disaggregation through multiple layers.  The topic will be
   described and standardized in later versions of this document.

4.2.9.  Optional Autoconfiguration

   Each RIFT node can optionally operate in zero touch provisioning
   (ZTP) mode, i.e. it has no configuration (unless it is a superspine
   at the top of the topology or it must operate in the topology as a
   leaf and/or support leaf-2-leaf procedures) and it will fully
   configure itself after being attached to the topology.  Configured
   nodes and nodes operating in ZTP can be mixed and will form a valid
   topology if achievable.  This section describes the necessary
   concepts and procedures.

4.2.9.1.  Terminology

   Automatic Level Derivation:  Procedures which allow nodes without a
      configured level to derive it automatically.  Only applied if
      CONFIGURED_LEVEL is undefined.

   UNDEFINED_LEVEL:  An imaginary value that indicates that the level
      has not been determined and has not been configured.  Schemas
      normally indicate that by a missing optional value without an
      available defined default.

   LEAF_ONLY:  An optional configuration flag that can be configured
      on a node to make sure it never leaves the "bottom of the
      hierarchy".
      SUPERSPINE_FLAG and CONFIGURED_LEVEL cannot be defined at the
      same time as this flag.  It implies a CONFIGURED_LEVEL value of
      0.

   CONFIGURED_LEVEL:  A level value provided manually.  When this is
      defined (i.e. it is not an UNDEFINED_LEVEL) the node is not
      participating in ZTP.  SUPERSPINE_FLAG is ignored when this
      value is defined.  LEAF_ONLY can be set only if this value is
      undefined or set to 0.

   DERIVED_LEVEL:  Level value computed via automatic level derivation
      when CONFIGURED_LEVEL is equal to UNDEFINED_LEVEL.

   LEAF_2_LEAF:  An optional flag that can be configured on a node to
      make sure it supports procedures defined in Section 4.3.9.
      SUPERSPINE_FLAG is ignored when set at the same time as this
      flag.  LEAF_2_LEAF implies LEAF_ONLY and the according
      restrictions.

   LEVEL_VALUE:  In the ZTP case the original definition of "level" in
      Section 2.1 is both extended and relaxed.  First, level is now
      defined as LEVEL_VALUE and is the first defined value of
      CONFIGURED_LEVEL followed by DERIVED_LEVEL.  Second, it is
      possible for nodes more than one level apart to form adjacencies
      if any of the nodes is at least LEAF_ONLY.

   Valid Offered Level (VOL):  A neighbor's level received on a valid
      LIE (i.e. passing all checks for adjacency formation while
      disregarding all clauses involving level values) persisting for
      the duration of the holdtime interval on the LIE.  Observe that
      offers from nodes offering a level value of 0 do not constitute
      VOLs (since no valid DERIVED_LEVEL can be obtained from those).
      Offers from LIEs with `not_a_ztp_offer` being true are not VOLs
      either.

   Highest Available Level (HAL):  Highest defined level value seen
      from all VOLs received.

   Highest Adjacency Three Way (HAT):  Highest neighbor level of all
      the formed three way adjacencies for the node.

   SUPERSPINE_FLAG:  Configuration flag provided to all superspines.
      LEAF_ONLY and CONFIGURED_LEVEL cannot be defined at the same
      time as this flag.  It implies a CONFIGURED_LEVEL value of 16.
      In fact, it is basically a shortcut for configuring the same
      level at all superspine nodes, which is unavoidable since an
      initial 'seed' is needed for other ZTP nodes to derive their
      level in the topology.

4.2.9.2.  Automatic SystemID Selection

   RIFT identifies each node via a SystemID which is a 64-bit wide
   integer.  It is relatively simple to derive a - for all practical
   purposes collision free - value for each node on startup.  For that
   purpose, a node MUST use as its system ID the EUI-64 MA-L format,
   where the organizationally governed 24 bits can be used to generate
   system IDs for multiple RIFT instances running on the same system.

   The router MUST ensure that such an identifier does not change very
   frequently (at least not without sending all its TIEs with fairly
   short lifetimes) since otherwise the network may be left with large
   amounts of stale TIEs in other nodes (though this is not
   necessarily a serious problem if the procedures suggested in
   Section 7 are implemented).
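   A minimal, non-normative Python sketch of one possible system ID
   construction follows; the packing layout (24-bit MA-L in the top
   bits, 40-bit organizationally governed extension below it) and the
   function name are illustrative assumptions only:

      def system_id(ma_l: int, extension: int) -> int:
          # Illustrative EUI-64-style layout: 24-bit MA-L (OUI) in the
          # top bits, a 40-bit extension below it; an instance number
          # can be folded into the extension to separate multiple RIFT
          # instances running on the same system.
          assert ma_l < (1 << 24) and extension < (1 << 40)
          return (ma_l << 40) | extension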
4.2.9.3.  Generic Fabric Example

   ZTP forces us to think about miscabled or unusually cabled fabrics
   and how such topologies can be forced into the "lattice" structure
   which a fabric represents (with further restrictions).  Let us
   consider the necessary and sufficient physical cabling in Figure 7.
   We assume all nodes are in the same PoD.

   .            +---+
   .            | A |          s   = SUPERSPINE_FLAG
   .            | s |          l   = LEAF_ONLY
   .            ++-++          l2l = LEAF_2_LEAF
   .             |  |
   .          +--+  +--+
   .          |        |
   .       +--++      ++--+
   .       | E |      | F |
   .       |   +-+    |   +-----------+
   .       ++--+ |    ++-++           |
   .        |    |     |  |           |
   .        |  +-------+  |           |
   .        |  | |        |           |
   .        |  | +----+   |           |
   .        |  |      |   |           |
   .       ++-++     ++-++            |
   .       | I +-----+ J |            |
   .       ++--+     ++-++            |
   .        |         | |             |
   .        |  +------+ |             |
   .        |  |        |             |
   .       ++-++       ++-++          |
   .       | X +-------+ Y +----------+
   .       |l2l|       | l |
   .       +---+       +---+

             Figure 7: Generic ZTP Cabling Considerations

   First, we need to anchor the "top" of the cabling and that's what
   the SUPERSPINE_FLAG at node A is for.  Then things look smooth
   until we have to decide whether node Y is at the same level as I
   and J, and consequently X is south of it, or whether Y is at the
   same level as X.  This is unresolvable here until we "nail down the
   bottom" of the topology.  To achieve that we use the leaf flags.
   We will then see whether Y chooses to form adjacencies to F or to
   I and J.

4.2.9.4.  Level Determination Procedure

   A node starting up with an UNDEFINED_LEVEL (i.e. without a
   CONFIGURED_LEVEL or any leaf or superspine flag) MUST follow these
   additional procedures:

   1.  It advertises its LEVEL_VALUE on all LIEs (observe that this
       can be UNDEFINED_LEVEL which in terms of the schema is simply
       an omitted optional value).

   2.  It chooses on an ongoing basis from all VOLs the value of
       MAX(HAL-1,0) as its DERIVED_LEVEL.  The node then starts to
       advertise this derived level.

   3.  A node that lost all adjacencies with HAL value MUST hold down
       computation of a new DERIVED_LEVEL for a short period of time
       unless it has no VOLs from southbound adjacencies.  After the
       holddown expires, it MUST discard all received offers,
       recompute DERIVED_LEVEL and announce it to all neighbors.

   4.  A node MUST reset any adjacency that has changed the level it
       is offering and is in three way state.

   5.  A node that changed its defined level value MUST readvertise
       its own TIEs (since the new `PacketHeader` will contain a
       different level than before).  The sequence number of each TIE
       MUST be increased.

   6.  After a level has been derived the node MUST set the
       `not_a_ztp_offer` on LIEs towards all systems extending a VOL
       for HAL.

   A node starting with LEVEL_VALUE being 0 (i.e. it assumes a leaf
   function or has a CONFIGURED_LEVEL of 0) MUST follow these
   additional procedures:

   1.  It computes HAT per the procedures above but does NOT use it to
       compute DERIVED_LEVEL.  HAT is used to limit adjacency
       formation per Section 4.2.2.

   Precise finite state machines will be provided in later versions of
   this specification.
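   The core of the level derivation can be illustrated with a minimal,
   non-normative Python sketch; the function name is illustrative
   only:

      def derived_level(vols: list):
          # vols: level values from all currently valid VOLs; offers
          # with level 0 or with `not_a_ztp_offer` set never enter
          # this list per the VOL definition.
          if not vols:
              return None          # level stays UNDEFINED_LEVEL
          hal = max(vols)          # Highest Available Level
          return max(hal - 1, 0)   # DERIVED_LEVEL = MAX(HAL-1, 0)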
4.2.9.5.  Resulting Topologies

   The procedures defined in Section 4.2.9.4 will lead to the RIFT
   topology and levels depicted in Figure 8.

   .            +---+
   .            | As|
   .            | 64|
   .            ++-++
   .             |  |
   .          +--+  +--+
   .          |        |
   .       +--++      ++--+
   .       | E |      | F |
   .       | 63+-+    | 63+-----------+
   .       ++--+ |    ++-++           |
   .        |    |     |  |           |
   .        |  +-------+  |           |
   .        |  | |        |           |
   .        |  | +----+   |           |
   .        |  |      |   |           |
   .       ++-++     ++-++            |
   .       | I +-----+ J |            |
   .       | 62|     | 62|            |
   .       ++--+     ++-++            |
   .        |         |               |
   .        |  +------+               |
   .        |  |                      |
   .       ++-++       +---+          |
   .       | X |       | Y +----------+
   .       | 0 |       | 0 |
   .       +---+       +---+

           Figure 8: Generic ZTP Topology Autoconfigured

   If we imagine the LEAF_ONLY restriction on Y removed, however, the
   outcome would be very different and would result in Figure 9.  This
   basically demonstrates that autoconfiguration prevents miscabling
   detection and with that can lead to undesirable effects when leafs
   are not "nailed down" and are arbitrarily cabled.

   .            +---+
   .            | As|
   .            | 64|
   .            ++-++
   .             |  |
   .          +--+  +--+
   .          |        |
   .       +--++      ++--+
   .       | E |      | F |
   .       | 63+-+    | 63+-------+
   .       ++--+ |    ++-++       |
   .        |    |     |  |       |
   .        |  +-------+  |       |
   .        |  | |        |       |
   .        |  | +----+   |       |
   .        |  |      |   |       |
   .       ++-++     ++-++      +-+-+
   .       | I +-----+ J +-----+ Y |
   .       | 62|     | 62|     | 62|
   .       ++--+     ++--+     ++--+
   .        |         |         |
   .        |  +------+         |
   .        |  |  +-------------+
   .        |  |  |
   .      +-+--+--+-+
   .      |         |
   .      |    X    |
   .      |    0    |
   .      +---------+

           Figure 9: Generic ZTP Topology Autoconfigured

4.2.10.  Stability Considerations

   The autoconfiguration mechanism computes a global maximum of levels
   by diffusion.  The achieved equilibrium can be disturbed massively
   by all nodes with the highest level either leaving or entering the
   domain (with some finer distinctions not explained further).  It is
   therefore recommended that each node is multi-homed towards nodes
   with respective HAL offerings.  Fortunately, this is the natural
   state of things for the topology variants considered in RIFT.

4.3.  Further Mechanisms

4.3.1.  Overload Bit

   The overload bit MUST be respected in all according reachability
   computations.  A node with the overload bit set SHOULD NOT
   advertise any reachability prefixes southbound except locally
   hosted ones.

   A leaf node SHOULD set the 'overload' bit on its node TIEs, since
   if the spine nodes were to forward traffic not meant for the local
   node, the leaf node does not have the topology information to
   prevent a routing/forwarding loop.

4.3.2.  Optimized Route Computation on Leafs

   Since leafs only see "one hop away", they do not need to run a full
   SPF but can simply gather prefix candidates from their neighbors
   and build the according routing table.

   A leaf will have no N-TIEs except its own and optionally those from
   its East-West neighbors.  A leaf will have S-TIEs from its
   neighbors.

   Instead of creating a network graph from its N-TIEs and the
   neighbors' S-TIEs and then running an SPF, a leaf node can simply
   compute the minimum cost and next_hop_set to each leaf neighbor by
   examining its local interfaces, determining bi-directionality from
   the associated N-TIE, and specifying the neighbor's next_hop_set
   and cost from the minimum cost local interfaces to that neighbor.

   Then a leaf attaches prefixes as in Section 4.2.6 as well as the
   policy-guided prefixes as in Section 4.2.7.
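   A minimal, non-normative Python sketch of this leaf shortcut
   follows; the data layout is an illustrative assumption:

      def leaf_neighbor_routes(links):
          # links: (neighbor, interface, cost) tuples for every local
          # interface whose bidirectionality was confirmed through the
          # neighbor's N-TIE; no SPF is needed on a leaf.
          routes = {}
          for neighbor, interface, cost in links:
              best = routes.get(neighbor)
              if best is None or cost < best[0]:
                  routes[neighbor] = (cost, {interface})
              elif cost == best[0]:
                  best[1].add(interface)
          return routes      # neighbor -> (min cost, next_hop_set)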
4.3.3.  Mobility

   It is a requirement for RIFT to maintain at the control plane a
   real time status of which prefix is attached to which port of which
   leaf, even in a context of mobility where the point of attachment
   may change several times in a sub-second period of time.

   There are two classical approaches to maintain such knowledge in an
   unambiguous fashion:

   time stamp:  With this method, the infrastructure memorizes the
      precise time at which the movement is observed.  One key
      advantage of this technique is that it has no dependency on the
      mobile device.  One drawback is that the infrastructure must be
      precisely synchronized to be able to compare time stamps as
      observed by the various points of attachment, e.g., using the
      variation of the Precision Time Protocol (PTP) IEEE Std. 1588
      [IEEEstd1588] designed for bridged LANs, IEEE Std. 802.1AS
      [IEEEstd8021AS].  Both the precision of the synchronization
      protocol and the resolution of the time stamp must beat the
      highest possible roaming time on the fabric.  Another drawback
      is that the presence of the mobile device may be observed only
      asynchronously, e.g., after it starts using an IP protocol such
      as ARP [RFC0826], IPv6 Neighbor Discovery [RFC4861][RFC4862], or
      DHCP [RFC2131][RFC3315].

   sequence counter:  With this method, a mobile node notifies its
      point of attachment on arrival with a sequence counter that is
      incremented upon each movement.  On the positive side, this
      method does not have a dependency on a precise sense of time,
      since the sequence of movements is kept in order by the device.
      The disadvantage of this approach is the lack of protocols that
      allow the mobile node to register its presence to the leaf node
      together with such a sequence counter.  Well-known issues with
      wrapping sequence counters must be addressed properly, and many
      forms of sequence counters exist that vary in both wrapping
      rules and comparison rules.  Particular knowledge of the source
      of the sequence counter is required to operate it, and the
      comparison between sequence counters from heterogeneous sources
      can be hard or even impossible.

   RIFT supports a hybrid approach contained in an optional
   `PrefixSequenceType` prefix attribute that we call a `monotonic
   clock` consisting of a timestamp and an optional sequence number.
   If the attribute is present:

   o  The leaf node MUST advertise a time stamp of the latest sighting
      of a prefix, e.g., by snooping IP protocols or the switch using
      the time at which it advertised the prefix.  RIFT transports the
      time stamp within the desired prefix N-TIEs as an 802.1AS
      timestamp.

   o  RIFT may interoperate with the "update to 6LoWPAN Neighbor
      Discovery" [I-D.ietf-6lo-rfc6775-update], which provides a
      method for registering a prefix with a sequence counter called a
      Transaction ID (TID).  RIFT transports in such a case the TID in
      its native form.

   o  RIFT also defines an abstract negative clock (ASNC) that
      compares as less than any other clock.  By default, the lack of
      a `PrefixSequenceType` in a Prefix N-TIE is interpreted as ASNC.
      We call this also an `undefined` clock.

   o  Any prefix present on the fabric in multiple nodes that has the
      `same` clock is considered as anycast.  ASNC is always
      considered smaller than any defined clock.

   o  A RIFT implementation assumes by default that all nodes are
      synchronized to within 200 milliseconds of precision, which is
      easily achievable even in very large fabrics using [RFC5905].
      An implementation MAY provide a way to reconfigure a domain to a
      different value.  We call this variable MAXIMUM_CLOCK_DELTA.

4.3.3.1.  Clock Comparison

   All monotonic clock values are comparable to each other using the
   following rules:

   1.  ASNC is older than any other value except ASNC AND

   2.  Clocks with timestamps differing by more than
       MAXIMUM_CLOCK_DELTA are comparable by using the timestamps only
       AND

   3.  Clocks with timestamps differing by less than
       MAXIMUM_CLOCK_DELTA are comparable by using their TIDs only AND

   4.  An undefined TID is always older than any other TID AND

   5.  TIDs are compared using the rules of
       [I-D.ietf-6lo-rfc6775-update].
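   These rules can be illustrated with a minimal, non-normative Python
   sketch; the representation of clocks as (timestamp, tid) pairs with
   ASNC as None is an assumption for illustration, and `tid_newer`
   stands for the TID comparison of [I-D.ietf-6lo-rfc6775-update] (a
   possible version is sketched in the next section):

      def newer(a, b, max_clock_delta):
          # True if monotonic clock a is fresher than clock b; clocks
          # are (timestamp, tid) pairs, ASNC is represented as None.
          if b is None:                      # rule 1: ASNC is oldest
              return a is not None
          if a is None:
              return False
          (ts_a, tid_a), (ts_b, tid_b) = a, b
          if abs(ts_a - ts_b) > max_clock_delta:
              return ts_a > ts_b             # rule 2: timestamps decide
          if tid_b is None:                  # rule 4: undefined TID oldest
              return tid_a is not None
          if tid_a is None:
              return False
          return tid_newer(tid_a, tid_b)     # rules 3 and 5: TIDs decide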
4.3.3.2.  Interaction between Time Stamps and Sequence Counters

   For slow movements that occur less frequently than e.g. once per
   second, the time stamp that the RIFT infrastructure captures is
   enough to determine the freshest discovery.  If the point of
   attachment changes faster than the maximum drift of the time
   stamping mechanism (i.e. MAXIMUM_CLOCK_DELTA), then a sequence
   counter is required to add resolution to the freshness evaluation,
   and it must be sized so that the counters stay comparable within
   the resolution of the time stamping mechanism.

   The sequence counter in [I-D.ietf-6lo-rfc6775-update] is encoded as
   one octet, wraps after 127 increments, and, by default, values are
   defined as comparable as long as they are less than SEQUENCE_WINDOW
   = 16 apart.  An implementation MAY allow this to be configurable
   throughout the domain, and the number can be pushed up to 64 and
   still preserve the capability to discover an error situation where
   counters are not comparable.

   Within the resolution of MAXIMUM_CLOCK_DELTA the sequence counters
   captured during 2 sequential values of the time stamp must be
   comparable.  This means, with default values, that a node may move
   up to 16 times during a 200 millisecond period while the clocks
   remain comparable, thus allowing the infrastructure to assert the
   freshest advertisement with no ambiguity.
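   One possible TID comparison consistent with the window above is
   sketched below in non-normative Python; the modulus of 128 models
   the wrap after 127 increments and is an illustrative assumption,
   the normative rules being those of [I-D.ietf-6lo-rfc6775-update]:

      SEQUENCE_WINDOW = 16

      def tid_newer(a: int, b: int):
          # Serial-number style comparison of one-octet TIDs that wrap
          # after 127 increments; None signals the error situation
          # where the counters drifted too far apart to be comparable.
          if a == b:
              return False
          if (a - b) % 128 <= SEQUENCE_WINDOW:
              return True
          if (b - a) % 128 <= SEQUENCE_WINDOW:
              return False
          return None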
4.3.3.3.  Anycast vs. Unicast

   A unicast prefix can be attached to at most one leaf, whereas an
   anycast prefix may be reachable via more than one leaf.

   If a monotonic clock attribute is provided on the prefix, then the
   prefix with the `newest` clock value is strictly preferred.  An
   anycast prefix either does not carry a clock, or all its clock
   attributes MUST be the same under the rules of Section 4.3.3.1.

   Observe that it is important that in mobility events the leaf
   re-floods as quickly as possible the absence of the prefix that
   moved away.

   Observe further that without support for
   [I-D.ietf-6lo-rfc6775-update] movements on the fabric within
   intervals smaller than 100msec will be seen as anycast.

4.3.3.4.  Overlays and Signaling

   RIFT is agnostic to whichever overlay technology (MIP, LISP, VxLAN,
   NVO3) and associated signaling is deployed over it.  But it is
   expected that leaf nodes, and possibly superspine nodes, can
   perform the according encapsulation.

   In the context of mobility, overlays provide a classical solution
   to avoid injecting mobile prefixes in the fabric and improve the
   scalability of the solution.  It makes sense on a data center that
   already uses overlays to consider their applicability to the
   mobility solution; as an example, a mobility protocol such as LISP
   may inform the ingress leaf of the location of the egress leaf in
   real time.

   Another possibility is to consider mobility as an underlay service
   and support it in RIFT to an extent.  The load on the fabric
   obviously increases with the amount of mobility, since a move
   forces flooding and computation on all nodes in the scope of the
   move, so tunneling from the leaf to the superspines may be desired.
   Future versions of this document may describe support for such
   tunneling in RIFT.

4.3.4.  Key/Value Store

4.3.4.1.  Southbound

   The protocol supports a southbound distribution of key-value pairs
   that can be used to e.g. distribute configuration information
   during topology bring-up.  The KV S-TIEs can arrive from multiple
   nodes and hence need tie-breaking per key.  We use the following
   rules:

   1.  Only KV TIEs originated by a node to which the receiver has an
       adjacency are considered.

   2.  Within all valid KV S-TIEs containing the key, the value of the
       KV S-TIE whose according node S-TIE is present, has the highest
       level, and within the same level has the highest originator ID,
       is preferred.  If keys in the most preferred TIEs are
       overlapping, the behavior is undefined.

   Observe that if a node goes down, the node south of it loses
   adjacencies to it; with that the according KVs will be disregarded
   and, on tie-break changes, new KVs re-advertised to prevent stale
   information being used by nodes further south.  KV information in
   the southbound direction is not the result of an independent
   computation at every node but of a diffused computation.

4.3.4.2.  Northbound

   Certain use cases seem to necessitate distribution of essentially
   KV information that is generated in the leafs in the northbound
   direction.  Such information is flooded in KV N-TIEs.  Since the
   originator of a northbound KV is preserved during northbound
   flooding, overlapping keys could be used.  However, to omit further
   protocol complexity, only the value of the key in the TIE
   tie-broken in the same fashion as southbound KV TIEs is used.
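   The southbound tie-break can be illustrated with a minimal,
   non-normative Python sketch; the tuple layout is an illustrative
   assumption:

      def preferred_kv_value(candidates):
          # candidates: (level, originator_id, value) triples, one per
          # valid KV S-TIE from an adjacent node whose node S-TIE is
          # present; the highest level wins, then the highest
          # originator ID within the same level.
          return max(candidates, key=lambda c: (c[0], c[1]))[2]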
4.3.5.  Interactions with BFD

   RIFT MAY incorporate BFD [RFC5881] to react quickly to link
   failures.  In such a case, the following procedures are introduced:

      After RIFT three way hello adjacency convergence a BFD session
      MAY be formed automatically between the RIFT endpoints without
      further configuration using the exchanged discriminators.

      In case an established BFD session goes Down after it was Up,
      the RIFT adjacency should be re-initialized and started again
      from Init.

      In case of parallel links between nodes each link may run its
      own independent BFD session or they may share a session.

      In case RIFT changes link identifiers both the hello as well as
      the BFD sessions SHOULD be brought down and back up again.

      Multiple RIFT instances MAY choose to share a single BFD session
      (in such a case it is undefined what discriminators are used,
      albeit RIFT can advertise the same link ID for the same
      interface in multiple instances and with that "share" the
      discriminators).

4.3.6.  Fabric Bandwidth Balancing

   A well understood problem in fabrics is that in case of link losses
   it would be ideal to rebalance how much traffic is offered to
   switches in the next layer based on the ingress and egress
   bandwidth they have.  Current attempts rely mostly on specialized
   traffic engineering via a controller or on leafs being aware of the
   complete topology, with according cost and complexity.

   RIFT can support a very lightweight mechanism that can deal with
   the problem in an approximate way based on the fact that RIFT is
   loop-free.

4.3.6.1.  Northbound Direction

   Every RIFT node SHOULD compute the amount of northbound bandwidth
   available through neighbors at the higher level and modify the
   distance received on the default route from those neighbors.  The
   resulting different distances SHOULD be used to support weighted
   ECMP forwarding towards the higher level when using the default
   route.  We call such a distance Bandwidth Adjusted Distance or BAD.
   This is best illustrated by a simple example.

   .    100    x     100      100  MBits
   .     |     x      |        |
   .   +-+-----+-+  +-+------+-+
   .   |         |  |          |
   .   | Node111 |  |  Node112 |
   .   +-++---++-+  +-++----++-+
   .     ||   ||      ||    ||
   .     |x   |+--------------------+
   .     |x   +--------------------+|
   .     ||           ||           ||
   .   --------All Links 10 MBit--------
   .     ||           ||           ||
   .     ||   +-------+|           ||
   .     ||   |+-------+           ||
   .     ||   ||                   ||
   .   +-++---++-+  +--------------++-+
   .   |         |  |                 |
   .   | Leaf111 |  |     Leaf112     |
   .   +---------+  +-----------------+

                   Figure 10: Balancing Bandwidth

   All links from the Leafs in Figure 10 are assumed to be of 10
   MBit/s bandwidth while the uplinks one level further up are assumed
   to be of 100 MBit/s.  Further, in Figure 10 we assume that Leaf111
   lost one of the parallel links to Node 111 and with that wants to
   possibly push more traffic onto Node 112.  Leaf 112 has equal
   bandwidth to Node 111 and Node 112 but Node 111 lost one of its
   uplinks.

   The local modification of the received default route distance from
   the upper layer is achieved by running a relatively simple
   algorithm where the bandwidth is weighted exponentially, while the
   distance on the default route represents a multiplier for the
   bandwidth weight for easy operational adjustments.

   On a node L, use Node TIEs to compute three values for each
   non-overloaded northbound neighbor N:

   L_N_u:  as the sum of the bandwidth available to N

   N_u:  as the sum of the uplink bandwidth available on N

   T_N_u:  as L_N_u * OVERSUBSCRIPTION_CONSTANT + N_u

   For all T_N_u determine the according M_N_u as
   log_2(next_power_2(T_N_u)) and determine MAX_M_N_u as the maximum
   value of all M_N_u.

   For each advertised default route from a node N modify the
   advertised distance D to BAD = D * (1 + MAX_M_N_u - M_N_u) and use
   BAD instead of distance D to weight balance default forwarding
   towards N.

   For the example above a simple table of values will help the
   understanding.  We assume the default route distance is advertised
   with D=1 everywhere and OVERSUBSCRIPTION_CONSTANT = 1.

           +---------+---------+-------+-------+-----+
           | Node    | N       | T_N_u | M_N_u | BAD |
           +---------+---------+-------+-------+-----+
           | Leaf111 | Node111 | 110   | 7     | 2   |
           +---------+---------+-------+-------+-----+
           | Leaf111 | Node112 | 220   | 8     | 1   |
           +---------+---------+-------+-------+-----+
           | Leaf112 | Node111 | 120   | 7     | 2   |
           +---------+---------+-------+-------+-----+
           | Leaf112 | Node112 | 220   | 8     | 1   |
           +---------+---------+-------+-------+-----+

                     Table 4: BAD Computation

   All the multiplications and additions are saturating, i.e. when
   exceeding the range of the bandwidth type they are set to the
   highest possible value of the type.
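   A minimal, non-normative Python sketch of the BAD computation
   follows and reproduces the Leaf111 rows of Table 4; the data layout
   is an illustrative assumption:

      def bandwidth_adjusted_distances(neighbors, oversubscription=1):
          # neighbors: N -> (D, L_N_u, N_u) with the received default
          # route distance, the bandwidth towards N and N's own uplink
          # bandwidth.
          def m(t):
              return (t - 1).bit_length()   # log_2(next_power_2(t))
          m_n_u = {n: m(l * oversubscription + u)
                   for n, (_, l, u) in neighbors.items()}
          max_m = max(m_n_u.values())
          return {n: d * (1 + max_m - m_n_u[n])
                  for n, (d, _, _) in neighbors.items()}

      # Leaf111 rows of Table 4:
      # bandwidth_adjusted_distances({"Node111": (1, 10, 100),
      #                               "Node112": (1, 20, 200)})
      # -> {"Node111": 2, "Node112": 1}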
   Observe that since BAD is computed for default routes only, PGP or
   disaggregated routes are not affected; a node MAY however choose to
   compute and use BAD for other routes as well.

   Observe further that a change in available bandwidth will only
   affect at maximum two levels down in the fabric, i.e. the blast
   radius of bandwidth changes is contained.

4.3.6.2.  Southbound Direction

   Due to its loop-free properties a node could, during S-SPF, take
   into account the available bandwidth on the nodes in lower layers
   and modify the amount of traffic offered to the next level's
   "southbound" nodes based on what it sees as the total achievable
   maximum flow through those nodes.  It is worth observing that such
   computations will work better if standardized, but they do not
   necessarily have to be.

   As long as the packet keeps heading south it will take one of the
   available paths and arrive at the intended destination.

   Future versions of this document will fill in more details.

4.3.7.  Label Binding

   A node MAY advertise on its TIEs a locally significant, downstream
   assigned label for the according interface.  One use of such a
   label is a hop-by-hop encapsulation allowing forwarding planes
   served by a multiplicity of RIFT instances to be easily
   distinguished.

4.3.8.  Segment Routing Support with RIFT

   Recently, an alternative architecture reusing labels as segment
   identifiers [I-D.ietf-spring-segment-routing] has gained traction
   and may present use cases in the DC fabric that would justify its
   deployment.  Such use cases will either precondition an assignment
   of a label per node (or other entities where the mechanisms are
   equivalent) or a global assignment and a knowledge of the topology
   everywhere to compute segment stacks of interest.  We deal with the
   two issues separately.

4.3.8.1.  Global Segment Identifiers Assignment

   Global segment identifiers are normally assumed to be provided by
   some kind of a centralized "controller" instance and distributed to
   other entities.  This can be performed in RIFT by attaching a
   controller to the superspine nodes at the top of the fabric where
   the whole topology is always visible, assigning such identifiers
   and then distributing those via the KV mechanism towards all nodes
   so they can perform things like probing the fabric for failures
   using a stack of segments.

4.3.8.2.  Distribution of Topology Information

   Some segment routing use cases seem to precondition full knowledge
   of the fabric topology in all nodes, which can be provided, albeit
   at the loss of one of the highly desirable properties of RIFT,
   namely its minimal blast radius.  Basically, RIFT can function as a
   flat IGP by switching off its flooding scopes.  All nodes will end
   up with the full topology view and, although N-SPF and S-SPF are
   still performed based on RIFT rules, any computation with segment
   identifiers that needs the full topology can use it.

   Besides the blast radius problem, excessive flooding may present a
   significant load on implementations.

4.3.9.  Leaf to Leaf Procedures

   RIFT can optionally allow special leaf East-West adjacencies under
   an additional set of rules.
   The leaf supporting those procedures MUST:

      advertise the LEAF_2_LEAF flag in its node capabilities AND

      set the overload bit on all of its node TIEs AND

      flood only its own north and south TIEs over E-W leaf
      adjacencies AND

      always use the E-W leaf adjacency in both north as well as south
      computation AND

      install a discard route for any advertised aggregate in its TIEs
      AND

      never form southbound adjacencies.

   This will allow the E-W leaf nodes to exchange traffic strictly for
   the prefixes advertised in each other's north prefix TIEs (since
   the southbound computation will find the reverse direction in the
   other node's TIE and install its north prefixes).

4.3.10.  Other End-to-End Services

   Losing full, flat topology information at every node will have an
   impact on some of the end-to-end network services.  This is the
   price paid for minimal disturbance in case of failures and reduced
   flooding and memory requirements on nodes further south in the
   level hierarchy.

4.3.11.  Address Family and Multi Topology Considerations

   Multi-Topology (MT) [RFC5120] and Multi-Instance (MI) [RFC6822] are
   used today in link-state routing protocols to support several
   domains on the same physical topology.  RIFT supports this
   capability by carrying transport ports in the LIE protocol
   exchanges.  Multiplexing of LIEs can be achieved by either choosing
   varying multicast addresses or varying ports on the same address.

   BFD interactions in Section 4.3.5 are implementation dependent when
   multiple RIFT instances run on the same link.

4.3.12.  Reachability of Internal Nodes in the Fabric

   RIFT does not precondition that its nodes have reachable addresses,
   albeit for operational purposes this is clearly desirable.  Under
   normal operating conditions this can be easily achieved by e.g.
   injecting the node's loopback address into North Prefix TIEs.

   Things get more interesting in case a node loses all its northbound
   adjacencies but is not at the top of the fabric.  In such a case a
   node that detects that some other members at its level are
   advertising northbound adjacencies MAY inject its loopback address
   into a southbound PGP TIE and become reachable "from the south"
   that way.  Further, a solution may be implemented where, based on
   e.g. a "well known" community, such a southbound PGP is reflected
   at level 0 and advertised as a northbound PGP again to allow for
   "reachability from the north" at the cost of additional flooding.

4.3.13.  One-Hop Healing of Levels with East-West Links

   Based on the rules defined in Section 4.2.5, Section 4.2.3.7 and
   given the presence of E-W links, RIFT can provide one-hop
   protection for nodes that lost all their northbound links, as well
   as in other complex link set failure scenarios.  Section 5.4
   explains the resulting behavior based on one such example.

5.  Examples

5.1.  Normal Operation

   This section describes RIFT deployment in the example topology
   without any node or link failures.  We disregard flooding reduction
   for simplicity's sake.

   As a first step, the following bi-directional adjacencies will be
   created (and any other links that do not fulfill the LIE rules in
   Section 4.2.2 disregarded):

   1.  Spine 21 (PoD 0) to Node 111, Node 112, Node 121, and Node 122

   2.  Spine 22 (PoD 0) to Node 111, Node 112, Node 121, and Node 122

   3.  Node 111 to Leaf 111, Leaf 112
   4.  Node 112 to Leaf 111, Leaf 112

   5.  Node 121 to Leaf 121, Leaf 122

   6.  Node 122 to Leaf 121, Leaf 122

   Consequently, N-TIEs would be originated by Node 111 and Node 112
   and each set would be sent to both Spine 21 and Spine 22.  N-TIEs
   also would be originated by Leaf 111 (w/ Prefix 111) and Leaf 112
   (w/ Prefix 112 and the multi-homed prefix) and each set would be
   sent to Node 111 and Node 112.  Node 111 and Node 112 would then
   flood these N-TIEs to Spine 21 and Spine 22.

   Similarly, N-TIEs would be originated by Node 121 and Node 122 and
   each set would be sent to both Spine 21 and Spine 22.  N-TIEs also
   would be originated by Leaf 121 (w/ Prefix 121 and the multi-homed
   prefix) and Leaf 122 (w/ Prefix 122) and each set would be sent to
   Node 121 and Node 122.  Node 121 and Node 122 would then flood
   these N-TIEs to Spine 21 and Spine 22.

   At this point both Spine 21 and Spine 22, as well as any controller
   to which they are connected, would have the complete network
   topology.  At the same time, Node 111/112/121/122 hold only the
   N-TIEs of level 0 of their respective PoD.  Leafs hold only their
   own N-TIEs.

   S-TIEs with adjacencies and a default IP prefix would then be
   originated by Spine 21 and Spine 22 and each would be flooded to
   Node 111, Node 112, Node 121, and Node 122.  Node 111, Node 112,
   Node 121, and Node 122 would each send the S-TIE from Spine 21 to
   Spine 22 and the S-TIE from Spine 22 to Spine 21.  (S-TIEs are
   reflected up to the level from which they are received but they are
   NOT propagated southbound.)

   An S-TIE with a default IP prefix would be originated by Node 111
   and Node 112 and each would be sent to Leaf 111 and Leaf 112.  Leaf
   111 and Leaf 112 would each send the S-TIE from Node 111 to Node
   112 and the S-TIE from Node 112 to Node 111.

   Similarly, an S-TIE with a default IP prefix would be originated by
   Node 121 and Node 122 and each would be sent to Leaf 121 and Leaf
   122.  Leaf 121 and Leaf 122 would each send the S-TIE from Node 121
   to Node 122 and the S-TIE from Node 122 to Node 121.  At this point
   IP connectivity with maximum possible ECMP has been established
   between the leafs while constraining the amount of information held
   by each node to the minimum necessary for normal operation and
   dealing with failures.

5.2.  Leaf Link Failure

   .      |  |         |  |
   .    +-+--+--+   +--+--+-+
   .    |       |   |       |
   .    |Node111|   |Node112|
   .    +-+---+-+   ++----+-+
   .      |   |      |    |
   .      |   +---------------+  X
   .      |          |        |  X  Failure
   .      |   +------+        |  X
   .      |   |               |
   .    +-+---+-+   +---------+-+
   .    |       |   |           |
   .    |Leaf111|   |  Leaf112  |
   .    +-------+   +-----------+
   .        +            +
   .     Prefix111    Prefix112

               Figure 11: Single Leaf link failure

   In case of a failing leaf link between node 112 and leaf 112 the
   link-state information will cause re-computation of the necessary
   SPF and the higher levels will stop forwarding towards prefix 112
   through node 112.  Only nodes 111 and 112, as well as both spines,
   will see control traffic.  Leaf 111 will receive a new S-TIE from
   node 112 and reflect it back to node 111.  Node 111 will
   de-aggregate prefix 111 and prefix 112 but we will not describe it
   further here since de-aggregation is emphasized in the next
   example.
   It is worth observing in this example, however, that if leaf 111
   kept on forwarding traffic towards prefix 112 using the advertised
   southbound default of node 112, the traffic would end up on spine
   21 and spine 22 and cross back into PoD 1 using node 111.  This is
   arguably not as bad as the black-holing present in the next example
   but clearly undesirable.  Fortunately, de-aggregation prevents this
   type of behavior except for a transitory period of time.

5.3.  Partitioned Fabric

   .          +--------+          +--------+      S-TIE of Spine21
   .          |        |          |        |       received by
   .          |Spine 21|          |Spine 22|       reflection of
   .          ++-+--+-++          ++-+--+-++       Nodes 112 and 111
   .           | |  | |            | |  | |
   .           | |  | |            | |  | 0/0
   .           | |  | |            | |  | |
   .           | |  | |            | |  | |
   . +-----------+ |  +--- XXXXXX  + | | |  +---------------+
   . |             |               | | | |                  |
   . |  +------------------------------+ |                  |
   . 0/0|          |               |     |                  |
   . |  |  0/0 0/0 +- XXXXXXXXXXXXXXXXXXXXXXXXX -+          |
   . |  |  1.1/16  |               |             |          |
   . |  |   | +-+  +-0/0-----------+             |          |
   . |  |   | |      1.1./16                     |          |
   .+-+----++ +-+-----+          ++-----0/0   ++----0/0
   .|       | |       |          | 1.1/16 |   | 1.1/16 |
   .|Node111| |Node112|          |Node121 |   |Node122 |
   .+-+---+-+ ++----+-+          +-+---+-+    ++---+--+
   .  |   |    |    |              |   |       |   |
   .  |   +---------------+        |   +----------------+
   .  |        |    |     |        |           |   |    |
   .  |   +----+    |     |        |   +-------+   |    |
   .  |   |         |     |        |   |           |    |
   .+-+---+-+  +----+-----+-+    +-+---+-+   +-----+----+-+
   .|       |  |            |    |       |   |            |
   .|Leaf111|  |  Leaf112   |    |Leaf121|   |  Leaf122   |
   .+-+-----+  ++-----------+    +-----+-+   +-+----------+
   .  +         +                      +       +
   .Prefix111  Prefix112          Prefix121   Prefix122
   .                                1.1/16

                    Figure 12: Fabric partition

   Figure 12 shows the arguably most catastrophic, but also the most
   interesting, case.  Spine 21 is completely severed from access to
   Prefix 121 (we use 1.1/16 as example in the figure) by a double
   link failure.  However unlikely, if left unresolved, forwarding
   from leaf 111 and leaf 112 to prefix 121 would suffer 50%
   black-holing based on pure default route advertisements by spine 21
   and spine 22.

   The mechanism used to resolve this scenario hinges on the
   distribution of the southbound representation by spine 21 that is
   reflected by node 111 and node 112 to spine 22.  Spine 22, having
   computed reachability to all prefixes in the network, advertises
   with the default route the ones that are reachable only via lower
   level neighbors that spine 21 does not show an adjacency to.  That
   results in node 111 and node 112 obtaining a longest-prefix match
   to prefix 121 which leads through spine 22 and prevents
   black-holing through spine 21 still advertising the 0/0 aggregate
   only.

   The prefix 121 advertised by spine 22 does not have to be
   propagated further towards the leafs since they do not benefit from
   this information.  Hence the amount of flooding is restricted to
   spine 21 reissuing its S-TIEs and the reflection of those by node
   111 and node 112.  The resulting SPF in spine 22 issues new prefix
   S-TIEs containing 1.1/16.  None of the leafs become aware of the
   changes and the failure is constrained strictly to the level that
   became partitioned.
2551 To finish with an example of the resulting sets computed using 2552 notation introduced in Section 4.2.8, spine 22 constructs the 2553 following sets: 2555 |R = Prefix 111, Prefix 112, Prefix 121, Prefix 122 2557 |H (for r=Prefix 111) = Node 111, Node 112 2559 |H (for r=Prefix 112) = Node 111, Node 112 2561 |H (for r=Prefix 121) = Node 121, Node 122 2563 |H (for r=Prefix 122) = Node 121, Node 122 2565 |A (for Spine 21) = Node 111, Node 112 2567 With that and |H (for r=prefix 121) and |H (for r=prefix 122) being 2568 disjoint from |A (for spine 21), spine 22 will originate an S-TIE 2569 with prefix 121 and prefix 122, which is flooded to nodes 111, 112, 2570 121 and 122. 2572 5.4. Northbound Partitioned Router and Optional East-West Links 2573 . + + + 2574 . X N1 | N2 | N3 2575 . X | | 2576 .+--+----+ +--+----+ +--+-----+ 2577 .| |0/0> <0/0| |0/0> <0/0| | 2578 .| A01 +----------+ A02 +----------+ A03 | Level 1 2579 .++-+-+--+ ++--+--++ +---+-+-++ 2580 . | | | | | | | | | 2581 . | | +----------------------------------+ | | | 2582 . | | | | | | | | | 2583 . | +-------------+ | | | +--------------+ | 2584 . | | | | | | | | | 2585 . | +----------------+ | +-----------------+ | 2586 . | | | | | | | | | 2587 . | | +------------------------------------+ | | 2588 . | | | | | | | | | 2589 .++-+-+--+ | +---+---+ | +-+---+-++ 2590 .| | +-+ +-+ | | 2591 .| L01 | | L02 | | L03 | Level 0 2592 .+-------+ +-------+ +--------+ 2594 Figure 13: North Partitioned Router 2596 Figure 13 shows a part of a fabric where level 1 is horizontally 2597 connected and A01 lost its only northbound adjacency. Based on N-SPF 2598 rules in Section 4.2.5.1 A01 will compute northbound reachability by 2599 using the link A01 to A02 (whereas A02 will NOT use this link during 2600 N-SPF). Hence A01 will still advertise the default towards level 0 2601 and route unidirectionally using the horizontal link. Moreover, 2602 based on Section 4.3.12 it may advertise its loopback address as 2603 south PGP to remain reachable "from the south" for operational 2604 purposes. This is necessary since A02 will NOT route towards A01 2605 using the E-W link (doing otherwise may form routing loops). 2607 As a further consideration, the moment A02 loses link N2 the situation 2608 evolves again. A01 will have no more northbound reachability while 2609 still seeing A03 advertising northbound adjacencies in its south node 2610 TIE. With that it will stop advertising a default route due to 2611 Section 4.2.3.7. Moreover, A02 may now inject its loopback address 2612 as south PGP. 2614 6. Implementation and Operation: Further Details 2615 6.1. Considerations for Leaf-Only Implementation 2617 Ideally RIFT can be stretched out to the lowest level in the IP 2618 fabric to integrate ToRs or even servers. Since those entities would 2619 run as leafs only, it is worth observing that a leaf-only version is 2620 significantly simpler to implement and requires far fewer resources: 2622 1. Under normal conditions, the leaf needs to support a multipath 2623 default route only. In the worst partitioning case it has to be 2624 capable of accommodating all the leaf routes in its own PoD to 2625 prevent black-holing. 2627 2. Leaf nodes hold only their own N-TIEs and S-TIEs of Level 1 nodes 2628 they are connected to, so overall few in number. 2630 3. A leaf node does not have to support flooding reduction or de- 2631 aggregation. 2633 4. Unless optional leaf-2-leaf procedures are desired, default route 2634 origination and S-TIE origination are unnecessary. 2636 6.2.
Adaptations to Other Proposed Data Center Topologies 2638 . +-----+ +-----+ 2639 . | | | | 2640 .+-+ S0 | | S1 | 2641 .| ++---++ ++---++ 2642 .| | | | | 2643 .| | +------------+ | 2644 .| | | +------------+ | 2645 .| | | | | 2646 .| ++-+--+ +--+-++ 2647 .| | | | | 2648 .| | A0 | | A1 | 2649 .| +-+--++ ++---++ 2650 .| | | | | 2651 .| | +------------+ | 2652 .| | +-----------+ | | 2653 .| | | | | 2654 .| +-+-+-+ +--+-++ 2655 .+-+ | | | 2656 . | L0 | | L1 | 2657 . +-----+ +-----+ 2659 Figure 14: Level Shortcut 2661 Strictly speaking, RIFT is not limited to Clos variations. The 2662 protocol presupposes only a sense of 'compass rose direction', 2663 achieved by configuration (or derivation) of levels, and other 2664 topologies are possible within this framework. So, conceptually, one 2665 could include leaf-to-leaf links and even shortcuts between levels, but 2666 certain requirements in Section 3 would not be met anymore. As an 2667 example, the level shortcut illustrated in Figure 14 will lead 2668 either to suboptimal routing when L0 sends traffic to L1 (since using 2669 S0's default route will lead to the traffic being sent back to A0 or 2670 A1) or the leafs need each other's routes installed to understand 2671 that only A0 and A1 should be used to talk to each other. 2673 Whether such modifications of topology constraints make sense depends 2674 on many technology variables, and an exhaustive treatment 2675 of the topic is outside the scope of this document. 2677 6.3. Originating Non-Default Route Southbound 2679 Obviously, an implementation may choose to originate southbound a 2680 shorter prefix P' instead of a strict default route (as described in 2681 Section 4.2.3.7), but in such a scenario all addresses carried within 2682 the RIFT domain must be contained within P'.
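A minimal, non-normative sketch of this containment requirement (Python; the prefix values used are illustrative only):

   from ipaddress import ip_network

   def valid_southbound_origin(p_prime, domain_prefixes):
       # Every prefix carried within the RIFT domain must fall within P'.
       shorter = ip_network(p_prime)
       return all(ip_network(p).subnet_of(shorter) for p in domain_prefixes)

   assert valid_southbound_origin("10.0.0.0/8", ["10.1.0.0/16", "10.2.3.0/24"])
   assert not valid_southbound_origin("10.0.0.0/8", ["192.0.2.0/24"])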
2684 7. Security Considerations 2686 The protocol has provisions for nonces and can include authentication 2687 mechanisms in the future comparable to [RFC5709] and [RFC7987]. 2689 One can additionally consider attack vectors where a router may 2690 reboot many times while changing its system ID and pollute the 2691 network with many stale TIEs, or where TIEs are sent with very long 2692 lifetimes and not cleaned up when the routes vanish. Those attack 2693 vectors are not unique to RIFT. Given the large memory footprints 2694 available today those attacks should be relatively benign. Otherwise 2695 a node can implement a strategy of e.g. discarding contents of all 2696 TIEs of nodes that were not present in the SPF tree over a certain 2697 period of time. Since the protocol, like all modern link-state 2698 protocols, is self-stabilizing and will advertise the presence of 2699 such TIEs to its neighbors, they can be re-requested if a 2700 computation finds that it sees an adjacency formed towards the system 2701 ID of the discarded TIEs. 2703 Section 4.2.9 presents many attack vectors in untrusted environments, 2704 starting with nodes that oscillate their level offers, up to the 2705 possibility of a node offering a three-way adjacency with the highest 2706 possible level value with a very long holdtime, trying to put itself 2707 "on top of the lattice" and with that gaining access to the whole 2708 southbound topology. Session authentication mechanisms are necessary 2709 in environments where this is possible. 2711 8. Information Elements Schema 2713 This section introduces the schema for information elements. 2715 On schema changes that 2717 1. change field numbers or 2719 2. add new required fields or 2721 3. remove fields or 2723 4. change lists into sets, unions into structures or 2725 5. change multiplicity of fields or 2727 6. change the name of any field or 2729 7. change datatypes of any field or 2731 8. add or remove a default value of any field or 2733 9. change the default value of any field, 2735 the major version of the schema MUST increase. All other changes MUST 2736 increase the minor version within the same major version. 2738 A Thrift serializer/deserializer MUST NOT discard optional, unknown 2739 fields but MUST preserve and serialize them again when re-flooding, whereas 2740 missing optional fields MAY be replaced with the corresponding default values 2741 if present. 2743 All signed integers, as forced by Thrift support, must be cast for 2744 internal purposes to equivalent unsigned values without discarding 2745 the signedness bit. An implementation SHOULD try to avoid using the 2746 signedness bit when generating values. 2748 The schema is normative. 2750 8.1. common.thrift 2752 /** 2753 Thrift file with common definitions for RIFT 2754 */ 2755 /** @note MUST be interpreted in implementation as unsigned 64 bits. 2756 * The implementation SHOULD NOT use the MSB. 2757 */ 2758 typedef i64 SystemIDType 2759 typedef i32 IPv4Address 2760 /** this has to be of length long enough to accommodate prefix */ 2761 typedef binary IPv6Address 2762 /** @note MUST be interpreted in implementation as unsigned 16 bits */ 2763 typedef i16 UDPPortType 2764 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2765 typedef i32 TIENrType 2766 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2767 typedef i32 MTUSizeType 2768 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2769 typedef i32 SeqNrType 2770 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2771 typedef i32 LifeTimeInSecType 2772 /** @note MUST be interpreted in implementation as unsigned 16 bits */ 2773 typedef i16 LevelType 2774 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2775 typedef i32 PodType 2776 /** @note MUST be interpreted in implementation as unsigned 16 bits */ 2777 typedef i16 VersionType 2778 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2779 typedef i32 MetricType 2780 /** @note MUST be interpreted in implementation as unstructured 64 bits */ 2781 typedef i64 RouteTagType 2782 /** @note MUST be interpreted in implementation as unstructured 32 bits label value */ 2783 typedef i32 LabelType 2784 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 2785 typedef i32 BandwithInMegaBitsType 2786 typedef string KeyIDType 2787 /** node local, unique identification for a link (interface/tunnel 2788 * etc. Basically anything RIFT runs on). This is kept 2789 * at 32 bits so it aligns with BFD [RFC5880] discriminator size.
2790 */ 2791 typedef i32 LinkIDType 2792 typedef string KeyNameType 2793 typedef i8 PrefixLenType 2794 /** timestamp in seconds since the epoch */ 2795 typedef i64 TimestampInSecsType 2796 /** security nonce */ 2797 typedef i64 NonceType 2798 /** adjacency holdtime */ 2799 typedef i16 HoldTimeInSecType 2800 /** Transaction ID type for prefix mobility as specified by RFC6550, value 2801 MUST be interpreted in implementation as unsigned */ 2802 typedef i8 PrefixTransactionIDType 2803 /** timestamp per IEEE 802.1AS, values MUST be interpreted in implementation as unsigned */ 2804 struct IEEE802_1ASTimeStampType { 2805 1: required i64 AS_sec; 2806 2: optional i32 AS_nsec; 2807 } 2809 /** Flags indicating nodes behavior in case of ZTP and support 2810 for special optimization procedures. It will force level to `leaf_level` 2811 */ 2812 enum LeafIndications { 2813 leaf_only =0, 2814 leaf_only_and_leaf_2_leaf_procedures =1, 2815 } 2817 /** default bandwidth on a link */ 2818 const BandwithInMegaBitsType default_bandwidth = 100 2819 /** fixed leaf level when ZTP is not used */ 2820 const LevelType leaf_level = 0 2821 const LevelType default_level = leaf_level 2822 /** This MUST be used when node is configured as superspine in ZTP. 2823 This is kept reasonably low to alow for fast ZTP convergence on 2824 failures. */ 2825 const LevelType default_superspine_level = 24 2826 const PodType default_pod = 0 2827 const LinkIDType undefined_linkid = 0 2828 /** default distance used */ 2829 const MetricType default_distance = 1 2830 /** any distance larger than this will be considered infinity */ 2831 const MetricType infinite_distance = 0x7FFFFFFF 2832 /** any element with 0 distance will be ignored, 2833 * missing metrics will be replaced with default_distance 2834 */ 2835 const MetricType invalid_distance = 0 2836 const bool overload_default = false 2837 const bool flood_reduction_default = true 2838 const HoldTimeInSecType default_holdtime = 3 2839 /** by default LIE levels are ZTP offers */ 2840 const bool default_not_a_ztp_offer = false 2841 /** by default e'one is repeating flooding */ 2842 const bool default_you_are_not_flood_repeater = false 2843 /** 0 is illegal for SystemID */ 2844 const SystemIDType IllegalSystemID = 0 2845 /** empty set of nodes */ 2846 const set empty_set_of_nodeids = {} 2848 /** default UDP port to run LIEs on */ 2849 const UDPPortType default_lie_udp_port = 6949 2850 const UDPPortType default_tie_udp_flood_port = 6950 2851 /** default MTU size to use */ 2852 const MTUSizeType default_mtu_size = 1400 2853 /** default mcast is v4 224.0.1.150, we make it i64 to 2854 * help languages struggling with highest bit */ 2855 const i64 default_lie_v4_mcast_group = 3758096790 2857 /** indicates whether the direction is northbound/east-west 2858 * or southbound */ 2859 enum TieDirectionType { 2860 Illegal = 0, 2861 South = 1, 2862 North = 2, 2863 DirectionMaxValue = 3, 2864 } 2866 enum AddressFamilyType { 2867 Illegal = 0, 2868 AddressFamilyMinValue = 1, 2869 IPv4 = 2, 2870 IPv6 = 3, 2871 AddressFamilyMaxValue = 4, 2872 } 2874 struct IPv4PrefixType { 2875 1: required IPv4Address address; 2876 2: required PrefixLenType prefixlen; 2877 } 2879 struct IPv6PrefixType { 2880 1: required IPv6Address address; 2881 2: required PrefixLenType prefixlen; 2882 } 2884 union IPAddressType { 2885 1: optional IPv4Address ipv4address; 2886 2: optional IPv6Address ipv6address; 2887 } 2889 union IPPrefixType { 2890 1: optional IPv4PrefixType ipv4prefix; 2891 2: optional IPv6PrefixType ipv6prefix; 2892 } 
2894 /** @note: Sequence of a prefix. Comparison function: 2895 if diff(timestamps) < 200 milliseconds better transactionid wins 2896 else better time wins 2897 */ 2898 struct PrefixSequenceType { 2899 1: required IEEE802_1ASTimeStampType timestamp; 2900 2: optional PrefixTransactionIDType transactionid; 2901 } 2903 enum TIETypeType { 2904 Illegal = 0, 2905 TIETypeMinValue = 1, 2906 /** first legal value */ 2907 NodeTIEType = 2, 2908 PrefixTIEType = 3, 2909 TransitivePrefixTIEType = 4, 2910 PGPrefixTIEType = 5, 2911 KeyValueTIEType = 6, 2912 TIETypeMaxValue = 7, 2913 } 2915 /** @note: route types which MUST be ordered on their preference 2916 * PGP prefixes are most preferred attracting 2917 * traffic north (towards spine) and then south 2918 * normal prefixes are attracting traffic south (towards leafs), 2919 * i.e. prefix in NORTH PREFIX TIE is preferred over SOUTH PREFIX TIE 2920 */ 2921 enum RouteType { 2922 Illegal = 0, 2923 RouteTypeMinValue = 1, 2924 /** First legal value. */ 2925 /** Discard routes are most preferred */ 2926 Discard = 2, 2928 /** Local prefixes are directly attached prefixes on the 2929 * system such as e.g. interface routes. 2930 */ 2931 LocalPrefix = 3, 2932 /** advertised in S-TIEs */ 2933 SouthPGPPrefix = 4, 2934 /** advertised in N-TIEs */ 2935 NorthPGPPrefix = 5, 2936 /** advertised in N-TIEs */ 2937 NorthPrefix = 6, 2938 /** advertised in S-TIEs */ 2939 SouthPrefix = 7, 2940 /** transitive southbound are least preferred */ 2941 TransitiveSouthPrefix = 8, 2942 RouteTypeMaxValue = 9 2943 } 2944 8.2. encoding.thrift 2946 /** 2947 Thrift file for packet encodings for RIFT 2948 */ 2950 include "common.thrift" 2952 /** represents protocol encoding schema major version */ 2953 const i32 protocol_major_version = 10 2954 /** represents protocol encoding schema minor version */ 2955 const i32 protocol_minor_version = 0 2957 /** common RIFT packet header */ 2958 struct PacketHeader { 2959 1: required common.VersionType major_version = protocol_major_version; 2960 2: required common.VersionType minor_version = protocol_minor_version; 2961 /** this is the node sending the packet, in case of LIE/TIRE/TIDE 2962 also the originator of it */ 2963 3: required common.SystemIDType sender; 2964 /** level of the node sending the packet, required on everything except 2965 * LIEs. Lack of presence on LIEs indicates UNDEFINED_LEVEL and is used 2966 in ZTP procedures.
2967 */ 2968 4: optional common.LevelType level; 2969 } 2971 /** Community serves as community for PGP purposes */ 2972 struct Community { 2973 1: required i32 top; 2974 2: required i32 bottom; 2975 } 2977 /** Neighbor structure */ 2978 struct Neighbor { 2979 1: required common.SystemIDType originator; 2980 2: required common.LinkIDType remote_id; 2981 } 2983 /** Capabilities the node supports */ 2984 struct NodeCapabilities { 2985 /** can this node participate in flood reduction, 2986 only relevant at level > 0 */ 2987 1: optional bool flood_reduction = 2988 common.flood_reduction_default; 2989 /** does this node restrict itself to be leaf only (in ZTP) and 2990 does it support leaf-2-leaf procedures */ 2991 2: optional common.LeafIndications leaf_indications; 2992 } 2994 /** RIFT LIE packet 2996 @note this node's level is already included on the packet header */ 2997 struct LIEPacket { 2998 /** optional node or adjacency name */ 2999 1: optional string name; 3000 /** local link ID */ 3001 2: required common.LinkIDType local_id; 3002 /** UDP port to which we can receive flooded TIEs */ 3003 3: required common.UDPPortType flood_port = 3004 common.default_tie_udp_flood_port; 3005 /** layer 3 MTU */ 3006 4: optional common.MTUSizeType link_mtu_size = 3007 common.default_mtu_size; 3008 /** this will reflect the neighbor once received to provid 3009 3-way connectivity */ 3010 5: optional Neighbor neighbor; 3011 6: optional common.PodType pod = common.default_pod; 3012 /** optional nonce used for security computations */ 3013 7: optional common.NonceType nonce; 3014 /** optional node capabilities shown in the LIE. The capabilies 3015 MUST match the capabilities shown in the Node TIEs, otherwise 3016 the behavior is unspecified. A node detecting the mismatch 3017 SHOULD generate according error. 3018 */ 3019 8: optional NodeCapabilities capabilities; 3020 /** required holdtime of the adjacency, i.e. how much time 3021 MUST expire without LIE for the adjacency to drop 3022 */ 3023 9: required common.HoldTimeInSecType holdtime = 3024 common.default_holdtime; 3025 /** indicates that the level on the LIE MUST NOT be used 3026 to derive a ZTP level by the receiving node. */ 3027 10: optional bool not_a_ztp_offer = 3028 common.default_not_a_ztp_offer; 3029 /** indicates to northbound neighbor that it should not 3030 be reflooding this node's N-TIEs to flood reduce and 3031 balance northbound flooding. To be ignored if received from a 3032 northbound adjacency. */ 3033 11: optional bool you_are_not_flood_repeater= 3034 common.default_you_are_not_flood_repeater; 3035 /** optional downstream assigned locally significant label 3036 value for the adjacency. 
*/ 3037 12: optional common.LabelType label; 3039 } 3041 /** LinkID pair describes one of parallel links between two nodes */ 3042 struct LinkIDPair { 3043 /** node-wide unique value for the local link */ 3044 1: required common.LinkIDType local_id; 3045 /** received remote link ID for this link */ 3046 2: required common.LinkIDType remote_id; 3047 /** more properties of the link can go in here */ 3048 } 3050 /** ID of a TIE 3052 @note: TIEID space is a total order achieved by comparing the elements 3053 in sequence defined and comparing each value as an 3054 unsigned integer of according length 3055 */ 3056 struct TIEID { 3057 /** indicates direction of the TIE */ 3058 1: required common.TieDirectionType direction; 3059 /** indicates originator of the TIE */ 3060 2: required common.SystemIDType originator; 3061 3: required common.TIETypeType tietype; 3062 4: required common.TIENrType tie_nr; 3063 } 3065 /** Header of a TIE */ 3066 struct TIEHeader { 3067 2: required TIEID tieid; 3068 3: required common.SeqNrType seq_nr; 3069 /** lifetime expires down to 0 just like in ISIS */ 3070 4: required common.LifeTimeInSecType lifetime; 3071 } 3073 /** A sorted TIDE packet; if unsorted, behavior is undefined */ 3074 struct TIDEPacket { 3075 /** all 00s marks start */ 3076 1: required TIEID start_range; 3077 /** all FFs mark end */ 3078 2: required TIEID end_range; 3079 /** _sorted_ list of headers */ 3080 3: required list<TIEHeader> headers; 3081 } 3083 /** A TIRE packet */ 3084 struct TIREPacket { 3085 1: required set<TIEHeader> headers; 3086 } 3087 /** Neighbor of a node */ 3088 struct NodeNeighborsTIEElement { 3089 /** Level of neighbor */ 3090 2: required common.LevelType level; 3091 /** Cost to neighbor. 3093 @note: All parallel links to same node 3094 incur same cost, in case the neighbor has multiple 3095 parallel links at different cost, the largest distance 3096 (highest numerical value) MUST be advertised 3097 @note: any neighbor with cost <= 0 MUST be ignored in computations */ 3098 3: optional common.MetricType cost = common.default_distance; 3099 /** can carry description of multiple parallel links in a TIE */ 3100 4: optional set<LinkIDPair> link_ids; 3102 /** total bandwidth to neighbor, this will normally be the sum of the 3103 * bandwidths of all the parallel links. 3104 **/ 3105 5: optional common.BandwithInMegaBitsType bandwidth = 3106 common.default_bandwidth; 3107 } 3109 /** Flags the node sets */ 3110 struct NodeFlags { 3111 /** node is in overload, do not transit traffic through it */ 3112 1: optional bool overload = common.overload_default; 3113 } 3115 /** Description of a node. 3117 It may occur multiple times in different TIEs but if either 3118 * capabilities values do not match or 3119 * flags values do not match or 3120 * neighbors repeat with different values or 3121 * visible in same level/having partition upper do not match 3122 the behavior is undefined and a warning SHOULD be generated. 3123 Neighbors can be distributed across multiple TIEs however if 3124 the sets are disjoint. 3126 @note: observe that absence of fields implies defined defaults 3127 */ 3128 struct NodeTIEElement { 3129 1: required common.LevelType level; 3130 /** if neighbor systemID repeats in other node TIEs of same node 3131 the behavior is undefined. Equivalent to |A_(n,s)(N) in spec.
*/ 3132 2: required map<common.SystemIDType, NodeNeighborsTIEElement> neighbors; 3134 3: optional NodeCapabilities capabilities; 3135 4: optional NodeFlags flags; 3136 /** optional node name for easier operations */ 3137 5: optional string name; 3139 /** Nodes seen at the same level through reflection via nodes 3140 having a backlink to both nodes. They are equivalent to |V(N) in 3141 future specifications. Ignored in Node S-TIEs if present. 3142 */ 3143 6: optional set<common.SystemIDType> visible_in_same_level 3144 = common.empty_set_of_nodeids; 3145 /** Non-overloaded nodes in |V seen as attached to another north 3146 * level partition due to the fact that some nodes in its |V have 3147 * adjacencies to higher level nodes that this node doesn't see. 3148 * This may be used in the computation at higher levels to prevent 3149 * blackholing. Ignored in Node S-TIEs if present. 3150 * Equivalent to |PUL(N) in spec. */ 3151 7: optional set<common.SystemIDType> same_level_unknown_north_partitions 3152 = common.empty_set_of_nodeids; 3153 } 3155 struct PrefixAttributes { 3156 2: required common.MetricType metric = common.default_distance; 3157 /** generic unordered set of route tags, can be redistributed to other protocols or used 3158 within the context of real-time analytics */ 3159 3: optional set<common.RouteTagType> tags; 3160 /** optional monotonic clock for mobile addresses */ 3161 4: optional common.PrefixSequenceType monotonic_clock; 3162 } 3164 /** multiple prefixes */ 3165 struct PrefixTIEElement { 3166 /** prefixes with the associated attributes. 3167 if the same prefix repeats in multiple TIEs of same node 3168 behavior is unspecified */ 3169 1: required map<common.IPPrefixType, PrefixAttributes> prefixes; 3170 } 3172 /** keys with their values */ 3173 struct KeyValueTIEElement { 3174 /** if the same key repeats in multiple TIEs of same node 3175 or with different values, behavior is unspecified */ 3176 1: required map<common.KeyIDType, string> keyvalues; 3177 } 3179 /** single element in a TIE. enum common.TIETypeType 3180 in TIEID indicates which elements MUST be present 3181 in the TIEElement. In case of mismatch the unexpected 3182 elements MUST be ignored. 3184 */ 3185 union TIEElement { 3186 /** in case of enum common.TIETypeType.NodeTIEType */ 3187 1: optional NodeTIEElement node; 3188 /** in case of enum common.TIETypeType.PrefixTIEType */ 3189 2: optional PrefixTIEElement prefixes; 3190 /** transitive prefixes (always southbound) which SHOULD be propagated 3191 * southwards towards lower levels to heal 3192 * pathological upper level partitioning, otherwise 3193 * blackholes may occur. MUST NOT be advertised within a North TIE. 3194 */ 3195 3: optional PrefixTIEElement transitive_prefixes; 3196 4: optional KeyValueTIEElement keyvalues; 3197 /** @todo: policy guided prefixes */ 3198 } 3200 /** @todo: flood header separately in UDP to allow caching to TIEs 3201 while changing lifetime? 3202 */ 3203 struct TIEPacket { 3204 1: required TIEHeader header; 3205 2: required TIEElement element; 3206 } 3208 union PacketContent { 3209 1: optional LIEPacket lie; 3210 2: optional TIDEPacket tide; 3211 3: optional TIREPacket tire; 3212 4: optional TIEPacket tie; 3213 } 3215 /** protocol packet structure */ 3216 struct ProtocolPacket { 3217 1: required PacketHeader header; 3218 2: required PacketContent content; 3219 }
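To illustrate two of the rules stated earlier in this section, namely the TIEID total order defined in the schema and the unsigned reinterpretation of Thrift's signed integers, here is a brief, non-normative sketch (Python; all helper names are local to the sketch):

   def as_unsigned(value, bits):
       # Reinterpret a two's-complement signed value as unsigned,
       # preserving the signedness bit as data instead of discarding it.
       return value & ((1 << bits) - 1)

   def tieid_sort_key(direction, originator, tietype, tie_nr):
       # TIEIDs compare field by field in the sequence defined in the
       # schema, each value taken as an unsigned integer of the according
       # length (Thrift enums are 32 bits, SystemIDType is i64,
       # TIENrType is i32).
       return (as_unsigned(direction, 32),
               as_unsigned(originator, 64),
               as_unsigned(tietype, 32),
               as_unsigned(tie_nr, 32))

   # An originator with the signedness bit set sorts as a large unsigned
   # value, e.g. when producing the _sorted_ header list of a TIDE packet:
   ids = [(2, -1, 2, 5), (2, 1, 2, 5)]
   assert sorted(ids, key=lambda t: tieid_sort_key(*t)) == \
       [(2, 1, 2, 5), (2, -1, 2, 5)]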
3221 9. IANA Considerations 3223 This specification will request at an opportune time multiple 3224 registry points to exchange protocol packets in a standardized way, 3225 amongst them multicast address assignments and standard port numbers. 3226 The schema itself defines many values and codepoints which can be 3227 considered registries themselves. 3229 10. Acknowledgments 3231 Many thanks to Naiming Shen for some of the early discussions around 3232 the topic of using IGPs for routing in topologies related to Clos. 3233 Russ White is to be especially acknowledged for the key conversation on 3234 epistemology that allowed tying current asynchronous distributed 3235 systems theory results to the modern protocol design presented here. 3236 Adrian Farrel, Joel Halpern and Jeffrey Zhang provided thoughtful 3237 comments that improved the readability of the document and found a good 3238 amount of corners where the light failed to shine. Kris Price was 3239 the first to mention single router, single arm default considerations. 3240 Jeff Tantsura helped out with some initial thoughts on BFD 3241 interactions while Jeff Haas corrected several misconceptions about 3242 BFD's finer points. Artur Makutunowicz pointed out many possible 3243 improvements and acted as sounding board in regard to modern protocol 3244 implementation techniques RIFT is exploring. Barak Gafni was the first 3245 to clearly formalize the problem of partitioned spine on a (clean) 3246 napkin in Singapore. 3248 11. References 3250 11.1. Normative References 3252 [I-D.ietf-6lo-rfc6775-update] 3253 Thubert, P., Nordmark, E., Chakrabarti, S., and C. 3254 Perkins, "Registration Extensions for 6LoWPAN Neighbor 3255 Discovery", draft-ietf-6lo-rfc6775-update-19 (work in 3256 progress), April 2018. 3258 [ISO10589] 3259 ISO "International Organization for Standardization", 3260 "Intermediate system to Intermediate system intra-domain 3261 routeing information exchange protocol for use in 3262 conjunction with the protocol for providing the 3263 connectionless-mode Network Service (ISO 8473), ISO/IEC 3264 10589:2002, Second Edition.", Nov 2002. 3266 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 3267 Requirement Levels", BCP 14, RFC 2119, 3268 DOI 10.17487/RFC2119, March 1997, 3269 <https://www.rfc-editor.org/info/rfc2119>. 3271 [RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, 3272 DOI 10.17487/RFC2328, April 1998, 3273 <https://www.rfc-editor.org/info/rfc2328>. 3275 [RFC2365] Meyer, D., "Administratively Scoped IP Multicast", BCP 23, 3276 RFC 2365, DOI 10.17487/RFC2365, July 1998, 3277 <https://www.rfc-editor.org/info/rfc2365>. 3279 [RFC3626] Clausen, T., Ed. and P. Jacquet, Ed., "Optimized Link 3280 State Routing Protocol (OLSR)", RFC 3626, 3281 DOI 10.17487/RFC3626, October 2003, 3282 <https://www.rfc-editor.org/info/rfc3626>. 3284 [RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A 3285 Border Gateway Protocol 4 (BGP-4)", RFC 4271, 3286 DOI 10.17487/RFC4271, January 2006, 3287 <https://www.rfc-editor.org/info/rfc4271>. 3289 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 3290 Architecture", RFC 4291, DOI 10.17487/RFC4291, February 3291 2006, <https://www.rfc-editor.org/info/rfc4291>. 3293 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 3294 Element (PCE)-Based Architecture", RFC 4655, 3295 DOI 10.17487/RFC4655, August 2006, 3296 <https://www.rfc-editor.org/info/rfc4655>. 3298 [RFC5120] Przygienda, T., Shen, N., and N. Sheth, "M-ISIS: Multi 3299 Topology (MT) Routing in Intermediate System to 3300 Intermediate Systems (IS-ISs)", RFC 5120, 3301 DOI 10.17487/RFC5120, February 2008, 3302 <https://www.rfc-editor.org/info/rfc5120>. 3304 [RFC5303] Katz, D., Saluja, R., and D. Eastlake 3rd, "Three-Way 3305 Handshake for IS-IS Point-to-Point Adjacencies", RFC 5303, 3306 DOI 10.17487/RFC5303, October 2008, 3307 <https://www.rfc-editor.org/info/rfc5303>. 3309 [RFC5709] Bhatia, M., Manral, V., Fanto, M., White, R., Barnes, M., 3310 Li, T., and R. Atkinson, "OSPFv2 HMAC-SHA Cryptographic 3311 Authentication", RFC 5709, DOI 10.17487/RFC5709, October 3312 2009, <https://www.rfc-editor.org/info/rfc5709>. 3314 [RFC5881] Katz, D. and D.
Ward, "Bidirectional Forwarding Detection 3315 (BFD) for IPv4 and IPv6 (Single Hop)", RFC 5881, 3316 DOI 10.17487/RFC5881, June 2010, 3317 <https://www.rfc-editor.org/info/rfc5881>. 3319 [RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, 3320 "Network Time Protocol Version 4: Protocol and Algorithms 3321 Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010, 3322 <https://www.rfc-editor.org/info/rfc5905>. 3324 [RFC6234] Eastlake 3rd, D. and T. Hansen, "US Secure Hash Algorithms 3325 (SHA and SHA-based HMAC and HKDF)", RFC 6234, 3326 DOI 10.17487/RFC6234, May 2011, 3327 <https://www.rfc-editor.org/info/rfc6234>. 3329 [RFC6822] Previdi, S., Ed., Ginsberg, L., Shand, M., Roy, A., and D. 3330 Ward, "IS-IS Multi-Instance", RFC 6822, 3331 DOI 10.17487/RFC6822, December 2012, 3332 <https://www.rfc-editor.org/info/rfc6822>. 3334 [RFC7855] Previdi, S., Ed., Filsfils, C., Ed., Decraene, B., 3335 Litkowski, S., Horneffer, M., and R. Shakir, "Source 3336 Packet Routing in Networking (SPRING) Problem Statement 3337 and Requirements", RFC 7855, DOI 10.17487/RFC7855, May 3338 2016, <https://www.rfc-editor.org/info/rfc7855>. 3340 [RFC7938] Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of 3341 BGP for Routing in Large-Scale Data Centers", RFC 7938, 3342 DOI 10.17487/RFC7938, August 2016, 3343 <https://www.rfc-editor.org/info/rfc7938>. 3345 [RFC7987] Ginsberg, L., Wells, P., Decraene, B., Przygienda, T., and 3346 H. Gredler, "IS-IS Minimum Remaining Lifetime", RFC 7987, 3347 DOI 10.17487/RFC7987, October 2016, 3348 <https://www.rfc-editor.org/info/rfc7987>. 3350 [RFC8200] Deering, S. and R. Hinden, "Internet Protocol, Version 6 3351 (IPv6) Specification", STD 86, RFC 8200, 3352 DOI 10.17487/RFC8200, July 2017, 3353 <https://www.rfc-editor.org/info/rfc8200>. 3355 11.2. Informative References 3357 [CLOS] Yuan, X., "On Nonblocking Folded-Clos Networks in Computer 3358 Communication Environments", IEEE International Parallel & 3359 Distributed Processing Symposium, 2011. 3361 [DIJKSTRA] 3362 Dijkstra, E., "A Note on Two Problems in Connexion with 3363 Graphs", Journal Numer. Math., 1959. 3365 [DYNAMO] De Candia et al., G., "Dynamo: amazon's highly available 3366 key-value store", ACM SIGOPS symposium on Operating 3367 systems principles (SOSP '07), 2007. 3369 [EPPSTEIN] 3370 Eppstein, D., "Finding the k-Shortest Paths", 1997. 3372 [FATTREE] Leiserson, C., "Fat-Trees: Universal Networks for 3373 Hardware-Efficient Supercomputing", 1985. 3375 [I-D.ietf-spring-segment-routing] 3376 Filsfils, C., Previdi, S., Ginsberg, L., Decraene, B., 3377 Litkowski, S., and R. Shakir, "Segment Routing 3378 Architecture", draft-ietf-spring-segment-routing-15 (work 3379 in progress), January 2018. 3381 [IEEEstd1588] 3382 IEEE, "IEEE Standard for a Precision Clock Synchronization 3383 Protocol for Networked Measurement and Control Systems", 3384 IEEE Standard 1588. 3387 [IEEEstd8021AS] 3388 IEEE, "IEEE Standard for Local and Metropolitan Area 3389 Networks - Timing and Synchronization for Time-Sensitive 3390 Applications in Bridged Local Area Networks", 3391 IEEE Standard 802.1AS. 3394 [ISO10589-Second-Edition] 3395 International Organization for Standardization, 3396 "Intermediate system to Intermediate system intra-domain 3397 routeing information exchange protocol for use in 3398 conjunction with the protocol for providing the 3399 connectionless-mode Network Service (ISO 8473)", Nov 2002. 3401 [MAKSIC2013] 3402 Maksic et al., N., "Improving Utilization of Data Center 3403 Networks", IEEE Communications Magazine, Nov 2013. 3405 [PROTOBUF] 3406 Google, Inc., "Protocol Buffers, 3407 https://developers.google.com/protocol-buffers". 3409 [QUIC] Iyengar et al., J., "QUIC: A UDP-Based Multiplexed and 3410 Secure Transport", 2016.
3412 [RFC0826] Plummer, D., "An Ethernet Address Resolution Protocol: Or 3413 Converting Network Protocol Addresses to 48.bit Ethernet 3414 Address for Transmission on Ethernet Hardware", STD 37, 3415 RFC 826, DOI 10.17487/RFC0826, November 1982, 3416 <https://www.rfc-editor.org/info/rfc826>. 3418 [RFC2131] Droms, R., "Dynamic Host Configuration Protocol", 3419 RFC 2131, DOI 10.17487/RFC2131, March 1997, 3420 <https://www.rfc-editor.org/info/rfc2131>. 3422 [RFC3315] Droms, R., Ed., Bound, J., Volz, B., Lemon, T., Perkins, 3423 C., and M. Carney, "Dynamic Host Configuration Protocol 3424 for IPv6 (DHCPv6)", RFC 3315, DOI 10.17487/RFC3315, July 3425 2003, <https://www.rfc-editor.org/info/rfc3315>. 3427 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 3428 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 3429 DOI 10.17487/RFC4861, September 2007, 3430 <https://www.rfc-editor.org/info/rfc4861>. 3432 [RFC4862] Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless 3433 Address Autoconfiguration", RFC 4862, 3434 DOI 10.17487/RFC4862, September 2007, 3435 <https://www.rfc-editor.org/info/rfc4862>. [RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection (BFD)", RFC 5880, DOI 10.17487/RFC5880, June 2010, <https://www.rfc-editor.org/info/rfc5880>. 3437 [VAHDAT08] 3438 Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable, 3439 Commodity Data Center Network Architecture", SIGCOMM, 3440 2008. 3442 Authors' Addresses 3444 Tony Przygienda (editor) 3445 Juniper Networks 3446 1194 N. Mathilda Ave 3447 Sunnyvale, CA 94089 3448 US 3450 Email: prz@juniper.net 3452 Alankar Sharma 3453 Comcast 3454 1800 Bishops Gate Blvd 3455 Mount Laurel, NJ 08054 3456 US 3458 Email: Alankar_Sharma@comcast.com 3459 Pascal Thubert 3460 Cisco Systems, Inc 3461 Building D 3462 45 Allee des Ormes - BP1200 3463 MOUGINS - Sophia Antipolis 06254 3464 FRANCE 3466 Phone: +33 497 23 26 34 3467 Email: pthubert@cisco.com 3469 Alia Atlas 3470 Juniper Networks 3471 10 Technology Park Drive 3472 Westford, MA 01886 3473 US 3475 Email: akatlas@juniper.net 3477 John Drake 3478 Juniper Networks 3479 1194 N. Mathilda Ave 3480 Sunnyvale, CA 94089 3481 US 3483 Email: jdrake@juniper.net