idnits 2.17.1 draft-ietf-rift-rift-03.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** There are 109 instances of too long lines in the document, the longest one being 96 characters in excess of 72. == There are 1 instance of lines with multicast IPv4 addresses in the document. If these are generic example addresses, they should be changed to use the 233.252.0.x range defined in RFC 5771 Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 3828 has weird spacing: '...velType top_...' == Line 3830 has weird spacing: '...itsType defau...' == Line 3832 has weird spacing: '...velType leaf...' == Line 3833 has weird spacing: '...velType defa...' == Line 3835 has weird spacing: '...kIDType undef...' == (26 more instances...) == Using lowercase 'not' together with uppercase 'MUST', 'SHALL', 'SHOULD', or 'RECOMMENDED' is not an accepted usage according to RFC 2119. Please use uppercase 'NOT' together with RFC 2119 keywords (if that is what you mean). Found 'MUST not' in this paragraph: Thrift serializer/deserializer MUST not discard optional, unknown fields but preserve and serialize them again when re-flooding whereas missing optional fields MAY be replaced with according default values if present. -- The document date (October 19, 2018) is 2015 days in the past. Is this intentional? -- Found something which looks like a code comment -- if you have code sections in the document, please surround them with '' and '' lines. Checking references for intended status: Proposed Standard ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) == Missing Reference: 'A' is mentioned on line 2339, but not defined == Missing Reference: 'B' is mentioned on line 2339, but not defined == Missing Reference: 'C' is mentioned on line 2341, but not defined == Missing Reference: 'D' is mentioned on line 2340, but not defined == Missing Reference: 'E' is mentioned on line 256, but not defined == Missing Reference: 'F' is mentioned on line 256, but not defined == Missing Reference: 'NH' is mentioned on line 2136, but not defined == Missing Reference: 'P' is mentioned on line 2323, but not defined == Missing Reference: 'RFC5880' is mentioned on line 3795, but not defined -- Possible downref: Non-RFC (?) normative reference: ref. 'ISO10589' ** Downref: Normative reference to an Experimental RFC: RFC 3626 ** Downref: Normative reference to an Informational RFC: RFC 4655 ** Downref: Normative reference to an Informational RFC: RFC 6234 ** Obsolete normative reference: RFC 6822 (Obsoleted by RFC 8202) ** Obsolete normative reference: RFC 7752 (Obsoleted by RFC 9552) ** Downref: Normative reference to an Informational RFC: RFC 7855 ** Downref: Normative reference to an Informational RFC: RFC 7938 -- Obsolete informational reference (is this intentional?): RFC 3315 (Obsoleted by RFC 8415) Summary: 8 errors (**), 0 flaws (~~), 18 warnings (==), 4 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 RIFT Working Group The RIFT Authors 3 Internet-Draft "Heaven is under our feet as well as over our heads" 4 Intended status: Standards Track October 19, 2018 5 Expires: April 22, 2019 7 RIFT: Routing in Fat Trees 8 draft-ietf-rift-rift-03 10 Abstract 12 This document outlines a specialized, dynamic routing protocol for 13 Clos and fat-tree network topologies. The protocol (1) deals with 14 fully automatied construction of fat-tree topologies based on 15 detection of links, (2) minimizes the amount of routing state held at 16 each level, (3) automatically prunes and load balances topology 17 flooding exchanges over a sufficient subset of links, (4) supports 18 automatic disaggregation of prefixes on link and node failures to 19 prevent black-holing and suboptimal routing, (5) allows traffic 20 steering and re-routing policies, (6) allows loop-free non-ECMP 21 forwarding, (7) automatically re-balances traffic towards the spines 22 based on bandwidth available and finally (8) provides mechanisms to 23 synchronize a limited key-value data-store that can be used after 24 protocol convergence to e.g. bootstrap higher levels of 25 functionality on nodes. 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at https://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on April 22, 2019. 44 Copyright Notice 46 Copyright (c) 2018 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (https://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 Table of Contents 61 1. Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 62 2. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 63 2.1. Requirements Language . . . . . . . . . . . . . . . . . . 6 64 3. Reference Frame . . . . . . . . . . . . . . . . . . . . . . . 6 65 3.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 6 66 3.2. Topology . . . . . . . . . . . . . . . . . . . . . . . . 10 67 4. Requirement Considerations . . . . . . . . . . . . . . . . . 11 68 5. RIFT: Routing in Fat Trees . . . . . . . . . . . . . . . . . 14 69 5.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . 15 70 5.1.1. Properties . . . . . . . . . . . . . . . . . . . . . 15 71 5.1.2. Generalized Topology View . . . . . . . . . . . . . . 15 72 5.1.3. Fallen Leaf Problem . . . . . . . . . . . . . . . . . 25 73 5.1.4. Discovering Fallen Leaves . . . . . . . . . . . . . . 27 74 5.1.5. Addressing the Fallen Leaves Problem . . . . . . . . 28 75 5.2. Specification . . . . . . . . . . . . . . . . . . . . . . 29 76 5.2.1. Transport . . . . . . . . . . . . . . . . . . . . . . 29 77 5.2.2. Link (Neighbor) Discovery (LIE Exchange) . . . . . . 30 78 5.2.3. Topology Exchange (TIE Exchange) . . . . . . . . . . 32 79 5.2.3.1. Topology Information Elements . . . . . . . . . . 32 80 5.2.3.2. South- and Northbound Representation . . . . . . 33 81 5.2.3.3. Flooding . . . . . . . . . . . . . . . . . . . . 35 82 5.2.3.4. TIE Flooding Scopes . . . . . . . . . . . . . . . 36 83 5.2.3.5. Initial and Periodic Database Synchronization . . 38 84 5.2.3.6. Purging . . . . . . . . . . . . . . . . . . . . . 38 85 5.2.3.7. Southbound Default Route Origination . . . . . . 39 86 5.2.3.8. Northbound TIE Flooding Reduction . . . . . . . . 40 87 5.2.4. Reachability Computation . . . . . . . . . . . . . . 44 88 5.2.4.1. Northbound SPF . . . . . . . . . . . . . . . . . 44 89 5.2.4.2. Southbound SPF . . . . . . . . . . . . . . . . . 45 90 5.2.4.3. East-West Forwarding Within a Level . . . . . . . 45 91 5.2.5. Automatic Disaggregation on Link & Node Failures . . 45 92 5.2.5.1. Positive, Non-transitive Disaggregation . . . . . 45 93 5.2.5.2. Negative, Transitive Disaggregation for Fallen 94 Leafs . . . . . . . . . . . . . . . . . . . . . . 49 95 5.2.6. Attaching Prefixes . . . . . . . . . . . . . . . . . 51 96 5.2.7. Optional Zero Touch Provisioning (ZTP) . . . . . . . 53 97 5.2.7.1. Terminology . . . . . . . . . . . . . . . . . . . 54 98 5.2.7.2. Automatic SystemID Selection . . . . . . . . . . 55 99 5.2.7.3. Generic Fabric Example . . . . . . . . . . . . . 55 100 5.2.7.4. Level Determination Procedure . . . . . . . . . . 56 101 5.2.7.5. Resulting Topologies . . . . . . . . . . . . . . 58 102 5.2.8. Stability Considerations . . . . . . . . . . . . . . 59 103 5.3. Further Mechanisms . . . . . . . . . . . . . . . . . . . 60 104 5.3.1. Overload Bit . . . . . . . . . . . . . . . . . . . . 60 105 5.3.2. Optimized Route Computation on Leafs . . . . . . . . 60 106 5.3.3. Mobility . . . . . . . . . . . . . . . . . . . . . . 60 107 5.3.3.1. Clock Comparison . . . . . . . . . . . . . . . . 62 108 5.3.3.2. Interaction between Time Stamps and Sequence 109 Counters . . . . . . . . . . . . . . . . . . . . 62 110 5.3.3.3. Anycast vs. Unicast . . . . . . . . . . . . . . . 63 111 5.3.3.4. Overlays and Signaling . . . . . . . . . . . . . 63 112 5.3.4. Key/Value Store . . . . . . . . . . . . . . . . . . . 64 113 5.3.4.1. Southbound . . . . . . . . . . . . . . . . . . . 64 114 5.3.4.2. Northbound . . . . . . . . . . . . . . . . . . . 64 115 5.3.5. Interactions with BFD . . . . . . . . . . . . . . . . 64 116 5.3.6. Fabric Bandwidth Balancing . . . . . . . . . . . . . 65 117 5.3.6.1. Northbound Direction . . . . . . . . . . . . . . 65 118 5.3.6.2. Southbound Direction . . . . . . . . . . . . . . 67 119 5.3.7. Label Binding . . . . . . . . . . . . . . . . . . . . 68 120 5.3.8. Segment Routing Support with RIFT . . . . . . . . . . 68 121 5.3.8.1. Global Segment Identifiers Assignment . . . . . . 68 122 5.3.8.2. Distribution of Topology Information . . . . . . 68 123 5.3.9. Leaf to Leaf Procedures . . . . . . . . . . . . . . . 69 124 5.3.10. Address Family and Multi Topology Considerations . . 69 125 5.3.11. Reachability of Internal Nodes in the Fabric . . . . 69 126 5.3.12. One-Hop Healing of Levels with East-West Links . . . 70 127 6. Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 70 128 6.1. Normal Operation . . . . . . . . . . . . . . . . . . . . 70 129 6.2. Leaf Link Failure . . . . . . . . . . . . . . . . . . . . 71 130 6.3. Partitioned Fabric . . . . . . . . . . . . . . . . . . . 72 131 6.4. Northbound Partitioned Router and Optional East-West 132 Links . . . . . . . . . . . . . . . . . . . . . . . . . . 74 133 6.5. Multi-Plane Fabric and Negative Disaggregation . . . . . 75 134 7. Implementation and Operation: Further Details . . . . . . . . 75 135 7.1. Considerations for Leaf-Only Implementation . . . . . . . 75 136 7.2. Adaptations to Other Proposed Data Center Topologies . . 76 137 7.3. Originating Non-Default Route Southbound . . . . . . . . 77 138 8. Security Considerations . . . . . . . . . . . . . . . . . . . 77 139 8.1. General . . . . . . . . . . . . . . . . . . . . . . . . . 77 140 8.2. ZTP . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 141 8.3. Lifetime . . . . . . . . . . . . . . . . . . . . . . . . 78 142 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 78 143 10. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 78 144 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 78 145 11.1. Normative References . . . . . . . . . . . . . . . . . . 78 146 11.2. Informative References . . . . . . . . . . . . . . . . . 81 147 Appendix A. Information Elements Schema . . . . . . . . . . . . 83 148 A.1. common.thrift . . . . . . . . . . . . . . . . . . . . . . 84 149 A.2. encoding.thrift . . . . . . . . . . . . . . . . . . . . . 88 150 Appendix B. Finite State Machines and Precise Operational 151 Specifications . . . . . . . . . . . . . . . . . . . 94 152 B.1. LIE FSM . . . . . . . . . . . . . . . . . . . . . . . . . 95 153 B.2. ZTP FSM . . . . . . . . . . . . . . . . . . . . . . . . . 101 154 B.3. Flooding Procedures . . . . . . . . . . . . . . . . . . . 105 155 B.3.1. FloodState Structure per Adjacency . . . . . . . . . 106 156 B.3.2. TIDEs . . . . . . . . . . . . . . . . . . . . . . . . 107 157 B.3.2.1. TIDE Generation . . . . . . . . . . . . . . . . . 107 158 B.3.2.2. TIDE Processing . . . . . . . . . . . . . . . . . 108 159 B.3.3. TIREs . . . . . . . . . . . . . . . . . . . . . . . . 109 160 B.3.3.1. TIRE Generation . . . . . . . . . . . . . . . . . 109 161 B.3.3.2. TIRE Processing . . . . . . . . . . . . . . . . . 110 162 B.3.4. TIEs Processing on Flood State Adjacency . . . . . . 110 163 B.3.5. TIEs Processing When LSDB Received Newer Version on 164 Other Adjacencies . . . . . . . . . . . . . . . . . . 111 165 Appendix C. Constants . . . . . . . . . . . . . . . . . . . . . 111 166 C.1. Configurable Protocol Constants . . . . . . . . . . . . . 111 167 Appendix D. TODO . . . . . . . . . . . . . . . . . . . . . . . . 113 168 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 113 170 1. Authors 172 This work is a product of a growing list of individuals. 174 Tony Przygienda, Ed | Alankar Sharma | Pascal Thubert 175 Juniper Networks | Comcast | Cisco 177 Bruno Rijsman | Ilya Vershkov | Alia Atlas 178 Individual | Mellanox | Individual 180 Don Fedyk | John Drake | 181 HPE | Juniper | 183 Table 1: RIFT Authors 185 2. Introduction 187 Clos [CLOS] and Fat-Tree [FATTREE] have gained prominence in today's 188 networking, primarily as result of the paradigm shift towards a 189 centralized data-center based architecture that is poised to deliver 190 a majority of computation and storage services in the future. 191 Today's current routing protocols were geared towards a network with 192 an irregular topology and low degree of connectivity originally but 193 given they were the only available options, consequently several 194 attempts to apply those protocols to Clos have been made. Most 195 successfully BGP [RFC4271] [RFC7938] has been extended to this 196 purpose, not as much due to its inherent suitability but rather 197 because the perceived capability to easily modify BGP and the 198 immanent difficulties with link-state link-state [DIJKSTRA] based 199 protocols to optimize topology exchange and converge quickly in large 200 scale densely meshed topologies. The incumbent protocols 201 precondition normally extensive configuration or provisioning during 202 bring up and re-dimensioning which is only viable for a set of 203 organizations with according networking operation skills and budgets. 204 For the majority of data center consumers a preferable protocol would 205 be one that auto-configures itself and deals with failures and 206 misconfigurations with a minimum of human intervention only. Such a 207 solution would allow local IP fabric bandwidth to be consumed in a 208 standardized component fashion, i.e. provision it much faster and 209 operate it at much lower costs, much like compute or storage is 210 consumed today. 212 In looking at the problem through the lens of data center 213 requirements, an optimal approach does not seem however to be a 214 simple modification of either a link-state (distributed computation) 215 or distance-vector (diffused computation) approach but rather a 216 mixture of both, colloquially best described as "link-state towards 217 the spine" and "distance vector towards the leafs". In other words, 218 "bottom" levels are flooding their link-state information in the 219 "northern" direction while each node generates under normal 220 conditions a default route and floods it in the "southern" direction. 221 This type of protocol allows highly desirable aggregation. Alas, 222 such aggregation could blackhole traffic in cases of misconfiguration 223 or while failures are being resolved or even cause partial network 224 partitioning and this has to be addressed. The approach RIFT takes 225 is described in Section 5.2.5 and is basically based on automatic, 226 sufficient disaggregation of prefixes to prevent any possible 227 problems. 229 For the visually oriented reader, Figure 1 presents a first level 230 simplified view of the resulting information and routes on a RIFT 231 fabric. The top of the fabric, is holding, in its link-state 232 database the nodes below it and the routes to them. In the second 233 row of the database we indicate that partial information of other 234 nodes in the same level is available as well. The details of how 235 this is achieved will be postponed for the moment. When we look at 236 the "bottom" of the fabric, the leafs, we see that the topology is 237 basically empty and they only hold a load balanced default route to 238 the next level. 240 The balance of this document details the resulting protocol and fills 241 in the missing details. 243 . [A,B,C,D] 244 . [E] 245 . +-----+ +-----+ 246 . | E | | F | A/32 @ [C,D] 247 . +-+-+-+ +-+-+-+ B/32 @ [C,D] 248 . | | | | C/32 @ C 249 . | | +-----+ | D/32 @ D 250 . | | | | 251 . | +------+ | 252 . | | | | 253 . [A,B] +-+---+ | | +---+-+ [A,B] 254 . [D] | C +--+ +-+ D | [C] 255 . +-+-+-+ +-+-+-+ 256 . 0/0 @ [E,F] | | | | 0/0 @ [E,F] 257 . A/32 @ A | | +-----+ | A/32 @ A 258 . B/32 @ B | | | | B/32 @ B 259 . | +------+ | 260 . | | | | 261 . +-+---+ | | +---+-+ 262 . | A +--+ +-+ B | 263 . 0/0 @ [C,D] +-----+ +-----+ 0/0 @ [C,D] 265 Figure 1: RIFT information distribution 267 2.1. Requirements Language 269 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 270 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 271 document are to be interpreted as described in RFC 2119 [RFC2119]. 273 3. Reference Frame 275 3.1. Terminology 277 This section presents the terminology used in this document. It is 278 assumed that the reader is thoroughly familiar with the terms and 279 concepts used in OSPF [RFC2328] and IS-IS [ISO10589-Second-Edition], 280 [ISO10589] as well as the according graph theoretical concepts of 281 shortest path first (SPF) [DIJKSTRA] computation and directed acyclic 282 graphs (DAG). 284 Level: Clos and Fat Tree networks are topologically partially 285 ordered graphs and 'level' denotes the set of nodes at the same 286 height in such a network, where the bottom level (leaf) is the 287 level with lowest value. A node has links to nodes one level down 288 and/or one level up. Under some circumstances, a node may have 289 links to nodes at the same level. As footnote: Clos terminology 290 uses often the concept of "stage" but due to the folded nature of 291 the Fat Tree we do not use it to prevent misunderstandings. 293 Superspine/Aggregation or Spine/Edge Levels: Traditional names in 294 5-stages folded Clos for Level 2, 1 and 0 respectively. Level 0 295 is often called leaf as well. We normalize this language to talk 296 about leafs, spines and top-of-fabric (ToF). 298 Point of Delivery (PoD): A self-contained vertical slice or subset 299 of a Clos or Fat Tree network containing normally only level 0 and 300 level 1 nodes. A node in a PoD communicates with nodes in other 301 PoDs via the Top-of-Fabric. We number PoDs to distinguish them 302 and use PoD #0 to denote "undefined" PoD. 304 Top of PoD (ToP): The set of nodes that provide intra-PoD 305 communication and have northbound adjacencies outside of the PoD, 306 i.e. are at the "top" of the PoD. 308 Top of Fabric (ToF): The set of nodes that provide inter-PoD 309 communication and have no northbound adjacencies, i.e. are at the 310 "very top" of the fabric. ToF nodes do not belong to any PoD and 311 are assigned "undefined" PoD value to indicate the equivalent of 312 "any" PoD. 314 Spine: Any nodes north of leafs and south of top-of-fabric nodes. 315 Multiple layers of spines in a PoD are possible. 317 Leaf: A node without southbound adjacencies. Its level is 0 (except 318 cases where it is deriving its level via ZTP and is running 319 without LEAF_ONLY which will be explained in Section 5.2.7). 321 Top-of-fabric Plane or Partition: In large fabrics top-of-fabric 322 switches may not have enough ports to aggregate all switches south 323 of them and with that, the ToF is 'split' into multiple 324 independent planes. Introduction and Section 5.1.2 explains the 325 concept in more detail. A plane is subset of ToF nodes that see 326 each other through south reflection or E-W links. 328 Radix: A radix of a switch is basically number of switching ports it 329 provides. It's sometimes called fanout as well. 331 North Radix: Ports cabled northbound to higher level nodes. 333 South Radix: Ports cabled southbound to lower level nodes. 335 South/Southbound and North/Northbound (Direction): When describing 336 protocol elements and procedures, we will be using in different 337 situations the directionality of the compass. I.e., 'south' or 338 'southbound' mean moving towards the bottom of the Clos or Fat 339 Tree network and 'north' and 'northbound' mean moving towards the 340 top of the Clos or Fat Tree network. 342 Northbound Link: A link to a node one level up or in other words, 343 one level further north. 345 Southbound Link: A link to a node one level down or in other words, 346 one level further south. 348 East-West Link: A link between two nodes at the same level. East- 349 West links are normally not part of Clos or "fat-tree" topologies. 351 Leaf shortcuts (L2L): East-West links at leaf level will need to be 352 differentiated from East-West links at other levels. 354 Southbound representation: Information sent towards a lower level 355 representing only limited amount of information. 357 South Reflection: Often abbreviated just as "reflection" it defines 358 a mechanism where South Node TIEs are "reflected" back up north to 359 allow nodes in same level without E-W links to "see" each other. 361 TIE: This is an acronym for a "Topology Information Element". TIEs 362 are exchanged between RIFT nodes to describe parts of a network 363 such as links and address prefixes. A TIE can be thought of as 364 largely equivalent to ISIS LSPs or OSPF LSA. We will talk about 365 N-TIEs when talking about TIEs in the northbound representation 366 and S-TIEs for the southbound equivalent. 368 Node TIE: This is an acronym for a "Node Topology Information 369 Element", largely equivalent to OSPF Router LSA, i.e. it contains 370 all adjacencies the node discovered and information about node 371 itself. 373 Prefix TIE: This is an acronym for a "Prefix Topology Information 374 Element" and it contains all prefixes directly attached to this 375 node in case of a N-TIE and in case of S-TIE the necessary default 376 the node passes southbound. 378 Key Value TIE: A S-TIE that is carrying a set of key value pairs 379 [DYNAMO]. It can be used to distribute information in the 380 southbound direction within the protocol. 382 TIDE: Topology Information Description Element, equivalent to CSNP 383 in ISIS. 385 TIRE: Topology Information Request Element, equivalent to PSNP in 386 ISIS. It can both confirm received and request missing TIEs. 388 De-aggregation/Disaggregation: Process in which a node decides to 389 advertise certain prefixes it received in N-TIEs to prevent black- 390 holing and suboptimal routing upon link failures. 392 LIE: This is an acronym for a "Link Information Element", largely 393 equivalent to HELLOs in IGPs and exchanged over all the links 394 between systems running RIFT to form adjacencies. all the links 395 between systems running RIFT to form adjacencies. 397 Flooding Leader (FL): Flooding Leader for a specific system has a 398 dedicated role to flood on northbound TIEs sent by this system. 399 Similar to MPR in OSLR. 401 Bandwidth Adjusted Distance (BAD): This is an acronym for Bandwidth 402 Adjusted Distance. Each RIFT node calculates the amount of 403 northbound bandwidth available towards a node compared to other 404 nodes at the same level and modifies the default route distance 405 accordingly to allow for the lower level to adjust their load 406 balancing towards spines. 408 Overloaded: Applies to a node advertising `overload` attribute as 409 set. The semantics closely follow the meaning of the same 410 attribute in [ISO10589-Second-Edition]. 412 Interface: A layer 3 entity over which RIFT control packets are 413 exchanged. 415 Adjacency: RIFT tries to form a unique adjacency over an interface 416 and exchange local configuration and necessary ZTP information. 418 Neighbor: Once a three way adjacency has been formed a neighborship 419 relationship contains the neighbor's properties. Multiple 420 adjacencies can be formed to a neighbor via parallel interfaces 421 but such adjacencies are NOT sharing a neighbor structure. Saying 422 "neighbor" is thus equivalent to saying "a three way adjacency". 424 Cost: The term signifies the weighted distance between two 425 neighbors. 427 Distance: Sum of costs (bound by infinite distance) between two 428 nodes. 430 Metric: Without going deeper into the mathematic differentiation, a 431 metric is equivalent to distance. 433 3.2. Topology 435 . +--------+ +--------+ ^ N 436 . |ToF 21| |ToF 22| | 437 .Level 2 ++-+--+-++ ++-+--+-++ <-*-> E/W 438 . | | | | | | | | | 439 . P111/2| |P121 | | | | S v 440 . ^ ^ ^ ^ | | | | 441 . | | | | | | | | 442 . +--------------+ | +-----------+ | | | +---------------+ 443 . | | | | | | | | 444 . South +-----------------------------+ | | ^ 445 . | | | | | | | All TIEs 446 . 0/0 0/0 0/0 +-----------------------------+ | 447 . v v v | | | | | 448 . | | +-+ +<-0/0----------+ | | 449 . | | | | | | | | 450 .+-+----++ optional +-+----++ ++----+-+ ++-----++ 451 .| | E/W link | | | | | | 452 .|Spin111+----------+Spin112| |Spin121| |Spin122| 453 .+-+---+-+ ++----+-+ +-+---+-+ ++---+--+ 454 . | | | South | | | | 455 . | +---0/0--->-----+ 0/0 | +----------------+ | 456 . 0/0 | | | | | | | 457 . | +---<-0/0-----+ | v | +--------------+ | | 458 . v | | | | | | | 459 .+-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+ 460 .| | (L2L) | | | | Level 0 | | 461 .|Leaf111~~~~~~~~~~~~Leaf112| |Leaf121| |Leaf122| 462 .+-+-----+ +-+---+-+ +--+--+-+ +-+-----+ 463 . + + \ / + + 464 . Prefix111 Prefix112 \ / Prefix121 Prefix122 465 . multi-homed 466 . Prefix 467 .+---------- Pod 1 ---------+ +---------- Pod 2 ---------+ 469 Figure 2: A three level spine-and-leaf topology 470 .+--------+ +--------+ +--------+ +--------+ 471 .|ToF A1| |ToF B1| |ToF B2| |ToF A2| 472 .++-+-----+ ++-+-----+ ++-+-----+ ++-+-----+ 473 . | | | | | | | | 474 . | | | | | +---------------+ 475 . | | | | | | | | 476 . | | | +-------------------------+ | 477 . | | | | | | | | 478 . | +-----------------------+ | | | | 479 . | | | | | | | | 480 . | | +---------+ | +---------+ | | 481 . | | | | | | | | 482 . | +---------------------------------+ | | 483 . | | | | | | | | 484 .++-+-----+ ++-+-----+ +--+-+---+ +----+-+-+ 485 .|Spine111| |Spine112| |Spine121| |Spine122| 486 .+-+---+--+ ++----+--+ +-+---+--+ ++---+---+ 487 . | | | | | | | | 488 . | +--------+ | | +--------+ | 489 . | | | | | | | | 490 . | -------+ | | | +------+ | | 491 . | | | | | | | | 492 .+-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+ 493 .|Leaf111| |Leaf112| |Leaf121| |Leaf122| 494 .+-------+ +-------+ +-------+ +-------+ 496 Figure 3: Topology with multiple planes 498 We will use topology in Figure 2 (called commonly a fat tree/network 499 in modern IP fabric considerations [VAHDAT08] as homonym to the 500 original definition of the term [FATTREE]) in all further 501 considerations. This figure depicts a generic "single plane fat- 502 tree" and the concepts explained using three levels apply by 503 induction to further levels and higher degrees of connectivity. 504 Further, this document will deal also with designs that provide only 505 sparser connectivity and "partitioned spines" as shown in Figure 3 506 and explained further in Section 5.1.2. 508 4. Requirement Considerations 510 [RFC7938] gives the original set of requirements augmented here based 511 upon recent experience in the operation of fat-tree networks. 513 REQ1: The control protocol should discover the physical links 514 automatically and be able to detect cabling that violates 515 fat-tree topology constraints. It must react accordingly to 516 such mis-cabling attempts, at a minimum preventing 517 adjacencies between nodes from being formed and traffic from 518 being forwarded on those mis-cabled links. E.g. connecting 519 a leaf to a spine at level 2 should be detected and ideally 520 prevented. 522 REQ2: A node without any configuration beside default values 523 should come up at the correct level in any PoD it is 524 introduced into. Optionally, it must be possible to 525 configure nodes to restrict their participation to the 526 PoD(s) targeted at any level. 528 REQ3: Optionally, the protocol should allow to provision IP 529 fabrics where the individual switches carry no configuration 530 information and are all deriving their level from a "seed". 531 Observe that this requirement may collide with the desire to 532 detect cabling misconfiguration and with that only one of 533 the requirements can be fully met in a chosen configuration 534 mode. 536 REQ4: The solution should allow for minimum size routing 537 information base and forwarding tables at leaf level for 538 speed, cost and simplicity reasons. Holding excessive 539 amount of information away from leaf nodes simplifies 540 operation and lowers cost of the underlay and allows to 541 scale and introduce proper multi-homing down to the server 542 level. The routing solution should allow for easy 543 instantiation of multiple routing planes. Coupled with 544 mobility defined in Paragraph 17 this should allow for 545 "light-weight" overlays on an IP fabric with e.g. native 546 IPv6 mobility support. 548 REQ5: Very high degree of ECMP must be supported. Maximum ECMP is 549 currently understood as the most efficient routing approach 550 to maximize the throughput of switching fabrics 551 [MAKSIC2013]. 553 REQ6: Non equal cost anycast must be supported to allow for easy 554 and robust multi-homing of services without regressing to 555 careful balancing of link costs. 557 REQ7: Traffic engineering should be allowed by modification of 558 prefixes and/or their next-hops. 560 REQ8: The solution should allow for access to link states of the 561 whole topology to enable efficient support for modern 562 control architectures like SPRING [RFC7855] or PCE 563 [RFC4655]. 565 REQ9: The solution should easily accommodate opaque data to be 566 carried throughout the topology to subsets of nodes. This 567 can be used for many purposes, one of them being a key-value 568 store that allows bootstrapping of nodes based right at the 569 time of topology discovery. Another use is distributing MAC 570 to L3 address binding from the leafs up north in case of 571 e.g. DHCP. 573 REQ10: Nodes should be taken out and introduced into production 574 with minimum wait-times and minimum of "shaking" of the 575 network, i.e. radius of propagation (often called "blast 576 radius") of changed information should be as small as 577 feasible. 579 REQ11: The protocol should allow for maximum aggregation of carried 580 routing information while at the same time automatically de- 581 aggregating the prefixes to prevent black-holing in case of 582 failures. The de-aggregation should support maximum 583 possible ECMP/N-ECMP remaining after failure. 585 REQ12: Reducing the scope of communication needed throughout the 586 network on link and state failure, as well as reducing 587 advertisements of repeating or idiomatic information in 588 stable state is highly desirable since it leads to better 589 stability and faster convergence behavior. 591 REQ13: Once a packet traverses a link in a "southbound" direction, 592 it must not take any further "northbound" steps along its 593 path to delivery to its destination under normal, i.e. 594 fully converged, conditions. Taking a path through the 595 spine in cases where a shorter path is available is highly 596 undesirable. 598 REQ14: Parallel links between same set of nodes must be 599 distinguishable for SPF, failure and traffic engineering 600 purposes. 602 REQ15: The protocol must not rely on interfaces having discernible 603 unique addresses, i.e. it must operate in presence of 604 unnumbered links (even parallel ones) or links of a single 605 node having same addresses. 607 REQ16: It would be desirable to achieve fast re-balancing of flows 608 when links, especially towards the spines are lost or 609 provisioned without regressing to per flow traffic 610 engineering which introduces significant amount of 611 complexity while possibly not being reactive enough to 612 account for short-lived flows. 614 REQ17: The control plane should be able to unambiguously determine 615 the current point of attachment (which port on which leaf 616 node) of a prefix, even in a context of fast mobility, e.g., 617 when the prefix is a host address on a wireless node that 1) 618 may associate to any of multiple access points (APs) that 619 are attached to different ports on a same leaf node or to 620 different leaf nodes, and 2) may move and reassociate 621 several times to a different AP within a sub-second period. 623 REQ18: The protocol should provide security mechanisms that allow 624 to restrict nodes, especially leafs without proper 625 credentials from forming three-way adjacencies. 627 Following list represents possible requirements and requirements 628 under discussion: 630 PEND1: Supporting anything but point-to-point links is a non- 631 requirement. Questions remain: for connecting to the 632 leaves, is there a case where multipoint is desirable? One 633 could still model it as point-to-point links; it seems there 634 is no need for anything more than a NBMA-type construct. 636 PEND2: What is the maximum scale of number leaf prefixes we need to 637 carry. 500'000 seems plenty even if we deploy RIFT down to 638 servers as leafs. 640 Finally, following are the non-requirements: 642 NONREQ1: Broadcast media support is unnecessary. However, 643 miscabling leading to multiple nodes on a broadcast 644 segment must be operationally easily recognizable and 645 operationally easily detectable while not taxing the 646 protocol excessively. 648 NONREQ2: Purging link state elements is unnecessary given its 649 fragility and complexity and today's large memory size on 650 even modest switches and routers. 652 NONREQ3: Special support for layer 3 multi-hop adjacencies is not 653 part of the protocol specification. Such support can be 654 easily provided by using tunneling technologies the same 655 way IGPs today are solving the problem. 657 5. RIFT: Routing in Fat Trees 659 Derived from the above requirements we present a detailed outline of 660 a protocol optimized for Routing in Fat Trees (RIFT) that in most 661 abstract terms has many properties of a modified link-state protocol 663 [RFC2328][ISO10589-Second-Edition] when "pointing north" and path- 664 vector [RFC4271] protocol when "pointing south". While this is an 665 unusual combination, it does quite naturally exhibit the desirable 666 properties we seek. 668 5.1. Overview 670 5.1.1. Properties 672 The most singular property of RIFT is that it floods flat link-state 673 information northbound only so that each level obtains the full 674 topology of levels south of it. That information is never flooded 675 East-West (we'll talk about exceptions later) or back South again. 676 In the southbound direction the protocol operates like a "fully 677 summarizing, unidirectional" path vector protocol or rather a 678 distance vector with implicit split horizon whereas the information 679 propagates one hop south and is 're-advertised' by nodes at next 680 lower level, normally just the default route. However, RIFT uses 681 flooding in the southern direction as well to avoid the necessity to 682 build an update per adjacency. We omit describing the East-West 683 direction out for the moment. 685 Those information flow constraints create not only an anisotropic 686 protocol (i.e. the information is not distributed "evenly" or 687 "clumped" but summarized along the N-S gradient) but also a "smooth" 688 information propagation where nodes do not receive the same 689 information from multiple fronts which would force them to perform a 690 diffused computation to tie-break the same reachability information 691 arriving on arbitrary links and ultimately force hop-by-hop 692 forwarding on shortest-paths only. 694 To account for the "northern" and the "southern" information split 695 the link state database is partitioned into "north representation" 696 and "south representation" TIEs, whereas in simplest terms the N-TIEs 697 contain a link state topology description of lower levels and and 698 S-TIEs carry simply default routes. This oversimplified view will be 699 refined gradually in following sections while introducing protocol 700 procedures aimed to fulfill the described requirements. 702 5.1.2. Generalized Topology View 704 This section will dwell on the topologies addresses by RIFT including 705 multi plane fabrics and their related implications. Readers that are 706 only interested in single plane designs, i.e. all top-of-fabric nodes 707 being topologically equal and initially connected to all the switches 708 at the level below them can skip this section and resulting 709 Section 5.2.5.2 as well. 711 Given the difficulty of visualizing multi plane design which are 712 effectively multi-dimensional switching matrices we will introduce a 713 methodology allowing us to visualize the connectivity in a two- 714 dimensional document and leverage the fact that we are dealing 715 basically with crossbar fabrics stacked on top of each other where 716 ports also align "on top of each other" in a regular fashion. 718 The typical topology for which RIFT is defined is built of a number P 719 of PoDs, connected together by a number S of spine nodes. A PoD node 720 has a number of ports called Radix, with half of them (K=Radix/2) 721 used to connect host devices from the south, and half to connect to 722 interleaved PoD Top-Level switches to the north. Ratio K can be 723 chosen differently without loss of generality when port speeds differ 724 or fabric is oversubscribed but K=R/2 allows for more readable 725 representation whereby there are as many ports facing north as south 726 on any intermediate node. We represent a node hence in a schematic 727 fashion with ports "sticking out" to its north and south rather than 728 by the usual real-world front faceplate designs of the day. 730 Figure 4 provides a view of a leaf node as seen from the north, i.e. 731 showing ports that connect northbound and for lack of a better 732 symbol, we have chosen to use the "HH" symbol as ASCII visualisation 733 of a RJ45 jack. Observe that the number of PoDs is not related to 734 Radix unless the Spine Nodes are constrained to be the same as the 735 PoD nodes in a particular deployment. We set the radix of the leaf 736 to K_LEAF, in this example 6 ports. 738 Top view 739 +----+ 740 | | 741 | HH | e.g., Radix = 12, K_LEAF = 6 742 | | 743 | HH | 744 | | ------------------------- 745 | HH ------- Physical Port (Ethernet) ----+ 746 | | ------------------------- | 747 | HH | | 748 | | | 749 | HH | | 750 | | | 751 | HH | | 752 | | | 753 +----+ | 755 || || || || || || || 756 +----+ +------------------------------------------------+ 757 | | | | 758 +----+ +------------------------------------------------+ 759 || || || || || || || 760 Side views 762 Figure 4: A Leaf Node, K_LEAF=6 764 The Radix of a node on top of a PoD may be different than that of the 765 leaf node, though more often than not a same type of node is used for 766 both, effectively forming a square (K*K). In the general case, we 767 could have switches with K_TOP southern ports on nodes at the top of 768 the PoD that is not necessarily the same as K_LEAF; for instance, in 769 the representations below, we pick a K_LEAF of 6 and a K_TOP of 8. 770 In order to form a crossbar, we need K_TOP Leaf Nodes as illustrated 771 in Figure 5. 773 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 774 | | | | | | | | | | | | | | | | 775 | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | 776 | | | | | | | | | | | | | | | | 777 | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | 778 | | | | | | | | | | | | | | | | 779 | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | 780 | | | | | | | | | | | | | | | | 781 | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | 782 | | | | | | | | | | | | | | | | 783 | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | 784 | | | | | | | | | | | | | | | | 785 | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | 786 | | | | | | | | | | | | | | | | 787 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 789 Figure 5: Southern View of a PoD, K_TOP=8 791 The K_TOP Leaf Nodes are fully interconnected with the K_LEAF PoD-top 792 nodes, providing a connectivity that can be represented as a crossbar 793 as seen from the north and illustrated in Figure 6. The result is 794 that, in the absence of a breakage, a packet entering the PoD from 795 North on any port can be routed to any port on the south of the PoD 796 and vice versa. 798 E<-*->W 800 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 801 | | | | | | | | | | | | | | | | 802 +----------------------------------------------------------------+ 803 | HH HH HH HH HH HH HH HH | 804 +----------------------------------------------------------------+ 805 +----------------------------------------------------------------+ 806 | HH HH HH HH HH HH HH HH | 807 +----------------------------------------------------------------+ 808 +----------------------------------------------------------------+ 809 | HH HH HH HH HH HH HH HH | 810 +----------------------------------------------------------------+ 811 +----------------------------------------------------------------+ 812 | HH HH HH HH HH HH HH HH | 813 +----------------------------------------------------------------+ 814 +----------------------------------------------------------------+ 815 | HH HH HH HH HH HH HH HH |<-+ 816 +----------------------------------------------------------------+ | 817 +----------------------------------------------------------------+ | 818 | HH HH HH HH HH HH HH HH | | 819 +----------------------------------------------------------------+ | 820 | | | | | | | | | | | | | | | | | 821 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 822 ^ | 823 | | 824 | ---------- --------------------- | 825 +----- Leaf Node PoD top Node (Spine) --+ 826 ---------- --------------------- 828 Figure 6: Northern View of a PoD's Spines, K_TOP=8 830 Side views of this PoD is illustrated in Figure 7 and Figure 8. 832 Connecting to Spine 834 || || || || || || || || 835 +----------------------------------------------------------------+ N 836 | PoD top Node seen sideways | ^ 837 +----------------------------------------------------------------+ | 838 || || || || || || || || * 839 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 840 | | | | | | | | | | | | | | | | v 841 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ S 842 || || || || || || || || 844 Connecting to Client nodes 846 Figure 7: Side View of a PoD, K_TOP=8, K_LEAF=6 848 Connecting to Spine 850 || || || || || || 851 +----+ +----+ +----+ +----+ +----+ +----+ N 852 | | | | | | | | | | | PoD top Nodes ^ 853 +----+ +----+ +----+ +----+ +----+ +----+ | 854 || || || || || || * 855 +------------------------------------------------+ | 856 | Leaf seen sideways | v 857 +------------------------------------------------+ S 858 || || || || || || 860 Connecting to Client nodes 862 Figure 8: Other side View of a PoD, K_TOP=8, K_LEAF=6, 90o turn in 863 E-W Plane 865 Note that a resulting PoD can be abstracted as a bigger node with a 866 number K of K_POD= K_TOP * K_LEAF, and the design can recurse. 868 It is critical at this junction that the concept and the picture of 869 those "crossed crossbars" is clear before progressing further, 870 otherwise following considerations will be difficult to comprehend. 872 Further, the PoDs are interconnected with one another through a Top- 873 of-Fabric at the very top or the north edge of the fabric. The 874 resulting ToF is NOT partitioned if and only if (IIF) every PoD top 875 level node (spine) is connected to every ToF Node. This is also 876 referred to as a single plane configuration. In order to reach a 877 1::1 connectivity ratio between the ToF and the Leaves, it results 878 that there are K_TOP ToF nodes, because each port of a ToP node 879 connects to a different ToF node, and K_LEAF ToP nodes for the same 880 reason. Consequently, it takes (P * K_LEAF) ports on a ToF node to 881 connect to each of the K_LEAF ToP nodes of the P PoDs, as illustrated 882 in Figure 9. 884 [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] <-----+ 885 | | | | | | | | | 886 [=================================] | ----------- 887 | | | | | | | | +----- Top-of-Fabric 888 [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] +----- Node -------+ 889 | ----------- | 890 | v 891 +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ <-----+ +-+ 892 | | | | | | | | | | | | | | | | | | 893 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | 894 [ |H| |H| |H| |H| |H| |H| |H| |H| ] ------------------------- | | 895 [ |H| |H| |H| |H| |H| |H| |H| |H<--- Physical Port (Ethernet) | | 896 [ |H| |H| |H| |H| |H| |H| |H| |H| ] ------------------------- | | 897 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | 898 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | 899 | | | | | | | | | | | | | | | | | | 900 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | 901 [ |H| |H| |H| |H| |H| |H| |H| |H| ] -------------- | | 902 [ |H| |H| |H| |H| |H| |H| |H| |H| ] <--- PoD top level | | 903 [ |H| |H| |H| |H| |H| |H| |H| |H| ] node (Spine) ---+ | | 904 [ |H| |H| |H| |H| |H| |H| |H| |H| ] -------------- | | | 905 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | | 906 | | | | | | | | | | | | | | | | -+ +- +-+ v | | 907 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | --| |--[ ]--| | 908 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | ----- | --| |--[ ]--| | 909 [ |H| |H| |H| |H| |H| |H| |H| |H| ] +--- PoD ---+ --| |--[ ]--| | 910 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | ----- | --| |--[ ]--| | 911 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | --| |--[ ]--| | 912 [ |H| |H| |H| |H| |H| |H| |H| |H| ] | | --| |--[ ]--| | 913 | | | | | | | | | | | | | | | | -+ +- +-+ | | 914 +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ 916 Figure 9: Fabric Spines and TOFs in Single Plane Design, 3 PoDs 918 The top view can be collapsed into a third dimension where the hidden 919 depth index is representing the PoD number. So we can show one PoD 920 as a class of PoDs and hence save one dimension in our 921 representation. The Spine Node expands in the depth and the vertical 922 dimensions whereas the PoD top level Nodes are constrained in 923 horizontal dimension. A port in the 2-D representation represents 924 effectively the class of all the ports at the same position in all 925 the PoDs that are projected in its position along the depth axis. 926 This is shown in Figure 10. 928 / / / / / / / / / / / / / / / / 929 / / / / / / / / / / / / / / / / 930 / / / / / / / / / / / / / / / / 931 / / / / / / / / / / / / / / / / ] 932 +-+ +-+ +-+ +-+ +-+ +-+ +-+ +-+ ]] 933 | | | | | | | | | | | | | | | | ] --------------------------- 934 [ |H| |H| |H| |H| |H| |H| |H| |H| ] <-- PoD top level node (Spine) 935 [ |H| |H| |H| |H| |H| |H| |H| |H| ] --------------------------- 936 [ |H| |H| |H| |H| |H| |H| |H| |H| ]]]] 937 [ |H| |H| |H| |H| |H| |H| |H| |H| ]]] ^^ 938 [ |H| |H| |H| |H| |H| |H| |H| |H| ]] // PoDs 939 [ |H| |H| |H| |H| |H| |H| |H| |H| ] // (in depth) 940 | |/| |/| |/| |/| |/| |/| |/| |/ // 941 +-+ +-+ +-+/+-+/+-+ +-+ +-+ +-+ // 942 ^ 943 | ---------------- 944 +----- Top-of-Fabric Node 945 ---------------- 947 Figure 10: Collapsed Northern View of a Fabric for Any Number of PoDs 949 This type of deployment introduces a "single plane limit" where the 950 bound is the available radix of the ToF nodes, which limits (P * 951 K_LEAF). Nevertheless, a distinct advantage of a connected or 952 unpartitioned Top-of-Fabric is that all failures can be resolved by 953 simple, non-transitive, positive disaggregation described in 954 Section 5.2.5.1 that propagates only within one level of the fabric. 955 In other words unpartitoned ToF nodes can always reach nodes below or 956 withdraw the routes from PoDs they cannot reach unambiguously. To be 957 more precise, all failures which still allow all the ToF nodes to see 958 each other via south reflection as explained in Section 5.2.5. 960 In order to scale beyond the "single plane limit", the Top-of-Fabric 961 can be partitioned by a number N of identically wired planes, N being 962 an integer divider of K_LEAF. The 1::1 ratio and the desired 963 symmetry are still served, this time with (K_TOP * N) ToF nodes, each 964 of (P * K_LEAF / N) ports. N=1 represents a non-partitioned Spine 965 and N=K_LEAF is a maximally partitioned Spine. Further, if R is any 966 divisor of K_LEAF, then (N=K_LEAF/R) is a feasible number of planes 967 and R a redundancy factor. If proves convenient for deployments to 968 use a radix for the leaf nodes that is a power of 2 so they can pick 969 a number of planes that is a lower power of 2. The example in 970 Figure 11 splits the Spine in 2 planes with a redundancy factor R=3, 971 meaning that there are 3 non-intersecting paths between any leaf node 972 and any ToF node. A ToF node must have in this case at least 3*P 973 ports, and be directly connected to 3 of the 6 PoD-ToP nodes (spines) 974 in each PoD. 976 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 977 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 978 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | 979 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 980 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 981 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | 982 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 983 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 984 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | 985 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 986 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 988 Plane 1 989 ----------- . ------------ . ------------ . ------------ . -------- 990 Plane 2 992 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 993 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 994 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | 995 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 996 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 997 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | 998 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 999 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 1000 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | 1001 +-| |--| |--| |--| |--| |--| |--| |--| |-+ 1002 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ 1003 ^ 1004 | 1005 | ---------------- 1006 +----- Top-of-Fabric node 1007 "across" depth 1008 ---------------- 1010 Figure 11: Northern View of a Multi-Plane ToF Level, K_LEAF=6, N=2 1012 At the extreme end of the spectrum, it is even possible to fully 1013 partition the spine with N = K_LEAF and R=1, while maintaining 1014 connectivity between each leaf node and each Top-of-Fabric node. In 1015 that case the ToF node connects to a single Port per PoD, so it 1016 appears as a single port in the projected view represented in 1017 Figure 12 and the number of ports required on the Spine Node is more 1018 or equal to P, the number of PoDs. 1020 Plane 1 1021 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ -+ 1022 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | 1023 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | 1024 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | 1025 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 1026 ----------- . ------------ . ------------ . ------------ . -------- | 1027 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 1028 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | 1029 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | 1030 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | 1031 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 1032 ----------- . ------------ . ------------ . ------------ . -------- | 1033 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 1034 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | 1035 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | 1036 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | 1037 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | 1038 ----------- . ------------ . ------------ . ------------ . -------- +<-+ 1039 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | | 1040 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | | 1041 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | | 1042 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | | 1043 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | | 1044 ----------- . ------------ . ------------ . ------------ . -------- | | 1045 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | | 1046 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | | 1047 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | | 1048 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | | 1049 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | | 1050 ----------- . ------------ . ------------ . ------------ . -------- | | 1051 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ | | 1052 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | | 1053 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | | 1054 +-| |--| |--| |--| |--| |--| |--| |--| |-+ | | 1055 +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+ -+ | 1056 Plane 6 ^ | 1057 | | 1058 | ---------------- -------------- | 1059 +----- ToF Node Class of PoDs ---+ 1060 ---------------- ------------- 1062 Figure 12: Northern View of a Maximally Partitioned ToF Level, R=1 1064 5.1.3. Fallen Leaf Problem 1066 As mentioned earlier, RIFT exhibits an anisotropic behavior tailored 1067 for fabrics with a North / South orientation and a high level of 1068 interleaving paths. A non-partitioned fabric makes a total loss of 1069 connectivity between a Top-of-Fabric node at the north and a leaf 1070 node at the south a very rare but yet possible occasion that is fully 1071 healed by positive disaggregation described in Section 5.2.5.1. In 1072 large fabrics or fabrics built from switches with low radix, the ToF 1073 ends often being partioned in planes which makes the occurrence of 1074 having a given leaf being only reachable from a subset of the ToF 1075 nodes more likely to happen. This makes some further considerations 1076 necessary. 1078 We define a "Fallen Leaf" as a leaf that can be reached by only a 1079 subset of Top-of-Fabric nodes but cannot be reached by all due to 1080 missing connectivity. If R is the redundancy factor, then it takes 1081 at least R breakages to reach a "Fallen Leaf" situation. 1083 In a maximally partitioned fabric, the redundancy factor R is 1, so 1084 any breakage may cause one or more fallen leaves, but not all cases 1085 require disaggregation. The following cases do not require 1086 particular action: 1088 If the breakage is a southern link from a leaf Node going down, 1089 then connectivity to any node attached to the link is lost. There 1090 is no need to disaggregate since the connectivity is lost for all 1091 spine nodes in a same fashion. 1093 If the breakage is a leaf Node going down, then connectivity 1094 through that leaf is lost for all nodes. There is no need to 1095 disaggregate since the connectivity is lost for all spine nodes in 1096 a same fashion. 1098 If the breakage is a ToF Node going down, then northern traffic is 1099 routed via alternate ToF nodes in the same plane and there is no 1100 need to disaggregate routes. 1102 In a general manner, the mechanism of non-transitive positive 1103 disaggregation is sufficient when the disaggregating ToF nodes 1104 collectively connect to all the ToP nodes in the broken plane. This 1105 happens in the following case: 1107 If the breakage is the last northern link from a ToP node to a ToF 1108 node going down, then the fallen leaf problem affects only The ToF 1109 node, and the connectivity to all the nodes in the PoD is lost 1110 from that ToF node. This can be observed by other ToF nodes 1111 within the plane where the ToP node is located and positively 1112 disaggregated within that plane. 1114 On the other hand, there is a need to disaggregate the routes to 1115 Fallen Leaves in a transitive fashion all the way to the other leaves 1116 in the following cases: 1118 If the breakage is the last northern link from a Leaf node within 1119 a plane - there is only one such link in a maximally partitioned 1120 fabric - that goes down, then connectivity to all unicast prefixes 1121 attached to the Leaf node is lost within the plane where the link 1122 is located. Southern Reflection by a Leaf Node - e.g., between 1123 ToP nodes if the PoD has only 2 levels - happens in between 1124 planes, allowing the ToP nodes to detect the problem within the 1125 PoD where it occurs and positively disaggregate. The breakage can 1126 be observed by the ToF nodes in the same plane through the 1127 flooding of N-TIEs from the ToP nodes, but the ToF nodes need to 1128 be aware of all the affected prefixes for the negative 1129 disaggregation to be fully effective. The problem can also be 1130 observed by the ToF nodes in the other planes through the flooding 1131 of N-TIEs from the affected Leaf nodes, together with non-node 1132 N-TIEs which indicate the affected prefixes. To be effective in 1133 that case, the positive disaggregation must reach down to the 1134 nodes that make the plane selection, which are typically the 1135 ingress Leaf nodes, and the information is not useful for routing 1136 in the intermediate levels. 1138 If the breakage is a ToP node in a maximally partitioned fabric - 1139 in which case it is the only ToP node serving that plane in that 1140 PoD - that goes down, then the connectivity to all the nodes in 1141 the PoD is lost within the plane where the ToP node is located - 1142 all leaves fall. Since the Southern Reflection between the ToF 1143 nodes happens only within a plane, ToF nodes in other planes 1144 cannot discover the case of fallen leaves in a different plane, 1145 and cannot determine beyond their local plane whether a Leaf node 1146 that was initially reachable has become unreachable. As above, 1147 the breakage can be observed by the ToF nodes in the plane where 1148 the breakage happened, and then again, the ToF nodes in the plane 1149 need to be aware of all the affected prefixes for the negative 1150 disaggregation to be fully effective. The problem can also be 1151 observed by the ToF nodes in the other planes through the flooding 1152 of N-TIEs from the affected Leaf nodes, if there are only 3 levels 1153 and the ToP nodes are directly connected to the Leaf nodes, and 1154 then again it can only be effective it is propagated transitively 1155 to the Leaf, and useless above that level. 1157 For the sake of readability let us roll the abstractions back to a 1158 simplest example and observe that in Figure 3 the loss of link Spine 1159 122 to Leaf 122 will make Leaf 122 a fallen leaf for Top-of-Fabric 1160 plane B. Worse, if the cabling was never present in first place, 1161 plane B will not even be able to know that such a fallen leaf exists. 1162 Hence partitioning without further treatment results in two grave 1163 problems: 1165 o Leaf111 trying to route to Leaf122 MUST choose Spine 111 in plane 1166 A as its next hop since plane B will inevitably blackhole the 1167 packet when forwarding using default routes or do excessive bow 1168 tie'ing, i.e. this information must be in its routing table. 1170 o any kind of "flooding" or distance vector trying to deal with the 1171 problem by distributing host routes will be able to converge only 1172 using paths through leafs, i.e. the flooding of information on 1173 Leaf122 will go up to Top-of-Fabric A and then "loopback" over 1174 other leafs to ToF B leading in extreme cases to traffic for 1175 Leaf122 when presented to plane B taking an "inverted fabric" path 1176 where leafs start to serve as TOFs. 1178 5.1.4. Discovering Fallen Leaves 1180 As we illustrate later and without further proof here, to deal with 1181 fallen leafs in multi-plane designs RIFT requires all the ToF nodes 1182 to share the same topology database. This happens naturally in 1183 single plane design but needs additional considerations in multi- 1184 plane fabrics. To satisfy this RIFT in multi-plane designs relies at 1185 the ToF Level on ring interconnection of switches in multiple planes. 1186 Other solutions are possible but they either need more cabling or end 1187 up having much longer flooding path or single points of failure. 1189 In more detail, by reserving two ports on each Top-of-Fabric node it 1190 is possible to connect them together in an interplane bi-directional 1191 ring as illustrated in Figure 13 (where we show a bi-directional ring 1192 connecting switches across planes). The rings will exchange full 1193 topology information between planes and with that allow consequently 1194 by the means of transitive, negative disaggregation described in 1195 Section 5.2.5.2 to efficiently fix any possible fallen leaf scenario. 1196 Somewhat as a side-effect, the exchange of information fulfills the 1197 requirement to present full view of the fabric topology at the Top- 1198 of-Fabric level without the need to collate it from multiple points 1199 by additional complexity of technologies like [RFC7752]. 1201 +----+ +----+ +----+ +----+ +----+ +----+ +--------+ 1202 | | | | | | | | | | | | | | 1203 | | | | | | | | 1204 +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ | 1205 +-| |--| |--| |--| |--| |--| |--| |-+ | 1206 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | Plane A 1207 +-| |--| |--| |--| |--| |--| |--| |-+ | 1208 +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ | 1209 | | | | | | | | 1210 +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ | 1211 +-| |--| |--| |--| |--| |--| |--| |-+ | 1212 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | Plane B 1213 +-| |--| |--| |--| |--| |--| |--| |-+ | 1214 +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ | 1215 | | | | | | | | 1216 ... | 1217 | | | | | | | | 1218 +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ | 1219 +-| |--| |--| |--| |--| |--| |--| |-+ | 1220 | | HH | | HH | | HH | | HH | | HH | | HH | | HH | | | Plane X 1221 +-| |--| |--| |--| |--| |--| |--| |-+ | 1222 +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ +-o--+ | 1223 | | | | | | | | 1224 | | | | | | | | | | | | | | 1225 +----+ +----+ +----+ +----+ +----+ +----+ +--------+ 1227 Figure 13: Connecting Top-of-Fabric Nodes Across Planes by Two Rings 1229 5.1.5. Addressing the Fallen Leaves Problem 1231 One consequence of the Fallen Leaf problem is that some prefixes 1232 attached to the fallen leaf become unreachable from some of the ToF 1233 nodes. RIFT proposes two methods to address this issue, the positive 1234 and the negative disaggregation. Both methods flood S-TIEs to 1235 advertise the impacted prefix(es). 1237 When used for the operation of disaggregation, a positive S-TIE, as 1238 usual, indicates reachability to a prefix of given length and all 1239 addresses subsumed by it. In contrast, a negative route 1240 advertisement indicates that the origin cannot route to the 1241 advertised prefix. 1243 The positive disaggregation is originated by a router that can still 1244 reach the advertised prefix, and the operation is not transitive, 1245 meaning that the receiver does not generate its own flooding south as 1246 a consequence of receiving positive disaggregation advertisements 1247 from an higher level node. The effect of a positive disaggregation 1248 is that the traffic to the impacted prefix will follow the prefix 1249 longest match and will be limited to the northbound routers that 1250 advertised the more specific route. 1252 In contrast, the negative disaggregation is transitive, and is 1253 propagated south when all the possible routes northwards are barred. 1254 A negative route advertisement is only actionable when the negative 1255 prefix is aggregated by a positive route advertisement for a shorter 1256 prefix. In that case, the negative advertisement carves an exception 1257 to the positive route in the routing table (one could think of 1258 "punching a hole"), making the positive prefix reachable through the 1259 originator with the special consideration of the negative prefix 1260 removing certain next hop neighbors. 1262 When the ToF is not partitioned, the collective southern flooding of 1263 the positive disaggregation by the ToF nodes that can still reach the 1264 impacted prefix is in general enough to cover all the switches at the 1265 next level south, typically the ToP nodes. If all those switches are 1266 aware of the disaggregation, they collectively create a ceiling that 1267 intercepts all the traffic north and forwards it to the ToF nodes 1268 that advertised the more specific route. In that case, the positive 1269 disaggregation alone is sufficient to solve the fallen leaf problem. 1271 On the other hand, when the fabric is partitioned in planes, the 1272 positive disaggregation from ToF nodes in different planes do not 1273 reach the ToP switches in the affected plane and cannot solve the 1274 fallen leaves problem. In other words, a breakage in a plane can 1275 only be solved in that plane. Also, the selection of the plane for a 1276 packet typically occurs at the leaf level and the disaggregation must 1277 be transitive and reach all the leaves. In that case, the negative 1278 disaggregation is necessary. The details on the RIFT approach to 1279 deal with fallen leafs in an optimal way is specified in 1280 Section 5.2.5.2. 1282 5.2. Specification 1284 5.2.1. Transport 1286 All packet formats are defined in Thrift models in Appendix A. 1288 The serialized model is carried in an envelope within a UDP frame 1289 that provides security and allows validation/modification of several 1290 important fields without de-serialization for performance and 1291 security reasons. 1293 The ultimate transport envelope, especially the placement of nonces 1294 is under active discussion. 1296 +--------+----------+-------------+----------+-------------+---------+----------------+ 1297 | UDP | TIE | Fingerprint | Key ID | Security | Model | Serialized | 1298 | Header | Lifetime | Type | | Fingerprint | Version | RIFT Model ... | 1299 | | | (e.g. SHA) | | | | Object | 1300 +--------+----------+-------------+----------+-------------+---------+----------------+ 1302 Figure 14: Security Envelope 1304 5.2.2. Link (Neighbor) Discovery (LIE Exchange) 1306 LIE exchange happens over well-known administratively locally scoped 1307 and configured or otherwise well-known IPv4 multicast address 1308 [RFC2365] or link-local multicast scope [RFC4291] for IPv6 [RFC8200] 1309 using a configured or otherwise a well-known destination UDP port 1310 defined in Appendix C.1. LIEs SHOULD be sent with a TTL of 1 to 1311 prevent RIFT information reaching beyond a single L3 next-hop in the 1312 topology. LIEs SHOULD be sent with network control precedence. 1313 Originating port of the LIE has no further significance other than 1314 identifying the origination point. LIEs are exchanged over all links 1315 running RIFT. An implementation MAY listen and send LIEs on IPv4 1316 and/or IPv6 multicast addresses. LIEs on same link are considered 1317 part of the same negotiation independent on the address family they 1318 arrive on. Observe further that the LIE source address may not 1319 identify the peer uniquely in unnumbered or link-local address cases 1320 so the response transmission MUST occur over the same interface the 1321 LIEs have been received on. A node CAN use any of the adjacency's 1322 source addresses it saw in LIEs on the specific interface during 1323 adjacency formation to send TIEs. That implies that an 1324 implementation MUST be ready to accept TIEs on all addresses it used 1325 as source of LIE frames. 1327 Observe further that the protocol does NOT support selective 1328 disabling of address families or any local address changes in three 1329 way state, i.e. if a link has entered three way IPv4 and/or IPv6 with 1330 a neighbor on an adjacency and it wants to stop supporting one of the 1331 families or change any of its local addresses, it has to tear down 1332 and rebuild the adjacency. It also has to remove any information it 1333 stored about adjacency's' LIE source addresses seen. 1335 All RIFT routers MUST support IPv4 forwarding and MAY support IPv6 1336 forwarding. A three way adjacency over IPv6 addresses implies 1337 support for IPv4 forwarding. 1339 Unless Section 5.2.7 is used, each node is provisioned with the level 1340 at which it is operating and its PoD (or otherwise a default level 1341 and "undefined" PoD are assumed; meaning that leafs do not need to be 1342 configured at all if initial configuration values are all left at 0). 1343 Nodes in the spine are configured with "any" PoD which has the same 1344 value "undefined" PoD hence we will talk about "undefined/any" PoD. 1345 This information is propagated in the LIEs exchanged. 1347 Further definitions of leaf flags are found in Section 5.2.7 given 1348 they have implications in terms of level and adjacency forming here. 1350 A node tries to form a three way adjacency if and only if 1352 1. the node is in the same PoD or either the node or the neighbor 1353 advertises "undefined/any" PoD membership (PoD# = 0) AND 1355 2. the neighboring node is running the same MAJOR schema version AND 1357 3. the neighbor is not member of some PoD while the node has a 1358 northbound adjacency already joining another PoD AND 1360 4. the neighboring node uses a valid System ID AND 1362 5. the neighboring node uses a different System ID than the node 1363 itself 1365 6. the advertised MTUs match on both sides AND 1367 7. both nodes advertise defined level values AND 1369 8. [ 1371 i) the node is at level 0 and has no three way adjacencies 1372 already to HAT nodes with level different than the adjacent 1373 node OR 1375 ii) the node is not at level 0 and the neighboring node is at 1376 level 0 OR 1378 iii) both nodes are at level 0 AND both indicate support for 1379 Section 5.3.9 OR 1381 iv) neither node is at level 0 and the neighboring node is at 1382 most one level away 1384 ]. 1386 The rule in Paragraph 3 MAY be optionally disregarded by a node if 1387 PoD detection is undesirable or has to be disregarded. 1389 A node configured with "undefined" PoD membership MUST, after 1390 building first northbound three way adjacencies to a node being in a 1391 defined PoD, advertise that PoD as part of its LIEs. In case that 1392 adjacency is lost, from all available northbound three way 1393 adjacencies the node with the highest System ID and defined PoD is 1394 chosen. That way the northmost defined PoD value (normally the top 1395 spines in a PoD) can diffuse southbound towards the leafs "forcing" 1396 the PoD value on any node with "undefined" PoD. 1398 LIEs arriving with a TTL larger than 1 MUST be ignored. 1400 A node SHOULD NOT send out LIEs without defined level in the header 1401 but in certain scenarios it may be beneficial for trouble-shooting 1402 purposes. 1404 LIE exchange uses three way handshake mechanism which is a cleaned up 1405 version of [RFC5303]. Observe that for easier comprehension the 1406 terminology of one/two and three-way states does NOT align with OSPF 1407 or ISIS FSMs albeit they use roughly same mechanisms. LIE packets 1408 reflect nonces and may contain an SHA-1 [RFC6234] over nonces and 1409 some of the LIE data which prevents corruption and replay attacks. 1411 5.2.3. Topology Exchange (TIE Exchange) 1413 5.2.3.1. Topology Information Elements 1415 Topology and reachability information in RIFT is conveyed by the 1416 means of TIEs which have good amount of commonalities with LSAs in 1417 OSPF. 1419 The TIE exchange mechanism uses the port indicated by each node in 1420 the LIE exchange and the interface on which the adjacency has been 1421 formed as destination. It SHOULD use TTL of 1 as well. 1423 TIEs contain sequence numbers, lifetimes and a type. Each type has a 1424 large identifying number space and information is spread across 1425 possibly many TIEs of a certain type by the means of a hash function 1426 that a node or deployment can individually determine. One extreme 1427 design choice is a prefix per TIE which leads to more BGP-like 1428 behavior where small increments are only advertised on route changes 1429 vs. deploying with dense prefix packing into few TIEs leading to more 1430 traditional IGP trade-off with fewer TIEs. An implementation may 1431 even rehash prefix to TIE mapping at any time at the cost of 1432 significant amount of re-advertisements of TIEs. 1434 More information about the TIE structure can be found in the schema 1435 in Appendix A. 1437 5.2.3.2. South- and Northbound Representation 1439 A central concept of RIFT is that each node represents itself 1440 differently depending on the direction in which it is advertising 1441 information. More precisely, a spine node represents two different 1442 databases over its adjacencies depending whether it advertises TIEs 1443 to the north or to the south/sideways. We call those differing TIE 1444 databases either south- or northbound (S-TIEs and N-TIEs) depending 1445 on the direction of distribution. 1447 The N-TIEs hold all of the node's adjacencies and local prefixes 1448 while the S-TIEs hold only all of the node's adjacencies, the default 1449 prefix with necessary disaggregated prefixes and local prefixes. We 1450 will explain this in detail further in Section 5.2.5. 1452 The TIE types are symmetric in both directions and Table 2 provides a 1453 quick reference to main TIE types including direction and their 1454 function. 1456 +----------+--------------------------------------------------------+ 1457 | TIE-Type | Content | 1458 +----------+--------------------------------------------------------+ 1459 | node | node properties, adjacencies and information helping | 1460 | N-TIE | in complex disaggregation scenarios | 1461 +----------+--------------------------------------------------------+ 1462 | node | same content as node N-TIE except the information to | 1463 | S-TIE | help disaggregation | 1464 +----------+--------------------------------------------------------+ 1465 | Prefix | contains nodes' directly reachable prefixes | 1466 | N-TIE | | 1467 +----------+--------------------------------------------------------+ 1468 | Prefix | contains originated defaults and de-aggregated | 1469 | S-TIE | prefixes | 1470 +----------+--------------------------------------------------------+ 1471 | KV | contains nodes northbound KVs | 1472 | N-TIE | | 1473 +----------+--------------------------------------------------------+ 1474 | KV | contains nodes southbound KVs | 1475 | S-TIE | | 1476 +----------+--------------------------------------------------------+ 1478 Table 2: TIE Types 1480 As an example illustrating a databases holding both representations, 1481 consider the topology in Figure 2 with the optional link between 1482 spine 111 and spine 112 (so that the flooding on an East-West link 1483 can be shown). This example assumes unnumbered interfaces. First, 1484 here are the TIEs generated by some nodes. For simplicity, the key 1485 value elements which may be included in their S-TIEs or N-TIEs are 1486 not shown. 1488 Spine21 S-TIEs: 1489 Node S-TIE: 1490 NodeElement(level=2, neighbors((Spine 111, level 1, cost 1), 1491 (Spine 112, level 1, cost 1), (Spine 121, level 1, cost 1), 1492 (Spine 122, level 1, cost 1))) 1493 Prefix S-TIE: 1494 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1)) 1496 Spine 111 S-TIEs: 1497 Node S-TIE: 1498 NodeElement(level=1, neighbors((Spine21, level 2, cost 1, links(...)), 1499 (Spine22, level 2, cost 1, links(...)), 1500 (Spine 112, level 1, cost 1, links(...)), 1501 (Leaf111, level 0, cost 1, links(...)), 1502 (Leaf112, level 0, cost 1, links(...)))) 1503 Prefix S-TIE: 1504 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1)) 1506 Spine 111 N-TIEs: 1507 Node N-TIE: 1508 NodeElement(level=1, 1509 neighbors((Spine21, level 2, cost 1, links(...)), 1510 (Spine22, level 2, cost 1, links(...)), 1511 (Spine 112, level 1, cost 1, links(...)), 1512 (Leaf111, level 0, cost 1, links(...)), 1513 (Leaf112, level 0, cost 1, links(...)))) 1514 Prefix N-TIE: 1515 NorthPrefixesElement(prefixes(Spine 111.loopback) 1517 Spine 121 S-TIEs: 1518 Node S-TIE: 1519 NodeElement(level=1, neighbors((Spine21,level 2,cost 1), 1520 (Spine22, level 2, cost 1), (Leaf121, level 0, cost 1), 1521 (Leaf122, level 0, cost 1))) 1522 Prefix S-TIE: 1523 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1)) 1525 Spine 121 N-TIEs: 1526 Node N-TIE: 1527 NodeElement(level=1, 1528 neighbors((Spine21, level 2, cost 1, links(...)), 1529 (Spine22, level 2, cost 1, links(...)), 1530 (Leaf121, level 0, cost 1, links(...)), 1531 (Leaf122, level 0, cost 1, links(...)))) 1532 Prefix N-TIE: 1533 NorthPrefixesElement(prefixes(Spine 121.loopback) 1535 Leaf112 N-TIEs: 1536 Node N-TIE: 1537 NodeElement(level=0, 1538 neighbors((Spine 111, level 1, cost 1, links(...)), 1539 (Spine 112, level 1, cost 1, links(...)))) 1540 Prefix N-TIE: 1541 NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112, 1542 Prefix_MH)) 1544 Figure 15: example TIES generated in a 2 level spine-and-leaf 1545 topology 1547 5.2.3.3. Flooding 1549 The mechanism used to distribute TIEs is the well-known (albeit 1550 modified in several respects to address fat tree requirements) 1551 flooding mechanism used by today's link-state protocols. Although 1552 flooding is initially more demanding to implement it avoids many 1553 problems with update style used in diffused computation such as path 1554 vector protocols. Since flooding tends to present an unscalable 1555 burden in large, densely meshed topologies (fat trees being 1556 unfortunately such a topology) we provide as solution a close to 1557 optimal global flood reduction and load balancing optimization in 1558 Section 5.2.3.8. 1560 As described before, TIEs themselves are transported over UDP with 1561 the ports indicated in the LIE exchanges and using the destination 1562 address on which the LIE adjacency has been formed. For unnumbered 1563 IPv4 interfaces same considerations apply as in equivalent OSPF case. 1565 On reception of a TIE with an undefined level value in the packet 1566 header the node SHOULD issue a warning and indiscriminately discard 1567 the packet. 1569 Precise finite state machines and procedures can be found in 1570 Appendix B.3. 1572 5.2.3.4. TIE Flooding Scopes 1574 In a somewhat analogous fashion to link-local, area and domain 1575 flooding scopes, RIFT defines several complex "flooding scopes" 1576 depending on the direction and type of TIE propagated. 1578 Every N-TIE is flooded northbound, providing a node at a given level 1579 with the complete topology of the Clos or Fat Tree network underneath 1580 it, including all specific prefixes. This means that a packet 1581 received from a node at the same or lower level whose destination is 1582 covered by one of those specific prefixes may be routed directly 1583 towards the node advertising that prefix rather than sending the 1584 packet to a node at a higher level. 1586 A node's Node S-TIEs, consisting of all node's adjacencies and prefix 1587 S-TIEs limited to those related to default IP prefix and 1588 disaggregated prefixes, are flooded southbound in order to allow the 1589 nodes one level down to see connectivity of the higher level as well 1590 as reachability to the rest of the fabric. In order to allow an E-W 1591 disconnected node in a given level to receive the S-TIEs of other 1592 nodes at its level, every *NODE* S-TIE is "reflected" northbound to 1593 level from which it was received. It should be noted that East-West 1594 links are included in South TIE flooding; those TIEs need to be 1595 flooded to satisfy algorithms in Section 5.2.4. In that way nodes at 1596 same level can learn about each other without a lower level, e.g. in 1597 case of leaf level. The precise flooding scopes are given in 1598 Table 3. Those rules govern as well what SHOULD be included in TIDEs 1599 on the adjacency. East-West flooding scopes are identical to South 1600 flooding scopes except in case of ToF East-West links (rings). 1602 Node S-TIE "south reflection" allows to support positive 1603 disaggregation on failures describes in Section 5.2.5 and flooding 1604 reduction in Section 5.2.3.8. 1606 +-----------+---------------------+---------------+-----------------+ 1607 | Type / | South | North | East-West | 1608 | Direction | | | | 1609 +-----------+---------------------+---------------+-----------------+ 1610 | node | flood if level of | flood if | flood only if | 1611 | S-TIE | originator is equal | level of | this node is | 1612 | | to this node | originator is | not ToF | 1613 | | | higher than | | 1614 | | | this node | | 1615 +-----------+---------------------+---------------+-----------------+ 1616 | non-node | flood self- | flood only if | flood only if | 1617 | S-TIE | originated only | neighbor is | self-originated | 1618 | | | originator of | and this node | 1619 | | | TIE | is not ToF | 1620 +-----------+---------------------+---------------+-----------------+ 1621 | all | never flood | flood always | flood only if | 1622 | N-TIEs | | | this node is | 1623 | | | | ToF | 1624 +-----------+---------------------+---------------+-----------------+ 1625 | TIDE | include at least | include at | if this node is | 1626 | | all non-self | least all | ToF then | 1627 | | originated N-TIE | node S-TIEs | include all | 1628 | | headers and self- | and all | N-TIEs, | 1629 | | originated S-TIE | S-TIEs | otherwise only | 1630 | | headers and node | originated by | self-originated | 1631 | | S-TIEs of nodes at | peer and all | TIEs | 1632 | | same level | N-TIEs | | 1633 +-----------+---------------------+---------------+-----------------+ 1634 | TIRE as | request all N-TIEs | request all | if this node is | 1635 | Request | and all peer's | S-TIEs | ToF then apply | 1636 | | self-originated | | North scope | 1637 | | TIEs and all node | | rules, | 1638 | | S-TIEs | | otherwise South | 1639 | | | | scope rules | 1640 +-----------+---------------------+---------------+-----------------+ 1641 | TIRE as | Ack all received | Ack all | Ack all | 1642 | Ack | TIEs | received TIEs | received TIEs | 1643 +-----------+---------------------+---------------+-----------------+ 1645 Table 3: Flooding Scopes 1647 If the TIDE includes additional TIE headers beside the ones 1648 specified, the receiving neighbor must apply according filter to the 1649 received TIDE strictly and MUST NOT request the extra TIE headers 1650 that were not allowed by the flooding scope rules in its direction. 1652 As an example to illustrate these rules, consider using the topology 1653 in Figure 2, with the optional link between spine 111 and spine 112, 1654 and the associated TIEs given in Figure 15. The flooding from 1655 particular nodes of the TIEs is given in Table 4. 1657 +-------------+----------+------------------------------------------+ 1658 | Router | Neighbor | TIEs | 1659 | floods to | | | 1660 +-------------+----------+------------------------------------------+ 1661 | Leaf111 | Spine | Leaf111 N-TIEs, Spine 111 node S-TIE | 1662 | | 112 | | 1663 | Leaf111 | Spine | Leaf111 N-TIEs, Spine 112 node S-TIE | 1664 | | 111 | | 1665 | | | | 1666 | Spine 111 | Leaf111 | Spine 111 S-TIEs | 1667 | Spine 111 | Leaf112 | Spine 111 S-TIEs | 1668 | Spine 111 | Spine | Spine 111 S-TIEs | 1669 | | 112 | | 1670 | Spine 111 | Spine21 | Spine 111 N-TIEs, Leaf111 N-TIEs, | 1671 | | | Leaf112 N-TIEs, Spine22 node S-TIE | 1672 | Spine 111 | Spine22 | Spine 111 N-TIEs, Leaf111 N-TIEs, | 1673 | | | Leaf112 N-TIEs, Spine21 node S-TIE | 1674 | | | | 1675 | ... | ... | ... | 1676 | Spine21 | Spine | Spine21 S-TIEs | 1677 | | 111 | | 1678 | Spine21 | Spine | Spine21 S-TIEs | 1679 | | 112 | | 1680 | Spine21 | Spine | Spine21 S-TIEs | 1681 | | 121 | | 1682 | Spine21 | Spine | Spine21 S-TIEs | 1683 | | 122 | | 1684 | ... | ... | ... | 1685 +-------------+----------+------------------------------------------+ 1687 Table 4: Flooding some TIEs from example topology 1689 5.2.3.5. Initial and Periodic Database Synchronization 1691 The initial exchange of RIFT is modeled after ISIS with TIDE being 1692 equivalent to CSNP and TIRE playing the role of PSNP. The content of 1693 TIDEs and TIREs is governed by Table 3. 1695 5.2.3.6. Purging 1697 RIFT does not purge information that has been distributed by the 1698 protocol. Purging mechanisms in other routing protocols have proven 1699 to be complex and fragile over many years of experience. Abundant 1700 amounts of memory are available today even on low-end platforms. The 1701 information will age out and all computations will deliver correct 1702 results if a node leaves the network due to the new information 1703 distributed by its adjacent nodes. 1705 Once a RIFT node issues a TIE with an ID, it MUST preserve the ID as 1706 long as feasible (also when the protocol restarts), even if the TIE 1707 looses all content. The re-advertisement of empty TIE fulfills the 1708 purpose of purging any information advertised in previous versions. 1709 The originator is free to not re-originate the according empty TIE 1710 again or originate an empty TIE with relatively short lifetime to 1711 prevent large number of long-lived empty stubs polluting the network. 1712 Each node MUST timeout and clean up the according empty TIEs 1713 independently. 1715 Upon restart a node MUST, as any link-state implementation, be 1716 prepared to receive TIEs with its own system ID and supercede them 1717 with equivalent, newly generated, empty TIEs with a higher sequence 1718 number. As above, the lifetime can be relatively short since it only 1719 needs to exceed the necessary propagation and processing delay by all 1720 the nodes that are within the TIE's flooding scope. 1722 5.2.3.7. Southbound Default Route Origination 1724 Under certain conditions nodes issue a default route in their South 1725 Prefix TIEs with costs as computed in Section 5.3.6.1. 1727 A node X that 1729 1. is NOT overloaded AND 1731 2. has southbound or East-West adjacencies 1733 originates in its south prefix TIE such a default route IIF 1735 1. all other nodes at X's' level are overloaded OR 1737 2. all other nodes at X's' level have NO northbound adjacencies OR 1739 3. X has computed reachability to a default route during N-SPF. 1741 The term "all other nodes at X's' level" describes obviously just the 1742 nodes at the same level in the PoD with a viable lower level 1743 (otherwise the node S-TIEs cannot be reflected and the nodes in e.g. 1744 PoD 1 and PoD 2 are "invisible" to each other). 1746 A node originating a southbound default route MUST install a default 1747 discard route if it did not compute a default route during N-SPF. 1749 5.2.3.8. Northbound TIE Flooding Reduction 1751 Section 1.4 of the Optimized Link State Routing Protocol [RFC3626] 1752 (OLSR) introduces the concept of a "multipoint relay" (MPR) that 1753 minimize the overhead of flooding messages in the network by reducing 1754 redundant retransmissions in the same region. 1756 A similar technique is applied to RIFT to control northbound 1757 flooding. Important observations first: 1759 1. a node MUST flood self-originated N-TIE to all the reachable 1760 nodes at the level above which we call the node's "parents"; 1762 2. it is typically not necessary that all parents reflood the N-TIEs 1763 to achieve a complete flooding of all the reachable nodes two 1764 levels above which we choose to call the node's "grandparents"; 1766 3. to control the volume of its flooding two hops North and yet keep 1767 it robust enough, it is advantageous for a node to select a 1768 subset of its parents as "Flood Repeaters" (FRs), which combined 1769 together deliver two or more copies of its flooding to all of its 1770 parents, i.e. the originating node's grandparents; 1772 4. nodes at the same level do NOT have to agree on a specific 1773 algorithm to select the FRs, but overall load balancing should be 1774 achieved so that different nodes at the same level should tend to 1775 select different parents as FRs; 1777 5. there are usually many solutions to the problem of finding a set 1778 of FRs for a given node; the problem of finding the minimal set 1779 is (similar to) a NP-Complete problem and a globally optimal set 1780 may not be the minimal one if load-balancing with other nodes is 1781 an important consideration; 1783 6. it is expected that there will be often sets of equivalent nodes 1784 at a level L, defined as having a common set of parents at L+1. 1785 Applying this observation at both L and L+1, an algorithm may 1786 attempt to split the larger problem in a sum of smaller separate 1787 problems; 1789 7. it is another expectation that there will be from time to time a 1790 broken link between a parent and a grandparent, and in that case 1791 the parent is probably a poor FR due to its lower reliability. 1792 An algorithm may attempt to eliminate parents with broken 1793 northbound adjacencies first in order to reduce the number of 1794 FRs. Albeit it could be argued that relying on higher fanout FRs 1795 will slow flooding due to higher replication load reliability of 1796 FR's links seems to be a more pressing concern. 1798 In a fully connected Clos Network, this means that a node selects one 1799 arbitrary parent as FR and then a second one for redundancy. The 1800 computation can be kept relatively simple and completely distributed 1801 without any need for synchronization amongst nodes. In a "PoD" 1802 structure, where the Level L+2 is partitioned in silos of equivalent 1803 grandparents that are only reachable from respective parents, this 1804 means treating each silo as a fully connected Clos Network and solve 1805 the problem within the silo. 1807 In terms of signaling, a node has enough information to select its 1808 set of FRs; this information is derived from the node's parents' Node 1809 S-TIEs, which indicate the parent's reachable northbound adjacencies 1810 to its own parents, i.e. the node's grandparents. An optional 1811 boolean information `you_are_flood_repeater` in a LIE packet to a 1812 parent is set to indicate whether the parent lost its flood repeater 1813 leadership and with that SHOULD NOT reflood N-TIEs. 1815 This specification proposes a simple default algorithm that SHOULD be 1816 implemented and used by default on every RIFT node. 1818 o let |NA(Node) be the set of Northbound adjacencies of node Node 1819 and CN(Node) be the cardinality of |NA(Node); 1821 o let |SA(Node) be the set of Southbound adjacencies of node Node 1822 and CS(Node) be the cardinality of |SA(Node); 1824 o let |P(Node) be the set of node Node's parents; 1826 o let |G(Node) be the set of node Node's grandparents. Observe 1827 that |G(Node) = |P(|P(Node)); 1829 o let N be the child node at level L computing a set of FR; 1831 o let P be a node at level L+1 and a parent node of N, i.e. bi- 1832 directionally reachable over adjacency A(N, P); 1834 o let G be a grandparent node of N, reachable transitively via a 1835 parent P over adjacencies ADJ(N, P) and ADJ(P, G). Observe that N 1836 does not have enough information to check bidirectional 1837 reachability of A(P, G); 1839 o let R be a redundancy constant integer; a value of 2 or higher for 1840 R is RECOMMENDED; 1842 o let S be a similarity constant integer; a value in range 0 .. 2 1843 for S is RECOMMENDED, the value of 1 SHOULD be used. Two 1844 cardinalities are considered as equivalent if their absolute 1845 difference is less than or equal to S, i.e. |a-b|<=S. 1847 o let RND be a 64-bit random number generated by the system once on 1848 startup. 1850 The algorithm consists of the following steps: 1852 1. derive a 64-bits number by XOR'ing 'N's system ID with RND and 1853 then 1855 2. derive a 16-bits pseudo-random unsigned integer PR(N) from the 1856 resulting 64-bits number by splitting it in 16-bits-long words 1857 W1, W2, ..., Wn and then XOR'ing the circularly shifted resulting 1858 words together, and casting the resulting representation: 1860 1. (unsigned integer) (W1<<1 xor (W2<<2) xor ... xor (Wn< 0 do 1902 1. for i from C_k(N)-1 to 1 decrementing by 1 do 1904 1. set j to PR(N) modulo i; 1906 2. exchange |A_k[j] and |A_k[i]; 1908 2. set k=k-1; 1910 6. for each grandparent, initialize a counter with the number of its 1911 Southbound adjacencies : 1913 1. for each G in |G(N) set c(G) = CS(G); 1915 and 1917 7. finally keep as FRs only parents that are needed to maintain the 1918 number of adjacencies between the FRs and any grandparent G equal 1919 or above the redundancy constant R: 1921 1. for each P in reshuffled |A(N); 1923 1. if there exists an adjacency ADJ(P, G) in |NA(P) such 1924 that c(G) <= R then 1926 1. place P in FR set; 1928 2. else 1930 1. for all adjacencies ADJ(P, G) in |NA(P) 1932 1. decrement c(G); 1934 The algorithm MUST be re-evaluated by a node on every change of local 1935 adjacencies or reception of a parent S-TIE with changed adjacencies. 1936 A node MAY apply a hysteresis to prevent excessive amount of 1937 computation during periods of network instability just like in case 1938 of reachability computation. 1940 A node SHOULD send out LIEs that grant leadership before LIEs that 1941 revoke it on leadership changes to prevent transient behavior where 1942 the full coverage of grand parents is not guaranteed. Albeit the 1943 condition will correct in positively stable manner due to LIE 1944 retransmission and periodic TIDEs, it can slow down flooding 1945 convergence on leadership changes. 1947 5.2.4. Reachability Computation 1949 A node has three sources of relevant information. A node knows the 1950 full topology south from the received N-TIEs. A node has the set of 1951 prefixes with associated distances and bandwidths from received 1952 S-TIEs. 1954 To compute reachability, a node runs conceptually a northbound and a 1955 southbound SPF. We call that N-SPF and S-SPF. 1957 Since neither computation can "loop", it is possible to compute non- 1958 equal-cost or even k-shortest paths [EPPSTEIN] and "saturate" the 1959 fabric to the extent desired. 1961 5.2.4.1. Northbound SPF 1963 N-SPF uses northbound and East-West adjacencies in the computing 1964 node's node N-TIEs (since if the node is a leaf it may not have 1965 generated a node S-TIE) when starting Dijkstra. Observe that N-SPF 1966 is really just a one hop variety since Node S-TIEs are not re-flooded 1967 southbound beyond a single level (or East-West) and with that the 1968 computation cannot progress beyond adjacent nodes. 1970 Once progressing, we are using the next level's node S-TIEs to find 1971 according adjacencies to verify backlink connectivity. Just as in 1972 case of IS-IS or OSPF, two unidirectional links are associated 1973 together to confirm bidirectional connectivity. 1975 Default route found when crossing an E-W link is used IIF 1977 1. the node itself does NOT have any northbound adjacencies AND 1979 2. the adjacent node has one or more northbound adjacencies 1981 This rule forms a "one-hop default route split-horizon" and prevents 1982 looping over default routes while allowing for "one-hop protection" 1983 of nodes that lost all northbound adjacencies except at Top-of-Fabric 1984 where the links are used exclusively to flood topology information in 1985 multi-plane designs. 1987 Other south prefixes found when crossing E-W link MAY be used IIF 1989 1. no north neighbors are advertising same or supersuming non- 1990 default prefix AND 1992 2. the node does not originate a non-default supersuming prefix 1993 itself. 1995 i.e. the E-W link can be used as the gateway of last resort for a 1996 specific prefix only. Using south prefixes across E-W link can be 1997 beneficial e.g. on automatic de-aggregation in pathological fabric 1998 partitioning scenarios. 2000 A detailed example can be found in Section 6.4. 2002 5.2.4.2. Southbound SPF 2004 S-SPF uses only the southbound adjacencies in the node S-TIEs, i.e. 2005 progresses towards nodes at lower levels. Observe that E-W 2006 adjacencies are NEVER used in the computation. This enforces the 2007 requirement that a packet traversing in a southbound direction must 2008 never change its direction. 2010 S-SPF uses northbound adjacencies in node N-TIEs to verify backlink 2011 connectivity. 2013 5.2.4.3. East-West Forwarding Within a Level 2015 Ultimately, it should be observed that in presence of a "ring" of E-W 2016 links in a level neither SPF will provide a "ring protection" scheme 2017 since such a computation would have to deal necessarily with breaking 2018 of "loops" in generic Dijkstra sense; an application for which RIFT 2019 is not intended. It is outside the scope of this document how an 2020 underlay can be used to provide a full-mesh connectivity between 2021 nodes in the same level that would allow for N-SPF to provide 2022 protection for a single node loosing all its northbound adjacencies 2023 (as long as any of the other nodes in the level are northbound 2024 connected). 2026 Using south prefixes over horizontal links is optional and can 2027 protect against pathological fabric partitioning cases that leave 2028 only paths to destinations that would necessitate multiple changes of 2029 forwarding direction between north and south. 2031 5.2.5. Automatic Disaggregation on Link & Node Failures 2033 5.2.5.1. Positive, Non-transitive Disaggregation 2035 Under normal circumstances, node's S-TIEs contain just the 2036 adjacencies and a default route. However, if a node detects that its 2037 default IP prefix covers one or more prefixes that are reachable 2038 through it but not through one or more other nodes at the same level, 2039 then it MUST explicitly advertise those prefixes in an S-TIE. 2041 Otherwise, some percentage of the northbound traffic for those 2042 prefixes would be sent to nodes without according reachability, 2043 causing it to be black-holed. Even when not black-holing, the 2044 resulting forwarding could 'backhaul' packets through the higher 2045 level spines, clearly an undesirable condition affecting the blocking 2046 probabilities of the fabric. 2048 We refer to the process of advertising additional prefixes southbound 2049 as 'positive de-aggregation' or 'positive dis-aggregation'. 2051 A node determines the set of prefixes needing de-aggregation using 2052 the following steps: 2054 1. A DAG computation in the southern direction is performed first, 2055 i.e. the N-TIEs are used to find all of prefixes it can reach and 2056 the set of next-hops in the lower level for each of them. Such a 2057 computation can be easily performed on a fat tree by e.g. setting 2058 all link costs in the southern direction to 1 and all northern 2059 directions to infinity. We term set of those prefixes |R, and 2060 for each prefix, r, in |R, we define its set of next-hops to 2061 be |H(r). 2063 2. The node uses reflected S-TIEs to find all nodes at the same 2064 level in the same PoD and the set of southbound adjacencies for 2065 each. The set of nodes at the same level is termed |N and for 2066 each node, n, in |N, we define its set of southbound adjacencies 2067 to be |A(n). 2069 3. For a given r, if the intersection of |H(r) and |A(n), for any n, 2070 is null then that prefix r must be explicitly advertised by the 2071 node in an S-TIE. 2073 4. Identical set of de-aggregated prefixes is flooded on each of the 2074 node's southbound adjacencies. In accordance with the normal 2075 flooding rules for an S-TIE, a node at the lower level that 2076 receives this S-TIE will not propagate it south-bound. Neither 2077 is it necessary for the receiving node to reflect the 2078 disaggregated prefixes back over its adjacencies to nodes at the 2079 level from which it was received. 2081 To summarize the above in simplest terms: if a node detects that its 2082 default route encompasses prefixes for which one of the other nodes 2083 in its level has no possible next-hops in the level below, it has to 2084 disaggregate it to prevent black-holing or suboptimal routing through 2085 such nodes. Hence a node X needs to determine if it can reach a 2086 different set of south neighbors than other nodes at the same level, 2087 which are connected to it via at least one common south neighbor. If 2088 it can, then prefix disaggregation may be required. If it can't, 2089 then no prefix disaggregation is needed. An example of 2090 disaggregation is provided in Section 6.3. 2092 A possible algorithm is described last: 2094 1. Create partial_neighbors = (empty), a set of neighbors with 2095 partial connectivity to the node X's level from X's perspective. 2096 Each entry is a list of south neighbor of X and a list of nodes 2097 of X.level that can't reach that neighbor. 2099 2. A node X determines its set of southbound neighbors 2100 X.south_neighbors. 2102 3. For each S-TIE originated from a node Y that X has which is at 2103 X.level, if Y.south_neighbors is not the same as 2104 X.south_neighbors but the nodes share at least one southern 2105 neighbor, for each neighbor N in X.south_neighbors but not in 2106 Y.south_neighbors, add (N, (Y)) to partial_neighbors if N isn't 2107 there or add Y to the list for N. 2109 4. If partial_neighbors is empty, then node X does not to 2110 disaggregate any prefixes. If node X is advertising 2111 disaggregated prefixes in its S-TIE, X SHOULD remove them and re- 2112 advertise its according S-TIEs. 2114 A node X computes reachability to all nodes below it based upon the 2115 received N-TIEs first. This results in a set of routes, each 2116 categorized by (prefix, path_distance, next-hop-set). Alternately, 2117 for clarity in the following procedure, these can be organized by 2118 next-hop-set as ( (next-hops), {(prefix, path_distance)}). If 2119 partial_neighbors isn't empty, then the following procedure describes 2120 how to identify prefixes to disaggregate. 2122 disaggregated_prefixes = { empty } 2123 nodes_same_level = { empty } 2124 for each S-TIE 2125 if (S-TIE.level == X.level and 2126 X shares at least one S-neighbor with X) 2127 add S-TIE.originator to nodes_same_level 2128 end if 2129 end for 2131 for each next-hop-set NHS 2132 isolated_nodes = nodes_same_level 2133 for each NH in NHS 2134 if NH in partial_neighbors 2135 isolated_nodes = intersection(isolated_nodes, 2136 partial_neighbors[NH].nodes) 2137 end if 2138 end for 2140 if isolated_nodes is not empty 2141 for each prefix using NHS 2142 add (prefix, distance) to disaggregated_prefixes 2143 end for 2144 end if 2145 end for 2147 copy disaggregated_prefixes to X's S-TIE 2148 if X's S-TIE is different 2149 schedule S-TIE for flooding 2150 end if 2152 Figure 16: Computation of Disaggregated Prefixes 2154 Each disaggregated prefix is sent with the according path_distance. 2155 This allows a node to send the same S-TIE to each south neighbor. 2156 The south neighbor which is connected to that prefix will thus have a 2157 shorter path. 2159 Finally, to summarize the less obvious points partially omitted in 2160 the algorithms to keep them more tractable: 2162 1. all neighbor relationships MUST perform backlink checks. 2164 2. overload bits as introduced in Section 5.3.1 have to be respected 2165 during the computation. 2167 3. all the lower level nodes are flooded the same disaggregated 2168 prefixes since we don't want to build an S-TIE per node and 2169 complicate things unnecessarily. The PoD containing the prefix 2170 will prefer southbound anyway. 2172 4. positively disaggregated prefixes do NOT have to propagate to 2173 lower levels. With that the disturbance in terms of new flooding 2174 is contained to a single level experiencing failures. 2176 5. disaggregated prefix S-TIEs are not "reflected" by the lower 2177 level, i.e. nodes within same level do NOT need to be aware 2178 which node computed the need for disaggregation. 2180 6. The fabric is still supporting maximum load balancing properties 2181 while not trying to send traffic northbound unless necessary. 2183 To close this section it is worth to observe that in a single plane 2184 ToF this disaggregation prevents blackholing up to (K_LEAF * P) link 2185 failures in terms of Section 5.1.2 or in other terms, it takes at 2186 minimum that many link failures to partition the ToF into multiple 2187 planes. 2189 5.2.5.2. Negative, Transitive Disaggregation for Fallen Leafs 2191 As explained in Section 5.1.3 failures in multi-plane Top-of-Fabric 2192 or more than (K_LEAF * P) links failing in single plane design can 2193 generate fallen leafs. Such scenario cannot be addressed by positive 2194 disaggregation only and needs a further mechanism. 2196 5.2.5.2.1. Cabling of Multiple Top-of-Fabric Planes 2198 Let us return in this section to designs with multiple planes as 2199 shown in Figure 3. Figure 17 highlights how the ToF is cabled in 2200 case of two planes by the means of dual-rings to distribute all the 2201 N-TIEs within both planes. For people familiar with traditional 2202 link-state routing protocols ToF level can be considered equivalent 2203 to area 0 in OSPF or level-2 in ISIS which need to be "connected" as 2204 well for the protocol to operate correctly. 2206 . ++==========++ ++==========++ 2207 . II II II II 2208 .+----++--+ +----++--+ +----++--+ +----++--+ 2209 .|ToF A1| |ToF B1| |ToF B2| |ToF A2| 2210 .++-+-++--+ ++-+-++--+ ++-+-++--+ ++-+-++--+ 2211 . | | II | | II | | II | | II 2212 . | | ++==========++ | | ++==========++ 2213 . | | | | | | | | 2214 . 2215 . ~~~ Highlighted ToF of the previous multi-plane figure ~~ 2217 Figure 17: Topologically connected planes 2219 As described in Section 5.1.3 failures in multi-plane fabrics can 2220 lead to blackholes which normal positive disaggregation cannot fix. 2221 The mechanism of negative, transitive disaggregation incorporated in 2222 RIFT provides the according solution. 2224 5.2.5.2.2. Transitive Advertisement of Negative Disaggregates 2226 A ToF node that discovers that it cannot reach a fallen leaf 2227 disaggregates all the prefixes of such leafs. It uses for that 2228 purpose negative prefix S-TIEs that are, as usual, flooded southwards 2229 with the scope defined in Section 5.2.3.4. 2231 Transitively, a node explicitly loses connectivity to a prefix when 2232 none of its children advertises it and when the prefix is negatively 2233 disaggregated by all of its parents. When that happens, the node 2234 originates the negative prefix further down south. Since the 2235 mechanism applies recursively south the negative prefix may propagate 2236 transitively all the way down to the leaf. This is necessary since 2237 leafs connected to multiple planes by means of disjoint paths may 2238 have to choose the correct plane already at the very bottom of the 2239 fabric to make sure that they don't send traffic towards another leaf 2240 using a plane where it is "fallen" at which in point a blackhole is 2241 unavoidable. 2243 When the connectivity is restored, a node that disaggregated a prefix 2244 withdraws the negative disaggregation by the usual mechanism of re- 2245 advertising TIEs omitting the negative prefix. 2247 5.2.5.2.3. Computation of Negative Disaggregates 2249 The document omitted so far the description of the computation 2250 necessary to generate the correct set of negative prefixes. Negative 2251 prefixes can in fact be advertised due to two different triggers. We 2252 describe them consecutively. 2254 The first origination reason is a computation that uses all the node 2255 N-TIEs to build the set of all reachable nodes by reachability 2256 computation over the complete graph. The computation uses the node 2257 itself as root. This is compared with the result of the normal 2258 southbound SPF as described in Section 5.2.4.2. The difference are 2259 the fallen leafs and all their attached prefixes are advertised as 2260 negative prefixes southbound if the node does not see the prefix 2261 being reachable within southbound SPF. 2263 The second mechanism hinges on the understanding how the negative 2264 prefixes are used within the computation as described in Figure 18. 2265 When attaching the negative prefixes at certain point in time the 2266 negative prefix may find itself with all the viable nodes from the 2267 shorter match nexthop being pruned. In other words, all its 2268 northbound neighbors provided a negative prefix advertisement. This 2269 is the trigger to advertise this negative prefix transitively south 2270 and normally caused by the node being in a plane where the prefix 2271 belongs to a fabric leaf that has "fallen" in this plane. Obviously, 2272 when one of the northbound switches withdraws its negative 2273 advertisement, the node has to withdraw its transitively provided 2274 negative prefix as well. 2276 5.2.6. Attaching Prefixes 2278 After the SPF is run, it is necessary to attach according prefixes. 2279 For S-SPF, prefixes from an N-TIE are attached to the originating 2280 node with that node's next-hop set and a distance equal to the 2281 prefix's cost plus the node's minimized path distance. The RIFT 2282 route database, a set of (prefix, type=spf, path_distance, next-hop 2283 set), accumulates these results. Obviously, the prefix retains its 2284 type which is used to tie-break between the same prefix advertised 2285 with different types. 2287 In case of N-SPF prefixes from each S-TIE need to also be added to 2288 the RIFT route database. The N-SPF is really just a stub so the 2289 computing node needs simply to determine, for each prefix in an S-TIE 2290 that originated from adjacent node, what next-hops to use to reach 2291 that node. Since there may be parallel links, the next-hops to use 2292 can be a set; presence of the computing node in the associated Node 2293 S-TIE is sufficient to verify that at least one link has 2294 bidirectional connectivity. The set of minimum cost next-hops from 2295 the computing node X to the originating adjacent node is determined. 2297 Each prefix has its cost adjusted before being added into the RIFT 2298 route database. The cost of the prefix is set to the cost received 2299 plus the cost of the minimum distance next-hop to that neighbor while 2300 taking into account its attributes such as mobility per Section 5.3.3 2301 necessary. Then each prefix can be added into the RIFT route 2302 database with the next_hop_set; ties are broken based upon type first 2303 and then distance and further attributes. RIFT route preferences are 2304 normalized by the according thrift model type. 2306 An exemplary implementation for node X follows: 2308 for each S-TIE 2309 if S-TIE.level > X.level 2310 next_hop_set = set of minimum cost links to the S-TIE.originator 2311 next_hop_cost = minimum cost link to S-TIE.originator 2312 end if 2313 for each prefix P in the S-TIE 2314 P.cost = P.cost + next_hop_cost 2315 if P not in route_database: 2316 add (P, type=DistVector, P.cost, next_hop_set) to route_database 2317 end if 2318 if (P in route_database): 2319 if route_database[P].cost > P.cost): 2320 update route_database[P] with (P, DistVector, P.cost, next_hop_set) 2321 else if route_database[P].cost == P.cost 2322 update route_database[P] with (P, DistVector, P.cost, 2323 merge(next_hop_set, route_database[P].next_hop_set)) 2324 else 2325 // Not preferred route so ignore 2326 end if 2327 end if 2328 end for 2329 end for 2331 Figure 18: Adding Routes from S-TIE Positive and Negative Prefixes 2333 After the positive prefixes are attached and tie-broken, negative 2334 prefixes are attached and used in case of northbound computation, 2335 ideally from the shortest length to the longest. The nexthop 2336 adjacencies for a negative prefix are inherited from the longest 2337 prefix that aggregates it, and subsequently adjacencies to nodes that 2338 advertised negative for this prefix are removed. As an example, if a 2339 hypothetical RIFT routing table contains A.1/16 @ [A,B], A.1.1/24 @ 2340 [C,D] it will on reception of negative A.1.1.1/32 from D include the 2341 entry A.1.1.1/32 @ [C] resulting from computation inheriting A.1.1/24 2342 nexthops (C and D) and pruning all the nodes that advertised this 2343 negative prefix (which is D in this case). 2345 The rule of inheritance MUST be maintained when the nexthop list for 2346 a prefix is modified, as the modification may affect the entries for 2347 matching negative prefixes of immediate longer prefix length. For 2348 instance, if a nexthop is added, then by inheritance it must be added 2349 to all the negative routes of immediate longer prefixes length unless 2350 it is pruned due to a negative advertisement for the same next hop. 2351 Similarily, if a nexthop is deleted for a given prefix, then it is 2352 deleted for all the immediately aggregated negative routes. This 2353 will recurse in the case of nested negative prefix aggregations. 2355 The rule of inheritance must also be maintained when a new prefix of 2356 intermediate length is inserted, or when the immediately aggregating 2357 prefix is deleted from the routing table, making an even shorter 2358 aggregating prefix the one from which the negative routes now inherit 2359 their adjacencies. As the aggregating prefix changes, all the 2360 negative routes must be recomputed, and then again the process may 2361 recurse in case of nested negative prefix aggregations. 2363 Observe that despite seeming quite computationally expensive the 2364 operations are only necessary if the only available advertisements 2365 for a prefix are negative since tie-breaking always prefers positive 2366 information for the prefix which stops any kind of recursion since 2367 positive information never inherits next hops. 2369 5.2.7. Optional Zero Touch Provisioning (ZTP) 2371 Each RIFT node can optionally operate in zero touch provisioning 2372 (ZTP) mode, i.e. it has no configuration (unless it is a Top-of- 2373 Fabric at the top of the topology or the must operate in the topology 2374 as leaf and/or support leaf-2-leaf procedures) and it will fully 2375 configure itself after being attached to the topology. Configured 2376 nodes and nodes operating in ZTP can be mixed and will form a valid 2377 topology if achievable. 2379 The derivation of the level of each node happens based on offers 2380 received from its neighbors whereas each node (with possibly 2381 exceptions of configured leafs) tries to attach at the highest 2382 possible point in the fabric. This guarantees that even if the 2383 diffusion front reaches a node from "below" faster than from "above", 2384 it will greedily abandon already negotiated level derived from nodes 2385 topologically below it and properly peers with nodes above. 2387 The fabric is very conciously numbered from the top to allow for PoDs 2388 of different heights and minimize number of provisioning necessary, 2389 in this case just a TOP_OF_FABRIC flag. 2391 This section describes the necessary concepts and procedures for ZTP 2392 operation. 2394 5.2.7.1. Terminology 2396 The interdependencies between the different flags and the configured 2397 level can be somewhat vexing at first and it may take multiple reads 2398 of the glossary to make sense. 2400 Automatic Level Derivation: Procedures which allow nodes without 2401 level configured to derive it automatically. Only applied if 2402 CONFIGURED_LEVEL is undefined. 2404 UNDEFINED_LEVEL: An imaginary value that indicates that the level 2405 has not beeen determined and has not been configured. Schemas 2406 normally indicate that by a missing optional value without an 2407 available defined default. 2409 LEAF_ONLY: An optional configuration flag that can be configured on 2410 a node to make sure it never leaves the "bottom of the hierarchy". 2411 TOP_OF_FABRIC flag and CONFIGURED_LEVEL cannot be defined at the 2412 same time as this flag. It implies CONFIGURED_LEVEL value of 0. 2414 TOP_OF_FABRIC flag: Configuration flag provided to all Top-of-Fabric 2415 nodes. LEAF_FLAG and CONFIGURED_LEVEL cannot be defined at the 2416 same time as this flag. It implies a CONFIGURED_LEVEL value. In 2417 fact, it is basically a shortcut for configuring same level at all 2418 Top-of-Fabric nodes which is unavoidable since an initial 'seed' 2419 is needed for other ZTP nodes to derive their level in the 2420 topology. The flag plays an important role in fabrics with 2421 multiple planes to enable successful negative disaggregation 2422 (Section 5.2.5.2). 2424 CONFIGURED_LEVEL: A level value provided manually. When this is 2425 defined (i.e. it is not an UNDEFINED_LEVEL) the node is not 2426 participating in ZTP. TOP_OF_FABRIC flag is ignored when this 2427 value is defined. LEAF_ONLY can be set only if this value is 2428 undefined or set to 0. 2430 DERIVED_LEVEL: Level value computed via automatic level derivation 2431 when CONFIGURED_LEVEL is equal to UNDEFINED_LEVEL. 2433 LEAF_2_LEAF: An optional flag that can be configured on a node to 2434 make sure it supports procedures defined in Section 5.3.9. In a 2435 strict sense it is a capability that implies LEAF_ONLY and the 2436 according restrictions. TOP_OF_FABRIC flag is ignored when set at 2437 the same time as this flag. 2439 LEVEL_VALUE: In ZTP case the original definition of "level" in 2440 Section 3.1 is both extended and relaxed. First, level is defined 2441 now as LEVEL_VALUE and is the first defined value of 2442 CONFIGURED_LEVEL followed by DERIVED_LEVEL. Second, it is 2443 possible for nodes to be more than one level apart to form 2444 adjacencies if any of the nodes is at least LEAF_ONLY. 2446 Valid Offered Level (VOL): A neighbor's level received on a valid 2447 LIE (i.e. passing all checks for adjacency formation while 2448 disregarding all clauses involving level values) persisting for 2449 the duration of the holdtime interval on the LIE. Observe that 2450 offers from nodes offering level value of 0 do not constitute VOLs 2451 (since no valid DERIVED_LEVEL can be obtained from those and 2452 consequently `not_a_ztp_offer` MUST be ignored). Offers from LIEs 2453 with `not_a_ztp_offer` being true are not VOLs either. If a node 2454 maintains parallel adjacencies to the neighbor, VOL on each 2455 adjacency is considered as equivalent, i.e. the newest VOL from 2456 any such adjacency updates the VOL received from the same node. 2458 Highest Available Level (HAL): Highest defined level value seen from 2459 all VOLs received. 2461 Highest Available Level Systems (HALS): Set of nodes offering HAL 2462 VOLs. 2464 Highest Adjacency Three Way (HAT): Highest neigbhor level of all the 2465 formed three way adjacencies for the node. 2467 5.2.7.2. Automatic SystemID Selection 2469 RIFT identifies each node via a SystemID which is a 64 bits wide 2470 integer. It is relatively simple to derive a, for all practical 2471 purposes collision free, value for each node on startup. For that 2472 purpose, a node MUST use as system ID EUI-64 MA-L format [EUI64] 2473 where the organizationally governed 24 bits can be used to generate 2474 system IDs for multiple RIFT instances running on the system. 2476 As matter of operational concern, the router MUST ensure that such 2477 identifier is not changing very frequently (or at least not without 2478 sending all its TIEs with fairly short lifetimes) since otherwise the 2479 network may be left with large amounts of stale TIEs in other nodes 2480 (though this is not necessarily a serious problem if the procedures 2481 described in Section 8 are implemented). 2483 5.2.7.3. Generic Fabric Example 2485 ZTP forces us to think about miscabled or unusually cabled fabric and 2486 how such a topology can be forced into a "lattice" structure which a 2487 fabric represents (with further restrictions). Let us consider a 2488 necessary and sufficient physical cabling in Figure 19. We assume 2489 all nodes being in the same PoD. 2491 . +---+ 2492 . | A | s = TOP_OF_FABRIC 2493 . | s | l = LEAF_ONLY 2494 . ++-++ l2l = LEAF_2_LEAF 2495 . | | 2496 . +--+ +--+ 2497 . | | 2498 . +--++ ++--+ 2499 . | E | | F | 2500 . | +-+ | +-----------+ 2501 . ++--+ | ++-++ | 2502 . | | | | | 2503 . | +-------+ | | 2504 . | | | | | 2505 . | | +----+ | | 2506 . | | | | | 2507 . ++-++ ++-++ | 2508 . | I +-----+ J | | 2509 . | | | +-+ | 2510 . ++-++ +--++ | | 2511 . | | | | | 2512 . +---------+ | +------+ | 2513 . | | | | | 2514 . +-----------------+ | | 2515 . | | | | | 2516 . ++-++ ++-++ | 2517 . | X +-----+ Y +-+ 2518 . |l2l| | l | 2519 . +---+ +---+ 2521 Figure 19: Generic ZTP Cabling Considerations 2523 First, we must anchor the "top" of the cabling and that's what the 2524 TOP_OF_FABRIC flag at node A is for. Then things look smooth until 2525 we have to decide whether node Y is at the same level as I, J or at 2526 the same level as Y and consequently, X is south of it. This is 2527 unresolvable here until we "nail down the bottom" of the topology. 2528 To achieve that we choose to use in this example the leaf flags. We 2529 will see further then whether Y chooses to form adjacencies to F or 2530 I, J successively. 2532 5.2.7.4. Level Determination Procedure 2534 A node starting up with UNDEFINED_VALUE (i.e. without a 2535 CONFIGURED_LEVEL or any leaf or TOP_OF_FABRIC flag) MUST follow those 2536 additional procedures: 2538 1. It advertises its LEVEL_VALUE on all LIEs (observe that this can 2539 be UNDEFINED_LEVEL which in terms of the schema is simply an 2540 omitted optional value). 2542 2. It computes HAL as numerically highest available level in all 2543 VOLs. 2545 3. It chooses then MAX(HAL-1,0) as its DERIVED_LEVEL. The node then 2546 starts to advertise this derived level. 2548 4. A node that lost all adjacencies with HAL value MUST hold down 2549 computation of new DERIVED_LEVEL for a short period of time 2550 unless it has no VOLs from southbound adjacencies. After the 2551 holddown expired, it MUST discard all received offers, recompute 2552 DERIVED_LEVEL and announce it to all neighbors. 2554 5. A node MUST reset any adjacency that has changed the level it is 2555 offering and is in three way state. 2557 6. A node that changed its defined level value MUST readvertise its 2558 own TIEs (since the new `PacketHeader` will contain a different 2559 level than before). Sequence number of each TIE MUST be 2560 increased. 2562 7. After a level has been derived the node MUST set the 2563 `not_a_ztp_offer` on LIEs towards all systems extending a VOL for 2564 HAL. 2566 A node starting with LEVEL_VALUE being 0 (i.e. it assumes a leaf 2567 function by being configured with the appropriate flags or has a 2568 CONFIGURED_LEVEL of 0) MUST follow those additional procedures: 2570 1. It computes HAT per procedures above but does NOT use it to 2571 compute DERIVED_LEVEL. HAT is used to limit adjacency formation 2572 per Section 5.2.2. 2574 It MAY also follow modified procedures: 2576 1. It may pick a different strategy to choose VOL, e.g. use the VOL 2577 value with highest number of VOLs. Such strategies are only 2578 possible since the node always remains "at the bottom of the 2579 fabric" while another layer could "invert" the fabric by picking 2580 its prefered VOL in a different fashion than always trying to 2581 achieve the highest viable level. 2583 5.2.7.5. Resulting Topologies 2585 The procedures defined in Section 5.2.7.4 will lead to the RIFT 2586 topology and levels depicted in Figure 20. 2588 . +---+ 2589 . | As| 2590 . | 24| 2591 . ++-++ 2592 . | | 2593 . +--+ +--+ 2594 . | | 2595 . +--++ ++--+ 2596 . | E | | F | 2597 . | 23+-+ | 23+-----------+ 2598 . ++--+ | ++-++ | 2599 . | | | | | 2600 . | +-------+ | | 2601 . | | | | | 2602 . | | +----+ | | 2603 . | | | | | 2604 . ++-++ ++-++ | 2605 . | I +-----+ J | | 2606 . | 22| | 22| | 2607 . ++--+ +--++ | 2608 . | | | 2609 . +---------+ | | 2610 . | | | 2611 . ++-++ +---+ | 2612 . | X | | Y +-+ 2613 . | 0 | | 0 | 2614 . +---+ +---+ 2616 Figure 20: Generic ZTP Topology Autoconfigured 2618 In case we imagine the LEAF_ONLY restriction on Y is removed the 2619 outcome would be very different however and result in Figure 21. 2620 This demonstrates basically that auto configuration makes miscabling 2621 detection hard and with that can lead to undesirable effects in cases 2622 where leafs are not "nailed" by the accordingly configured flags and 2623 arbitrarily cabled. 2625 A node MAY analyze the outstanding level offers on its interfaces and 2626 generate warnings when its internal ruleset flags a possible 2627 miscabling. As an example, when a node's sees ZTP level offers that 2628 differ by more than one level from its chosen level (with proper 2629 accounting for leaf's being at level 0) this can indicate miscabling. 2631 . +---+ 2632 . | As| 2633 . | 24| 2634 . ++-++ 2635 . | | 2636 . +--+ +--+ 2637 . | | 2638 . +--++ ++--+ 2639 . | E | | F | 2640 . | 23+-+ | 23+-------+ 2641 . ++--+ | ++-++ | 2642 . | | | | | 2643 . | +-------+ | | 2644 . | | | | | 2645 . | | +----+ | | 2646 . | | | | | 2647 . ++-++ ++-++ +-+-+ 2648 . | I +-----+ J +-----+ Y | 2649 . | 22| | 22| | 22| 2650 . ++-++ +--++ ++-++ 2651 . | | | | | 2652 . | +-----------------+ | 2653 . | | | 2654 . +---------+ | | 2655 . | | | 2656 . ++-++ | 2657 . | X +--------+ 2658 . | 0 | 2659 . +---+ 2661 Figure 21: Generic ZTP Topology Autoconfigured 2663 5.2.8. Stability Considerations 2665 The autoconfiguration mechanism computes a global maximum of levels 2666 by diffusion. The achieved equilibrium can be disturbed massively by 2667 all nodes with highest level either leaving or entering the domain 2668 (with some finer distinctions not explained further). It is 2669 therefore recommended that each node is multi-homed towards nodes 2670 with respective HAL offerings. Fortuntately, this is the natural 2671 state of things for the topology variants considered in RIFT. 2673 5.3. Further Mechanisms 2675 5.3.1. Overload Bit 2677 Overload Bit MUST be respected in all according reachability 2678 computations. A node with overload bit set SHOULD NOT advertise any 2679 reachability prefixes southbound except locally hosted ones. A node 2680 in overload SHOULD advertise all its locally hosted prefixes north 2681 and southbound. 2683 The leaf node SHOULD set the 'overload' bit on its node TIEs, since 2684 if the spine nodes were to forward traffic not meant for the local 2685 node, the leaf node does not have the topology information to prevent 2686 a routing/forwarding loop. 2688 5.3.2. Optimized Route Computation on Leafs 2690 Since the leafs do see only "one hop away" they do not need to run a 2691 full SPF but can simply gather prefix candidates from their neighbors 2692 and build the according routing table. 2694 A leaf will have no N-TIEs except its own and optionally from its 2695 East-West neighbors. A leaf will have S-TIEs from its neighbors. 2697 Instead of creating a network graph from its N-TIEs and neighbor's 2698 S-TIEs and then running an SPF, a leaf node can simply compute the 2699 minimum cost and next_hop_set to each leaf neighbor by examining its 2700 local adjacencies, determining bi-directionality from the associated 2701 N-TIE, and specifying the neighbor's next_hop_set set and cost from 2702 the minimum cost local adjacency to that neighbor. 2704 Then a leaf attaches prefixes as in Section 5.2.6. 2706 5.3.3. Mobility 2708 It is a requirement for RIFT to maintain at the control plane a real 2709 time status of which prefix is attached to which port of which leaf, 2710 even in a context of mobility where the point of attachement may 2711 change several times in a subsecond period of time. 2713 There are two classical approaches to maintain such knowledge in an 2714 unambiguous fashion: 2716 time stamp: With this method, the infrastructure memorizes the 2717 precise time at which the movement is observed. One key advantage 2718 of this technique is that it has no dependency on the mobile 2719 device. One drawback is that the infrastructure must be precisely 2720 synchronized to be able to compare time stamps as observed by the 2721 various points of attachment, e.g., using the variation of the 2722 Precision Time Protocol (PTP) IEEE Std. 1588 [IEEEstd1588], 2723 [IEEEstd8021AS] designed for bridged LANs IEEE Std. 802.1AS 2724 [IEEEstd8021AS]. Both the precision of the synchronisation 2725 protocol and the resolution of the time stamp must beat the 2726 highest possible roaming time on the fabric. Another drawback is 2727 that the presence of the mobile device may be observed only 2728 asynchronously, e.g., after it starts using an IP protocol such as 2729 ARP [RFC0826], IPv6 Neighbor Discovery [RFC4861][RFC4862], or DHCP 2730 [RFC2131][RFC3315]. 2732 sequence counter: With this method, a mobile node notifies its point 2733 of attachment on arrival with a sequence counter that is 2734 incremented upon each movement. On the positive side, this method 2735 does not have a dependency on a precise sense of time, since the 2736 sequence of movements is kept in order by the device. The 2737 disadvantage of this approach is the lack of support for protocols 2738 that may be used by the mobile node to register its presence to 2739 the leaf node with the capability to provide a sequence counter. 2740 Well-known issues with wrapping sequence counters must be 2741 addressed properly, and many forms of sequence counters that vary 2742 in both wrapping rules and comparison rules. A particular 2743 knowledge of the source of the sequence counter is required to 2744 operate it, and the comparison between sequence counters from 2745 heterogeneous sources can be hard to impossible. 2747 RIFT supports a hybrid approach contained in an optional 2748 `PrefixSequenceType` prefix attribute that we call a `monotonic 2749 clock` consisting of a timestamp and optional sequence number. In 2750 case of presence of the attribute: 2752 o The leaf node MUST advertise a time stamp of the latest sighting 2753 of a prefix, e.g., by snooping IP protocols or the node using the 2754 time at which it advertised the prefix. RIFT transports the time 2755 stamp within the desired prefix N-TIEs as 802.1AS timestamp. 2757 o RIFT may interoperate with the "update to 6LoWPAN Neighbor 2758 Discovery" [I-D.ietf-6lo-rfc6775-update], which provides a method 2759 for registering a prefix with a sequence counter called a 2760 Transaction ID (TID). RIFT transports in such case the TID in its 2761 native form. 2763 o RIFT also defines an abstract negative clock (ANSC) that compares 2764 as less than any other clock. By default, the lack of a 2765 `PrefixSequenceType` in a Prefix N-TIE is interpreted as ANSC. We 2766 call this also an `undefined` clock. 2768 o Any prefix present on the fabric in multiple nodes that has the 2769 `same` clock is considered as anycast. ASNC is always considered 2770 smaller than any defined clock. 2772 o RIFT implementation assumes by default that all nodes are being 2773 synchronized to 200 milliseconds precision which is easily 2774 achievable even in very large fabrics using [RFC5905]. An 2775 implementation MAY provide a way to reconfigure a domain to a 2776 different value. We call this variable MAXIMUM_CLOCK_DELTA. 2778 5.3.3.1. Clock Comparison 2780 All monotonic clock values are comparable to each other using the 2781 following rules: 2783 1. ASNC is older than any other value except ASNC AND 2785 2. Clock with timestamp differing by more than MAXIMUM_CLOCK_DELTA 2786 are comparable by using the timestamps only AND 2788 3. Clocks with timestamps differing by less than MAXIMUM_CLOCK_DELTA 2789 are comparable by using their TIDs only AND 2791 4. An undefined TID is always older than any other TID AND 2793 5. TIDs are compared using rules of [I-D.ietf-6lo-rfc6775-update]. 2795 5.3.3.2. Interaction between Time Stamps and Sequence Counters 2797 For slow movements that occur less frequently than e.g. once per 2798 second, the time stamp that the RIFT infrastruture captures is enough 2799 to determine the freshest discovery. If the point of attachement 2800 changes faster than the maximum drift of the time stamping mechanism 2801 (i.e. MAXIMUM_CLOCK_DELTA), then a sequence counter is required to 2802 add resolution to the freshness evaluation, and it must be sized so 2803 that the counters stay comparable within the resolution of the time 2804 stampling mechanism. 2806 The sequence counter in [I-D.ietf-6lo-rfc6775-update] is encoded as 2807 one octet, wraps after 127 increments, and, by default, values are 2808 defined as comparable as long as they are less than SEQUENCE_WINDOW = 2809 16 apart. An implementation MAY allow this to be configurable 2810 throughout the domain, and the number can be pushed up to 64 and 2811 still preserve the capability to discover an error situation where 2812 counters are not comparable. 2814 Within the resolution of MAXIMUM_CLOCK_DELTA the sequence counters 2815 captured during 2 sequential values of the time stamp must be 2816 comparable. This means with default values that a node may move up 2817 to 16 times during a 200 milliseconds period and the clocks remain 2818 still comparable thus allowing the infrastructure to assert the 2819 freshest advertisement with no ambiguity. 2821 5.3.3.3. Anycast vs. Unicast 2823 A unicast prefix can be attached to at most one leaf, whereas an 2824 anycast prefix may be reachable via more than one leaf. 2826 If a monotonic clock attribute is provided on the prefix, then the 2827 prefix with the `newest` clock value is strictly prefered. An 2828 anycast prefix does not carry a clock or all clock attributes MUST be 2829 the same under the rules of Section 5.3.3.1. 2831 Observe that it is important that in mobility events the leaf is re- 2832 flooding as quickly as possible the absence of the prefix that moved 2833 away. 2835 Observe further that without support for 2836 [I-D.ietf-6lo-rfc6775-update] movements on the fabric within 2837 intervals smaller than 100msec will be seen as anycast. 2839 5.3.3.4. Overlays and Signaling 2841 RIFT is agnostic whichever the overlay technology [MIP, LISP, VxLAN, 2842 NVO3] and the associated signaling is deployed over it. But it is 2843 expected that leaf nodes, and possibly Top-of-Fabric nodes can 2844 perform the according encapsulation. 2846 In the context of mobility, overlays provide a classical solution to 2847 avoid injecting mobile prefixes in the fabric and improve the 2848 scalability of the solution. It makes sense on a data center that 2849 already uses overlays to consider their applicability to the mobility 2850 solution; as an example, a mobility protocol such as LISP may inform 2851 the ingress leaf of the location of the egress leaf in real time. 2853 Another possibility is to consider that mobility as an underlay 2854 service and support it in RIFT to an extent. The load on the fabric 2855 augments with the amount of mobility obviously since a move forces 2856 flooding and computation on all nodes in the scope of the move so 2857 tunneling from leaf to the Top-of-Fabric may be desired. Future 2858 versions of this document may describe support for such tunneling in 2859 RIFT. 2861 5.3.4. Key/Value Store 2863 5.3.4.1. Southbound 2865 The protocol supports a southbound distribution of key-value pairs 2866 that can be used to e.g. distribute configuration information during 2867 topology bring-up. The KV S-TIEs can arrive from multiple nodes and 2868 hence need tie-breaking per key. We use the following rules 2870 1. Only KV TIEs originated by a node to which the receiver has an 2871 adjacency are considered. 2873 2. Within all valid KV S-TIEs containing the key, the value of the 2874 KV S-TIE for which the according node S-TIE is present, has the 2875 highest level and within the same level has highest originating 2876 system ID is preferred. If keys in the most preferred TIEs are 2877 overlapping, the behavior is undefined. 2879 Observe that if a node goes down, the node south of it looses 2880 adjacencies to it and with that the KVs will be disregarded and on 2881 tie-break changes new KV re-advertised to prevent stale information 2882 being used by nodes further south. KV information in southbound 2883 direction is not result of independent computation of every node but 2884 a diffused computation. 2886 5.3.4.2. Northbound 2888 Certain use cases seem to necessitate distribution of essentialy KV 2889 information that is generated in the leafs in the northbound 2890 direction. Such information is flooded in KV N-TIEs. Since the 2891 originator of northbound KV is preserved during northbound flooding, 2892 overlapping keys could be used. However, to omit further protocol 2893 complexity, only the value of the key in TIE tie-broken in same 2894 fashion as southbound KV TIEs is used. 2896 5.3.5. Interactions with BFD 2898 RIFT MAY incorporate BFD [RFC5881] to react quickly to link failures. 2899 In such case following procedures are introduced: 2901 After RIFT three way hello adjacency convergence a BFD session MAY 2902 be formed automatically between the RIFT endpoints without further 2903 configuration using the exchanged discriminators. 2905 In case established BFD session goes Down after it was Up, RIFT 2906 adjacency should be re-initialized started from Init. 2908 In case of parallel links between nodes each link may run its own 2909 independent BFD session or they may share a session. 2911 In case RIFT changes link identifiers both the hello as well as 2912 the BFD sessions SHOULD be brought down and back up again. 2914 Multiple RIFT instances MAY choose to share a single BFD session 2915 (in such case it is undefined what discriminators are used albeit 2916 RIFT CAN advertise the same link ID for the same interface in 2917 multiple instances and with that "share" the discriminators). 2919 BFD TTL follows [RFC5082]. 2921 5.3.6. Fabric Bandwidth Balancing 2923 A well understood problem in fabrics is that in case of link losses 2924 it would be ideal to rebalance how much traffic is offered to 2925 switches in the next level based on the ingress and egress bandwidth 2926 they have. Current attempts rely mostly on specialized traffic 2927 engineering via controller or leafs being aware of complete topology 2928 with according cost and complexity. 2930 RIFT can support a very light weight mechanism that can deal with the 2931 problem in an approximative way based on the fact that RIFT is loop- 2932 free. 2934 5.3.6.1. Northbound Direction 2936 Every RIFT node SHOULD compute the amount of northbound bandwith 2937 available through neighbors at higher level and modify distance 2938 received on default route from this neighbor. Those different 2939 distances SHOULD be used to support weighted ECMP forwarding towards 2940 higher level when using default route. We call such a distance 2941 Bandwidth Adjusted Distance or BAD. This is best illustrated by a 2942 simple example. 2944 . 100 x 100 100 MBits 2945 . | x | | 2946 . +-+---+-+ +-+---+-+ 2947 . | | | | 2948 . |Spin111| |Spin112| 2949 . +-+---+++ ++----+++ 2950 . |x || || || 2951 . || |+---------------+ || 2952 . || +---------------+| || 2953 . || || || || 2954 . || || || || 2955 . -----All Links 10 MBit------- 2956 . || || || || 2957 . || || || || 2958 . || +------------+| || || 2959 . || |+------------+ || || 2960 . |x || || || 2961 . +-+---+++ +--++-+++ 2962 . | | | | 2963 . |Leaf111| |Leaf112| 2964 . +-------+ +-------+ 2966 Figure 22: Balancing Bandwidth 2968 All links from Leafs in Figure 22 are assumed to 10 MBit/s bandwidth 2969 while the uplinks one level further up are assumed to be 100 MBit/s. 2970 Further, in Figure 22 we assume that Leaf111 lost one of the parallel 2971 links to Spine 111 and with that wants to possibly push more traffic 2972 onto Spine 112. Leaf 112 has equal bandwidth to Spine 111 and Spine 2973 112 but Spine 111 lost one of its uplinks. 2975 The local modification of the received default route distance from 2976 upper level is achieved by running a relatively simple algorithm 2977 where the bandwidth is weighted exponentially while the distance on 2978 the default route represents a multiplier for the bandwidth weight 2979 for easy operational adjustements. 2981 On a node L use Node TIEs to compute for each non-overloaded 2982 northbound neighbor N three values: 2984 L_N_u: as sum of the bandwidth available to N 2986 N_u: as sum of the uplink bandwidth available on N 2988 T_N_u: as sum of L_N_u * OVERSUBSCRIPTION_CONSTANT + N_u 2990 For all T_N_u determine the according M_N_u as 2991 log_2(next_power_2(T_N_u)) and determine MAX_M_N_u as maximum value 2992 of all M_N_u. 2994 For each advertised default route from a node N modify the advertised 2995 distance D to BAD = D * (1 + MAX_M_N_u - M_N_u) and use BAD instead 2996 of distance D to weight balance default forwarding towards N. 2998 For the example above a simple table of values will help the 2999 understanding. We assume the default route distance is advertised 3000 with D=1 everywhere and OVERSUBSCRIPTION_CONSTANT = 1. 3002 +---------+-----------+-------+-------+-----+ 3003 | Node | N | T_N_u | M_N_u | BAD | 3004 +---------+-----------+-------+-------+-----+ 3005 | Leaf111 | Spine 111 | 110 | 7 | 2 | 3006 +---------+-----------+-------+-------+-----+ 3007 | Leaf111 | Spine 112 | 220 | 8 | 1 | 3008 +---------+-----------+-------+-------+-----+ 3009 | Leaf112 | Spine 111 | 120 | 7 | 2 | 3010 +---------+-----------+-------+-------+-----+ 3011 | Leaf112 | Spine 112 | 220 | 8 | 1 | 3012 +---------+-----------+-------+-------+-----+ 3014 Table 5: BAD Computation 3016 All the multiplications and additions are saturating, i.e. when 3017 exceeding range of the bandwidth type are set to highest possible 3018 value of the type. 3020 BAD is only computed for default routes. A node MAY compute and use 3021 BAD for any disaggregated prefixes or other RIFT routes. 3023 Observe further that a change in available bandwidth will only affect 3024 at maximum two levels down in the fabric, i.e. blast radius of 3025 bandwidth changes is contained no matter its height. 3027 5.3.6.2. Southbound Direction 3029 Due to its loop free properties a node could take during S-SPF into 3030 account the available bandwidth on the nodes in lower levels and 3031 modify the amount of traffic offered to next level's "southbound" 3032 nodes based as what it sees is the total achievable maximum flow 3033 through those nodes. It is worth observing that such computations 3034 will work better if standardized but does not have to be necessarily. 3035 As long the packet keeps on heading south it will take one of the 3036 available paths and arrive at the intended destination. 3038 Future versions of this document will fill in more details. 3040 5.3.7. Label Binding 3042 A node MAY advertise on its TIEs a locally significant, downstream 3043 assigned label for the according interface. One use of such label is 3044 a hop-by-hop encapsulation allowing to easily distinguish forwarding 3045 planes served by a multiplicity of RIFT instances. 3047 5.3.8. Segment Routing Support with RIFT 3049 Recently, alternative architecture to reuse labels as segment 3050 identifiers [I-D.ietf-spring-segment-routing] has gained traction and 3051 may present use cases in IP fabric that would justify its deployment. 3052 Such use cases will either precondition an assignment of a label per 3053 node (or other entities where the mechanisms are equivalent) or a 3054 global assignment and a knowledge of topology everywhere to compute 3055 segment stacks of interest. We deal with the two issues separately. 3057 5.3.8.1. Global Segment Identifiers Assignment 3059 Global segment identifiers are normally assumed to be provided by 3060 some kind of a centralized "controller" instance and distributed to 3061 other entities. This can be performed in RIFT by attaching a 3062 controller to the Top-of-Fabric nodes at the top of the fabric where 3063 the whole topology is always visible, assign such identifiers and 3064 then distribute those via the KV mechanism towards all nodes so they 3065 can perform things like probing the fabric for failures using a stack 3066 of segments. 3068 5.3.8.2. Distribution of Topology Information 3070 Some segment routing use cases seem to precondition full knowledge of 3071 fabric topology in all nodes which can be performed albeit at the 3072 loss of one of highly desirable properties of RIFT, namely minimal 3073 blast radius. Basically, RIFT can function as a flat IGP by 3074 switching off its flooding scopes. All nodes will end up with full 3075 topology view and albeit the N-SPF and S-SPF are still performed 3076 based on RIFT rules, any computation with segment identifiers that 3077 needs full topology can use it. 3079 Beside blast radius problem, excessive flooding may present 3080 significant load on implementations. 3082 5.3.9. Leaf to Leaf Procedures 3084 RIFT can optionally allow special leaf East-West adjacencies under 3085 additional set of rules. The leaf supporting those procedures MUST: 3087 advertise the LEAF_2_LEAF flag in node capabilities AND 3089 set the overload bit on all leaf's node TIEs AND 3091 flood only node's own north and south TIEs over E-W leaf 3092 adjacencies AND 3094 always use E-W leaf adjacency in both north as well as south 3095 computation AND 3097 install a discard route for any advertised aggregate in leaf's 3098 TIEs AND 3100 never form southbound adjacencies. 3102 This will allow the E-W leaf nodes to exchange traffic strictly for 3103 the prefixes advertised in each other's north prefix TIEs (since the 3104 southbound computation will find the reverse direction in the other 3105 node's TIE and install its north prefixes). 3107 5.3.10. Address Family and Multi Topology Considerations 3109 Multi-Topology (MT)[RFC5120] and Multi-Instance (MI)[RFC6822] is used 3110 today in link-state routing protocols to support several domains on 3111 the same physical topology. RIFT supports this capability by 3112 carrying transport ports in the LIE protocol exchanges. Multiplexing 3113 of LIEs can be achieved by either choosing varying multicast 3114 addresses or ports on the same address. 3116 BFD interactions in Section 5.3.5 are implementation dependent when 3117 multiple RIFT instances run on the same link. 3119 5.3.11. Reachability of Internal Nodes in the Fabric 3121 RIFT does not precondition that its nodes have reachable addresses 3122 albeit for operational purposes this is clearly desirable. Under 3123 normal operating conditions this can be easily achieved by e.g. 3124 injecting the node's loopback address into North Prefix TIEs. 3126 Things get more interesting in case a node looses all its northbound 3127 adjacencies but is not at the top of the fabric. That is outside the 3128 scope of this document and may be covered in a separate document 3129 about policy guided prefixes [PGP reference]. 3131 5.3.12. One-Hop Healing of Levels with East-West Links 3133 Based on the rules defined in Section 5.2.4, Section 5.2.3.7 and 3134 given presence of E-W links, RIFT can provide a one-hop protection of 3135 nodes that lost all their northbound links or in other complex link 3136 set failure scenarios except at Top-of-Fabric where the links are 3137 used exclusively to flood topology information in multi-plane 3138 designs. Section 6.4 explains the resulting behavior based on one 3139 such example. 3141 6. Examples 3143 6.1. Normal Operation 3145 This section describes RIFT deployment in the example topology 3146 without any node or link failures. We disregard flooding reduction 3147 for simplicity's sake. 3149 As first step, the following bi-directional adjacencies will be 3150 created (and any other links that do not fulfill LIE rules in 3151 Section 5.2.2 disregarded): 3153 1. Spine 21 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine 3154 122 3156 2. Spine 22 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine 3157 122 3159 3. Spine 111 to Leaf 111, Leaf 112 3161 4. Spine 112 to Leaf 111, Leaf 112 3163 5. Spine 121 to Leaf 121, Leaf 122 3165 6. Spine 122 to Leaf 121, Leaf 122 3167 Consequently, N-TIEs would be originated by Spine 111 and Spine 112 3168 and each set would be sent to both Spine 21 and Spine 22. N-TIEs 3169 also would be originated by Leaf 111 (w/ Prefix 111) and Leaf 112 (w/ 3170 Prefix 112 and the multi-homed prefix) and each set would be sent to 3171 Spine 111 and Spine 112. Spine 111 and Spine 112 would then flood 3172 these N-TIEs to Spine 21 and Spine 22. 3174 Similarly, N-TIEs would be originated by Spine 121 and Spine 122 and 3175 each set would be sent to both Spine 21 and Spine 22. N-TIEs also 3176 would be originated by Leaf 121 (w/ Prefix 121 and the multi-homed 3177 prefix) and Leaf 122 (w/ Prefix 122) and each set would be sent to 3178 Spine 121 and Spine 122. Spine 121 and Spine 122 would then flood 3179 these N-TIEs to Spine 21 and Spine 22. 3181 At this point both Spine 21 and Spine 22, as well as any controller 3182 to which they are connected, would have the complete network 3183 topology. At the same time, Spine 111/112/121/122 hold only the 3184 N-ties of level 0 of their respective PoD. Leafs hold only their own 3185 N-TIEs. 3187 S-TIEs with adjacencies and a default IP prefix would then be 3188 originated by Spine 21 and Spine 22 and each would be flooded to 3189 Spine 111, Spine 112, Spine 121, and Spine 122. Spine 111, Spine 3190 112, Spine 121, and Spine 122 would each send the S-TIE from Spine 21 3191 to Spine 22 and the S-TIE from Spine 22 to Spine 21. (S-TIEs are 3192 reflected up to level from which they are received but they are NOT 3193 propagated southbound.) 3195 A S-TIE with a default IP prefix would be originated by Node 111 and 3196 Spine 112 and each would be sent to Leaf 111 and Leaf 112. 3198 Similarly, an S-TIE with a default IP prefix would be originated by 3199 Node 121 and Spine 122 and each would be sent to Leaf 121 and Leaf 3200 122. At this point IP connectivity with maximum possible ECMP has 3201 been established between the leafs while constraining the amount of 3202 information held by each node to the minimum necessary for normal 3203 operation and dealing with failures. 3205 6.2. Leaf Link Failure 3207 . | | | | 3208 .+-+---+-+ +-+---+-+ 3209 .| | | | 3210 .|Spin111| |Spin112| 3211 .+-+---+-+ ++----+-+ 3212 . | | | | 3213 . | +---------------+ X 3214 . | | | X Failure 3215 . | +-------------+ | X 3216 . | | | | 3217 .+-+---+-+ +--+--+-+ 3218 .| | | | 3219 .|Leaf111| |Leaf112| 3220 .+-------+ +-------+ 3221 . + + 3222 . Prefix111 Prefix112 3224 Figure 23: Single Leaf link failure 3226 In case of a failing leaf link between spine 112 and leaf 112 the 3227 link-state information will cause re-computation of the necessary SPF 3228 and the higher levels will stop forwarding towards prefix 112 through 3229 spine 112. Only spines 111 and 112, as well as both spines will see 3230 control traffic. Leaf 111 will receive a new S-TIE from spine 112 3231 and reflect back to spine 111. Spine 111 will de-aggregate prefix 3232 111 and prefix 112 but we will not describe it further here since de- 3233 aggregation is emphasized in the next example. It is worth observing 3234 however in this example that if leaf 111 would keep on forwarding 3235 traffic towards prefix 112 using the advertised south-bound default 3236 of spine 112 the traffic would end up on Top-of-Fabric 21 and ToF 22 3237 and cross back into pod 1 using spine 111. This is arguably not as 3238 bad as black-holing present in the next example but clearly 3239 undesirable. Fortunately, de-aggregation prevents this type of 3240 behavior except for a transitory period of time. 3242 6.3. Partitioned Fabric 3243 . +--------+ +--------+ S-TIE of Spine21 3244 . | | | | received by 3245 . |ToF 21| |ToF 22| south reflection of 3246 . ++-+--+-++ ++-+--+-++ spines 112 and 111 3247 . | | | | | | | | 3248 . | | | | | | | 0/0 3249 . | | | | | | | | 3250 . | | | | | | | | 3251 . +--------------+ | +--- XXXXXX + | | | +---------------+ 3252 . | | | | | | | | 3253 . | +-----------------------------+ | | | 3254 . 0/0 | | | | | | | 3255 . | 0/0 0/0 +- XXXXXXXXXXXXXXXXXXXXXXXXX -+ | 3256 . | 1.1/16 | | | | | | 3257 . | | +-+ +-0/0-----------+ | | 3258 . | | | 1.1./16 | | | | 3259 .+-+----++ +-+-----+ ++-----0/0 ++----0/0 3260 .| | | | | 1.1/16 | 1.1/16 3261 .|Spin111| |Spin112| |Spin121| |Spin122| 3262 .+-+---+-+ ++----+-+ +-+---+-+ ++---+--+ 3263 . | | | | | | | | 3264 . | +---------------+ | | +----------------+ | 3265 . | | | | | | | | 3266 . | +-------------+ | | | +--------------+ | | 3267 . | | | | | | | | 3268 .+-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+ 3269 .| | | | | | | | 3270 .|Leaf111| |Leaf112| |Leaf121| |Leaf122| 3271 .+-+-----+ ++------+ +-----+-+ +-+-----+ 3272 . + + + + 3273 . Prefix111 Prefix112 Prefix121 Prefix122 3274 . 1.1/16 3276 Figure 24: Fabric partition 3278 Figure 24 shows the arguably most catastrophic but also the most 3279 interesting case. Spine 21 is completely severed from access to 3280 Prefix 121 (we use in the figure 1.1/16 as example) by double link 3281 failure. However unlikely, if left unresolved, forwarding from leaf 3282 111 and leaf 112 to prefix 121 would suffer 50% black-holing based on 3283 pure default route advertisements by Top-of-Fabric 21 and ToF 22. 3285 The mechanism used to resolve this scenario is hinging on the 3286 distribution of southbound representation by Top-of-Fabric 21 that is 3287 reflected by spine 111 and spine 112 to ToF 22. Spine 22, having 3288 computed reachability to all prefixes in the network, advertises with 3289 the default route the ones that are reachable only via lower level 3290 neighbors that ToF 21 does not show an adjacency to. That results in 3291 spine 111 and spine 112 obtaining a longest-prefix match to prefix 3292 121 which leads through ToF 22 and prevents black-holing through ToF 3293 21 still advertising the 0/0 aggregate only. 3295 The prefix 121 advertised by Top-of-Fabric 22 does not have to be 3296 propagated further towards leafs since they do no benefit from this 3297 information. Hence the amount of flooding is restricted to ToF 21 3298 reissuing its S-TIEs and south reflection of those by spine 111 and 3299 spine 112. The resulting SPF in ToF 22 issues a new prefix S-TIEs 3300 containing 1.1/16. None of the leafs become aware of the changes and 3301 the failure is constrained strictly to the level that became 3302 partitioned. 3304 To finish with an example of the resulting sets computed using 3305 notation introduced in Section 5.2.5, Top-of-Fabric 22 constructs the 3306 following sets: 3308 |R = Prefix 111, Prefix 112, Prefix 121, Prefix 122 3310 |H (for r=Prefix 111) = Spine 111, Spine 112 3312 |H (for r=Prefix 112) = Spine 111, Spine 112 3314 |H (for r=Prefix 121) = Spine 121, Spine 122 3316 |H (for r=Prefix 122) = Spine 121, Spine 122 3318 |A (for Spine 21) = Spine 111, Spine 112 3320 With that and |H (for r=prefix 121) and |H (for r=prefix 122) being 3321 disjoint from |A (for Top-of-Fabric 21), ToF 22 will originate an 3322 S-TIE with prefix 121 and prefix 122, that is flooded to spines 112, 3323 112, 121 and 122. 3325 6.4. Northbound Partitioned Router and Optional East-West Links 3326 . + + + 3327 . X N1 | N2 | N3 3328 . X | | 3329 .+--+----+ +--+----+ +--+-----+ 3330 .| |0/0> <0/0| |0/0> <0/0| | 3331 .| A01 +----------+ A02 +----------+ A03 | Level 1 3332 .++-+-+--+ ++--+--++ +---+-+-++ 3333 . | | | | | | | | | 3334 . | | +----------------------------------+ | | | 3335 . | | | | | | | | | 3336 . | +-------------+ | | | +--------------+ | 3337 . | | | | | | | | | 3338 . | +----------------+ | +-----------------+ | 3339 . | | | | | | | | | 3340 . | | +------------------------------------+ | | 3341 . | | | | | | | | | 3342 .++-+-+--+ | +---+---+ | +-+---+-++ 3343 .| | +-+ +-+ | | 3344 .| L01 | | L02 | | L03 | Level 0 3345 .+-------+ +-------+ +--------+ 3347 Figure 25: North Partitioned Router 3349 Figure 25 shows a part of a fabric where level 1 is horizontally 3350 connected and A01 lost its only northbound adjacency. Based on N-SPF 3351 rules in Section 5.2.4.1 A01 will compute northbound reachability by 3352 using the link A01 to A02 (whereas A02 will NOT use this link during 3353 N-SPF). Hence A01 will still advertise the default towards level 0 3354 and route unidirectionally using the horizontal link. 3356 As further consideration, the moment A02 looses link N2 the situation 3357 evolves again. A01 will have no more northbound reachability while 3358 still seeing A03 advertising northbound adjacencies in its south node 3359 tie. With that it will stop advertising a default route due to 3360 Section 5.2.3.7. 3362 6.5. Multi-Plane Fabric and Negative Disaggregation 3364 TODO 3366 7. Implementation and Operation: Further Details 3368 7.1. Considerations for Leaf-Only Implementation 3370 Ideally RIFT can be stretched out to the loWest level in the IP 3371 fabric to integrate ToRs or even servers. Since those entities would 3372 run as leafs only, it is worth to observe that a leaf only version is 3373 significantly simpler to implement and requires much less resources: 3375 1. Under normal conditions, the leaf needs to support a multipath 3376 default route only. In worst partitioning case it has to be 3377 capable of accommodating all the leaf routes in its own PoD to 3378 prevent black-holing. 3380 2. Leaf nodes hold only their own N-TIEs and S-TIEs of Level 1 nodes 3381 they are connected to; so overall few in numbers. 3383 3. Leaf node does not have to support flooding reduction or any type 3384 of de-aggregation computation or propagation. 3386 4. Unless optional leaf-2-leaf procedures are desired default route 3387 origination and S-TIE origination is unnecessary. 3389 7.2. Adaptations to Other Proposed Data Center Topologies 3391 . +-----+ +-----+ 3392 . | | | | 3393 .+-+ S0 | | S1 | 3394 .| ++---++ ++---++ 3395 .| | | | | 3396 .| | +------------+ | 3397 .| | | +------------+ | 3398 .| | | | | 3399 .| ++-+--+ +--+-++ 3400 .| | | | | 3401 .| | A0 | | A1 | 3402 .| +-+--++ ++---++ 3403 .| | | | | 3404 .| | +------------+ | 3405 .| | +-----------+ | | 3406 .| | | | | 3407 .| +-+-+-+ +--+-++ 3408 .+-+ | | | 3409 . | L0 | | L1 | 3410 . +-----+ +-----+ 3412 Figure 26: Level Shortcut 3414 Strictly speaking, RIFT is not limited to Clos variations only. The 3415 protocol preconditions only a sense of 'compass rose direction' 3416 achieved by configuration (or derivation) of levels and other 3417 topologies are possible within this framework. So, conceptually, one 3418 could include leaf to leaf links and even shortcut between levels but 3419 certain requirements in Section 4 will not be met anymore. As an 3420 example, shortcutting levels illustrated in Figure 26 will lead 3421 either to suboptimal routing when L0 sends traffic to L1 (since using 3422 S0's default route will lead to the traffic being sent back to A0 or 3423 A1) or the leafs need each other's routes installed to understand 3424 that only A0 and A1 should be used to talk to each other. 3426 Whether such modifications of topology constraints make sense is 3427 dependent on many technology variables and the exhausting treatment 3428 of the topic is definitely outside the scope of this document. 3430 7.3. Originating Non-Default Route Southbound 3432 Obviously, an implementation may choose to originate southbound 3433 instead of a strict default route (as described in Section 5.2.3.7) a 3434 shorter prefix P' but in such a scenario all addresses carried within 3435 the RIFT domain must be contained within P'. 3437 8. Security Considerations 3439 8.1. General 3441 The protocol has provisions for nonces and can include authentication 3442 mechanisms in the future comparable to [RFC5709] and [RFC7987]. 3444 One can consider additionally attack vectors where a router may 3445 reboot many times while changing its system ID and pollute the 3446 network with many stale TIEs or TIEs are sent with very long 3447 lifetimes and not cleaned up when the routes vanishes. Those attack 3448 vectors are not unique to RIFT. Given large memory footprints 3449 available today those attacks should be relatively benign. Otherwise 3450 a node SHOULD implement a strategy of discarding contents of all TIEs 3451 that were not present in the SPF tree over a certain, configurable 3452 period of time. Since the protocol, like all modern link-state 3453 protocols, is self-stabilizing and will advertise the presence of 3454 such TIEs to its neighbors, they can be re-requested again if a 3455 computation finds that it sees an adjacency formed towards the system 3456 ID of the discarded TIEs. 3458 8.2. ZTP 3460 Section 5.2.7 presents many attack vectors in untrusted environments, 3461 starting with nodes that oscillate their level offers to the 3462 possiblity of a node offering a three way adjacency with the highest 3463 possible level value with a very long holdtime trying to put itself 3464 "on top of the lattice" and with that gaining access to the whole 3465 southbound topology. Session authentication mechanisms are necessary 3466 in environments where this is possible. 3468 8.3. Lifetime 3470 The protocol uses in TIE flooding the traditional lifetime approach 3471 that is vulnerable to sophisticated attack vectors under normal 3472 circumstances. However, on IP fabrics with some kind of, even 3473 coarse, clock synchronization, RIFT allows to recognize such attacks 3474 by including optional, protected information at origin. 3476 9. IANA Considerations 3478 This specification will request at an opportune time multiple 3479 registry points to exchange protocol packets in a standardized way, 3480 amongst them multicast address assignments and standard port numbers. 3481 The schema itself defines many values and codepoints which can be 3482 considered registries themselves. 3484 10. Acknowledgments 3486 Many thanks to Naiming Shen for some of the early discussions around 3487 the topic of using IGPs for routing in topologies related to Clos. 3488 Russ White to be especially acknowledged for the key conversation on 3489 epistomology that allowed to tie current asynchronous distributed 3490 systems theory results to a modern protocol design presented here. 3491 Adrian Farrel, Joel Halpern, Jeffrey Zhang, Krzysztof Szarkowicz, 3492 Nagendra Kumar provided thoughtful comments that improved the 3493 readability of the document and found good amount of corners where 3494 the light failed to shine. Kris Price was first to mention single 3495 router, single arm default considerations. Jeff Tantsura helped out 3496 with some initial thoughts on BFD interactions while Jeff Haas 3497 corrected several misconceptions about BFD's finer points. Artur 3498 Makutunowicz pointed out many possible improvements and acted as 3499 sounding board in regard to modern protocol implementation techniques 3500 RIFT is exploring. Barak Gafni formalized first time clearly the 3501 problem of partitioned spine and fallen leafs on a (clean) napkin in 3502 Singapore that led to the very important part of the specification 3503 centered around multiple Top-of-Fabric planes and negative 3504 disaggregation. 3506 11. References 3508 11.1. Normative References 3510 [I-D.ietf-6lo-rfc6775-update] 3511 Thubert, P., Nordmark, E., Chakrabarti, S., and C. 3512 Perkins, "Registration Extensions for 6LoWPAN Neighbor 3513 Discovery", draft-ietf-6lo-rfc6775-update-21 (work in 3514 progress), June 2018. 3516 [ISO10589] 3517 ISO "International Organization for Standardization", 3518 "Intermediate system to Intermediate system intra-domain 3519 routeing information exchange protocol for use in 3520 conjunction with the protocol for providing the 3521 connectionless-mode Network Service (ISO 8473), ISO/IEC 3522 10589:2002, Second Edition.", Nov 2002. 3524 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 3525 Requirement Levels", BCP 14, RFC 2119, 3526 DOI 10.17487/RFC2119, March 1997, 3527 . 3529 [RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, 3530 DOI 10.17487/RFC2328, April 1998, 3531 . 3533 [RFC2365] Meyer, D., "Administratively Scoped IP Multicast", BCP 23, 3534 RFC 2365, DOI 10.17487/RFC2365, July 1998, 3535 . 3537 [RFC3626] Clausen, T., Ed. and P. Jacquet, Ed., "Optimized Link 3538 State Routing Protocol (OLSR)", RFC 3626, 3539 DOI 10.17487/RFC3626, October 2003, 3540 . 3542 [RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A 3543 Border Gateway Protocol 4 (BGP-4)", RFC 4271, 3544 DOI 10.17487/RFC4271, January 2006, 3545 . 3547 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 3548 Architecture", RFC 4291, DOI 10.17487/RFC4291, February 3549 2006, . 3551 [RFC4655] Farrel, A., Vasseur, J., and J. Ash, "A Path Computation 3552 Element (PCE)-Based Architecture", RFC 4655, 3553 DOI 10.17487/RFC4655, August 2006, 3554 . 3556 [RFC5082] Gill, V., Heasley, J., Meyer, D., Savola, P., Ed., and C. 3557 Pignataro, "The Generalized TTL Security Mechanism 3558 (GTSM)", RFC 5082, DOI 10.17487/RFC5082, October 2007, 3559 . 3561 [RFC5120] Przygienda, T., Shen, N., and N. Sheth, "M-ISIS: Multi 3562 Topology (MT) Routing in Intermediate System to 3563 Intermediate Systems (IS-ISs)", RFC 5120, 3564 DOI 10.17487/RFC5120, February 2008, 3565 . 3567 [RFC5303] Katz, D., Saluja, R., and D. Eastlake 3rd, "Three-Way 3568 Handshake for IS-IS Point-to-Point Adjacencies", RFC 5303, 3569 DOI 10.17487/RFC5303, October 2008, 3570 . 3572 [RFC5709] Bhatia, M., Manral, V., Fanto, M., White, R., Barnes, M., 3573 Li, T., and R. Atkinson, "OSPFv2 HMAC-SHA Cryptographic 3574 Authentication", RFC 5709, DOI 10.17487/RFC5709, October 3575 2009, . 3577 [RFC5881] Katz, D. and D. Ward, "Bidirectional Forwarding Detection 3578 (BFD) for IPv4 and IPv6 (Single Hop)", RFC 5881, 3579 DOI 10.17487/RFC5881, June 2010, 3580 . 3582 [RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, 3583 "Network Time Protocol Version 4: Protocol and Algorithms 3584 Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010, 3585 . 3587 [RFC6234] Eastlake 3rd, D. and T. Hansen, "US Secure Hash Algorithms 3588 (SHA and SHA-based HMAC and HKDF)", RFC 6234, 3589 DOI 10.17487/RFC6234, May 2011, 3590 . 3592 [RFC6822] Previdi, S., Ed., Ginsberg, L., Shand, M., Roy, A., and D. 3593 Ward, "IS-IS Multi-Instance", RFC 6822, 3594 DOI 10.17487/RFC6822, December 2012, 3595 . 3597 [RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and 3598 S. Ray, "North-Bound Distribution of Link-State and 3599 Traffic Engineering (TE) Information Using BGP", RFC 7752, 3600 DOI 10.17487/RFC7752, March 2016, 3601 . 3603 [RFC7855] Previdi, S., Ed., Filsfils, C., Ed., Decraene, B., 3604 Litkowski, S., Horneffer, M., and R. Shakir, "Source 3605 Packet Routing in Networking (SPRING) Problem Statement 3606 and Requirements", RFC 7855, DOI 10.17487/RFC7855, May 3607 2016, . 3609 [RFC7938] Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of 3610 BGP for Routing in Large-Scale Data Centers", RFC 7938, 3611 DOI 10.17487/RFC7938, August 2016, 3612 . 3614 [RFC7987] Ginsberg, L., Wells, P., Decraene, B., Przygienda, T., and 3615 H. Gredler, "IS-IS Minimum Remaining Lifetime", RFC 7987, 3616 DOI 10.17487/RFC7987, October 2016, 3617 . 3619 [RFC8200] Deering, S. and R. Hinden, "Internet Protocol, Version 6 3620 (IPv6) Specification", STD 86, RFC 8200, 3621 DOI 10.17487/RFC8200, July 2017, 3622 . 3624 11.2. Informative References 3626 [CLOS] Yuan, X., "On Nonblocking Folded-Clos Networks in Computer 3627 Communication Environments", IEEE International Parallel & 3628 Distributed Processing Symposium, 2011. 3630 [DIJKSTRA] 3631 Dijkstra, E., "A Note on Two Problems in Connexion with 3632 Graphs", Journal Numer. Math. , 1959. 3634 [DOT] Ellson, J. and L. Koutsofios, "Graphviz: open source graph 3635 drawing tools", Springer-Verlag , 2001. 3637 [DYNAMO] De Candia et al., G., "Dynamo: amazon's highly available 3638 key-value store", ACM SIGOPS symposium on Operating 3639 systems principles (SOSP '07), 2007. 3641 [EPPSTEIN] 3642 Eppstein, D., "Finding the k-Shortest Paths", 1997. 3644 [EUI64] IEEE, "Guidelines for Use of Extended Unique Identifier 3645 (EUI), Organizationally Unique Identifier (OUI), and 3646 Company ID (CID)", IEEE EUI, 3647 . 3649 [FATTREE] Leiserson, C., "Fat-Trees: Universal Networks for 3650 Hardware-Efficient Supercomputing", 1985. 3652 [I-D.ietf-spring-segment-routing] 3653 Filsfils, C., Previdi, S., Ginsberg, L., Decraene, B., 3654 Litkowski, S., and R. Shakir, "Segment Routing 3655 Architecture", draft-ietf-spring-segment-routing-15 (work 3656 in progress), January 2018. 3658 [IEEEstd1588] 3659 IEEE, "IEEE Standard for a Precision Clock Synchronization 3660 Protocol for Networked Measurement and Control Systems", 3661 IEEE Standard 1588, 3662 . 3664 [IEEEstd8021AS] 3665 IEEE, "IEEE Standard for Local and Metropolitan Area 3666 Networks - Timing and Synchronization for Time-Sensitive 3667 Applications in Bridged Local Area Networks", 3668 IEEE Standard 802.1AS, 3669 . 3671 [ISO10589-Second-Edition] 3672 International Organization for Standardization, 3673 "Intermediate system to Intermediate system intra-domain 3674 routeing information exchange protocol for use in 3675 conjunction with the protocol for providing the 3676 connectionless-mode Network Service (ISO 8473)", Nov 2002. 3678 [MAKSIC2013] 3679 Maksic et al., N., "Improving Utilization of Data Center 3680 Networks", IEEE Communications Magazine, Nov 2013. 3682 [RFC0826] Plummer, D., "An Ethernet Address Resolution Protocol: Or 3683 Converting Network Protocol Addresses to 48.bit Ethernet 3684 Address for Transmission on Ethernet Hardware", STD 37, 3685 RFC 826, DOI 10.17487/RFC0826, November 1982, 3686 . 3688 [RFC2131] Droms, R., "Dynamic Host Configuration Protocol", 3689 RFC 2131, DOI 10.17487/RFC2131, March 1997, 3690 . 3692 [RFC3315] Droms, R., Ed., Bound, J., Volz, B., Lemon, T., Perkins, 3693 C., and M. Carney, "Dynamic Host Configuration Protocol 3694 for IPv6 (DHCPv6)", RFC 3315, DOI 10.17487/RFC3315, July 3695 2003, . 3697 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 3698 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 3699 DOI 10.17487/RFC4861, September 2007, 3700 . 3702 [RFC4862] Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless 3703 Address Autoconfiguration", RFC 4862, 3704 DOI 10.17487/RFC4862, September 2007, 3705 . 3707 [VAHDAT08] 3708 Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable, 3709 Commodity Data Center Network Architecture", SIGCOMM , 3710 2008. 3712 Appendix A. Information Elements Schema 3714 This section introduces the schema for information elements. 3716 On schema changes that 3718 1. change field numbers or 3720 2. add new required fields or 3722 3. remove fields or 3724 4. change lists into sets, unions into structures or 3726 5. change multiplicity of fields or 3728 6. changes name of any field or 3730 7. change datatypes of any field or 3732 8. adds, changes or removes a default value of any field or 3734 9. removes or changes any defined constant or constant value 3736 major version of the schema MUST increase. All other changes MUST 3737 increase minor version within the same major. 3739 Observe however that introducing an optional field of a structure 3740 type without a default does not cause a major version increase even 3741 if the fields inside the structure are optional with defaults. 3743 Thrift serializer/deserializer MUST not discard optional, unknown 3744 fields but preserve and serialize them again when re-flooding whereas 3745 missing optional fields MAY be replaced with according default values 3746 if present. 3748 All signed integer as forced by Thrift support must be cast for 3749 internal purposes to equivalent unsigned values without discarding 3750 the signedness bit. An implementation SHOULD try to avoid using the 3751 signedness bit when generating values. 3753 The schema is normative. 3755 A.1. common.thrift 3757 /** 3758 Thrift file with common definitions for RIFT 3759 */ 3761 /** @note MUST be interpreted in implementation as unsigned 64 bits. 3762 * The implementation SHOULD NOT use the MSB. 3763 */ 3764 typedef i64 SystemIDType 3765 typedef i32 IPv4Address 3766 /** this has to be of length long enough to accomodate prefix */ 3767 typedef binary IPv6Address 3768 /** @note MUST be interpreted in implementation as unsigned 16 bits */ 3769 typedef i16 UDPPortType 3770 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3771 typedef i32 TIENrType 3772 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3773 typedef i32 MTUSizeType 3774 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3775 typedef i32 SeqNrType 3776 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3777 typedef i32 LifeTimeInSecType 3778 /** @note MUST be interpreted in implementation as unsigned 16 bits */ 3779 typedef i16 LevelType 3780 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3781 typedef i32 PodType 3782 /** @note MUST be interpreted in implementation as unsigned 16 bits */ 3783 typedef i16 VersionType 3784 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3785 typedef i32 MetricType 3786 /** @note MUST be interpreted in implementation as unstructured 64 bits */ 3787 typedef i64 RouteTagType 3788 /** @note MUST be interpreted in implementation as unstructured 32 bits label value */ 3789 typedef i32 LabelType 3790 /** @note MUST be interpreted in implementation as unsigned 32 bits */ 3791 typedef i32 BandwithInMegaBitsType 3792 typedef string KeyIDType 3793 /** node local, unique identification for a link (interface/tunnel 3794 * etc. Basically anything RIFT runs on). This is kept 3795 * at 32 bits so it aligns with BFD [RFC5880] discriminator size. 3796 */ 3797 typedef i32 LinkIDType 3798 typedef string KeyNameType 3799 typedef i8 PrefixLenType 3800 /** timestamp in seconds since the epoch */ 3801 typedef i64 TimestampInSecsType 3802 /** security nonce */ 3803 typedef i64 NonceType 3804 /** LIE FSM holdtime type */ 3805 typedef i16 TimeIntervalInSecType 3806 /** Transaction ID type for prefix mobility as specified by RFC6550, value 3807 MUST be interpreted in implementation as unsigned */ 3808 typedef i8 PrefixTransactionIDType 3809 /** timestamp per IEEE 802.1AS, values MUST be interpreted in implementation as unsigned */ 3810 struct IEEE802_1ASTimeStampType { 3811 1: required i64 AS_sec; 3812 2: optional i32 AS_nsec; 3813 } 3815 /** Flags indicating nodes behavior in case of ZTP and support 3816 for special optimization procedures. It will force level to `leaf_level` or 3817 `top-of-fabric` level accordingly and enable according procedures 3818 */ 3819 enum HierarchyIndications { 3820 leaf_only = 0, 3821 leaf_only_and_leaf_2_leaf_procedures = 1, 3822 top_of_fabric = 2, 3823 } 3825 /** This MUST be used when node is configured as top of fabric in ZTP. 3826 This is kept reasonably low to alow for fast ZTP convergence on 3827 failures. */ 3828 const LevelType top_of_fabric_level = 24 3829 /** default bandwidth on a link */ 3830 const BandwithInMegaBitsType default_bandwidth = 100 3831 /** fixed leaf level when ZTP is not used */ 3832 const LevelType leaf_level = 0 3833 const LevelType default_level = leaf_level 3834 const PodType default_pod = 0 3835 const LinkIDType undefined_linkid = 0 3837 /** default distance used */ 3838 const MetricType default_distance = 1 3839 /** any distance larger than this will be considered infinity */ 3840 const MetricType infinite_distance = 0x7FFFFFFF 3841 /** represents invalid distance */ 3842 const MetricType invalid_distance = 0 3843 const bool overload_default = false 3844 const bool flood_reduction_default = true 3845 /** default LIE FSM holddown time */ 3846 const TimeIntervalInSecType default_lie_holdtime = 3 3847 /** default ZTP FSM holddown time */ 3848 const TimeIntervalInSecType default_ztp_holdtime = 1 3849 /** by default LIE levels are ZTP offers */ 3850 const bool default_not_a_ztp_offer = false 3851 /** by default e'one is repeating flooding */ 3852 const bool default_you_are_flood_repeater = true 3853 /** 0 is illegal for SystemID */ 3854 const SystemIDType IllegalSystemID = 0 3855 /** empty set of nodes */ 3856 const set empty_set_of_nodeids = {} 3857 /** default lifetime is one week */ 3858 const LifeTimeInSecType default_lifetime = 604800 3859 /** any `TieHeader` that has a smaller lifetime difference 3860 than this constant is equal (if other fields equal) */ 3861 const LifeTimeInSecType lifetime_diff2ignore = 300 3863 /** default UDP port to run LIEs on */ 3864 const UDPPortType default_lie_udp_port = 911 3865 /** default UDP port to receive TIEs on, that can be peer specific */ 3866 const UDPPortType default_tie_udp_flood_port = 912 3868 /** default MTU link size to use */ 3869 const MTUSizeType default_mtu_size = 1400 3871 /** indicates whether the direction is northbound/east-west 3872 * or southbound */ 3873 enum TieDirectionType { 3874 Illegal = 0, 3875 South = 1, 3876 North = 2, 3877 DirectionMaxValue = 3, 3878 } 3880 enum AddressFamilyType { 3881 Illegal = 0, 3882 AddressFamilyMinValue = 1, 3883 IPv4 = 2, 3884 IPv6 = 3, 3885 AddressFamilyMaxValue = 4, 3886 } 3888 struct IPv4PrefixType { 3889 1: required IPv4Address address; 3890 2: required PrefixLenType prefixlen; 3891 } 3893 struct IPv6PrefixType { 3894 1: required IPv6Address address; 3895 2: required PrefixLenType prefixlen; 3896 } 3897 union IPAddressType { 3898 1: optional IPv4Address ipv4address; 3899 2: optional IPv6Address ipv6address; 3900 } 3902 union IPPrefixType { 3903 1: optional IPv4PrefixType ipv4prefix; 3904 2: optional IPv6PrefixType ipv6prefix; 3905 } 3907 /** @note: Sequence of a prefix. Comparison function: 3908 if diff(timestamps) < 200msecs better transactionid wins 3909 else better time wins 3910 */ 3911 struct PrefixSequenceType { 3912 1: required IEEE802_1ASTimeStampType timestamp; 3913 2: optional PrefixTransactionIDType transactionid; 3914 } 3916 enum TIETypeType { 3917 Illegal = 0, 3918 TIETypeMinValue = 1, 3919 /** first legal value */ 3920 NodeTIEType = 2, 3921 PrefixTIEType = 3, 3922 PositiveDisaggregationPrefixTIEType = 4, 3923 NegativeDisaggregationPrefixTIEType = 5, 3924 PGPrefixTIEType = 6, 3925 KeyValueTIEType = 7, 3926 ExternalPrefixTIEType = 8, 3927 TIETypeMaxValue = 9, 3928 } 3930 /** @note: route types which MUST be ordered on their preference 3931 * PGP prefixes are most preferred attracting 3932 * traffic north (towards spine) and then south 3933 * normal prefixes are attracting traffic south (towards leafs), 3934 * i.e. prefix in NORTH PREFIX TIE is preferred over SOUTH PREFIX TIE 3935 * 3936 * @todo: external routes 3937 * @note: The only purpose of those values is to introduce an 3938 * ordering whereas an implementation can choose internally 3939 * any other values as long the ordering is preserved 3940 */ 3941 enum RouteType { 3942 Illegal = 0, 3943 RouteTypeMinValue = 1, 3944 /** First legal value. */ 3945 /** Discard routes are most prefered */ 3946 Discard = 2, 3948 /** Local prefixes are directly attached prefixes on the 3949 * system such as e.g. interface routes. 3950 */ 3951 LocalPrefix = 3, 3952 /** advertised in S-TIEs */ 3953 SouthPGPPrefix = 4, 3954 /** advertised in N-TIEs */ 3955 NorthPGPPrefix = 5, 3956 /** advertised in N-TIEs */ 3957 NorthPrefix = 6, 3958 /** advertised in S-TIEs, either normal prefix or positive disaggregation */ 3959 SouthPrefix = 7, 3960 /** externally imported north */ 3961 NorthExternalPrefix = 8, 3962 /** externally imported south */ 3963 SouthExternalPrefix = 9, 3964 /** negative, transitive are least preferred of 3965 local variety */ 3966 NegativeSouthPrefix = 10, 3967 RouteTypeMaxValue = 11, 3968 } 3970 A.2. encoding.thrift 3972 /** 3973 Thrift file for packet encodings for RIFT 3974 */ 3976 include "common.thrift" 3978 /** represents protocol encoding schema major version */ 3979 const i32 protocol_major_version = 19 3980 /** represents protocol encoding schema minor version */ 3981 const i32 protocol_minor_version = 0 3983 /** common RIFT packet header */ 3984 struct PacketHeader { 3985 1: required common.VersionType major_version = protocol_major_version; 3986 2: required common.VersionType minor_version = protocol_minor_version; 3987 /** this is the node sending the packet, in case of LIE/TIRE/TIDE 3988 also the originator of it */ 3989 3: required common.SystemIDType sender; 3990 /** level of the node sending the packet, required on everything except 3991 * LIEs. Lack of presence on LIEs indicates UNDEFINED_LEVEL and is used 3992 * in ZTP procedures. 3993 */ 3994 4: optional common.LevelType level; 3995 } 3997 /** Community serves as community for PGP purposes */ 3998 struct Community { 3999 1: required i32 top; 4000 2: required i32 bottom; 4001 } 4003 /** Neighbor structure */ 4004 struct Neighbor { 4005 1: required common.SystemIDType originator; 4006 2: required common.LinkIDType remote_id; 4007 } 4009 /** Capabilities the node supports */ 4010 struct NodeCapabilities { 4011 /** can this node participate in flood reduction */ 4012 1: optional bool flood_reduction = 4013 common.flood_reduction_default; 4014 /** does this node restrict itself to be top-of-fabric or 4015 leaf only (in ZTP) and does it support leaf-2-leaf procedures */ 4016 2: optional common.HierarchyIndications hierarchy_indications; 4017 } 4019 /** RIFT LIE packet 4021 @note this node's level is already included on the packet header */ 4022 struct LIEPacket { 4023 /** optional node or adjacency name */ 4024 1: optional string name; 4025 /** local link ID */ 4026 2: required common.LinkIDType local_id; 4027 /** UDP port to which we can receive flooded TIEs */ 4028 3: required common.UDPPortType flood_port = 4029 common.default_tie_udp_flood_port; 4030 /** layer 3 MTU, used to discover to mismatch */ 4031 4: optional common.MTUSizeType link_mtu_size = 4032 common.default_mtu_size; 4033 /** local link bandwidth on the interface */ 4034 5: optional common.BandwithInMegaBitsType link_bandwidth = 4035 common.default_bandwidth; 4036 /** this will reflect the neighbor once received to provid 4037 3-way connectivity */ 4038 6: optional Neighbor neighbor; 4039 7: optional common.PodType pod = common.default_pod; 4040 /** optional local nonce used for security computations */ 4041 8: optional common.NonceType nonce; 4042 /** optional neighbor's reflected nonce for security purposes. Significant delta 4043 in nonces seen compared to current local nonce can be used to prevent replays */ 4044 9: optional common.NonceType last_neighbor_nonce; 4045 /** optional node capabilities shown in the LIE. The capabilies 4046 MUST match the capabilities shown in the Node TIEs, otherwise 4047 the behavior is unspecified. A node detecting the mismatch 4048 SHOULD generate according error. 4049 */ 4050 10: optional NodeCapabilities capabilities; 4051 /** required holdtime of the adjacency, i.e. how much time 4052 MUST expire without LIE for the adjacency to drop 4053 */ 4054 11: required common.TimeIntervalInSecType holdtime = 4055 common.default_lie_holdtime; 4056 /** indicates that the level on the LIE MUST NOT be used 4057 to derive a ZTP level by the receiving node. */ 4058 12: optional bool not_a_ztp_offer = 4059 common.default_not_a_ztp_offer; 4060 /** indicates to northbound neighbor that it should 4061 be reflooding this node's N-TIEs to achieve flood reducuction and 4062 balancing for northbound flooding. To be ignored if received from a 4063 northbound adjacency. */ 4064 13: optional bool you_are_flood_repeater = 4065 common.default_you_are_flood_repeater; 4066 /** optional downstream assigned locally significant label 4067 value for the adjacency. */ 4068 14: optional common.LabelType label; 4069 } 4071 /** LinkID pair describes one of parallel links between two nodes */ 4072 struct LinkIDPair { 4073 /** node-wide unique value for the local link */ 4074 1: required common.LinkIDType local_id; 4075 /** received remote link ID for this link */ 4076 2: required common.LinkIDType remote_id; 4077 /** more properties of the link can go in here */ 4078 } 4080 /** ID of a TIE 4082 @note: TIEID space is a total order achieved by comparing the elements 4083 in sequence defined and comparing each value as an 4084 unsigned integer of according length. 4085 */ 4086 struct TIEID { 4087 /** indicates direction of the TIE */ 4088 1: required common.TieDirectionType direction; 4089 /** indicates originator of the TIE */ 4090 2: required common.SystemIDType originator; 4091 3: required common.TIETypeType tietype; 4092 4: required common.TIENrType tie_nr; 4093 } 4095 /** Header of a TIE. 4097 @note: TIEID space is a total order achieved by comparing the elements 4098 in sequence defined and comparing each value as an 4099 unsigned integer of according length. `origination_time` is 4100 disregarded for comparison purposes. 4101 */ 4102 struct TIEHeader { 4103 2: required TIEID tieid; 4104 3: required common.SeqNrType seq_nr; 4105 /** remaining lifetime that expires down to 0 just like in ISIS. 4106 TIEs with lifetimes differing by less than `lifetime_diff2ignore` MUST 4107 be considered EQUAL. */ 4108 4: required common.LifeTimeInSecType remaining_lifetime; 4109 /** optional absolute timestamp when the TIE 4110 was generated. This can be used on fabrics with 4111 synchronized clock to prevent lifetime modification attacks. */ 4112 10: optional common.IEEE802_1ASTimeStampType origination_time; 4113 /** optional original lifetime when the TIE 4114 was generated. This can be used on fabrics with 4115 synchronized clock to prevent lifetime modification attacks. */ 4116 12: optional common.LifeTimeInSecType origination_lifetime; 4117 } 4119 /** A TIDE with sorted TIE headers, if headers unsorted, behavior is undefined */ 4120 struct TIDEPacket { 4121 /** all 00s marks starts */ 4122 1: required TIEID start_range; 4123 /** all FFs mark end */ 4124 2: required TIEID end_range; 4125 /** _sorted_ list of headers */ 4126 3: required list headers; 4127 } 4129 /** A TIRE packet */ 4130 struct TIREPacket { 4131 1: required set headers; 4132 } 4134 /** Neighbor of a node */ 4135 struct NodeNeighborsTIEElement { 4136 /** Level of neighbor */ 4137 1: required common.LevelType level; 4138 /** Cost to neighbor. 4140 @note: All parallel links to same node 4141 incur same cost, in case the neighbor has multiple 4142 parallel links at different cost, the largest distance 4143 (highest numerical value) MUST be advertised 4144 @note: any neighbor with cost <= 0 MUST be ignored in computations */ 4145 3: optional common.MetricType cost = common.default_distance; 4146 /** can carry description of multiple parallel links in a TIE */ 4147 4: optional set link_ids; 4149 /** total bandwith to neighbor, this will be normally sum of the 4150 bandwidths of all the parallel links. */ 4151 5: optional common.BandwithInMegaBitsType bandwidth = 4152 common.default_bandwidth; 4153 } 4155 /** Flags the node sets */ 4156 struct NodeFlags { 4157 /** node is in overload, do not transit traffic through it */ 4158 1: optional bool overload = common.overload_default; 4159 } 4161 /** Description of a node. 4163 It may occur multiple times in different TIEs but if either 4164 * capabilities values do not match or 4165 * flags values do not match or 4166 * neighbors repeat with different values or 4167 * visible in same level/having partition upper do not match 4168 the behavior is undefined and a warning SHOULD be generated. 4169 Neighbors can be distributed across multiple TIEs however if 4170 the sets are disjoint. 4172 @note: observe that absence of fields implies defined defaults 4173 */ 4174 struct NodeTIEElement { 4175 1: required common.LevelType level; 4176 /** if neighbor systemID repeats in other node TIEs of same node 4177 the behavior is undefined. Equivalent to |A_(n,s)(N) in spec. */ 4178 2: required map neighbors; 4180 3: optional NodeCapabilities capabilities; 4181 4: optional NodeFlags flags; 4182 /** optional node name for easier operations */ 4183 5: optional string name; 4185 } 4187 struct PrefixAttributes { 4188 2: required common.MetricType metric = common.default_distance; 4189 /** generic unordered set of route tags, can be redistributed to other protocols or use 4190 within the context of real time analytics */ 4191 3: optional set tags; 4192 /** optional monotonic clock for mobile addresses */ 4193 4: optional common.PrefixSequenceType monotonic_clock; 4194 } 4196 /** multiple prefixes */ 4197 struct PrefixTIEElement { 4198 /** prefixes with the associated attributes. 4199 if the same prefix repeats in multiple TIEs of same node 4200 behavior is unspecified */ 4201 1: required map prefixes; 4202 } 4204 /** keys with their values */ 4205 struct KeyValueTIEElement { 4206 /** if the same key repeats in multiple TIEs of same node 4207 or with different values, behavior is unspecified */ 4208 1: required map keyvalues; 4209 } 4211 /** single element in a TIE. enum common.TIETypeType 4212 in TIEID indicates which elements MUST be present 4213 in the TIEElement. In case of mismatch the unexpected 4214 elements MUST be ignored. In case of lack of expected 4215 element the TIE an error MUST be reported and the TIE 4216 MUST be ignored. 4217 */ 4218 union TIEElement { 4219 /** in case of enum common.TIETypeType.NodeTIEType */ 4220 1: optional NodeTIEElement node; 4221 /** in case of enum common.TIETypeType.PrefixTIEType */ 4222 2: optional PrefixTIEElement prefixes; 4223 /** positive prefixes (always southbound) 4224 It MUST NOT be advertised within a North TIE. 4225 */ 4226 3: optional PrefixTIEElement positive_disaggregation_prefixes; 4227 /** transitive, negative prefixes (always southbound) which 4228 MUST be aggregated and propagated 4229 according to the specification 4230 southwards towards lower levels to heal 4231 pathological upper level partitioning, otherwise 4232 blackholes may occur in multiplane fabrics. 4234 It MUST NOT be advertised within a North TIE. 4235 */ 4236 4: optional PrefixTIEElement negative_disaggregation_prefixes; 4237 /** externally reimported prefixes */ 4238 5: optional PrefixTIEElement external_prefixes; 4239 /** Key-Value store elements */ 4240 6: optional KeyValueTIEElement keyvalues; 4241 /** @todo: policy guided prefixes */ 4242 } 4244 /** @todo: flood header separately in UDP to allow changing lifetime and SHA without reserialization 4246 */ 4247 struct TIEPacket { 4248 1: required TIEHeader header; 4249 2: required TIEElement element; 4250 } 4252 union PacketContent { 4253 1: optional LIEPacket lie; 4254 2: optional TIDEPacket tide; 4255 3: optional TIREPacket tire; 4256 4: optional TIEPacket tie; 4257 } 4259 /** protocol packet structure */ 4260 struct ProtocolPacket { 4261 1: required PacketHeader header; 4262 2: required PacketContent content; 4263 } 4265 Appendix B. Finite State Machines and Precise Operational 4266 Specifications 4268 Some FSM figures are provided as [DOT] description due to limitations 4269 of ASCII art. 4271 On Entry action is performed every time and right before the 4272 according state is entered, i.e. after any transitions from previous 4273 state. 4275 On Exit action is performed every time and immediately when a state 4276 is exited, i.e. before any transitions towards target state are 4277 performed. 4279 Any attempt to transition from a state towards another on reception 4280 of an event where no action is specified MUST be considered an 4281 unrecoverable error. 4283 The FSMs and procedures are NOT normative in the sense that an 4284 implementation MUST implement them literally (which would be 4285 overspecification) but an implementation MUST exhibit externally 4286 observable behavior that is identical to the execution of the 4287 specified FSMs. 4289 Where a FSM representation is inconvenient, i.e. the amount of 4290 procedures and kept state exceeds the amount of transitions, we defer 4291 to a more procedural description on data structures. 4293 B.1. LIE FSM 4295 Initial state is `OneWay`. 4297 Event `MultipleNeighbors` occurs normally when more than two nodes 4298 see each other on the same link or a remote node is quickly 4299 reconfigured or rebooted without regressing to `OneWay` first. Each 4300 occurence of the event SHOULD generate a clear, according 4301 notification to help operational deployments. 4303 The machine sends LIEs on several transitions to accelerate adjacency 4304 bring-up without waiting for the timer tic. 4306 digraph Ga556dde74c30450aae125eaebc33bd57 { 4307 Nd16ab5092c6b421c88da482eb4ae36b6[label="ThreeWay"][shape="oval"]; 4308 N54edd2b9de7641688608f44fca346303[label="OneWay"][shape="oval"]; 4309 Nfeef2e6859ae4567bd7613a32cc28c0e[label="TwoWay"][shape="oval"]; 4310 N7f2bb2e04270458cb5c9bb56c4b96e23[label="Enter"][style="invis"][shape="plain"]; 4311 N292744a4097f492f8605c926b924616b[label="Enter"][style="dashed"][shape="plain"]; 4312 Nc48847ba98e348efb45f5b78f4a5c987[label="Exit"][style="invis"][shape="plain"]; 4313 Nd16ab5092c6b421c88da482eb4ae36b6 -> N54edd2b9de7641688608f44fca346303 4314 [label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|MultipleNeighbors|"] 4315 [color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4316 Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6 4317 [label="|TimerTick|\n|LieRcvd|\n|SendLie|"][color="black"] 4318 [arrowhead="normal" dir="both" arrowtail="none"]; 4319 Nfeef2e6859ae4567bd7613a32cc28c0e -> Nfeef2e6859ae4567bd7613a32cc28c0e 4320 [label="|TimerTick|\n|LieRcvd|\n|SendLie|"][color="black"] 4321 [arrowhead="normal" dir="both" arrowtail="none"]; 4322 N54edd2b9de7641688608f44fca346303 -> Nd16ab5092c6b421c88da482eb4ae36b6 4323 [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"]; 4324 Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6 4325 [label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"][color="blue"] 4326 [arrowhead="normal" dir="both" arrowtail="none"]; 4327 Nd16ab5092c6b421c88da482eb4ae36b6 -> Nd16ab5092c6b421c88da482eb4ae36b6 4328 [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"]; 4329 Nfeef2e6859ae4567bd7613a32cc28c0e -> N54edd2b9de7641688608f44fca346303 4330 [label="|LevelChanged|"][color="blue"][arrowhead="normal" dir="both" arrowtail="none"]; 4331 Nfeef2e6859ae4567bd7613a32cc28c0e -> N54edd2b9de7641688608f44fca346303 4332 [label="|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|MultipleNeighbors|"] 4333 [color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4334 Nfeef2e6859ae4567bd7613a32cc28c0e -> Nd16ab5092c6b421c88da482eb4ae36b6 4335 [label="|ValidReflection|"][color="red"][arrowhead="normal" dir="both" arrowtail="none"]; 4336 N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303 4337 [label="|TimerTick|\n|LieRcvd|\n|NeighborChangedLevel|\n|NeighborChangedAddress|\n|UnacceptableHeader|\n|MTUMismatch|\n|PODMismatch|\n|HoldtimeExpired|\n|SendLie|"] 4338 [color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4339 N292744a4097f492f8605c926b924616b -> N54edd2b9de7641688608f44fca346303 4340 [label=""][color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4341 Nd16ab5092c6b421c88da482eb4ae36b6 -> N54edd2b9de7641688608f44fca346303 4342 [label="|LevelChanged|"][color="blue"][arrowhead="normal" dir="both" arrowtail="none"]; 4343 N54edd2b9de7641688608f44fca346303 -> Nfeef2e6859ae4567bd7613a32cc28c0e 4344 [label="|NewNeighbor|"][color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4345 N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303 4346 [label="|LevelChanged|\n|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"] 4347 [color="blue"][arrowhead="normal" dir="both" arrowtail="none"]; 4348 Nfeef2e6859ae4567bd7613a32cc28c0e -> Nfeef2e6859ae4567bd7613a32cc28c0e 4349 [label="|HALChanged|\n|HATChanged|\n|HALSChanged|\n|UpdateZTPOffer|"] 4350 [color="blue"][arrowhead="normal" dir="both" arrowtail="none"]; 4351 Nd16ab5092c6b421c88da482eb4ae36b6 -> Nfeef2e6859ae4567bd7613a32cc28c0e 4352 [label="|NeighborDroppedReflection|"] 4353 [color="red"][arrowhead="normal" dir="both" arrowtail="none"]; 4354 N54edd2b9de7641688608f44fca346303 -> N54edd2b9de7641688608f44fca346303 4355 [label="|NeighborDroppedReflection|"][color="red"] 4356 [arrowhead="normal" dir="both" arrowtail="none"]; 4357 } 4359 LIE FSM DOT 4361 .. To be updated .. 4363 LIE FSM Figure 4365 Events 4367 o TimerTick: one second timer tic 4369 o LevelChanged: node's level has been changed by ZTP or 4370 configuration 4372 o HALChanged: best HAL computed by ZTP has changed 4374 o HATChanged: HAT computed by ZTP has changed 4375 o HALSChanged: set of HAL offering systems computed by ZTP has 4376 changed 4378 o LieRcvd: received LIE 4380 o NewNeighbor: new neighbor parsed 4382 o ValidReflection: received own reflection from neighbor 4384 o NeighborDroppedReflection: lost previous own reflection from 4385 neighbor 4387 o NeighborChangedLevel: neighbor changed advertised level 4389 o NeighborChangedAddress: neighbor changed IP address 4391 o UnacceptableHeader: unacceptable header seen 4393 o MTUMismatch: MTU mismatched 4395 o PODMismatch: Unacceptable PoD seen 4397 o HoldtimeExpired: adjacency hold down expired 4399 o MultipleNeighbors: more than one neighbor seen on interface 4401 o SendLie: send a LIE out 4403 o UpdateZTPOffer: update this node's ZTP offer 4405 Actions 4407 on TimerTick in TwoWay finishes in TwoWay: PUSH SendLie event, if 4408 holdtime expired PUSH HoldtimeExpired event 4410 on HALChanged in TwoWay finishes in TwoWay: store new HAL 4412 on MTUMismatch in ThreeWay finishes in OneWay: no action 4414 on HALChanged in ThreeWay finishes in ThreeWay: store new HAL 4416 on ValidReflection in TwoWay finishes in ThreeWay: no action 4418 on ValidReflection in OneWay finishes in ThreeWay: no action 4420 on NeighborDroppedReflection in ThreeWay finishes in TwoWay: no 4421 action 4422 on LieRcvd in ThreeWay finishes in ThreeWay: PROCESS_LIE 4424 on MultipleNeighbors in TwoWay finishes in OneWay: no action 4426 on UnacceptableHeader in ThreeWay finishes in OneWay: no action 4428 on MTUMismatch in TwoWay finishes in OneWay: no action 4430 on LevelChanged in OneWay finishes in OneWay: update level with 4431 event value, PUSH SendLie event 4433 on UnacceptableHeader in TwoWay finishes in OneWay: no action 4435 on HALSChanged in TwoWay finishes in TwoWay: store HALS 4437 on UpdateZTPOffer in TwoWay finishes in TwoWay: send offer to ZTP 4438 FSM 4440 on NeighborChangedLevel in TwoWay finishes in OneWay: no action 4442 on NewNeighbor in OneWay finishes in TwoWay: PUSH SendLie event 4444 on NeighborChangedAddress in ThreeWay finishes in OneWay: no 4445 action 4447 on HALChanged in OneWay finishes in OneWay: store new HAL 4449 on NeighborChangedLevel in OneWay finishes in OneWay: no action 4451 on HoldtimeExpired in TwoWay finishes in OneWay: no action 4453 on SendLie in TwoWay finishes in TwoWay: SEND_LIE 4455 on LevelChanged in TwoWay finishes in OneWay: update level with 4456 event value 4458 on NeighborChangedAddress in OneWay finishes in OneWay: no action 4460 on HATChanged in TwoWay finishes in TwoWay: store HAT 4462 on LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE 4464 on MultipleNeighbors in ThreeWay finishes in OneWay: no action 4466 on MTUMismatch in OneWay finishes in OneWay: no action 4468 on SendLie in OneWay finishes in OneWay: SEND_LIE 4469 on LieRcvd in OneWay finishes in OneWay: PROCESS_LIE 4471 on TimerTick in ThreeWay finishes in ThreeWay: PUSH SendLie event, 4472 if holdtime expired PUSH HoldtimeExpired event 4474 on TimerTick in OneWay finishes in OneWay: PUSH SendLie event 4476 on PODMismatch in ThreeWay finishes in OneWay: no action 4478 on LevelChanged in ThreeWay finishes in OneWay: update level with 4479 event value 4481 on NeighborChangedLevel in ThreeWay finishes in OneWay: no action 4483 on UpdateZTPOffer in OneWay finishes in OneWay: send offer to ZTP 4484 FSM 4486 on UpdateZTPOffer in ThreeWay finishes in ThreeWay: send offer to 4487 ZTP FSM 4489 on HATChanged in OneWay finishes in OneWay: store HAT 4491 on HATChanged in ThreeWay finishes in ThreeWay: store HAT 4493 on HoldtimeExpired in OneWay finishes in OneWay: no action 4495 on UnacceptableHeader in OneWay finishes in OneWay: no action 4497 on PODMismatch in OneWay finishes in OneWay: no action 4499 on SendLie in ThreeWay finishes in ThreeWay: SEND_LIE 4501 on NeighborChangedAddress in TwoWay finishes in OneWay: no action 4503 on ValidReflection in ThreeWay finishes in ThreeWay: no action 4505 on HALSChanged in OneWay finishes in OneWay: store HALS 4507 on HoldtimeExpired in ThreeWay finishes in OneWay: no action 4509 on HALSChanged in ThreeWay finishes in ThreeWay: store HALS 4511 on NeighborDroppedReflection in OneWay finishes in OneWay: no 4512 action 4514 on PODMismatch in TwoWay finishes in OneWay: no action 4516 on Entry into OneWay: CLEANUP 4518 Following words are used for well known procedures: 4520 1. PUSH Event: pushes an event to be executed by the FSM upon exit 4521 of this action 4523 2. CLEANUP: neighbor MUST be reset to unknown 4525 3. SEND_LIE: create a new LIE packet 4527 1. reflecting the neighbor if known and valid and 4529 2. setting the necessary `not_a_ztp_offer` variable if level was 4530 derived from last known neighbor on this interface and 4532 3. setting `you_are_not_flood_repeater` to computed value 4534 4. PROCESS_LIE: 4536 1. if lie has wrong major version OR our own system ID or 4537 invalid system ID then CLEANUP else 4539 2. if lie has non matching MTUs then CLEANUP, PUSH 4540 UpdateZTPOffer, PUSH MTUMismatch else 4542 3. if PoD rules do not allow adjacency forming then CLEANUP, 4543 PUSH PODMismatch, PUSH MTUMismatch else 4545 4. if lie has undefined level OR my level is undefined OR this 4546 node is leaf and remote level lower than HAT OR (lie's level 4547 is not leaf AND its difference is more than one from my 4548 level) then CLEANUP, PUSH UpdateZTPOffer, PUSH 4549 UnacceptableHeader else 4551 5. PUSH UpdateZTPOffer, construct temporary new neighbor 4552 structure with values from lie, if no current neighbor exists 4553 then set neighbor to new neighbor, PUSH NewNeighbor event, 4554 CHECK_THREE_WAY else 4556 1. if current neighbor system ID differs from lie's system 4557 ID then PUSH MultipleNeighbors else 4559 2. if current neighbor stored level differs from lie's level 4560 then PUSH NeighborChangedLevel else 4562 3. if current neighbor stored IPv4/v6 address differs from 4563 lie's address then PUSH NeighborChangedAddress else 4565 4. if any of neighbor's flood address port, name, local 4566 linkid changed then PUSH NeighborChangedMinorFields and 4568 5. CHECK_THREE_WAY 4570 5. CHECK_THREE_WAY: if current state is one-way do nothing else 4572 1. if lie packet does not contain neighbor then if current state 4573 is three-way then PUSH NeighborDroppedReflection else 4575 2. if packet reflects this system's ID and local port and state 4576 is three-way then PUSH event ValidReflection else PUSH event 4577 MultipleNeighbors 4579 B.2. ZTP FSM 4581 Initial state is ComputeBestOffer. 4583 digraph G04743cd825cc40c5b93de0616ffb851b { 4584 N29e7db3976644f62b6f3b2801bccb854[label="Enter"] 4585 [style="dashed"][shape="plain"]; 4586 N33df4993a1664be18a2196001c27a64c[label="HoldingDown"][shape="oval"]; 4587 N839f77189e324c82b21b8a709b4b021d[label="ComputeBestOffer"][shape="oval"]; 4588 Nc97f2b02808d4751afcc630687bf7421[label="UpdatingClients"][shape="oval"]; 4589 N7ad21867360c44709be20a99f33dd1f7[label="Enter"] 4590 [style="dashed"][shape="plain"]; 4591 N33df4993a1664be18a2196001c27a64c -> N33df4993a1664be18a2196001c27a64c 4592 [label="|ComputationDone|"][color="green"] 4593 [arrowhead="normal" dir="both" arrowtail="none"]; 4594 N29e7db3976644f62b6f3b2801bccb854 -> Nc97f2b02808d4751afcc630687bf7421 4595 [label=""] 4596 [color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4597 N839f77189e324c82b21b8a709b4b021d -> N839f77189e324c82b21b8a709b4b021d 4598 [label="|NeighborOffer|\n|WithdrawNeighborOffer|"] 4599 [color="blue"][arrowhead="normal" dir="both" arrowtail="none"]; 4600 N33df4993a1664be18a2196001c27a64c -> N839f77189e324c82b21b8a709b4b021d 4601 [label="|ChangeLocalLeafIndications|\n|ChangeLocalConfiguredLevel|"] 4602 [color="gold"] 4603 [arrowhead="normal" dir="both" arrowtail="none"]; 4604 N839f77189e324c82b21b8a709b4b021d -> N839f77189e324c82b21b8a709b4b021d 4605 [label="|BetterHAL|\n|BetterHAT|\n|LostHAT|"] 4606 [color="red"][arrowhead="normal" dir="both" arrowtail="none"]; 4607 N33df4993a1664be18a2196001c27a64c -> N33df4993a1664be18a2196001c27a64c 4608 [label="|NeighborOffer|\n|WithdrawNeighborOffer|"][color="blue"] 4609 [arrowhead="normal" dir="both" arrowtail="none"]; 4610 Nc97f2b02808d4751afcc630687bf7421 -> N839f77189e324c82b21b8a709b4b021d 4611 [label="|BetterHAL|\n|BetterHAT|\n|LostHAT|"][color="red"] 4612 [arrowhead="normal" dir="both" arrowtail="none"]; 4613 N33df4993a1664be18a2196001c27a64c -> N33df4993a1664be18a2196001c27a64c 4614 [label="|ShortTic|"][color="black"][arrowhead="normal" dir="both" 4615 arrowtail="none"]; 4616 Nc97f2b02808d4751afcc630687bf7421 -> Nc97f2b02808d4751afcc630687bf7421 4617 [label="|NeighborOffer|\n|WithdrawNeighborOffer|"][color="blue"] 4618 [arrowhead="normal" dir="both" arrowtail="none"]; 4619 N33df4993a1664be18a2196001c27a64c -> N33df4993a1664be18a2196001c27a64c 4620 [label="|BetterHAL|\n|BetterHAT|\n|LostHAL|\n|LostHAT|"][color="red"] 4621 [arrowhead="normal" dir="both" arrowtail="none"]; 4622 N839f77189e324c82b21b8a709b4b021d -> N33df4993a1664be18a2196001c27a64c 4623 [label="|LostHAL|"][color="red"][arrowhead="normal" dir="both" 4624 arrowtail="none"]; 4625 N7ad21867360c44709be20a99f33dd1f7 -> N839f77189e324c82b21b8a709b4b021d 4626 [label=""][color="black"][arrowhead="normal" dir="both" arrowtail="none"]; 4627 N839f77189e324c82b21b8a709b4b021d -> Nc97f2b02808d4751afcc630687bf7421 4628 [label="|ComputationDone|"][color="green"][arrowhead="normal" dir="both" 4629 arrowtail="none"]; 4630 N839f77189e324c82b21b8a709b4b021d -> N839f77189e324c82b21b8a709b4b021d 4631 [label="|ChangeLocalLeafIndications|\n|ChangeLocalConfiguredLevel|"] 4632 [color="gold"] 4633 [arrowhead="normal" dir="both" arrowtail="none"]; 4634 Nc97f2b02808d4751afcc630687bf7421 -> N33df4993a1664be18a2196001c27a64c 4635 [label="|LostHAL|"] 4636 [color="red"][arrowhead="normal" dir="both" arrowtail="none"]; 4637 N33df4993a1664be18a2196001c27a64c -> N839f77189e324c82b21b8a709b4b021d 4638 [label="|HoldDownExpired|"][color="green"][arrowhead="normal" dir="both" 4639 arrowtail="none"]; 4640 Nc97f2b02808d4751afcc630687bf7421 -> N839f77189e324c82b21b8a709b4b021d 4641 [label="|ChangeLocalLeafIndications|\n|ChangeLocalConfiguredLevel|"] 4642 [color="gold"] 4643 [arrowhead="normal" dir="both" arrowtail="none"]; 4644 } 4646 ZTP FSM DOT 4648 Events 4650 o ChangeLocalLeafIndications: node configured with new leaf flags 4652 o ChangeLocalConfiguredLevel: node locally configured with a defined 4653 level 4655 o NeighborOffer: a new neighbor offer with optional level and 4656 neighbor state 4658 o WithdrawNeighborOffer: a neighbor's offer withdrawn 4659 o BetterHAL: better HAL computed internally 4661 o BetterHAT: better HAT computed internally 4663 o LostHAL: lost last HAL in computation 4665 o LostHAT: lost HAT in computation 4667 o ComputationDone: computation performed 4669 o HoldDownExpired: holddown expired 4671 Actions 4673 on LostHAT in ComputeBestOffer finishes in ComputeBestOffer: 4674 LEVEL_COMPUTE 4676 on LostHAT in HoldingDown finishes in HoldingDown: no action 4678 on LostHAL in HoldingDown finishes in HoldingDown: 4680 on ChangeLocalLeafIndications in UpdatingClients finishes in 4681 ComputeBestOffer: store leaf flags 4683 on LostHAT in UpdatingClients finishes in ComputeBestOffer: no 4684 action 4686 on BetterHAT in HoldingDown finishes in HoldingDown: no action 4688 on NeighborOffer in ComputeBestOffer finishes in ComputeBestOffer: 4690 if no level offered REMOVE_OFFER else 4692 if level > leaf then UPDATE_OFFER else REMOVE_OFFER 4694 on BetterHAT in UpdatingClients finishes in ComputeBestOffer: no 4695 action 4697 on ChangeLocalConfiguredLevel in HoldingDown finishes in 4698 ComputeBestOffer: store configured level 4700 on BetterHAL in ComputeBestOffer finishes in ComputeBestOffer: 4701 LEVEL_COMPUTE 4703 on HoldDownExpired in HoldingDown finishes in ComputeBestOffer: 4704 PURGE_OFFERS 4705 on ShortTic in HoldingDown finishes in HoldingDown: if holddown 4706 timer expired PUSH_EVENT HoldDownExpired 4708 on ComputationDone in ComputeBestOffer finishes in 4709 UpdatingClients: no action 4711 on LostHAL in UpdatingClients finishes in HoldingDown: if any 4712 southbound adjacencies present update holddown timer to normal 4713 duration else fire holddown timer immediately 4715 on NeighborOffer in UpdatingClients finishes in UpdatingClients: 4717 if no level offered REMOVE_OFFER else 4719 if level > leaf then UPDATE_OFFER else REMOVE_OFFER 4721 on ChangeLocalConfiguredLevel in ComputeBestOffer finishes in 4722 ComputeBestOffer: store configured level and LEVEL_COMPUTE 4724 on NeighborOffer in HoldingDown finishes in HoldingDown: 4726 if no level offered REMOVE_OFFER else 4728 if level > leaf then UPDATE_OFFER else REMOVE_OFFER 4730 on LostHAL in ComputeBestOffer finishes in HoldingDown: if any 4731 southbound adjacencies present update holddown timer to normal 4732 duration else fire holddown timer immediately 4734 on BetterHAT in ComputeBestOffer finishes in ComputeBestOffer: 4735 LEVEL_COMPUTE 4737 on WithdrawNeighborOffer in ComputeBestOffer finishes in 4738 ComputeBestOffer: REMOVE_OFFER 4740 on ChangeLocalLeafIndications in ComputeBestOffer finishes in 4741 ComputeBestOffer: store leaf flags and LEVEL_COMPUTE 4743 on BetterHAL in HoldingDown finishes in HoldingDown: no action 4745 on WithdrawNeighborOffer in HoldingDown finishes in HoldingDown: 4746 REMOVE_OFFER 4748 on ChangeLocalLeafIndications in HoldingDown finishes in 4749 ComputeBestOffer: store leaf flags 4751 on ChangeLocalConfiguredLevel in UpdatingClients finishes in 4752 ComputeBestOffer: store level 4753 on ComputationDone in HoldingDown finishes in HoldingDown: 4755 on BetterHAL in UpdatingClients finishes in ComputeBestOffer: no 4756 action 4758 on WithdrawNeighborOffer in UpdatingClients finishes in 4759 UpdatingClients: REMOVE_OFFER 4761 on Entry into UpdatingClients: update all LIE FSMs with 4762 computation results 4764 on Entry into ComputeBestOffer: LEVEL_COMPUTE 4766 Following words are used for well known procedures: 4768 1. PUSH Event: pushes an event to be executed by the FSM upon exit 4769 of this action 4771 2. COMPARE_OFFERS: checks whether based on current offers and held 4772 last results the events BetterHAL/LostHAL/BetterHAT/LostHAT are 4773 necessary and returns them 4775 3. UPDATE_OFFER: store current offer and COMPARE_OFFERS, PUSH 4776 according events 4778 4. LEVEL_COMPUTE: compute best offered or configured level and HAL/ 4779 HAT, if anything changed PUSH ComputationDone 4781 5. REMOVE_OFFER: remove the according offer and COMPARE_OFFERS, PUSH 4782 according events 4784 6. PURGE_OFFERS: REMOVE_OFFER for all held offers, COMPARE OFFERS, 4785 PUSH according events 4787 B.3. Flooding Procedures 4789 Flooding Procedures are described in terms of a flooding state of an 4790 adjacency and resulting operations on it driven by packet arrivals. 4791 The FSM has basically a single state and is not well suited to 4792 represent the behavior. 4794 RIFT does not specify any kind of flood rate limiting since such 4795 specifications always assume particular points in available 4796 technology speeds and feeds and those points are shifting at faster 4797 and faster rate (speed of light holding for the moment). The 4798 implementation is well served to react accordingly to losses or 4799 overruns which are easily detected by raising retransmission rates. 4801 Flooding of all according topology exchange elements SHOULD be 4802 performed at highest feasible rate whereas the rate of transmission 4803 MUST be throttled by reacting to adequate features of the system such 4804 as e.g. queue lengths. 4806 B.3.1. FloodState Structure per Adjacency 4808 The structure contains conceptually the following elements. The word 4809 collection or queue indicates a set of elements that can be iterated: 4811 TIES_TX: Collection containing all the TIEs to transmit on the 4812 adjacency. 4814 TIES_ACK: Collection containing all the TIEs that have to be 4815 acknowledged on the adjacency. 4817 TIES_REQ: Collection containing all the TIE headers that have to be 4818 requested on the adjacency. 4820 TIES_RTX: Collection containing all TIEs that need retransmission 4821 with the according time to retransmit. 4823 LAST_RCVD_TIDE_END: last received TIE ID in last received TIDE. 4825 Following words are used for well known procedures operating on this 4826 structure: 4828 is_flood_reduced(TIE): returns whether a TIE can be flood reduced or 4829 not. 4831 is_tide_entry_filtered(TIE): returns whether a header should be 4832 propagated in TIDE according to flooding scopes. 4834 is_request_filtered(TIE): returns whether a TIE request should be 4835 propagated to neighbor or not according to flooding scopes. 4837 is_flood_filtered(TIE): returns whether a TIE requested be flooded 4838 to neighbor or not according to flooding scopes. 4840 try_to_transmit_tie(TIE): 4842 A. if not is_flood_filtered(TIE) then 4844 1. remove TIE from TIES_RTX if present 4846 2. if TIE" with same key on TIES_ACK then 4848 a. if TIE" same or newer than TIE do nothing else 4849 b. remove TIE" from TIES_ACK and add TIE to TIES_TX 4851 3. else insert TIE into TIES_TX 4853 ack_tie(TIE): remove TIE from all collections and then insert TIE 4854 into TIES_ACK. 4856 tie_been_acked(TIE): remove TIE from all collections. 4858 remove_from_all_queues(TIE): same as `tie_been_acked`. 4860 request_tie(TIE): if not is_request_filtered(TIE) then 4861 remove_from_all_queues(TIE) and add to TIES_REQ. 4863 move_to_rtx_list(TIE): remove TIE from TIES_TX and then add to 4864 TIES_RTX using TIE retransmission interval. 4866 clear_requests(TIEs): remove all TIEs from TIES_REQ. 4868 bump_own_tie(TIE): for self-originated TIE originate an empty or re- 4869 generate with version number higher then the one in TIE. 4871 The collection SHOULD be served with following priorities if the 4872 system cannot process all the collections in real time: 4874 Elements on TIES_ACK should be processed with highest priority 4876 TIES_TX 4878 TIES_REQ and TIES_RTX 4880 B.3.2. TIDEs 4882 `TIEID` and `TIEHeader` space forms a strict total order which 4883 implies that a comparison relation is possible between two elements. 4884 With that it is implictly possible to compare TIEs, TIEHeaders and 4885 TIEIDs to each other whereas the shortest viable key is always 4886 implied. 4888 B.3.2.1. TIDE Generation 4890 As given by timer constant, periodically generate TIDEs by: 4892 NEXT_TIDE_ID: ID of next TIE to be sent in TIDE. 4894 TIDE_START: Begin of TIDE packet range. 4896 a. NEXT_TIDE_ID = MIN_TIEID 4897 b. while NEXT_TIDE_ID not equal to MAX_TIEID do 4899 1. TIDE_START = NEXT_TIDE_ID 4901 2. HEADERS = At most TIRDEs_PER_PKT headers in TIEDB starting at 4902 NEXT_TIDE_ID or higher that SHOULD be filtered by 4903 is_tide_entry_filtered 4905 3. if HEADERS is empty then START = MIN_TIEID else START = first 4906 element in HEADERS 4908 4. if HEADERS' size less than TIRDEs_PER_PKT then END = 4909 MAX_TIEID else END = last element in HEADERS 4911 5. send sorted HEADERS as TIDE setting START and END as its 4912 range 4914 6. NEXT_TIDE_ID = END 4916 The constant `TIRDEs_PER_PKT` SHOULD be generated and used by the 4917 implementation to limit the amount of TIE headers per TIDE so the 4918 sent TIDE PDU does not exceed interface MTU. 4920 TIDE PDUs SHOULD be spaced on sending to prevent packet drops. 4922 B.3.2.2. TIDE Processing 4924 On reception of TIDEs the following processing is performed: 4926 TXKEYS: Collection of TIE Headers to be send after processing of 4927 the packet 4929 REQKEYS: Collection of TIEIDs to be requested after processing of 4930 the packet 4932 CLEARKEYS: Collection of TIEIDs to be removed from flood state 4933 queues 4935 LASTPROCESSED: Last processed TIEID in TIDE 4937 DBTIE: TIE in the LSDB if found 4939 a. if TIDE.start_range > LAST_RCVD_TIDE_END then add all HEADERs in 4940 LSDB where (TIE >= LAST_RCVD_TIDE_END and TIE < TIDE.start_range) 4941 to TXKEYS 4943 b. LASTPROCESSED = TIDE.start_range 4944 c. for every HEADER in TIDE do 4946 1. DBTIE = find HEADER in current LSDB 4948 2. if HEADER < LASTPROCESSED then report error and reset 4949 adjacency and return 4951 3. put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and 4952 TIE.HEADER < HEADER) into TXKEYS 4954 4. LASTPROCESSED = HEADER 4956 5. if DBTIE not found then 4958 1. if originator is this node then bump_own_tie else put 4959 HEADER into REQKEYS 4961 6. if DBTIE.HEADER < HEADER then 4963 1. if originator is this node then bump_own_tie else put 4964 HEADER into REQKEYS 4966 7. if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS 4968 8. if DBTIE.HEADER = HEADER then put DBTIE.HEADER into CLEARKEYS 4970 d. put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and 4971 TIE.HEADER <= TIDE.end_range) into TXKEYS 4973 e. LAST_RCVD_TIDE_END = (if TIDE.end_range == MAX_TIEID then 4974 MIN_TIEID else TIDE.end_range) 4976 f. for all TIEs in TXKEYS try_to_transmit_tie(TIE) 4978 g. for all TIEs in REQKEYS request_tie(TIE) 4980 h. for all TIEs in CLEARKEYS remove_from_all_queues(TIE) 4982 B.3.3. TIREs 4984 B.3.3.1. TIRE Generation 4986 There is not much to say here. Elements from both TIES_REQ and 4987 TIES_ACK MUST be collected and sent out as fast as feasible as TIREs. 4989 B.3.3.2. TIRE Processing 4991 On reception of TIREs the following processing is performed: 4993 TXKEYS: Collection of TIE Headers to be send after processing of 4994 the packet 4996 REQKEYS: Collection of TIEIDs to be requested after processing of 4997 the packet 4999 ACKKEYS: Collection of TIEIDs that have been acked 5001 DBTIE: TIE in the LSDB if found 5003 a. for every HEADER in TIRE do 5005 1. DBTIE = find HEADER in current LSDB 5007 2. if DBTIE not found then do nothing 5009 3. if DBTIE.HEADER < HEADER then put HEADER into REQKEYS 5011 4. if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS 5013 5. if DBTIE.HEADER = HEADER then put DBTIE.HEADER into ACKKEYS 5015 b. for all TIEs in TXKEYS try_to_transmit_tie(TIE) 5017 c. for all TIEs in REQKEYS request_tie(TIE) 5019 d. for all TIEs in ACKKEYS tie_been_acked(TIE) 5021 B.3.4. TIEs Processing on Flood State Adjacency 5023 On reception of TIEs the following processing is performed: 5025 ACKTIE: TIE to acknowledge 5027 TXTIE: TIE to transmit 5029 DBTIE: TIE in the LSDB if found 5031 a. DBTIE = find TIE in current LSDB 5033 b. if DBTIE not found then 5035 1. if originator is this node then bump_own_tie with a short 5036 remaining lifetime 5038 2. else insert TIE into LSDB and ACKTIE = TIE 5040 else 5042 1. if DBTIE.HEADER = TIE.HEADER then ACKTIE = TIE 5044 2. if DBTIE.HEADER < TIE.HEADER then 5046 i. if originator is this node then bump_own_tie 5048 ii. else insert TIE into LSDB and ACKTIE = TIE 5050 3. if DBTIE.HEADER > TIE.HEADER then TXTIE = TIE 5052 c. if TXTIE is set then try_to_transmit_tie(TXTIE) 5054 d. if ACKTIE is set then ack_tie(TIE) 5056 B.3.5. TIEs Processing When LSDB Received Newer Version on Other 5057 Adjacencies 5059 The Link State Database can be considered to be a switchboard that 5060 does not need any flooding procedures but can be given new versions 5061 of TIEs by a peer. Consecutively, a peer receives from the LSDB 5062 newer versions of TIEs received by other peeers and processes them 5063 (without any filtering) just like receving TIEs from its remote peer. 5064 This publisher model can be implemented in many ways. 5066 Appendix C. Constants 5068 C.1. Configurable Protocol Constants 5070 This section gather constants that are provided in the schema files 5071 and the document. 5073 +----------------+--------------+-----------------------------------+ 5074 | | Type | Value | 5075 +----------------+--------------+-----------------------------------+ 5076 | LIE IPv4 | Default | 224.0.0.120 or all-rift-routers | 5077 | Multicast | Value, | to be assigned in IPv4 Multicast | 5078 | Address | Configurable | Address Space Registry in Local | 5079 | | | Network Control Block | 5080 +----------------+--------------+-----------------------------------+ 5081 | LIE IPv6 | Default | FF02::A1F7 or all-rift-routers to | 5082 | Multicast | Value, | be assigned in IPv6 Multicast | 5083 | Address | Configurable | Address Assignments | 5084 +----------------+--------------+-----------------------------------+ 5085 | LIE | Default | 911 | 5086 | Destination | Value, | | 5087 | Port | Configurable | | 5088 +----------------+--------------+-----------------------------------+ 5089 | Level value | Constant | 24 | 5090 | for | | | 5091 | TOP_OF_FABRIC | | | 5092 | flag | | | 5093 +----------------+--------------+-----------------------------------+ 5094 | Default LIE | Default | 3 seconds | 5095 | Holdtime | Value, | | 5096 | | Configurable | | 5097 +----------------+--------------+-----------------------------------+ 5098 | TIE | Default | 1 second | 5099 | Retransmission | Value | | 5100 | Interval | | | 5101 +----------------+--------------+-----------------------------------+ 5102 | TIDE | Default | 3 seconds | 5103 | Generation | Value, | | 5104 | Interval | Configurable | | 5105 +----------------+--------------+-----------------------------------+ 5106 | MIN_TIEID | Constant | TIE Key with minimal values: | 5107 | signifies | | TIEID(originator=0, | 5108 | start of TIDEs | | tietype=TIETypeMinValue, | 5109 | | | tie_nr=0, direction=South) | 5110 +----------------+--------------+-----------------------------------+ 5111 | MAX_TIEID | Constant | TIE Key with maximal values: | 5112 | signifies end | | TIEID(originator=MAX_UINT64, | 5113 | of TIDEs | | tietype=TIETypeMaxValue, | 5114 | | | tie_nr=MAX_UINT64, | 5115 | | | direction=North) | 5116 +----------------+--------------+-----------------------------------+ 5118 Table 6: all_constants 5120 Appendix D. TODO 5122 o explain what needs implemented for leaf/spine/superspine/ToF 5123 version in detail 5125 o section on E-W superspine/ToF flooding scope to connect partitions 5127 o finish security envelope, move remaining lifetime out the TIE 5128 packet so it can be modified independently of the SHA-1'd TIE 5130 o move adjacency formation rules onto FSM text and remove 2.4.2 5132 o add an intermediate state on multiple neighbors 5134 o write negative disaggregation example 5136 o deal with the case of stale north TIEs stuck more than one level 5137 up (propagate header description southbound) 5139 Author's Address 5141 The RIFT Team 5142 "Heaven is under our feet as well as over our heads"