RIFT Working Group                                     A. Przygienda, Ed.
Internet-Draft                                                    Juniper
Intended status: Standards Track                                A. Sharma
Expires: November 27, 2020                                        Comcast
                                                               P. Thubert
                                                                    Cisco
                                                               B. Rijsman
                                                               Individual
                                                             D. Afanasiev
                                                                   Yandex
                                                             May 26, 2020

                       RIFT: Routing in Fat Trees
                         draft-ietf-rift-rift-12

Abstract

   This document defines a specialized, dynamic routing protocol for
   Clos and fat-tree network topologies, optimized towards minimization
   of configuration and operational complexity.  The protocol

   o  provides fully automated construction of fat-tree topologies
      based on detection of links, without any configuration,

   o  minimizes the amount of routing state held at each level,

   o  automatically prunes and load balances topology flooding
      exchanges over a sufficient subset of links,

   o  supports automatic disaggregation of prefixes on link and node
      failures to prevent black-holing and suboptimal routing,

   o  allows traffic steering and re-routing policies,

   o  allows loop-free non-ECMP forwarding,

   o  automatically re-balances traffic towards the spines based on
      available bandwidth and, finally,

   o  provides mechanisms to synchronize a limited key-value data-store
      that can be used after protocol convergence to e.g. bootstrap
      higher levels of functionality on nodes.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 27, 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Authors
   2.  Introduction
     2.1.  Requirements Language
   3.  Reference Frame
     3.1.  Terminology
     3.2.  Topology
   4.  RIFT: Routing in Fat Trees
     4.1.  Overview
       4.1.1.  Properties
       4.1.2.  Generalized Topology View
         4.1.2.1.  Terminology
         4.1.2.2.  Clos as Crossed Crossbars
       4.1.3.  Fallen Leaf Problem
       4.1.4.  Discovering Fallen Leaves
       4.1.5.  Addressing the Fallen Leaves Problem
     4.2.  Specification
       4.2.1.  Transport
       4.2.2.  Link (Neighbor) Discovery (LIE Exchange)
         4.2.2.1.  LIE FSM
       4.2.3.  Topology Exchange (TIE Exchange)
         4.2.3.1.  Topology Information Elements
         4.2.3.2.  South- and Northbound Representation
         4.2.3.3.  Flooding
         4.2.3.4.  TIE Flooding Scopes
         4.2.3.5.  'Flood Only Node TIEs' Bit
         4.2.3.6.  Initial and Periodic Database Synchronization
         4.2.3.7.  Purging and Roll-Overs
         4.2.3.8.  Southbound Default Route Origination
         4.2.3.9.  Northbound TIE Flooding Reduction
         4.2.3.10. Special Considerations
       4.2.4.  Reachability Computation
         4.2.4.1.  Northbound SPF
         4.2.4.2.  Southbound SPF
         4.2.4.3.  East-West Forwarding Within a non-ToF Level
         4.2.4.4.  East-West Links Within ToF Level
       4.2.5.  Automatic Disaggregation on Link & Node Failures
         4.2.5.1.  Positive, Non-transitive Disaggregation
         4.2.5.2.  Negative, Transitive Disaggregation for Fallen
                   Leaves
       4.2.6.  Attaching Prefixes
       4.2.7.  Optional Zero Touch Provisioning (ZTP)
         4.2.7.1.  Terminology
         4.2.7.2.  Automatic SystemID Selection
         4.2.7.3.  Generic Fabric Example
         4.2.7.4.  Level Determination Procedure
         4.2.7.5.  ZTP FSM
         4.2.7.6.  Resulting Topologies
       4.2.8.  Stability Considerations
     4.3.  Further Mechanisms
       4.3.1.  Overload Bit
       4.3.2.  Optimized Route Computation on Leaves
       4.3.3.  Mobility
         4.3.3.1.  Clock Comparison
         4.3.3.2.  Interaction between Time Stamps and Sequence
                   Counters
         4.3.3.3.  Anycast vs. Unicast
         4.3.3.4.  Overlays and Signaling
       4.3.4.  Key/Value Store
         4.3.4.1.  Southbound
         4.3.4.2.  Northbound
       4.3.5.  Interactions with BFD
       4.3.6.  Fabric Bandwidth Balancing
         4.3.6.1.  Northbound Direction
         4.3.6.2.  Southbound Direction
       4.3.7.  Label Binding
       4.3.8.  Leaf to Leaf Procedures
       4.3.9.  Address Family and Multi Topology Considerations
       4.3.10. Reachability of Internal Nodes in the Fabric
       4.3.11. One-Hop Healing of Levels with East-West Links
     4.4.  Security
       4.4.1.  Security Model
       4.4.2.  Security Mechanisms
       4.4.3.  Security Envelope
       4.4.4.  Weak Nonces
       4.4.5.  Lifetime
       4.4.6.  Key Management
       4.4.7.  Security Association Changes
   5.  Examples
     5.1.  Normal Operation
     5.2.  Leaf Link Failure
     5.3.  Partitioned Fabric
     5.4.  Northbound Partitioned Router and Optional East-West Links
   6.  Implementation and Operation: Further Details
     6.1.  Considerations for Leaf-Only Implementation
     6.2.  Considerations for Spine Implementation
     6.3.  Adaptations to Other Proposed Data Center Topologies
     6.4.  Originating Non-Default Route Southbound
   7.  Security Considerations
     7.1.  General
     7.2.  ZTP
     7.3.  Lifetime
     7.4.  Packet Number
     7.5.  Outer Fingerprint Attacks
     7.6.  TIE Origin Fingerprint DoS Attacks
     7.7.  Host Implementations
   8.  IANA Considerations
     8.1.  Requested Multicast and Port Numbers
     8.2.  Requested Registries with Suggested Values
       8.2.1.  Registry RIFT_v4/common/AddressFamilyType
         8.2.1.1.  Requested Entries
       8.2.2.  Registry RIFT_v4/common/HierarchyIndications
         8.2.2.1.  Requested Entries
       8.2.3.  Registry RIFT_v4/common/IEEE802_1ASTimeStampType
         8.2.3.1.  Requested Entries
       8.2.4.  Registry RIFT_v4/common/IPAddressType
         8.2.4.1.  Requested Entries
       8.2.5.  Registry RIFT_v4/common/IPPrefixType
         8.2.5.1.  Requested Entries
       8.2.6.  Registry RIFT_v4/common/IPv4PrefixType
         8.2.6.1.  Requested Entries
       8.2.7.  Registry RIFT_v4/common/IPv6PrefixType
         8.2.7.1.  Requested Entries
       8.2.8.  Registry RIFT_v4/common/PrefixSequenceType
         8.2.8.1.  Requested Entries
       8.2.9.  Registry RIFT_v4/common/RouteType
         8.2.9.1.  Requested Entries
       8.2.10. Registry RIFT_v4/common/TIETypeType
         8.2.10.1.  Requested Entries
       8.2.11. Registry RIFT_v4/common/TieDirectionType
         8.2.11.1.  Requested Entries
       8.2.12. Registry RIFT_v4/encoding/Community
         8.2.12.1.  Requested Entries
       8.2.13. Registry RIFT_v4/encoding/KeyValueTIEElement
         8.2.13.1.  Requested Entries
       8.2.14. Registry RIFT_v4/encoding/LIEPacket
         8.2.14.1.  Requested Entries
       8.2.15. Registry RIFT_v4/encoding/LinkCapabilities
         8.2.15.1.  Requested Entries
       8.2.16. Registry RIFT_v4/encoding/LinkIDPair
         8.2.16.1.  Requested Entries
       8.2.17. Registry RIFT_v4/encoding/Neighbor
         8.2.17.1.  Requested Entries
       8.2.18. Registry RIFT_v4/encoding/NodeCapabilities
         8.2.18.1.  Requested Entries
       8.2.19. Registry RIFT_v4/encoding/NodeFlags
         8.2.19.1.  Requested Entries
       8.2.20. Registry RIFT_v4/encoding/NodeNeighborsTIEElement
         8.2.20.1.  Requested Entries
       8.2.21. Registry RIFT_v4/encoding/NodeTIEElement
         8.2.21.1.  Requested Entries
       8.2.22. Registry RIFT_v4/encoding/PacketContent
         8.2.22.1.  Requested Entries
       8.2.23. Registry RIFT_v4/encoding/PacketHeader
         8.2.23.1.  Requested Entries
       8.2.24. Registry RIFT_v4/encoding/PrefixAttributes
         8.2.24.1.  Requested Entries
       8.2.25. Registry RIFT_v4/encoding/PrefixTIEElement
         8.2.25.1.  Requested Entries
       8.2.26. Registry RIFT_v4/encoding/ProtocolPacket
         8.2.26.1.  Requested Entries
       8.2.27. Registry RIFT_v4/encoding/TIDEPacket
         8.2.27.1.  Requested Entries
       8.2.28. Registry RIFT_v4/encoding/TIEElement
         8.2.28.1.  Requested Entries
       8.2.29. Registry RIFT_v4/encoding/TIEHeader
         8.2.29.1.  Requested Entries
       8.2.30. Registry RIFT_v4/encoding/TIEHeaderWithLifeTime
         8.2.30.1.  Requested Entries
       8.2.31. Registry RIFT_v4/encoding/TIEID
         8.2.31.1.  Requested Entries
       8.2.32. Registry RIFT_v4/encoding/TIEPacket
         8.2.32.1.  Requested Entries
       8.2.33. Registry RIFT_v4/encoding/TIREPacket
         8.2.33.1.  Requested Entries
   9.  Acknowledgments
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Sequence Number Binary Arithmetic
   Appendix B.  Information Elements Schema
     B.1.  common.thrift
     B.2.  encoding.thrift
   Appendix C.  Constants
     C.1.  Configurable Protocol Constants
   Authors' Addresses

1.  Authors

   This work is a product of a group of individuals, all of whom are to
   be considered major contributors, independently of whether their
   names made it into the size-limited boilerplate author list or not.

   Tony Przygienda, Ed.  |  Alankar Sharma  |  Pascal Thubert
   Juniper Networks      |  Comcast         |  Cisco

   Bruno Rijsman         |  Ilya Vershkov   |  Dmitry Afanasiev
   Individual            |  Mellanox        |  Yandex

   Don Fedyk             |  Alia Atlas      |  John Drake
   Individual            |  Individual      |  Juniper

                         Table 1: RIFT Authors

2.  Introduction

   Clos [CLOS] and Fat-Tree [FATTREE] topologies have gained prominence
   in today's networking, primarily as a result of the paradigm shift
   towards a centralized, data-center based architecture that is poised
   to deliver a majority of computation and storage services in the
   future.  Today's routing protocols were originally geared towards
   networks with irregular topologies and a low degree of connectivity;
   since they were the only available options, several attempts have
   nevertheless been made to apply those protocols to Clos topologies.
   Most successfully, BGP [RFC4271] [RFC7938] has been extended for
   this purpose, not so much due to its inherent suitability but rather
   because of the perceived ease of modifying BGP and the inherent
   difficulty link-state [DIJKSTRA] based protocols have in optimizing
   topology exchange and converging quickly in large-scale, densely
   meshed topologies.  The incumbent protocols normally presuppose
   extensive configuration or provisioning during bring-up and
   re-dimensioning, which tends to be viable only for a set of
   organizations with the according network operation skills and
   budgets.  For many IP fabric builders a desirable protocol would be
   one that auto-configures itself and deals with failures and
   mis-configurations with a minimum of human intervention.
   Such a solution would allow local IP fabric bandwidth to be consumed
   in a "standard component" fashion, i.e. provisioned much faster and
   operated at much lower cost than today, much like compute or storage
   are consumed already.

   In looking at the problem through the lens of data center
   requirements, RIFT addresses challenges in IP fabric routing not
   through an incremental modification of either a link-state
   (distributed computation) or a distance-vector (diffused
   computation) protocol, but rather through a mixture of both,
   colloquially best described as "link-state towards the spine" and
   "distance vector towards the leaves".  In other words, "bottom"
   levels flood their link-state information in the "northern"
   direction, while each node under normal conditions generates a
   "default route" and floods it in the "southern" direction.  This
   type of protocol naturally allows for highly desirable aggregation.
   Alas, such aggregation could blackhole traffic in cases of
   misconfiguration or while failures are being resolved, or even cause
   partial network partitioning, and this has to be addressed by an
   adequate mechanism.  The approach RIFT takes, described in
   Section 4.2.5, is based on automatic, sufficient disaggregation of
   prefixes in case of link and node failures.

   For the visually oriented reader, Figure 1 presents a first-level,
   simplified view of the resulting information and routes on a RIFT
   fabric.  The top of the fabric holds in its link-state database the
   nodes below it and the routes to them.  In the second row of the
   database table we indicate that partial information about other
   nodes in the same level is available as well.  The details of how
   this is achieved are postponed for the moment.  When we look at the
   "bottom" of the fabric, the leaves, we see that their topology view
   is basically empty; under normal conditions the leaves only hold a
   load-balanced default route to the next level.

   The balance of this document details a dedicated IP fabric routing
   protocol, fills in the specification details and ultimately includes
   resulting security considerations.

   .                    [A,B,C,D]
   .                    [E]
   .                  +-----+     +-----+
   .                  |  E  |     |  F  |    A/32 @ [C,D]
   .                  +-+-+-+     +-+-+-+    B/32 @ [C,D]
   .                    | |         | |      C/32 @ C
   .                    | |  +-----+  |      D/32 @ D
   .                    | |  |        |
   .                    | +------+    |
   .                    |    |   |    |
   .    [A,B]       +-+---+  |   |  +---+-+      [A,B]
   .    [D]         |  C  +--+   +--+  D  |      [C]
   .                +-+-+-+         +-+-+-+
   .   0/0 @ [E,F]    | |             | |     0/0 @ [E,F]
   .   A/32 @ A       | |  +-----+      |     A/32 @ A
   .   B/32 @ B       | |  |            |     B/32 @ B
   .                  | +------+        |
   .                  |    |   |        |
   .              +-+---+  |   |    +---+-+
   .              |  A  +--+   +----+  B  |
   .  0/0 @ [C,D] +-----+           +-----+  0/0 @ [C,D]

                Figure 1: RIFT Information Distribution
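   For readers who prefer data structures over pictures, the following
   purely illustrative, non-normative Python sketch restates the
   contents of Figure 1.  The table layout and all names are ours and
   not part of the protocol; "@" in the figure reads "via" and 0/0 is
   the default route.

      # Illustrative rendering of Figure 1: each node's view of the
      # topology (its link-state database) and the resulting routes.
      fabric_state = {
          # Top of Fabric: full topology of the nodes below it plus
          # routes to them (partial information about other nodes at
          # the same level is available as well).
          "E": {"topology": ["A", "B", "C", "D"],
                "routes": {"A/32": ["C", "D"], "B/32": ["C", "D"],
                           "C/32": ["C"], "D/32": ["D"]}},
          # Spine: topology of the leaves below, default route north.
          "C": {"topology": ["A", "B"],
                "routes": {"0/0": ["E", "F"],
                           "A/32": ["A"], "B/32": ["B"]}},
          # Leaf: essentially empty topology and only a load-balanced
          # default route north under normal conditions.
          "A": {"topology": [],
                "routes": {"0/0": ["C", "D"]}},
      }

      for node, state in fabric_state.items():
          print(node, "sees", state["topology"] or "nothing",
                "and routes", state["routes"])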
2.1.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 8174 [RFC8174]
   when, and only when, they appear in all capitals, as shown here.

3.  Reference Frame

3.1.  Terminology

   This section presents the terminology used in this document.  It is
   assumed that the reader is thoroughly familiar with the terms and
   concepts used in OSPF [RFC2328] and IS-IS [ISO10589-Second-Edition],
   [ISO10589], as well as the according graph-theoretical concepts of
   shortest path first (SPF) [DIJKSTRA] computation and DAGs.

   Crossbar:  Physical arrangement of ports in a switching matrix
      without implying any further scheduling or buffering disciplines.

   Clos/Fat Tree:  This document uses the terms Clos and Fat Tree
      interchangeably; both always refer to a folded spine-and-leaf
      topology with possibly multiple Points of Delivery (PoDs) and one
      or multiple Top of Fabric (ToF) planes.  Several modifications
      such as leaf-2-leaf shortcuts and multiple level shortcuts are
      possible and described further in the document.

   Directed Acyclic Graph (DAG):  A finite directed graph with no
      directed cycles (loops).  If the links in a Clos are considered
      as either being all directed towards the top or vice versa, each
      of the two resulting graphs is a DAG.

   Folded Spine-and-Leaf:  In case Clos fabric input and output stages
      are analogous, the fabric can be "folded" to build a "superspine"
      or top, which we will call Top of Fabric (ToF) in this document.

   Level:  Clos and Fat Tree networks are topologically partially
      ordered graphs, and "level" denotes the set of nodes at the same
      height in such a network, where the bottom level (leaf) is the
      level with the lowest value.  A node has links to nodes one level
      down and/or one level up.  Under some circumstances, a node may
      have links to nodes at the same level.  As a footnote: Clos
      terminology often uses the concept of "stage", but due to the
      folded nature of the Fat Tree we do not use it, to prevent
      misunderstandings.

   Superspine vs. Aggregation and Spine vs. Edge/Leaf:  Traditional
      level names in 5-stage folded Clos for level 2, 1 and 0
      respectively.  We normalize this language to talk about
      top-of-fabric (ToF), top-of-pod (ToP) and leaves.

   Zero Touch Provisioning (ZTP):  Optional RIFT mechanism which allows
      node levels to be derived automatically based on minimal
      configuration (only the ToF property has to be provisioned on the
      according nodes).

   Point of Delivery (PoD):  A self-contained vertical slice or subset
      of a Clos or Fat Tree network, normally containing only level 0
      and level 1 nodes.  A node in a PoD communicates with nodes in
      other PoDs via the Top-of-Fabric.  We number PoDs to distinguish
      them and use PoD #0 to denote an "undefined" PoD.

   Top of PoD (ToP):  The set of nodes that provide intra-PoD
      communication and have northbound adjacencies outside of the PoD,
      i.e. are at the "top" of the PoD.

   Top of Fabric (ToF):  The set of nodes that provide inter-PoD
      communication and have no northbound adjacencies, i.e. are at the
      "very top" of the fabric.  ToF nodes do not belong to any PoD and
      are assigned the "undefined" PoD value to indicate the equivalent
      of "any" PoD.

   Spine:  Any nodes north of leaves and south of top-of-fabric nodes.
      Multiple layers of spines in a PoD are possible.

   Leaf:  A node without southbound adjacencies.  Its level is 0
      (except in cases where it is deriving its level via ZTP and is
      running without LEAF_ONLY, which will be explained in
      Section 4.2.7).

   Top-of-fabric Plane or Partition:  In large fabrics, top-of-fabric
      switches may not have enough ports to aggregate all the switches
      south of them, and with that the ToF is "split" into multiple
      independent planes.  The Introduction and Section 4.1.2 explain
      the concept in more detail.  A plane is a subset of ToF nodes
      that see each other through south reflection or E-W links.

   Radix:  The radix of a switch is the number of switching ports it
      provides; it is sometimes called fanout as well.
   North Radix:  Ports cabled northbound to higher level nodes.

   South Radix:  Ports cabled southbound to lower level nodes.

   South/Southbound and North/Northbound (Direction):  When describing
      protocol elements and procedures, we will in different situations
      be using compass directionality, i.e. "south" or "southbound"
      means moving towards the bottom of the Clos or Fat Tree network,
      and "north" or "northbound" means moving towards the top of the
      Clos or Fat Tree network.

   Northbound Link:  A link to a node one level up or, in other words,
      one level further north.

   Southbound Link:  A link to a node one level down or, in other
      words, one level further south.

   East-West Link:  A link between two nodes at the same level.
      East-West links are normally not part of Clos or "fat-tree"
      topologies.

   Leaf shortcuts (L2L):  East-West links at leaf level will need to be
      differentiated from East-West links at other levels.

   Routing on the host (RotH):  Modern data center architecture variant
      where servers/leaves are multi-homed and consequently participate
      in routing.

   Northbound representation:  Subset of topology information flooded
      towards higher levels of the fabric.

   Southbound representation:  Subset of topology information sent
      towards a lower level.

   South Reflection:  Often abbreviated just as "reflection", it
      defines a mechanism where South Node TIEs are "reflected" from
      the level south back up north to allow nodes in the same level
      without E-W links to "see" each other's node TIEs.

   TIE:  This is an acronym for a "Topology Information Element".  TIEs
      are exchanged between RIFT nodes to describe parts of a network
      such as links and address prefixes, in a fashion similar to IS-IS
      LSPs or OSPF LSAs.  A TIE always has a direction and a type.  We
      will talk about North TIEs (sometimes abbreviated as N-TIEs) when
      talking about TIEs in the northbound representation, and
      South-TIEs (sometimes abbreviated as S-TIEs) for the southbound
      equivalent.  TIEs have different types such as node and prefix
      TIEs.

   Node TIE:  This is an acronym for a "Node Topology Information
      Element", which contains all adjacencies the node discovered and
      information about the node itself.  A Node TIE should NOT be
      confused with a North TIE, since "node" defines the type of TIE
      rather than its direction.

   Prefix TIE:  This is an acronym for a "Prefix Topology Information
      Element".  It contains all prefixes directly attached to this
      node in the case of a North TIE, and in the case of a South TIE
      the necessary default routes the node advertises southbound.

   Key Value TIE:  A South TIE that is carrying a set of key value
      pairs [DYNAMO].  It can be used to distribute information in the
      southbound direction within the protocol.

   TIDE:  Topology Information Description Element, equivalent to CSNP
      in IS-IS.

   TIRE:  Topology Information Request Element, equivalent to PSNP in
      IS-IS.  It can both confirm received and request missing TIEs.

   De-aggregation/Disaggregation:  Process in which a node decides to
      advertise more specific prefixes southwards, either positively to
      attract the corresponding traffic, or negatively to repel it.
      Disaggregation is performed to prevent black-holing and
      suboptimal routing to the more specific prefixes.
   LIE:  This is an acronym for a "Link Information Element", largely
      equivalent to HELLOs in IGPs, exchanged over all the links
      between systems running RIFT to form three-way adjacencies.

   Flood Repeater (FR):  A node can designate one or more northbound
      neighbor nodes to be flood repeaters.  The flood repeaters are
      responsible for flooding northbound TIEs further north.  They are
      similar to MPRs in OLSR.  The document sometimes calls them flood
      leaders as well.

   Bandwidth Adjusted Distance (BAD):  Each RIFT node can calculate the
      amount of northbound bandwidth available towards a node compared
      to other nodes at the same level, and can modify the route
      distance accordingly to allow the lower level to adjust its load
      balancing towards the spines.

   Overloaded:  Applies to a node advertising the `overload` attribute
      as set.  The semantics closely follow the meaning of the same
      attribute in [ISO10589-Second-Edition].

   Interface:  A layer 3 entity over which RIFT control packets are
      exchanged.

   Three-Way Adjacency:  RIFT tries to form a unique adjacency over an
      interface and exchange local configuration and necessary ZTP
      information.  An adjacency is only advertised in node TIEs and
      used for computations after it has achieved three-way state, i.e.
      both routers have reflected each other in LIEs, including the
      relevant security information.  LIEs before three-way state is
      reached may already carry ZTP-related information.

   Bi-directional Adjacency:  An adjacency where the nodes on both
      sides of the adjacency have advertised it in their node TIEs with
      the correct levels and system IDs.  Bi-directionality is used in
      different algorithms to check whether the link should be
      included.

   Neighbor:  Once a three-way adjacency has been formed, a neighbor
      relationship contains the neighbor's properties.  Multiple
      adjacencies can be formed to a remote node via parallel
      interfaces, but such adjacencies do NOT share a neighbor
      structure.  Saying "neighbor" is thus equivalent to saying "a
      three-way adjacency".

   Cost:  The term signifies the weighted distance between two
      neighbors.

   Distance:  Sum of costs (bound by infinite distance) between two
      nodes.

   Shortest-Path First (SPF):  A well-known graph algorithm attributed
      to Dijkstra that establishes a tree of shortest paths from a
      source to destinations on the graph.  We use the SPF acronym due
      to its familiarity as a general term for the node reachability
      calculations RIFT can employ to ultimately calculate routes, of
      which the Dijkstra algorithm is one.

   North SPF (N-SPF):  A reachability calculation that progresses
      northbound, for example an SPF that uses South Node TIEs only.
      Normally it progresses a single hop only and installs default
      routes.

   South SPF (S-SPF):  A reachability calculation that progresses
      southbound, for example an SPF that uses North Node TIEs only.

   Security Envelope:  RIFT packets are flooded within an authenticated
      security envelope that allows the integrity of the information a
      node accepts to be protected.
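   As a compact, purely illustrative summary of the TIE-related terms
   above, the following simplified Python sketch captures that a TIE
   always has both a direction and a type.  The normative definitions,
   including further types and fields, live in the Thrift schema in
   Appendix B; the class and field names here are ours:

      from dataclasses import dataclass
      from enum import Enum

      class Direction(Enum):
          SOUTH = 1    # S-TIE, southbound representation
          NORTH = 2    # N-TIE, northbound representation

      class TIEType(Enum):
          NODE = 1       # adjacencies and information about the node
          PREFIX = 2     # attached prefixes (N) or defaults (S)
          KEY_VALUE = 3  # key/value pairs, distributed southbound

      @dataclass(frozen=True)
      class TIEId:
          originator: int        # system ID of the originating node
          direction: Direction   # a TIE always has a direction ...
          tie_type: TIEType      # ... and a type

      # A "Node South TIE" and a "Node North TIE" share the type but
      # differ in direction; "node" is never a direction.
      print(TIEId(originator=0x111, direction=Direction.SOUTH,
                  tie_type=TIEType.NODE))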
3.2.  Topology

        ^ N         +--------+                +--------+
Level 2 |           |ToF   21|                |ToF   22|
    E <-*-> W       ++-+--+-++                ++-+--+-++
        |            | |  | |                  | |  | |
        S v    P111/2 P121/2                   | |  | |
               ^ ^     ^ ^                     | |  | |
               | |     | |                     | |  | |
   +-----------+ |  +----------+ | | |  +---------------+
   |             |  |          | | | |  |               |
 South  +-----------------------------+ | |             ^
   |    |        |  |          |        | |             | All TIEs
  0/0  0/0      0/0 |  +-----------------------------+  |
   v    v        v  |  |       |        | |          |  |
   |    |        +-+|  +<-0/0----------+  |          |  |
   |    |          ||          |          |          |  |
  +-+----++ optional +-+----++     ++----+-+     ++-----++
  |       | E/W link |       |     |       |     |       |
  |Spin111+----------+Spin112|     |Spin121|     |Spin122|
  +-+---+-+          ++----+-+     +-+---+-+     ++---+--+
    |   |             |  South       |   |        |   |
    |   +---0/0--->---+   0/0        |   +----------------+
   0/0  |             |    |         |   |        |   |   |
    |   +---<-0/0-----+    v         |   +------------+   |
    v   |             |              |   |        |   |   |
  +-+---+-+          +--+--+-+     +-+---+-+    +---+-+-+
  |       |  (L2L)   |       |     |       |    |       |
  |Leaf111+~~~~~~~~~~+Leaf112|     |Leaf121|    |Leaf122|
  +-+-----+          +-+---+-+     +--+--+-+    +-+-----+
    +                  +    \        /   +        +
  Prefix111      Prefix112   \      /  Prefix121  Prefix122
                              multi-homed
                                Prefix
  +---------- PoD 1 ---------+     +---------- PoD 2 ---------+

            Figure 2: A Three Level Spine-and-Leaf Topology

   .+--------+   +--------+   +--------+   +--------+
   .|ToF   A1|   |ToF   B1|   |ToF   B2|   |ToF   A2|
   .++-+-----+   ++-+-----+   ++-+-----+   ++-+-----+
   . | |          | |          | |          | |
   . | |          | |          | +---------------+
   . | |          | |          |            | |  |
   . | |          | +-------------------------+  |
   . | |          |            |            | |  |
   . | +-----------------------+            | |  |
   . | |          |            |            | |  |
   . | |          +---------+  |  +---------+ |  |
   . | |                    |  |  |         | |  |
   . | +---------------------------------+  | |  |
   . | |                    |  |  |       | | |  |
   .++-+-----+   ++-+-----+  +--+-+---+  +----+-+-+
   .|Spine111|   |Spine112|  |Spine121|  |Spine122|
   .+-+---+--+   ++----+--+  +-+---+--+  ++---+---+
   . |   |        |    |      |   |       |   |
   . |   +--------+    |      |   +--------+  |
   . |   |        |    |      |   |       |   |
   . |   -------+ |    |      |   | +------+  |
   . |   |      | |    |      |   | |     |   |
   .+-+---+-+  +--+--+-+     +-+---+-+   +---+-+-+
   .|Leaf111|  |Leaf112|     |Leaf121|   |Leaf122|
   .+-------+  +-------+     +-------+   +-------+

               Figure 3: Topology with Multiple Planes

   We will use the topology in Figure 2 (commonly called a fat
   tree/network in modern IP fabric considerations [VAHDAT08], as a
   homonym to the original definition of the term [FATTREE]) in all
   further considerations.  This figure depicts a generic "single plane
   fat-tree", and the concepts explained using three levels apply by
   induction to further levels and higher degrees of connectivity.
   Further, this document will also deal with designs that provide only
   sparser connectivity and "partitioned spines", as shown in Figure 3
   and explained further in Section 4.1.2.

4.  RIFT: Routing in Fat Trees

   We present here a detailed outline of a protocol optimized for
   Routing in Fat Trees (RIFT) that, in most abstract terms, has many
   properties of a modified link-state protocol
   [RFC2328][ISO10589-Second-Edition] when distributing information
   northbound, and of a distance-vector protocol [RFC4271] when
   distributing information southbound.  While this is an unusual
   combination, it does quite naturally exhibit the desirable
   properties we seek.

4.1.  Overview

4.1.1.  Properties

   The most singular property of RIFT is that it floods flat link-state
   information northbound only, so that each level obtains the full
   topology of the levels south of it.
   Link-state information is, with some exceptions, never flooded
   East-West or back South again.  Exceptions such as south reflection
   are explained in detail in Section 4.2.5.1, and east-west flooding
   at the ToF level in multi-plane fabrics is outlined in
   Section 4.1.2.  In the southbound direction, the protocol operates
   like a "fully summarizing, unidirectional" path-vector protocol, or
   rather a distance-vector protocol with implicit split horizon.
   Routing information, normally just the default route, propagates one
   hop south and is "re-advertised" by the nodes at the next lower
   level.  However, RIFT uses flooding in the southern direction as
   well, to avoid the overhead of building an update per adjacency.  We
   omit describing the East-West direction for the moment.

   Those information flow constraints create not only an anisotropic
   protocol (i.e. the information is not distributed "evenly" but
   "clumped", i.e. summarized, along the N-S gradient) but also a
   "smooth" information propagation where nodes do not receive the same
   information from multiple directions at the same time.  Normally,
   accepting the same reachability on any link, without understanding
   its topological significance, forces tie-breaking on some kind of
   distance metric, and such tie-breaking ultimately leads, in
   hop-by-hop forwarding, to shortest paths only.  In contrast to that,
   RIFT, under normal conditions, does not need to tie-break the same
   reachability information arriving from multiple directions.  Its
   computation principles (the south forwarding direction is always
   preferred) lead to valley-free forwarding behavior.  And since
   valley-free routing is loop-free, it can use all feasible paths,
   which is another highly desirable property if available bandwidth is
   to be utilized to the maximum extent possible.

   To account for the "northern" and the "southern" information split,
   the link-state database is partitioned accordingly into "north
   representation" and "south representation" TIEs.  In simplest terms,
   the North TIEs contain a link-state topology description of the
   lower levels and South TIEs simply carry default routes towards the
   level above.  This oversimplified view will be refined gradually in
   the following sections, while introducing protocol procedures and
   state machines at the same time.
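   The anisotropy described above can be paraphrased in a few lines of
   deliberately oversimplified, non-normative Python (the precise
   flooding scopes, including the exceptions such as south reflection,
   are specified in Section 4.2.3.4):

      def floods_further(tie_direction: str, link_direction: str) -> bool:
          """Grossly simplified RIFT flooding rule, ignoring all
          exceptions: North TIEs keep travelling north so each level
          learns the full topology below it; South TIEs (normally just
          the default route) travel a single hop south and are
          re-originated by the next level rather than re-flooded."""
          if tie_direction == "north":
              return link_direction == "north"
          if tie_direction == "south":
              return False   # one hop only, then re-advertised
          raise ValueError("a TIE always has a direction")

      assert floods_further("north", "north")
      assert not floods_further("north", "south")  # never back south
      assert not floods_further("south", "south")  # one hop only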
4.1.2.  Generalized Topology View

   This section will shed some light on the topologies RIFT addresses,
   including multi-plane fabrics and their implications.  Readers that
   are only interested in single-plane designs, i.e. all top-of-fabric
   nodes being topologically equal and initially connected to all the
   switches at the level below them, can skip the rest of Section 4.1.2
   and the resulting Section 4.2.5.2 as well.

   It is quite difficult to visualize multi-plane designs, which are
   effectively multi-dimensional switching matrices.  To cope with
   that, we will introduce a methodology allowing us to depict the
   connectivity in two-dimensional pictures.  Further, we will leverage
   the fact that we are dealing basically with stacked crossbar fabrics
   where ports align "on top of each other" in a regular fashion.

   A word of caution to the reader: at this point it should be observed
   that the language used to describe Clos variations, especially in
   multi-plane designs, varies widely between sources.  This
   description follows the terminology introduced in Section 3.1;
   familiarity with it is necessary to follow the rest of this section
   correctly.

4.1.2.1.  Terminology

   This section describes the terminology and acronyms used in the rest
   of the text.

   P:  Denotes the number of PoDs in a topology.

   S:  Denotes the number of ToF nodes in a topology.

   K:  Denotes the number of ports in the radix of a switch pointing
       north or south.  Further, K_LEAF denotes the number of ports
       pointing south, i.e. towards the leaves, and K_TOP the number of
       ports pointing north towards a higher spine level.  To simplify
       the visual aids, notations and further considerations, K will
       mostly be set to Radix/2.

   ToF Plane:  Set of ToF nodes that are aware of each other by means
       of south reflection.  We number planes by capital letters, e.g.
       plane A.

   N:  Denotes the number of independent ToF planes in a topology.

   R:  Denotes a redundancy factor, i.e. the number of connections a
       spine has towards a ToF plane.  In single plane design K_TOP is
       equal to R.

   Fallen Leaf:  A fallen leaf in a plane Z is a switch that lost all
       connectivity northbound to Z.

4.1.2.2.  Clos as Crossed Crossbars

   The typical topology for which RIFT is defined is built of P PoDs
   connected together by S ToF nodes.  A PoD node has a total number of
   ports called its Radix.  We consider half of them (K=Radix/2) as
   connecting to host devices from the south, and the other half as
   connecting to interleaved PoD top-level switches to the north.  The
   ratio K can be chosen differently without loss of generality when
   port speeds differ or the fabric is oversubscribed, but K=Radix/2
   allows for a more readable representation whereby there are as many
   ports facing north as south on any intermediate node.  We hence
   represent a node in a schematic fashion with ports "sticking out" to
   its north and south, rather than by the usual real-world front
   faceplate designs of the day.

   Figure 4 provides a view of a leaf node as seen from the north, i.e.
   showing the ports that connect northbound.  For lack of a better
   symbol, we have chosen to use the "o" as ASCII visualisation of a
   single port.  In this example, K_LEAF is 6.  Observe that the number
   of PoDs is not related to the Radix unless the ToF nodes are
   constrained to be the same as the PoD nodes in a particular
   deployment.

                 Top view
        +---+
        |   |
        | o |    e.g., Radix = 12, K_LEAF = 6
        |   |
        | o |
        |   |     -------------------------
        | o ----- Physical Port (Ethernet) ----+
        |   |     -------------------------    |
        | o |                                  |
        |   |                                  |
        | o |                                  |
        |   |                                  |
        | o |                                  |
        |   |                                  |
        +---+                                  |
                                               |
         || || || || || ||                     ||
        +----+   +------------------------------------------------+
        |    |   |                                                |
        +----+   +------------------------------------------------+
         || || || || || ||
                Side views

                      Figure 4: A Leaf Node, K_LEAF=6

   The Radix of a PoD's top node may be different from that of the leaf
   node.  More often than not, though, the same type of node is used
   for both, effectively forming a square (K*K).  In the general case,
   we could have switches with K_TOP southern ports on nodes at the top
   of the PoD, where K_TOP is not necessarily the same as K_LEAF.  For
   instance, in the representations below, we pick a 6-port K_LEAF and
   an 8-port K_TOP.  In order to form a crossbar, we need K_TOP leaf
   nodes, as illustrated in Figure 5.
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+

                Figure 5: Southern View of a PoD, K_TOP=8

   As further visualized in Figure 6, the K_TOP leaf nodes are fully
   interconnected with the K_LEAF PoD-top nodes, providing connectivity
   that can be represented as a crossbar when "looked at" from the
   north.  The result is that, in the absence of a failure, a packet
   entering the PoD from the north on any port can be routed to any
   port in the south of the PoD, and vice versa.  That is precisely why
   it makes sense to talk about a "switching matrix".

                                 E<-*->W

    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |
  +--------------------------------------------------------+
  |   o      o      o      o      o      o      o      o   |
  +--------------------------------------------------------+
  +--------------------------------------------------------+
  |   o      o      o      o      o      o      o      o   |
  +--------------------------------------------------------+
  +--------------------------------------------------------+
  |   o      o      o      o      o      o      o      o   |
  +--------------------------------------------------------+
  +--------------------------------------------------------+
  |   o      o      o      o      o      o      o      o   |
  +--------------------------------------------------------+
  +--------------------------------------------------------+
  |   o      o      o      o      o      o      o      o   |<-+
  +--------------------------------------------------------+  |
  +--------------------------------------------------------+  |
  |   o      o      o      o      o      o      o      o   |  |
  +--------------------------------------------------------+  |
    |   |  |   |  |   |  |   |  |   |  |   |  |   |  |   |    |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+    |
      ^                                                       |
      |                                                       |
      |       ----------        ---------------------        |
      +------ Leaf Node         PoD top Node (Spine) ---------+
              ----------        ---------------------

           Figure 6: Northern View of a PoD's Spines, K_TOP=8
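   To make the "switching matrix" observation concrete, the short,
   purely illustrative sketch below enumerates the full bipartite mesh
   between the K_TOP leaf nodes and the K_LEAF PoD-top nodes of
   Figure 6 and checks that, absent failures, any leaf can hand a
   packet to any PoD-top node in a single hop (all names are ours):

      import itertools

      K_TOP, K_LEAF = 8, 6                           # as in Figure 6
      leaves = [f"leaf{i}" for i in range(K_TOP)]    # K_TOP leaves
      tops = [f"top{j}" for j in range(K_LEAF)]      # K_LEAF ToP nodes

      # Full interconnect: every leaf connects to every PoD-top node,
      # which is exactly the crossbar seen from the north in Figure 6.
      links = set(itertools.product(leaves, tops))
      assert len(links) == K_TOP * K_LEAF            # 48 links here
      assert all((l, t) in links for l in leaves for t in tops)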
   Side views of this PoD are illustrated in Figure 7 and Figure 8.

                        Connecting to Spine

      ||    ||    ||    ||    ||    ||    ||    ||
   +----------------------------------------------------------+  N
   |              PoD top Node seen sideways                   |  ^
   +----------------------------------------------------------+  |
      ||    ||    ||    ||    ||    ||    ||    ||                *
   +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+        |
   |    | |    | |    | |    | |    | |    | |    | |    |        v
   +----+ +----+ +----+ +----+ +----+ +----+ +----+ +----+        S
      ||    ||    ||    ||    ||    ||    ||    ||

                     Connecting to Client nodes

           Figure 7: Side View of a PoD, K_TOP=8, K_LEAF=6

                        Connecting to Spine

      ||    ||    ||    ||    ||    ||
   +----+ +----+ +----+ +----+ +----+ +----+                      N
   |    | |    | |    | |    | |    | |    |  PoD top Nodes       ^
   +----+ +----+ +----+ +----+ +----+ +----+                      |
      ||    ||    ||    ||    ||    ||                            *
   +------------------------------------------+                   |
   |           Leaf seen sideways             |                    v
   +------------------------------------------+                   S
      ||    ||    ||    ||    ||    ||

                     Connecting to Client nodes

      Figure 8: Other Side View of a PoD, K_TOP=8, K_LEAF=6, 90o
                           Turn in E-W Plane

   As a next step, let us observe that a resulting PoD can be
   abstracted as a bigger node with a number of ports
   K_POD = K_TOP * K_LEAF, and that the design can recurse.

   Before progressing further, it is critical that the concept and the
   picture of "crossed crossbars" be clear; otherwise, the following
   considerations might be difficult to comprehend.

   To continue, the PoDs are interconnected with each other through
   Top-of-Fabric (ToF) nodes at the very top or north edge of the
   fabric.  The resulting ToF is NOT partitioned if, and only if (IFF),
   every PoD top-level node (spine) is connected to every ToF node.
   This topology is also referred to as a single plane configuration
   and is quite popular due to its simplicity.  In order to reach a 1:1
   connectivity ratio between the ToF and the leaves, it follows that
   there are K_TOP ToF nodes, because each port of a ToP node connects
   to a different ToF node, and K_LEAF ToP nodes per PoD for the same
   reason.  Consequently, it will take (P * K_LEAF) ports on a ToF node
   to connect to each of the K_LEAF ToP nodes of the P PoDs, as shown
   in Figure 9.
    [ ]  [ ]  [ ]  [ ]  [ ]  [ ]  [ ]  [ ]   <-----+
     |    |    |    |    |    |    |    |          |
    [=================================]            |   -----------
     |    |    |    |    |    |    |    |          +-- Top-of-Fabric
    [ ]  [ ]  [ ]  [ ]  [ ]  [ ]  [ ]  [ ]   +------- Node -------+
                                              |    -----------    |
                                              |                   v
    +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+  <-----+              +-+
    | |  | |  | |  | |  | |  | |  | |  | |                       | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]                     | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ] ------------------  | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o<--- Physical Port      | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]    (Ethernet)       | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ] ------------------  | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]                     | |
    | |  | |  | |  | |  | |  | |  | |  | |                       | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]                     | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]    --------------   | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ] <--- PoD top level  | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]    node (Spine) --+ | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]    --------------  | | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]                    | | |
    | |  | |  | |  | |  | |  | |  | |  | |    -+     +-  +-+  v | | |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   |     | --| |--[ ]--| |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   | --- | --| |--[ ]--| |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   +-PoD-+ --| |--[ ]--| |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   | --- | --| |--[ ]--| |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   |     | --| |--[ ]--| |
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   |     | --| |--[ ]--| |
    | |  | |  | |  | |  | |  | |  | |  | |    -+     +-  +-+      | |
    +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+                        +-+

     Figure 9: Fabric Spines and ToFs in Single Plane Design, 3 PoDs

   The top view can be collapsed into a third dimension where the
   hidden depth index represents the PoD number.  One PoD can then be
   shown as a class of PoDs, saving one dimension in the
   representation.  The Top-of-Fabric node expands in the depth and
   vertical dimensions, whereas the PoD top-level nodes are constrained
   to the horizontal dimension.  A port in the 2-D representation
   effectively represents the class of all the ports at the same
   position in all the PoDs that are projected into its position along
   the depth axis.  This is shown in Figure 10.

       /  /  /  /  /  /  /  /  /  /  /  /  /  /  /  /
      /  /  /  /  /  /  /  /  /  /  /  /  /  /  /  /
     /  /  /  /  /  /  /  /  /  /  /  /  /  /  /  /
    /  /  /  /  /  /  /  /  /  /  /  /  /  /  /  /   ]
    +-+  +-+  +-+  +-+  +-+  +-+  +-+  +-+            ]]
    | |  | |  | |  | |  | |  | |  | |  | |            ]
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]   ---------------------------
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ] <-- PoD top level node (Spine)
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]]]] ---------------------------
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]]]     ^^
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]]     //  PoDs
  [ |o|  |o|  |o|  |o|  |o|  |o|  |o|  |o| ]     //   (in depth)
    | |/ | |/ | |/ | |/ | |/ | |/ | |/ | |/     //
    +-+  +-+  +-+ /+-+ /+-+  +-+  +-+  +-+     //
     ^
     |     ----------------
     +---- Top-of-Fabric Node
           ----------------

       Figure 10: Collapsed Northern View of a Fabric for Any Number
                                 of PoDs

   As simple as the single plane deployment is, it introduces a limit
   due to the bound on the available radix of the ToF nodes, which has
   to be at least P * K_LEAF.
   Nevertheless, we will see that a distinct advantage of a connected
   or non-partitioned Top-of-Fabric is that all failures can be
   resolved by simple, non-transitive, positive disaggregation (i.e.
   nodes advertising, together with the default route, more specific
   prefixes to the level below them, which are however not propagated
   further down the fabric), as described in Section 4.2.5.1.  In other
   words, non-partitioned ToF nodes can always reach the nodes below
   them, or unambiguously withdraw the routes from PoDs they cannot
   reach.  With this, positive disaggregation can heal all failures and
   still allow all the ToF nodes to see each other via south
   reflection.  Disaggregation will be explained in further detail in
   Section 4.2.5.

   In order to scale beyond the "single plane limit", the Top-of-Fabric
   can be partitioned into N identically wired planes, where N is an
   integer divisor of K_LEAF.  The 1:1 ratio and the desired symmetry
   are still served, this time with (K_TOP * N) ToF nodes, each of
   (P * K_LEAF / N) ports.  N=1 represents a non-partitioned Spine and
   N=K_LEAF a maximally partitioned Spine.  Further, if R is any
   integer divisor of K_LEAF, then N=K_LEAF/R is a feasible number of
   planes and R a redundancy factor.  It proves convenient for
   deployments to use a radix for the leaf nodes that is a power of 2,
   so that they can pick a number of planes that is a lower power of 2.
   The example in Figure 11 splits the Spine into 2 planes with a
   redundancy factor R=3, meaning that there are 3 non-intersecting
   paths between any leaf node and any ToF node.  A ToF node must have,
   in this case, at least 3*P ports and be directly connected to 3 of
   the 6 PoD-ToP nodes (spines) in each PoD.

      +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
      +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+

                                Plane 1
    ----------- . ------------ . ------------ . ------------ . -----
                                Plane 2

      +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
    | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |
    +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+
      +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+
                              ^
                              |
                              |   ----------------
                              +-- Top-of-Fabric node
                                  "across" depth
                                  ----------------

    Figure 11: Northern View of a Multi-Plane ToF Level, K_LEAF=6, N=2

   At the extreme end of the spectrum it is even possible to fully
   partition the spine with N = K_LEAF and R=1, while maintaining
   connectivity between each leaf node and each Top-of-Fabric node.  In
   that case each ToF node connects to a single port per PoD, so it
   appears as a single port in the projected view represented in
   Figure 12, and the number of ports required on such a ToF node is
   greater than or equal to P, the number of PoDs.

             Plane 1
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+      -+
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |
  | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |     |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |
  ----------- . ------------------- . ------------ . ---------   |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |
  | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |     |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |
  ----------- . ------------ . ---- . ------------ . ---------   |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |
  | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |     |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |
  ----------- . ------------ . ------------------- . ---------  +<-+
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |  |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |  |
  | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |     |  |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |  |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |  |
  ----------- . ------------ . ------------ . ---- . ---------   |  |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |  |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |  |
  | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |     |  |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |  |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |  |
  ----------- . ------------ . ------------ . ----------------   |  |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+       |  |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |  |
  | | o |  | o |  | o |  | o |  | o |  | o |  | o |  | o | |     |  |
  +-|   |--|   |--|   |--|   |--|   |--|   |--|   |--|   |-+     |  |
    +---+  +---+  +---+  +---+  +---+  +---+  +---+  +---+      -+  |
             Plane 6          ^                                     |
                              |                                     |
                              |  ----------------   -------------   |
                              +- ToF Node           Class of PoDs --+
                                 ----------------   -------------

    Figure 12: Northern View of a Maximally Partitioned ToF Level, R=1
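   Since the plane arithmetic above is compact but easy to trip over,
   here is a small, purely illustrative helper restating it (variable
   names follow Section 4.1.2.1; the function itself is ours and not
   part of the protocol):

      def tof_plane_parameters(P: int, K_LEAF: int, K_TOP: int, N: int):
          """ToF sizing per Section 4.1.2.2.  N must be an integer
          divisor of K_LEAF; N=1 is the non-partitioned single plane,
          N=K_LEAF the maximally partitioned one."""
          if K_LEAF % N:
              raise ValueError("N must be an integer divisor of K_LEAF")
          R = K_LEAF // N              # redundancy factor, i.e. number
                                       # of disjoint paths from a leaf
                                       # into a given ToF plane
          tof_nodes = K_TOP * N        # ToF nodes fabric-wide
          ports_per_tof = P * K_LEAF // N   # southbound ports per ToF
          return R, tof_nodes, ports_per_tof

      # Single plane (Figure 9): K_TOP ToF nodes of P*K_LEAF ports.
      print(tof_plane_parameters(P=3, K_LEAF=6, K_TOP=8, N=1))  # (6, 8, 18)
      # Two planes with redundancy factor R=3 (Figure 11).
      print(tof_plane_parameters(P=3, K_LEAF=6, K_TOP=8, N=2))  # (3, 16, 9)
      # Maximally partitioned (Figure 12): R=1, P ports per ToF node.
      print(tof_plane_parameters(P=3, K_LEAF=6, K_TOP=8, N=6))  # (1, 48, 3)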
4.1.3.  Fallen Leaf Problem

   As mentioned earlier, RIFT exhibits an anisotropic behavior tailored
   for fabrics with a North/South orientation and a high degree of
   interleaved paths.  In a non-partitioned fabric, a total loss of
   connectivity between a Top-of-Fabric node at the north and a leaf
   node at the south is a very rare, yet possible, occurrence that is
   fully healed by positive disaggregation as described in
   Section 4.2.5.1.  In large fabrics, or fabrics built from switches
   with a low radix, the ToF often ends up being partitioned into
   planes, which makes it more likely that a given leaf is reachable
   from only a subset of the ToF nodes.  This makes some further
   considerations necessary.

   We define a "Fallen Leaf" as a leaf that can be reached by only a
   subset, but not all, of the Top-of-Fabric nodes, due to missing
   connectivity.  If R is the redundancy factor, then it takes at least
   R breakages to reach a "Fallen Leaf" situation.

   In a maximally partitioned fabric, the redundancy factor is R=1, so
   any breakage in the fabric may cause one or more fallen leaves.
   However, not all cases require disaggregation.
1143 do not require particular action in such a scenario:

1145 If a southern link on a node goes down, then connectivity through
1146 that node is lost for all nodes south of it.  There is no need to
1147 disaggregate since the connectivity to this node is lost for all
1148 spine nodes in the same fashion.

1150 If a ToF node goes down, then northern traffic towards it is
1151 routed via alternate ToF nodes in the same plane and there is no
1152 need to disaggregate routes.

1154 In a general manner, the mechanism of non-transitive positive
1155 disaggregation is sufficient when the disaggregating ToF nodes
1156 collectively connect to all the ToP nodes in the broken plane.  This
1157 happens in the following case:

1159 If the breakage is the last northern link from a ToP node to a ToF
1160 node going down, then the fallen leaf problem affects only that ToF
1161 node, and the connectivity to all the nodes in the PoD is lost
1162 from that ToF node.  This can be observed by the other ToF nodes
1163 within the plane where the ToP node is located and positively
1164 disaggregated within that plane.

1166 On the other hand, there is a need to disaggregate the routes to
1167 Fallen Leaves in a transitive fashion, all the way to the other
1168 leaves, in the following cases:

1170 o  If the breakage is the last northern link from a leaf node within
1171    a plane (there is only one such link in a maximally partitioned
1172    fabric) that goes down, then connectivity to all unicast prefixes
1173    attached to the leaf node is lost within the plane where the link
1174    is located.  Southern Reflection by a leaf node, e.g., between ToP
1175    nodes, if the PoD has only 2 levels, happens in between planes,
1176    allowing the ToP nodes to detect the problem within the PoD where
1177    it occurs and positively disaggregate.  The breakage can be
1178    observed by the ToF nodes in the same plane through the North
1179    flooding of TIEs from the ToP nodes.  The ToF nodes however need
1180    to be aware of all the affected prefixes for the negative,
1181    possibly transitive disaggregation to be fully effective (i.e. a
1182    node advertising in the control plane that it cannot reach a
1183    prefix more specific than the default, where in extreme conditions
1184    such disaggregation must propagate further down southbound).  The
1185    problem can also be observed by the ToF nodes in the other planes
1186    through the flooding of North TIEs from the affected leaf nodes,
1187    together with non-node North TIEs which indicate the affected
1188    prefixes.  To be effective in that case, the positive
1189    disaggregation must reach down to the nodes that make the plane
1190    selection, which are typically the ingress leaf nodes.  The
1191    information is not useful for routing in the intermediate levels.

1193 o  If a ToP node in a maximally partitioned fabric - in which case
1194    it is the only ToP node serving the plane in that PoD - goes
1195    down, then the connectivity to all the nodes in the PoD is lost
1196    within the plane where the ToP node is located.
1197    Consequently, all leaves of the PoD fall in this plane.  Since the
1198    Southern Reflection between the ToF nodes happens only within a
1199    plane, ToF nodes in other planes cannot discover fallen leaves in
1200    a different plane.  They also cannot determine beyond their local
1201    plane whether a leaf node that was initially reachable has become
1202    unreachable.
As the breakage can be observed by the ToF nodes in
1203    the plane where the breakage happened, the ToF nodes in the plane
1204    need to be aware of all the affected prefixes for the negative
1205    disaggregation to be fully effective.  The problem can also be
1206    observed by the ToF nodes in the other planes through the flooding
1207    of North TIEs from the affected leaf nodes, if there are only 3
1208    levels and the ToP nodes are directly connected to the leaf nodes,
1209    and then again the disaggregation can only be effective if it is
1210    propagated transitively down to the leaves; it is useless above
     that level.

1212 For the sake of easy comprehension let us roll the abstractions back
1213 into a simple example and observe that in Figure 3 the loss of link
1214 Spine 122 to Leaf 122 will make Leaf 122 a fallen leaf for Top-of-
1215 Fabric plane B.  Worse, if the cabling was never present in the first
1216 place, plane B will not even be able to know that such a fallen leaf
1217 exists.  Hence partitioning without further treatment results in two
1218 grave problems:

1220 o  Leaf 111 trying to route to Leaf 122 MUST choose Spine 111 in
1221    plane A as its next hop since plane B will inevitably blackhole
1222    the packet when forwarding using default routes or do excessive
1223    bow tying.  This information must be in its routing table.

1225 o  Any kind of "flooding" or distance vector trying to deal with the
1226    problem by distributing host routes will be able to converge only
1227    using paths through leaves.  The flooding of information on Leaf
1228    122 would have to go up to Top-of-Fabric A and then "loopback"
1229    over other leaves to ToF B, leading in extreme cases to traffic
1230    for Leaf 122, when presented to plane B, taking an "inverted
1231    fabric" path where leaves start to serve as ToFs, at least for
1232    the duration of a protocol's convergence.

1234 4.1.4.  Discovering Fallen Leaves

1236 As illustrated later, and without further proof, the way to deal
1237 with fallen leaves in multi-plane designs, when aggregation is used,
1238 is that RIFT requires all the ToF nodes to share the same north
1239 topology database.  This happens naturally in a single plane design
1240 by means of northbound flooding and south reflection but needs
1241 additional considerations in multi-plane fabrics.  To satisfy this,
1242 RIFT relies in multi-plane designs on a ring interconnection of the
1243 switches at the ToF level across the multiple planes.  Other
1244 solutions are possible but they either need more cabling or end up
1245 having much longer flooding paths and/or single points of failure.

1247 In detail, by reserving two ports on each Top-of-Fabric node it is
1248 possible to connect them together by interplane bi-directional rings
1249 as illustrated in Figure 13.  The rings will be used to exchange full
1250 north topology information between planes.  All ToFs having the same
1251 north topology allows, by means of the transitive, negative
1252 disaggregation described in Section 4.2.5.2, to efficiently fix any
1253 possible fallen leaf scenario.  Somewhat as a side-effect, the
1254 exchange of information fulfills the requirement to present a full
1255 view of the fabric topology at the Top-of-Fabric level, without the
1256 need to collate it from multiple points through the additional
1257 complexity of technologies like [RFC7752].
1259 +---+ +---+ +---+ +---+ +---+ +---+ +--------+
1260 | | | | | | | | | | | | | |
1261 | | | | | | | |
1262 +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ |
1263 +-| |--| |--| |--| |--| |--| |--| |-+ |
1264 | | o | | o | | o | | o | | o | | o | | o | | | Plane A
1265 +-| |--| |--| |--| |--| |--| |--| |-+ |
1266 +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ |
1267 | | | | | | | |
1268 +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ |
1269 +-| |--| |--| |--| |--| |--| |--| |-+ |
1270 | | o | | o | | o | | o | | o | | o | | o | | | Plane B
1271 +-| |--| |--| |--| |--| |--| |--| |-+ |
1272 +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ |
1273 | | | | | | | |
1274 ... |
1275 | | | | | | | |
1276 +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ |
1277 +-| |--| |--| |--| |--| |--| |--| |-+ |
1278 | | o | | o | | o | | o | | o | | o | | o | | | Plane X
1279 +-| |--| |--| |--| |--| |--| |--| |-+ |
1280 +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ +-o-+ |
1281 | | | | | | | |
1282 | | | | | | | | | | | | | |
1283 +---+ +---+ +---+ +---+ +---+ +---+ +--------+
1284 Rings 1 2 3 4 5 6 7

1286 Figure 13: Connecting Top-of-Fabric Nodes Across Planes by Rings

1288 4.1.5.  Addressing the Fallen Leaves Problem

1290 One consequence of the "Fallen Leaf" problem is that some prefixes
1291 attached to the fallen leaf become unreachable from some of the ToF
1292 nodes.  RIFT proposes two methods to address this issue: positive
1293 and negative disaggregation.  Both methods flood South TIEs to
1294 advertise the impacted prefix(es).

1296 When used for the operation of disaggregation, a positive South TIE,
1297 as usual, indicates reachability to a prefix of a given length and
1298 all addresses subsumed by it.  In contrast, a negative route
1299 advertisement indicates that the origin cannot route to the
1300 advertised prefix.

1302 The positive disaggregation is originated by a router that can still
1303 reach the advertised prefix, and the operation is not transitive.  In
1304 other words, the receiver does not generate its own flooding south as
1305 a consequence of receiving positive disaggregation advertisements
1306 from a higher level node.  The effect of a positive disaggregation is
1307 that the traffic to the impacted prefix will follow the longest match
1308 and will be limited to the northbound routers that advertised the
1309 more specific route.

1311 In contrast, the negative disaggregation can be transitive, and is
1312 propagated south when all the possible routes have been advertised as
1313 negative exceptions.  A negative route advertisement is only
1314 actionable when the negative prefix is aggregated by a positive route
1315 advertisement for a shorter prefix.  In such a case, the negative
1316 advertisement "punches out a hole" in the positive route in the
1317 routing table: the positive prefix remains reachable through the
1318 originator, except that the negative prefix removes certain next-hop
1319 neighbors.

1321 When the ToF is not partitioned, the collective southern flooding of
1322 the positive disaggregation by the ToF nodes that can still reach the
1323 impacted prefix is in general enough to cover all the switches at the
1324 next level south, typically the ToP nodes.  If all those switches are
1325 aware of the disaggregation, they collectively create a ceiling that
1326 intercepts all the traffic north and forwards it to the ToF nodes
1327 that advertised the more specific route.  In that case, the positive
1328 disaggregation alone is sufficient to solve the fallen leaf problem.
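To illustrate the "hole punching" described above, the following
non-normative Python sketch computes the next-hop set for a
destination from a set of positive routes and negative exceptions;
the structures and names are illustrative assumptions, not the
protocol's data model:

   <CODE BEGINS>
   # Non-normative sketch: a negative disaggregation removes next hops
   # from the covering positive route for the more specific prefix.
   import ipaddress

   def next_hops(dest, positive, negative):
       """positive/negative: dict prefix(str) -> set of next-hop names."""
       d = ipaddress.ip_address(dest)
       covering = [ipaddress.ip_network(p) for p in positive
                   if d in ipaddress.ip_network(p)]
       if not covering:
           return set()
       best = max(covering, key=lambda n: n.prefixlen)  # longest match
       hops = set(positive[str(best)])
       # A negative advertisement is only actionable when aggregated by
       # a shorter positive prefix; it then removes the advertisers.
       for p, losers in negative.items():
           net = ipaddress.ip_network(p)
           if d in net and net.prefixlen > best.prefixlen:
               hops -= losers                           # punch the hole
       return hops

   # Default via ToF1 and ToF2; ToF1 negatively disaggregates the
   # 10.1.0.0/16 it cannot reach, so that traffic avoids ToF1 only.
   rib_pos = {"0.0.0.0/0": {"ToF1", "ToF2"}}
   rib_neg = {"10.1.0.0/16": {"ToF1"}}
   assert next_hops("10.1.0.1", rib_pos, rib_neg) == {"ToF2"}
   assert next_hops("10.2.0.1", rib_pos, rib_neg) == {"ToF1", "ToF2"}
   <CODE ENDS>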
1330 On the other hand, when the fabric is partitioned in planes, the
1331 positive disaggregation from ToF nodes in different planes does not
1332 reach the ToP switches in the affected plane and cannot solve the
1333 fallen leaves problem.  In other words, a breakage in a plane can
1334 only be solved in that plane.  Also, the selection of the plane for a
1335 packet typically occurs at the leaf level and the disaggregation must
1336 be transitive and reach all the leaves.  In that case, the negative
1337 disaggregation is necessary.  The details on the RIFT approach to
1338 deal with fallen leaves in an optimal way are specified in
1339 Section 4.2.5.2.

1341 4.2.  Specification

1343 This section specifies the protocol in a normative fashion by either
1344 prescriptive procedures or behavior defined by Finite State Machines
1345 (FSM).

1347 Some FSM figures are provided as [DOT] descriptions due to
1348 limitations of ASCII art.

1350 "On Entry" actions on an FSM state are performed every time and right
1351 before the according state is entered, i.e. after any transitions
1352 from the previous state.

1354 "On Exit" actions are performed every time and immediately when a
1355 state is exited, i.e. before any transitions towards the target state
1356 are performed.

1358 Any attempt to transition from a state towards another on reception
1359 of an event where no action is specified MUST be considered an
1360 unrecoverable error.

1362 The FSMs and procedures are normative in the sense that an
1363 implementation MUST either implement them literally or
1364 exhibit externally observable behavior that is
1365 identical to the execution of the specified FSMs.

1367 Where an FSM representation is inconvenient, i.e. the number of
1368 procedures and the amount of kept state exceed the number of
1369 transitions, we defer to a more procedural description on data structures.

1371 4.2.1.  Transport

1373 All packet formats are defined in Thrift [thrift] models in
1374 Appendix B.

1376 The serialized model is carried in an envelope within a UDP frame
1377 that provides security and allows validation/modification of several
1378 important fields without de-serialization for performance and
1379 security reasons.

1381 4.2.2.  Link (Neighbor) Discovery (LIE Exchange)

1383 RIFT LIE exchange auto-discovers neighbors, negotiates ZTP parameters
1384 and discovers miscablings.  It uses a three-way handshake mechanism
1385 which is a cleaned-up version of [RFC5303].  Observe that for easier
1386 comprehension the terminology of one/two and three-way states does
1387 NOT align with OSPF or ISIS FSMs although they use roughly the same
1388 mechanisms.  The formation progresses under normal conditions from
1389 one-way to two-way and then three-way state at which point it is
1390 ready to exchange TIEs per Section 4.2.3.

1392 LIE exchange happens over a configured or otherwise well-known,
1393 administratively locally scoped IPv4 multicast address
1394 [RFC2365] and/or the link-local multicast scope [RFC4291] for IPv6
1395 [RFC8200], using a configured or otherwise well-known destination
1396 UDP port defined in Appendix C.1.  LIEs SHOULD be sent with an IPv4
1397 Time to Live (TTL) / IPv6 Hop Limit (HL) of 1 to prevent RIFT
1398 information reaching beyond a single L3 next-hop in the topology.
1399 LIEs SHOULD be sent with network control precedence.

1401 The originating port of the LIE has no further significance other
1402 than identifying the origination point.  LIEs are exchanged over all
1403 links running RIFT.
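As an illustration of these transport rules, the following
non-normative Python sketch opens an IPv4 LIE socket with a TTL of 1
and network control precedence; the multicast address (picked from
the RFC 5771 example range) and the port are placeholders, since the
well-known values are defined in Appendix C.1:

   <CODE BEGINS>
   # Non-normative sketch of LIE transport: multicast destination,
   # TTL of 1, DSCP CS6 (network control).  Address and port below are
   # placeholder assumptions; see Appendix C.1 for well-known values.
   import socket

   LIE_MCAST_V4 = "233.252.0.1"   # placeholder, see Appendix C.1
   LIE_UDP_PORT = 9999            # placeholder, see Appendix C.1

   def open_lie_socket_v4():
       s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
       # TTL of 1 keeps LIEs from travelling beyond one L3 next-hop.
       s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
       # DSCP CS6 (network control precedence) in the TOS byte.
       s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 48 << 2)
       return s

   def send_lie(s, serialized_lie):
       # The source port carries no significance beyond identifying
       # the origination point.
       s.sendto(serialized_lie, (LIE_MCAST_V4, LIE_UDP_PORT))
   <CODE ENDS>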
1405 An implementation MAY listen and send LIEs on IPv4 and/or IPv6
1406 multicast addresses.  A node MUST NOT originate LIEs on an address
1407 family if it does not process received LIEs on that family.  LIEs on
1408 the same link are considered part of the same negotiation independent
1409 of the address family they arrive on.  Observe further that the LIE
1410 source address may not identify the peer uniquely in unnumbered or
1411 link-local address cases so the response transmission MUST occur over
1412 the same interface the LIEs have been received on.  A node MAY use
1413 any of the adjacency's source addresses it saw in LIEs on the
1414 specific interface during adjacency formation to send TIEs.  That
1415 implies that an implementation MUST be ready to accept TIEs on all
1416 addresses it used as source of LIE frames.

1418 A three-way adjacency over any address family implies support for
1419 IPv4 forwarding if the `v4_forwarding_capable` flag is set to true
1420 and a node can use [RFC5549] type of forwarding in such a situation.
1421 It is expected that the whole fabric supports the same type of
1422 forwarding of address families on all the links.  Operation of a
1423 fabric where only some of the links are supporting forwarding on an
1424 address family and others do not is outside the scope of this
1425 specification.

1427 The protocol does NOT support selective disabling of address
1428 families, disabling v4 forwarding capability or any local address
1429 changes in three-way state, i.e. if a link has entered three-way IPv4
1430 and/or IPv6 with a neighbor on an adjacency and it wants to stop
1431 supporting one of the families or change any of its local addresses
1432 or stop v4 forwarding, it has to tear down and rebuild the adjacency.
1433 It also has to remove any information it stored about the adjacency
1434 such as LIE source addresses seen.

1436 Unless ZTP as described in Section 4.2.7 is used, each node is
1437 provisioned with the level at which it is operating.  It MAY also be
1438 provisioned with its PoD.  If any of those values is undefined, then
1439 accordingly a default level and/or an "undefined" PoD are assumed.
1440 This means that leaves do not need to be configured at all if initial
1441 configuration values are all left at the "undefined" value.  Nodes
1442 above ToP MUST remain at the "any" PoD value which has the same value
1443 as the "undefined" PoD.  This information is propagated in the LIEs
1444 exchanged.

1446 Further definitions of leaf flags are found in Section 4.2.7 given
1447 they have implications for level values and adjacency formation here.

1449 A node tries to form a three-way adjacency if and only if

1451 1.  the node is in the same PoD or either the node or the neighbor
1452     advertises "undefined/any" PoD membership (PoD# = 0) AND

1454 2.  the neighboring node is running the same MAJOR schema version AND

1456 3.  the neighbor is not a member of some PoD while the node has a
1457     northbound adjacency already joining another PoD AND

1459 4.  the neighboring node uses a valid System ID AND

1461 5.  the neighboring node uses a different System ID than the node
1462     itself AND

1464 6.  the advertised MTUs match on both sides AND

1466 7.  both nodes advertise defined level values AND

1468 8.  [
1470     i)   the node is at level 0 and has no three-way adjacencies
1471          already to nodes at Highest Adjacency Three-Way level (HAT as
1472          defined later in Section 4.2.7.1) with a level different than
1473          the adjacent node OR

1475     ii)  the node is not at level 0 and the neighboring node is at
1476          level 0 OR

1478     iii) both nodes are at level 0 AND both indicate support for
1479          Section 4.3.8 OR

1481     iv)  neither node is at level 0 and the neighboring node is at
1482          most one level away

1484     ].

1486 The rules checking PoD numbering MAY be disregarded by a
1487 node if PoD detection is undesirable or has to be ignored.  This will
1488 not affect the correctness of the protocol except preventing
1489 detection of certain miscabling cases.  A non-normative sketch of
     the adjacency-forming rules in code is shown after the LIE FSM
     introduction below.

1491 A node configured with "undefined" PoD membership MUST, after
1492 building its first northbound three-way adjacency to a node in a
1493 defined PoD, advertise that PoD as part of its LIEs.  In case that
1494 adjacency is lost, from all available northbound three-way
1495 adjacencies the node with the highest System ID and a defined PoD is
1496 chosen.  That way the northmost defined PoD value (normally the ToP
1497 nodes) can diffuse southbound towards the leaves, "forcing" the PoD
1498 value on any node with an "undefined" PoD.

1500 LIEs arriving with an IPv4 Time to Live (TTL) / IPv6 Hop Limit (HL)
1501 larger than 1 MUST be ignored.

1503 A node SHOULD NOT send out LIEs without a defined level in the
1504 header, but in certain scenarios doing so may be beneficial for
1505 trouble-shooting purposes.

1507 4.2.2.1.  LIE FSM

1509 This section specifies the precise, normative LIE FSM and can be
1510 omitted unless the reader is pursuing an implementation of the
1511 protocol.

1513 Initial state is `OneWay`.

1515 Event `MultipleNeighbors` occurs normally when more than two nodes
1516 see each other on the same link or a remote node is quickly
1517 reconfigured or rebooted without regressing to `OneWay` first.  Each
1518 occurrence of the event SHOULD generate a clear notification to help
1519 operational deployments.

1521 The machine sends LIEs on several transitions to accelerate adjacency
1522 bring-up without waiting for the timer tick.
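As a summary of the adjacency-forming conditions of Section 4.2.2
(rules 1.-8. above), the following non-normative Python sketch may
help implementers; the field names and the precomputed flags are
illustrative assumptions rather than the Thrift schema names:

   <CODE BEGINS>
   # Non-normative sketch of adjacency-forming rules 1.-8. of
   # Section 4.2.2.  All names are illustrative assumptions.
   from dataclasses import dataclass

   @dataclass
   class Peer:
       system_id: int
       level: int                  # None stands for "undefined"
       pod: int                    # 0 stands for "undefined/any"
       major_version: int
       mtu: int
       leaf2leaf: bool = False     # Section 4.3.8 support, assumed flag
       northbound_pod: int = None  # PoD joined northbound (rule 3.)
       hat_conflict: bool = False  # rule 8. i) vs. this neighbor's
                                   # level, assumed precomputed

   def accept_adjacency(node, nbr):
       if node.pod != nbr.pod and 0 not in (node.pod, nbr.pod):
           return False                                  # rule 1.
       if node.major_version != nbr.major_version:
           return False                                  # rule 2.
       if nbr.pod != 0 and \
          node.northbound_pod not in (None, nbr.pod):
           return False                                  # rule 3. (paraphrased)
       if nbr.system_id in (None, 0):                    # rule 4.
           return False
       if nbr.system_id == node.system_id:               # rule 5.
           return False
       if node.mtu != nbr.mtu:                           # rule 6.
           return False
       if node.level is None or nbr.level is None:       # rule 7.
           return False
       if node.level == 0 and nbr.level == 0:            # rule 8. iii)
           return node.leaf2leaf and nbr.leaf2leaf
       if node.level == 0:                               # rule 8. i)
           return not node.hat_conflict
       if nbr.level == 0:                                # rule 8. ii)
           return True
       return abs(node.level - nbr.level) <= 1           # rule 8. iv)
   <CODE ENDS>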
1524 Enter 1525 | 1526 V 1527 +-----------+ 1528 | OneWay |<----+ 1529 | | | HALChanged [StoreHAL] 1530 | Entry: | | HALSChanged [StoreHALS] 1531 | [CleanUp] | | HATChanged [StoreHAT] 1532 | | | HoldTimerExpired [-] 1533 | | | InstanceNameMismatch [-] 1534 | | | LevelChanged [UpdateLevel, PUSH SendLie] 1535 | | | LieReceived [ProcessLIE] 1536 | | | MTUMismatch [-] 1537 | | | NeighborAddressAdded [-] 1538 | | | NeighborChangedAddress [-] 1539 | | | NeighborChangedLevel [-] 1540 | | | NeighborChangedMinorFields [-] 1541 | | | NeighborDroppedReflection [-] 1542 | | | PODMismatch [-] 1543 | | | SendLIE [SendLIE] 1544 | | | TimerTick [PUSH SendLIE] 1545 | | | UnacceptableHeader 1546 | | | UpdateZTPOffer [SendOfferToZTPFSM] 1547 | |-----+ 1548 | | 1549 | |<--------------------- (ThreeWay) 1550 | |---------------------> 1551 | | ValidReflection [-] 1552 | | 1553 | |---------------------> (Multiple 1554 | | MultipleNeighbors Neighbors 1555 +-----------+ [StartMulNeighTimer] Wait) 1556 ^ | 1557 | | 1558 | | NewNeighbor [PUSH SendLIE] 1559 | V 1560 (TwoWay) 1562 LIE FSM 1564 (OneWay) 1565 | ^ 1566 | | HoldTimeExpired [-] 1567 | | InstanceNameMismatch [-] 1568 | | LevelChanged [StoreLevel] 1569 | | MTUMismatch [-] 1570 | | NeighborChangedAddress [-] 1571 | | NeighborChangedLevel [-] 1572 | | PODMismatch [-] 1573 | | UnacceptableHeader [-] 1574 V | 1575 +-----------+ 1576 | TwoWay |<----+ 1577 | | | HALChanged [StoreHAL] 1578 | | | HALSChanged [StoreHALS] 1579 | | | HATChanged [StoreHAT] 1580 | | | LevelChanged [StoreLevel] 1581 | | | LIERcvd [ProcessLIE] 1582 | | | SendLIE [SendLIE] 1583 | | | TimerTick [PUSH SendLIE, 1584 | | | IF HoldTimer expired 1585 | | | PUSH HoldTimerExpired] 1586 | | | UpdateZTPOffer [SendOfferToZTPFSM] 1587 | |-----+ 1588 | | 1589 | |<---------------------- 1590 | |----------------------> (Multiple 1591 | | NewNeighbor Neighbors 1592 | | [StartMulNeighTimer] Wait) 1593 | | MultipleNeighbors 1594 +-----------+ [StartMulNeighTimer] 1595 ^ | 1596 | | ValidReflection [-] 1597 | V 1598 (ThreeWay) 1600 LIE FSM (continued) 1602 (TwoWay) (OneWay) 1603 ^ | ^ 1604 | | | HoldTimerExpired [-] 1605 | | | InstanceNameMismatch [-] 1606 | | | LevelChanged [UpdateLevel] 1607 | | | MTUMismatch [-] 1608 | | | NeighborChangedAddress [-] 1609 | | | NeighborChangedLevel [-] 1610 NeighborDropped- | | | PODMismatch [-] 1611 Reflection [-] | | | UnacceptableHeader [-] 1612 | V | 1613 +-----------+ | 1614 | ThreeWay |-----+ 1615 | | 1616 | |<----+ 1617 | | | HALChanged [StoreHAL] 1618 | | | HALSChanged [StoreHALS] 1619 | | | HATChanged [StoreHAT] 1620 | | | LieReceived [ProcessLIE] 1621 | | | SendLIE [SendLIE] 1622 | | | TimerTick [PUSH SendLie, 1623 | | | IF HoldTimer expired 1624 | | | PUSH HoldTimerExpired] 1625 | | | UpdateZTPOffer [SendOfferToZTPFSM] 1626 | | | ValidReflection [-] 1627 | |-----+ 1628 | |----------------------> (Multiple 1629 | | MultipleNeighbors Neighbors 1630 +-----------+ [StartMulNeighTimer] Wait) 1632 LIE FSM (continued) 1634 (TwoWay) (ThreeWay) 1635 | | 1636 V V 1637 +------------+ 1638 | Multiple |<----+ 1639 | Neighbors | | HALChanged [StoreHAL] 1640 | Wait | | HALSChanged [StoreHALS] 1641 | | | HATChanged [StoreHAT] 1642 | | | MultipleNeighbors 1643 | | | [StartMultipleNeighborsTimer] 1644 | | | TimerTick [IF MulNeighTimer expired 1645 | | | PUSH MultipleNeighborsDone] 1646 | | | UpdateZTPOffer [SendOfferToZTP] 1647 | |-----+ 1648 | | 1649 | |<--------------------------- 1650 | |---------------------------> (OneWay) 1651 | | LevelChanged [StoreLevel] 1652 +------------+ 
MultipleNeighborsDone [-] 1654 LIE FSM (continued) 1656 Events 1658 o TimerTick: one second timer tic 1660 o LevelChanged: node's level has been changed by ZTP or 1661 configuration 1663 o HALChanged: best HAL computed by ZTP has changed 1665 o HATChanged: HAT computed by ZTP has changed 1667 o HALSChanged: set of HAL offering systems computed by ZTP has 1668 changed 1670 o LieRcvd: received LIE 1672 o NewNeighbor: new neighbor parsed 1674 o ValidReflection: received own reflection from neighbor 1676 o NeighborDroppedReflection: lost previous own reflection from 1677 neighbor 1679 o NeighborChangedLevel: neighbor changed advertised level 1681 o NeighborChangedAddress: neighbor changed IP address 1682 o UnacceptableHeader: unacceptable header seen 1684 o MTUMismatch: MTU mismatched 1686 o InstanceNameMismatch: Instance mismatched 1688 o PODMismatch: Unacceptable PoD seen 1690 o HoldtimeExpired: adjacency hold down expired 1692 o MultipleNeighbors: more than one neighbor seen on interface 1694 o MultipleNeighborsDone: cooldown for multiple neighbors expired 1696 o SendLie: send a LIE out 1698 o UpdateZTPOffer: update this node's ZTP offer 1700 Actions 1702 on MultipleNeighbors in OneWay finishes in MultipleNeighborsWait: 1703 start multiple neighbors timer as 4 * DEFAULT_LIE_HOLDTIME 1705 on NeighborDroppedReflection in ThreeWay finishes in TwoWay: no 1706 action 1708 on NeighborDroppedReflection in OneWay finishes in OneWay: no 1709 action 1711 on PODMismatch in TwoWay finishes in OneWay: no action 1713 on NewNeighbor in TwoWay finishes in MultipleNeighborsWait: PUSH 1714 SendLie event 1716 on LieRcvd in OneWay finishes in OneWay: PROCESS_LIE 1718 on UnacceptableHeader in ThreeWay finishes in OneWay: no action 1720 on UpdateZTPOffer in TwoWay finishes in TwoWay: send offer to ZTP 1721 FSM 1723 on NeighborChangedAddress in ThreeWay finishes in OneWay: no 1724 action 1726 on HALChanged in MultipleNeighborsWait finishes in 1727 MultipleNeighborsWait: store new HAL 1729 on NeighborChangedAddress in TwoWay finishes in OneWay: no action 1730 on MultipleNeighbors in TwoWay finishes in MultipleNeighborsWait: 1731 start multiple neighbors timer as 4 * DEFAULT_LIE_HOLDTIME 1733 on LevelChanged in ThreeWay finishes in OneWay: update level with 1734 event value 1736 on LieRcvd in ThreeWay finishes in ThreeWay: PROCESS_LIE 1738 on ValidReflection in OneWay finishes in ThreeWay: no action 1740 on NeighborChangedLevel in TwoWay finishes in OneWay: no action 1742 on MultipleNeighbors in ThreeWay finishes in 1743 MultipleNeighborsWait: start multiple neighbors timer as 4 * 1744 DEFAULT_LIE_HOLDTIME 1746 on InstanceNameMismatch in OneWay finishes in OneWay: no action 1748 on NewNeighbor in OneWay finishes in TwoWay: PUSH SendLie event 1750 on UpdateZTPOffer in OneWay finishes in OneWay: send offer to ZTP 1751 FSM 1753 on UpdateZTPOffer in ThreeWay finishes in ThreeWay: send offer to 1754 ZTP FSM 1756 on MTUMismatch in ThreeWay finishes in OneWay: no action 1758 on TimerTick in OneWay finishes in OneWay: PUSH SendLie event 1760 on SendLie in TwoWay finishes in TwoWay: SEND_LIE 1762 on ValidReflection in ThreeWay finishes in ThreeWay: no action 1764 on InstanceNameMismatch in TwoWay finishes in OneWay: no action 1766 on HoldtimeExpired in OneWay finishes in OneWay: no action 1768 on TimerTick in ThreeWay finishes in ThreeWay: PUSH SendLie event, 1769 if holdtime expired PUSH HoldtimeExpired event 1771 on HALChanged in TwoWay finishes in TwoWay: store new HAL 1773 on HoldtimeExpired in ThreeWay finishes in 
OneWay: no action 1775 on HALSChanged in TwoWay finishes in TwoWay: store HALS 1777 on HALSChanged in ThreeWay finishes in ThreeWay: store HALS 1778 on ValidReflection in TwoWay finishes in ThreeWay: no action 1780 on MultipleNeighborsDone in MultipleNeighborsWait finishes in 1781 OneWay: no action 1783 on NeighborAddressAdded in OneWay finishes in OneWay: no action 1785 on TimerTick in MultipleNeighborsWait finishes in 1786 MultipleNeighborsWait: decrement MultipleNeighbors timer, if 1787 expired PUSH MultipleNeighborsDone 1789 on MTUMismatch in OneWay finishes in OneWay: no action 1791 on MultipleNeighbors in MultipleNeighborsWait finishes in 1792 MultipleNeighborsWait: start multiple neighbors timer as 4 * 1793 DEFAULT_LIE_HOLDTIME 1795 on LieRcvd in TwoWay finishes in TwoWay: PROCESS_LIE 1797 on HATChanged in MultipleNeighborsWait finishes in 1798 MultipleNeighborsWait: store HAT 1800 on HoldtimeExpired in TwoWay finishes in OneWay: no action 1802 on NeighborChangedLevel in ThreeWay finishes in OneWay: no action 1804 on LevelChanged in OneWay finishes in OneWay: update level with 1805 event value, PUSH SendLie event 1807 on SendLie in OneWay finishes in OneWay: SEND_LIE 1809 on HATChanged in OneWay finishes in OneWay: store HAT 1811 on LevelChanged in TwoWay finishes in TwoWay: update level with 1812 event value 1814 on HATChanged in TwoWay finishes in TwoWay: store HAT 1816 on PODMismatch in ThreeWay finishes in OneWay: no action 1818 on LevelChanged in MultipleNeighborsWait finishes in OneWay: 1819 update level with event value 1821 on UnacceptableHeader in TwoWay finishes in OneWay: no action 1823 on NeighborChangedLevel in OneWay finishes in OneWay: no action 1825 on InstanceNameMismatch in ThreeWay finishes in OneWay: no action 1826 on HATChanged in ThreeWay finishes in ThreeWay: store HAT 1828 on HALChanged in OneWay finishes in OneWay: store new HAL 1830 on UnacceptableHeader in OneWay finishes in OneWay: no action 1832 on HALChanged in ThreeWay finishes in ThreeWay: store new HAL 1834 on UpdateZTPOffer in MultipleNeighborsWait finishes in 1835 MultipleNeighborsWait: send offer to ZTP FSM 1837 on NeighborChangedMinorFields in OneWay finishes in OneWay: no 1838 action 1840 on NeighborChangedAddress in OneWay finishes in OneWay: no action 1842 on MTUMismatch in TwoWay finishes in OneWay: no action 1844 on PODMismatch in OneWay finishes in OneWay: no action 1846 on SendLie in ThreeWay finishes in ThreeWay: SEND_LIE 1848 on TimerTick in TwoWay finishes in TwoWay: PUSH SendLie event, if 1849 holdtime expired PUSH HoldtimeExpired event 1851 on HALSChanged in OneWay finishes in OneWay: store HALS 1853 on HALSChanged in MultipleNeighborsWait finishes in 1854 MultipleNeighborsWait: store HALS 1856 on Entry into OneWay: CLEANUP 1858 Following words are used for well known procedures: 1860 1. PUSH Event: pushes an event to be executed by the FSM upon exit 1861 of this action 1863 2. CLEANUP: neighbor MUST be reset to unknown 1865 3. SEND_LIE: create a new LIE packet 1867 1. reflecting the neighbor if known and valid and 1869 2. setting the necessary `not_a_ztp_offer` variable if level was 1870 derived from last known neighbor on this interface and 1872 3. setting `you_are_flood_repeater` to computed value 1874 4. PROCESS_LIE: 1876 1. if lie has wrong major version OR our own system ID or 1877 invalid system ID then CLEANUP else 1879 2. if lie has non matching MTUs then CLEANUP, PUSH 1880 UpdateZTPOffer, PUSH MTUMismatch else 1882 3. 
if PoD rules do not allow adjacency forming then CLEANUP,
1883          PUSH PODMismatch, PUSH MTUMismatch else

1885      4.  if the lie has an undefined level OR my level is undefined OR
1886          this node is a leaf and the remote level is lower than HAT OR
1887          (the lie's level is not leaf AND its difference is more than
1888          one from my level) then CLEANUP, PUSH UpdateZTPOffer, PUSH
1889          UnacceptableHeader else

1891      5.  PUSH UpdateZTPOffer, construct a temporary new neighbor
1892          structure with values from the lie, if no current neighbor exists
1893          then set neighbor to the new neighbor, PUSH NewNeighbor event,
1894          CHECK_THREE_WAY else

1896          1.  if the current neighbor system ID differs from the lie's
1897              system ID then PUSH MultipleNeighbors else

1899          2.  if the current neighbor stored level differs from the lie's
1900              level then PUSH NeighborChangedLevel else

1902          3.  if the current neighbor stored IPv4/v6 address differs from
1903              the lie's address then PUSH NeighborChangedAddress else

1905          4.  if any of the neighbor's flood address port, name, or local
1906              linkid changed then PUSH NeighborChangedMinorFields and

1908          5.  CHECK_THREE_WAY

1910  5.  CHECK_THREE_WAY: if current state is one-way do nothing else

1912      1.  if the lie packet does not contain a neighbor then if current
1913          state is three-way then PUSH NeighborDroppedReflection else

1915      2.  if the packet reflects this system's ID and local port and
1916          state is three-way then PUSH event ValidReflection else PUSH
1917          event MultipleNeighbors

1919 4.2.3.  Topology Exchange (TIE Exchange)

1921 4.2.3.1.  Topology Information Elements

1923 Topology and reachability information in RIFT is conveyed by means
1924 of TIEs, which have a good amount of commonality with LSAs in
1925 OSPF.

1927 The TIE exchange mechanism uses the port indicated by each node in
1928 the LIE exchange and the interface on which the adjacency has been
1929 formed as destination.  It SHOULD use a TTL of 1 as well and set
1930 inter-network control precedence on the according packets.

1932 TIEs contain sequence numbers, lifetimes and a type.  Each type has
1933 ample identifying number space and information is spread across
1934 possibly many TIEs of a certain type by means of a hash function
1935 that a node or deployment can individually determine.  One extreme
1936 design choice is a prefix per TIE, which leads to more BGP-like
1937 behavior where small increments are only advertised on route changes,
1938 vs. deploying with dense prefix packing into few TIEs, leading to a
1939 more traditional IGP trade-off with fewer TIEs.  An implementation
1940 may even rehash the prefix-to-TIE mapping at any time at the cost of
1941 a significant amount of TIE re-advertisements.

1943 More information about the TIE structure can be found in the schema
1944 in Appendix B.

1946 4.2.3.2.  South- and Northbound Representation

1948 A central concept of RIFT is that each node represents itself
1949 differently depending on the direction in which it is advertising
1950 information.  More precisely, a spine node represents two different
1951 databases over its adjacencies depending on whether it advertises
1952 TIEs to the north or to the south/sideways.  We call those differing
1953 TIE databases either south- or northbound (South TIEs and North TIEs)
1954 depending on the direction of distribution.

1956 The North TIEs hold all of the node's adjacencies and local prefixes
1957 while the South TIEs hold only the node's adjacencies, the
1958 default prefix with necessary disaggregated prefixes, and local
1959 prefixes.  We will explain this in detail further in Section 4.2.5.
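The following non-normative Python sketch contrasts the two
representations for a single node; the structures are illustrative
assumptions only (compare the Spine 111 TIEs in Figure 14):

   <CODE BEGINS>
   # Non-normative sketch of the dual representation above; structures
   # and names are illustrative, not the Thrift schema.
   def north_ties(node):
       # Northbound: all adjacencies plus all local prefixes.
       return {"node": node["adjacencies"],
               "prefixes": list(node["local_prefixes"])}

   def south_ties(node, disaggregated=()):
       # Southbound: all adjacencies, the originated defaults plus any
       # disaggregated prefixes, and local prefixes (Section 4.2.5).
       return {"node": node["adjacencies"],
               "prefixes": ["0/0", "::/0", *disaggregated,
                            *node["local_prefixes"]]}

   spine111 = {"adjacencies": ["ToF 21", "ToF 22", "Spine 112",
                               "Leaf111", "Leaf112"],
               "local_prefixes": ["Spine 111.loopback"]}
   # Same node element in both directions; the prefix advertisements
   # differ by direction of distribution.
   print(north_ties(spine111))
   print(south_ties(spine111))
   <CODE ENDS>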
1961 The TIE types are mostly symmetric in both directions and Table 2
1962 provides a quick reference to the main TIE types including direction
1963 and their function.

1965 +--------------------+----------------------------------------------+
1966 | TIE-Type           | Content                                      |
1967 +--------------------+----------------------------------------------+
1968 | Node North TIE     | node properties and adjacencies              |
1969 +--------------------+----------------------------------------------+
1970 | Node South TIE     | same content as node North TIE               |
1971 +--------------------+----------------------------------------------+
1972 | Prefix North TIE   | contains node's directly reachable prefixes  |
1973 +--------------------+----------------------------------------------+
1974 | Prefix South TIE   | contains originated defaults and directly    |
1975 |                    | reachable prefixes                           |
1976 +--------------------+----------------------------------------------+
1977 | Positive           | contains disaggregated prefixes              |
1978 | Disaggregation     |                                              |
1979 | South TIE          |                                              |
1980 +--------------------+----------------------------------------------+
1981 | Negative           | contains special, negatively disaggregated   |
1982 | Disaggregation     | prefixes to support multi-plane designs      |
1983 | South TIE          |                                              |
1984 +--------------------+----------------------------------------------+
1985 | External Prefix    | contains external prefixes                   |
1986 | North TIE          |                                              |
1987 +--------------------+----------------------------------------------+
1988 | Key-Value North    | contains node's northbound KVs               |
1989 | TIE                |                                              |
1990 +--------------------+----------------------------------------------+
1991 | Key-Value South    | contains node's southbound KVs               |
1992 | TIE                |                                              |
1993 +--------------------+----------------------------------------------+

1995 Table 2: TIE Types

1997 As an example illustrating databases holding both representations,
1998 consider the topology in Figure 2 with the optional link between
1999 spine 111 and spine 112 (so that the flooding on an East-West link
2000 can be shown).  This example assumes unnumbered interfaces.  First,
2001 here are the TIEs generated by some nodes.  For simplicity, the key
2002 value elements which may be included in their South TIEs or North
2003 TIEs are not shown.
2005 ToF 21 South TIEs:
2006 Node South TIE:
2007 NodeElement(level=2, neighbors((Spine 111, level 1, cost 1),
2008 (Spine 112, level 1, cost 1), (Spine 121, level 1, cost 1),
2009 (Spine 122, level 1, cost 1)))
2010 Prefix South TIE:
2011 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

2013 Spine 111 South TIEs:
2014 Node South TIE:
2015 NodeElement(level=1, neighbors((ToF 21, level 2, cost 1,
2016 links(...)),
2017 (ToF 22, level 2, cost 1, links(...)),
2018 (Spine 112, level 1, cost 1, links(...)),
2019 (Leaf111, level 0, cost 1, links(...)),
2020 (Leaf112, level 0, cost 1, links(...))))
2021 Prefix South TIE:
2022 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

2024 Spine 111 North TIEs:
2025 Node North TIE:
2026 NodeElement(level=1,
2027 neighbors((ToF 21, level 2, cost 1, links(...)),
2028 (ToF 22, level 2, cost 1, links(...)),
2029 (Spine 112, level 1, cost 1, links(...)),
2030 (Leaf111, level 0, cost 1, links(...)),
2031 (Leaf112, level 0, cost 1, links(...))))
2032 Prefix North TIE:
2033 NorthPrefixesElement(prefixes(Spine 111.loopback))

2035 Spine 121 South TIEs:
2036 Node South TIE:
2037 NodeElement(level=1, neighbors((ToF 21, level 2, cost 1),
2038 (ToF 22, level 2, cost 1), (Leaf121, level 0, cost 1),
2039 (Leaf122, level 0, cost 1)))
2040 Prefix South TIE:
2041 SouthPrefixesElement(prefixes(0/0, cost 1), (::/0, cost 1))

2043 Spine 121 North TIEs:
2044 Node North TIE:
2045 NodeElement(level=1,
2046 neighbors((ToF 21, level 2, cost 1, links(...)),
2047 (ToF 22, level 2, cost 1, links(...)),
2048 (Leaf121, level 0, cost 1, links(...)),
2049 (Leaf122, level 0, cost 1, links(...))))
2050 Prefix North TIE:
2051 NorthPrefixesElement(prefixes(Spine 121.loopback))

2053 Leaf112 North TIEs:
2054 Node North TIE:
2055 NodeElement(level=0,
2056 neighbors((Spine 111, level 1, cost 1, links(...)),
2057 (Spine 112, level 1, cost 1, links(...))))
2058 Prefix North TIE:
2059 NorthPrefixesElement(prefixes(Leaf112.loopback, Prefix112,
2060 Prefix_MH))

2062 Figure 14: Example TIEs Generated in a 2-Level Spine-and-Leaf
2063 Topology

2065 It may not be immediately obvious here why the node South TIEs
2066 contain all the adjacencies of the according node.  This is
2067 necessary for the algorithms given in Section 4.2.3.9 and Section 4.3.6.

2069 4.2.3.3.  Flooding

2071 The mechanism used to distribute TIEs is the well-known (albeit
2072 modified in several respects to take advantage of fat tree topology)
2073 flooding mechanism used by today's link-state protocols.  Although
2074 flooding is initially more demanding to implement, it avoids many
2075 problems of the update style used in diffused computations such as
2076 distance vector protocols.  Since flooding tends to present an
2077 unscalable burden in large, densely meshed topologies (fat trees
2078 being unfortunately such a topology) we provide a close-to-optimal
2079 global flood reduction and load balancing solution in
2080 Section 4.2.3.9.

2082 As described before, TIEs themselves are transported over UDP with
2083 the ports indicated in the LIE exchanges and using the destination
2084 address on which the LIE adjacency has been formed.  For unnumbered
2085 IPv4 interfaces the same considerations apply as in the equivalent OSPF case.

2087 4.2.3.3.1.  Normative Flooding Procedures

2089 On reception of a TIE with an undefined level value in the packet
2090 header the node SHOULD issue a warning and indiscriminately discard
2091 the packet.
2093 This section specifies the precise, normative flooding mechanism and
2094 can be omitted unless the reader is pursuing an implementation of the
2095 protocol.

2097 Flooding procedures are described in terms of the flooding state of
2098 an adjacency and the resulting operations on it driven by packet
2099 arrivals.  The FSM itself has basically just a single state and is
2100 not well suited to represent the behavior.  An implementation MUST
2101 behave on the wire in the same way as the normative procedures
2102 provided in this section.

2104 RIFT does not specify any kind of flood rate limiting since such
2105 specifications always assume particular points in available
2106 technology speeds and feeds and those points are shifting at a faster
2107 and faster rate (speed of light holding for the moment).  The encoded
2108 packets provide hints to react accordingly to losses or overruns.

2110 Flooding of all according topology exchange elements SHOULD be
2111 performed at the highest feasible rate whereas the rate of
2112 transmission MUST be throttled by reacting to adequate features of
2113 the system such as e.g. queue lengths or congestion indications in
2114 the protocol packets.

2116 A node SHOULD NOT send out any topology information elements if the
2117 adjacency is not in a "three-way" state.  No further tightening of
2118 this rule is possible due to possible link buffering and re-ordering
2119 of LIEs and TIEs/TIDEs/TIREs.

2121 A node MUST drop any received TIEs/TIDEs/TIREs unless it is in three-
2122 way state.

2124 Unlike TIEs of other nodes, TIDEs and TIREs MUST NOT be re-flooded;
2125 they MUST always be generated by the node itself and cross only to
2126 the neighboring node.

2128 4.2.3.3.1.1.  FloodState Structure per Adjacency

2130 The structure contains conceptually the following elements.  The word
2131 collection or queue indicates a set of elements that can be iterated:

2133 TIES_TX:  Collection containing all the TIEs to transmit on the
2134    adjacency.

2136 TIES_ACK:  Collection containing all the TIEs that have to be
2137    acknowledged on the adjacency.

2139 TIES_REQ:  Collection containing all the TIE headers that have to be
2140    requested on the adjacency.

2142 TIES_RTX:  Collection containing all TIEs that need retransmission
2143    with the according time to retransmit.

2145 The following words are used for well-known procedures operating on
2146 this structure:

2148 TIE:  Describes either a full RIFT TIE or accordingly just the
2149    `TIEHeader` or `TIEID`.  The according meaning is unambiguously
2150    contained in the context of the algorithm.

2152 is_flood_reduced(TIE):  returns whether a TIE can be flood reduced or
2153    not.

2155 is_tide_entry_filtered(TIE):  returns whether a header should be
2156    propagated in TIDE according to flooding scopes.

2158 is_request_filtered(TIE):  returns whether a TIE request should be
2159    propagated to the neighbor or not according to flooding scopes.

2161 is_flood_filtered(TIE):  returns whether a TIE is allowed to be
2162    flooded to the neighbor or not according to flooding scopes.

2164 try_to_transmit_tie(TIE):

2166    A.  if not is_flood_filtered(TIE) then

2168        1.  remove TIE from TIES_RTX if present

2170        2.  if TIE' with the same key is on TIES_ACK then

2172            a.  if TIE' is the same or newer than TIE do nothing else

2174            b.  remove TIE' from TIES_ACK and add TIE to TIES_TX

2176        3.  else insert TIE into TIES_TX

2178 ack_tie(TIE):  remove TIE from all collections and then insert TIE
2179    into TIES_ACK.

2181 tie_been_acked(TIE):  remove TIE from all collections.
2183 remove_from_all_queues(TIE):  same as `tie_been_acked`.

2185 request_tie(TIE):  if not is_request_filtered(TIE) then
2186    remove_from_all_queues(TIE) and add to TIES_REQ.

2188 move_to_rtx_list(TIE):  remove TIE from TIES_TX and then add to
2189    TIES_RTX using the TIE retransmission interval.

2191 clear_requests(TIEs):  remove all TIEs from TIES_REQ.

2193 bump_own_tie(TIE):  for a self-originated TIE originate an empty TIE
2194    or re-generate it with a version number higher than the one in TIE.

2196 The collections SHOULD be served with the following priorities if the
2197 system cannot process all the collections in real time:

2199    Elements on TIES_ACK should be processed with highest priority

2201    TIES_TX

2203    TIES_REQ and TIES_RTX

2205 4.2.3.3.1.2.  TIDEs

2207 `TIEID` and `TIEHeader` space forms a strict total order (modulo
2208 incomparable sequence numbers in the very unlikely event that can
2209 occur if a TIE is "stuck" in a part of a network while the originator
2210 reboots and reissues TIEs many times to the point its sequence# rolls
2211 over and forms an incomparable distance to the "stuck" copy) which
2212 implies that a comparison relation is possible between two elements.
2213 With that it is implicitly possible to compare TIEs, TIEHeaders and
2214 TIEIDs to each other whereas the shortest viable key is always
2215 implied.

2217 When generating and sending TIDEs an implementation SHOULD ensure
2218 that enough bandwidth is left to send elements of the FloodState
2219 structure.

2221 4.2.3.3.1.2.1.  TIDE Generation

2223 As given by the timer constant, periodically generate TIDEs by:

2225    NEXT_TIDE_ID: ID of the next TIE to be sent in TIDE.

2227    TIDE_START: Begin of the TIDE packet range.

2229 a.  NEXT_TIDE_ID = MIN_TIEID

2231 b.  while NEXT_TIDE_ID not equal to MAX_TIEID do

2233     1.  TIDE_START = NEXT_TIDE_ID

2235     2.  HEADERS = at most TIRDEs_PER_PKT headers in TIEDB starting at
2236         NEXT_TIDE_ID or higher that SHOULD be filtered by
2237         is_tide_entry_filtered and MUST either have a lifetime left >
2238         0 or have no content

2240     3.  if HEADERS is empty then START = MIN_TIEID else START = first
2241         element in HEADERS

2243     4.  if HEADERS' size is less than TIRDEs_PER_PKT then END =
2244         MAX_TIEID else END = last element in HEADERS

2246     5.  send sorted HEADERS as TIDE setting START and END as its
2247         range

2249     6.  NEXT_TIDE_ID = END

2251 The constant `TIRDEs_PER_PKT` SHOULD be computed and used by the
2252 implementation to limit the number of TIE headers per TIDE so that
2253 the sent TIDE PDU does not exceed the interface MTU.

2255 TIDE PDUs SHOULD be spaced on sending to prevent packet drops.

2257 4.2.3.3.1.2.2.  TIDE Processing

2259 On reception of TIDEs the following processing is performed:

2261    TXKEYS: Collection of TIE headers to be sent after processing of
2262    the packet

2264    REQKEYS: Collection of TIEIDs to be requested after processing of
2265    the packet

2267    CLEARKEYS: Collection of TIEIDs to be removed from flood state
2268    queues

2270    LASTPROCESSED: Last processed TIEID in TIDE

2272    DBTIE: TIE in the LSDB if found

2274 a.  LASTPROCESSED = TIDE.start_range

2276 b.  for every HEADER in TIDE do

2278     1.  DBTIE = find HEADER in current LSDB

2280     2.  if HEADER < LASTPROCESSED then report error and reset
2281         adjacency and return

2283     3.  put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and
2284         TIE.HEADER < HEADER) into TXKEYS

2286     4.  LASTPROCESSED = HEADER

2288     5.  if DBTIE not found then

2290         I)  if originator is this node then bump_own_tie

2292         II) else put HEADER into REQKEYS

2294     6.  if DBTIE.HEADER < HEADER then
2296         I)  if originator is this node then bump_own_tie else

2297             i.  if this is a North TIE header from a northbound
2298                 neighbor then override DBTIE in LSDB with HEADER

2300             ii. else put HEADER into REQKEYS

2303     7.  if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS

2305     8.  if DBTIE.HEADER = HEADER then

2306         I)  if DBTIE has content already then put DBTIE.HEADER
2307             into CLEARKEYS

2309         II) else put HEADER into REQKEYS

2311 c.  put all TIEs in LSDB where (TIE.HEADER > LASTPROCESSED and
2312     TIE.HEADER <= TIDE.end_range) into TXKEYS

2314 d.  for all TIEs in TXKEYS try_to_transmit_tie(TIE)

2316 e.  for all TIEs in REQKEYS request_tie(TIE)

2318 f.  for all TIEs in CLEARKEYS remove_from_all_queues(TIE)

2320 4.2.3.3.1.3.  TIREs

2322 4.2.3.3.1.3.1.  TIRE Generation

2324 Elements from both TIES_REQ and TIES_ACK MUST be collected and sent
2325 out as fast as feasible as TIREs.  When sending TIREs with elements
2326 from TIES_REQ the `lifetime` field MUST be set to 0 to force
2327 reflooding from the neighbor even if the TIEs seem to be the same.

2329 4.2.3.3.1.3.2.  TIRE Processing

2331 On reception of TIREs the following processing is performed:

2333    TXKEYS: Collection of TIE headers to be sent after processing of
2334    the packet

2336    REQKEYS: Collection of TIEIDs to be requested after processing of
2337    the packet

2339    ACKKEYS: Collection of TIEIDs that have been acked

2341    DBTIE: TIE in the LSDB if found

2343 a.  for every HEADER in TIRE do

2344     1.  DBTIE = find HEADER in current LSDB

2346     2.  if DBTIE not found then do nothing

2348     3.  if DBTIE.HEADER < HEADER then put HEADER into REQKEYS

2350     4.  if DBTIE.HEADER > HEADER then put DBTIE.HEADER into TXKEYS

2352     5.  if DBTIE.HEADER = HEADER then put DBTIE.HEADER into ACKKEYS

2354 b.  for all TIEs in TXKEYS try_to_transmit_tie(TIE)

2356 c.  for all TIEs in REQKEYS request_tie(TIE)

2358 d.  for all TIEs in ACKKEYS tie_been_acked(TIE)

2360 4.2.3.3.1.4.  TIEs Processing on Flood State Adjacency

2362 On reception of TIEs the following processing is performed:

2364    ACKTIE: TIE to acknowledge

2366    TXTIE: TIE to transmit

2368    DBTIE: TIE in the LSDB if found

2370 a.  DBTIE = find TIE in current LSDB

2372 b.  if DBTIE not found then

2374     1.  if originator is this node then bump_own_tie with a short
2375         remaining lifetime

2377     2.  else insert TIE into LSDB and ACKTIE = TIE

2379     else

2381     1.  if DBTIE.HEADER = TIE.HEADER then

2383         i.  if DBTIE has content already then ACKTIE = TIE

2385         ii. else process like the "DBTIE.HEADER < TIE.HEADER" case

2387     2.  if DBTIE.HEADER < TIE.HEADER then

2389         i.  if originator is this node then bump_own_tie

2391         ii. else insert TIE into LSDB and ACKTIE = TIE

2393     3.  if DBTIE.HEADER > TIE.HEADER then

2395         i.  if DBTIE has content already then TXTIE = DBTIE

2397         ii. else ACKTIE = DBTIE

2399 c.  if TXTIE is set then try_to_transmit_tie(TXTIE)

2401 d.  if ACKTIE is set then ack_tie(TIE)

2403 4.2.3.3.1.5.  TIEs Processing When LSDB Received Newer Version on Other
2404               Adjacencies

2406 The Link State Database can be considered a switchboard that
2407 does not need any flooding procedures but can be given new versions
2408 of TIEs by a peer.  Consequently, a peer receives from the LSDB
2409 newer versions of TIEs received by other peers and processes them
2410 (without any filtering) just like TIEs received from its remote
2411 peer.  This publisher model can be implemented in many ways.

2413 4.2.3.3.1.6.  Sending TIEs
2415 On a periodic basis all TIEs with a lifetime left > 0 MUST be sent
2416 out on the adjacency, removed from the TIES_TX list and requeued
2417 onto the TIES_RTX list.

2419 4.2.3.4.  TIE Flooding Scopes

2421 In a somewhat analogous fashion to link-local, area and domain
2422 flooding scopes, RIFT defines several complex "flooding scopes"
2423 depending on the direction and type of TIE propagated.

2425 Every North TIE is flooded northbound, providing a node at a given
2426 level with the complete topology of the Clos or Fat Tree network that
2427 is reachable southwards of it, including all specific prefixes.  This
2428 means that a packet received from a node at the same or lower level
2429 whose destination is covered by one of those specific prefixes will
2430 be routed directly towards the node advertising that prefix rather
2431 than sending the packet to a node at a higher level.

2433 A node's Node South TIEs, consisting of all the node's adjacencies
2434 and its prefix South TIEs limited to those related to the default IP
2435 prefix and disaggregated prefixes, are flooded southbound in order to
2436 allow the nodes one level down to see the connectivity of the higher
2437 level as well as reachability to the rest of the fabric.  In order to
2438 allow an E-W disconnected node in a given level to receive the South
2439 TIEs of other nodes at its level, every *NODE* South TIE is
2440 "reflected" northbound to the level from which it was received.  It
2441 should be noted that East-West links are included in South TIE
2442 flooding (except at ToF level); those TIEs need to be flooded to
2443 satisfy the algorithms in Section 4.2.4.  In that way nodes at the
2444 same level can learn about each other without a lower level, e.g. in
2445 the case of the leaf level.  The precise, normative flooding scopes
2446 are given in Table 3.  Those rules govern as well what SHOULD be
2447 included in TIDEs on the adjacency.  Again, East-West flooding scopes
2448 are identical to South flooding scopes except in the case of ToF
2449 East-West links (rings) which basically perform northbound flooding.

2451 Node South TIE "south reflection" allows supporting positive
2452 disaggregation on failures as described in Section 4.2.5 and
2453 flooding reduction as described in Section 4.2.3.9.
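As an illustration, the following non-normative Python sketch renders
the node South TIE row of Table 3 (below) as a predicate; names are
illustrative only:

   <CODE BEGINS>
   # Non-normative sketch of the first row of Table 3 below: whether
   # to flood a *node South TIE* on a given adjacency.
   def flood_node_south_tie(direction, my_level, originator_level,
                            i_am_tof):
       if direction == "south":                 # flood if originator is
           return originator_level == my_level  # at this node's level
       if direction == "north":                 # flood if originator is
           return originator_level > my_level   # higher than this node
       if direction == "east-west":             # flood only if this
           return not i_am_tof                  # node is not ToF
       return False
   <CODE ENDS>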
2455 +-----------+---------------------+----------------+-----------------+ 2456 | Type / | South | North | East-West | 2457 | Direction | | | | 2458 +-----------+---------------------+----------------+-----------------+ 2459 | node | flood if level of | flood if level | flood only if | 2460 | South TIE | originator is equal | of originator | this node | 2461 | | to this node | is higher than | is not ToF | 2462 | | | this node | | 2463 +-----------+---------------------+----------------+-----------------+ 2464 | non-node | flood self- | flood only if | flood only if | 2465 | South TIE | originated only | neighbor is | self-originated | 2466 | | | originator of | and this node | 2467 | | | TIE | is not ToF | 2468 +-----------+---------------------+----------------+-----------------+ 2469 | all North | never flood | flood always | flood only if | 2470 | TIEs | | | this node is | 2471 | | | | ToF | 2472 +-----------+---------------------+----------------+-----------------+ 2473 | TIDE | include at least | include at | if this node is | 2474 | | all non-self | least all node | ToF then | 2475 | | originated North | South TIEs and | include all | 2476 | | TIE headers and | all South TIEs | North TIEs, | 2477 | | self-originated | originated by | otherwise only | 2478 | | South TIE headers | peer and | self-originated | 2479 | | and | all North TIEs | TIEs | 2480 | | node South TIEs of | | | 2481 | | nodes at same | | | 2482 | | level | | | 2483 +-----------+---------------------+----------------+-----------------+ 2484 | TIRE as | request all North | request all | if this node is | 2485 | Request | TIEs and all peer's | South TIEs | ToF then apply | 2486 | | self-originated | | North scope | 2487 | | TIEs and | | rules, | 2488 | | all node South TIEs | | otherwise South | 2489 | | | | scope rules | 2490 +-----------+---------------------+----------------+-----------------+ 2491 | TIRE as | Ack all received | Ack all | Ack all | 2492 | Ack | TIEs | received TIEs | received TIEs | 2493 +-----------+---------------------+----------------+-----------------+ 2495 Table 3: Normative Flooding Scopes 2497 If the TIDE includes additional TIE headers beside the ones 2498 specified, the receiving neighbor must apply according filter to the 2499 received TIDE strictly and MUST NOT request the extra TIE headers 2500 that were not allowed by the flooding scope rules in its direction. 2502 As an example to illustrate these rules, consider using the topology 2503 in Figure 2, with the optional link between spine 111 and spine 112, 2504 and the associated TIEs given in Figure 14. The flooding from 2505 particular nodes of the TIEs is given in Table 4. 2507 +-----------+----------+--------------------------------------------+ 2508 | Router | Neighbor | TIEs | 2509 | floods to | | | 2510 +-----------+----------+--------------------------------------------+ 2511 | Leaf111 | Spine | Leaf111 North TIEs, Spine 111 node South | 2512 | | 112 | TIE | 2513 | Leaf111 | Spine | Leaf111 North TIEs, Spine 112 node South | 2514 | | 111 | TIE | 2515 | | | | 2516 | Spine 111 | Leaf111 | Spine 111 South TIEs | 2517 | Spine 111 | Leaf112 | Spine 111 South TIEs | 2518 | Spine 111 | Spine | Spine 111 South TIEs | 2519 | | 112 | | 2520 | Spine 111 | ToF 21 | Spine 111 North TIEs, Leaf111 | 2521 | | | North TIEs, Leaf112 North TIEs, ToF 22 | 2522 | | | node South TIE | 2523 | Spine 111 | ToF 22 | Spine 111 North TIEs, Leaf111 | 2524 | | | North TIEs, Leaf112 North TIEs, ToF 21 | 2525 | | | node South TIE | 2526 | | | | 2527 | ... 
| ... | ... |
2528 | ToF 21 | Spine | ToF 21 South TIEs |
2529 | | 111 | |
2530 | ToF 21 | Spine | ToF 21 South TIEs |
2531 | | 112 | |
2532 | ToF 21 | Spine | ToF 21 South TIEs |
2533 | | 121 | |
2534 | ToF 21 | Spine | ToF 21 South TIEs |
2535 | | 122 | |
2536 | ... | ... | ... |
2537 +-----------+----------+--------------------------------------------+

2539 Table 4: Flooding some TIEs from the example topology

2541 4.2.3.5.  'Flood Only Node TIEs' Bit

2543 RIFT includes an optional ECN (Explicit Congestion Notification)
2544 mechanism to prevent "flooding inrush" on restart or bring-up with
2545 many southbound neighbors.  A node MAY set the according bit on its
2546 LIEs to indicate to the neighbor that it should temporarily flood
2547 node TIEs only to it.  It SHOULD only set it in the southbound
2548 direction.  The receiving node SHOULD accommodate the request to
2549 lessen the flooding load on the affected node if south of the sender
2550 and SHOULD ignore the bit if northbound.

2552 Obviously this mechanism is most useful in the southbound direction.
2553 The distribution of node TIEs guarantees correct behavior of
2554 algorithms like disaggregation or default route origination.
2555 Furthermore, the use of this bit presents an inherent trade-off
2556 between processing load and convergence speed since suppressing
2557 flooding of northbound prefixes from neighbors will lead to
     blackholes.

2559 4.2.3.6.  Initial and Periodic Database Synchronization

2561 The initial exchange of RIFT is modeled after ISIS with TIDE being
2562 equivalent to CSNP and TIRE playing the role of PSNP.  The content of
2563 TIDEs and TIREs is governed by Table 3.

2565 4.2.3.7.  Purging and Roll-Overs

2567 When a node exits the network, if "unpurged", residual stale TIEs may
2568 exist in the network until their lifetimes expire (which in case of
2569 RIFT is by default a rather long period to prevent ongoing re-
2570 origination of TIEs in very large topologies).  RIFT does not,
2571 however, have a "purging mechanism" in the traditional sense based on
2572 sending specialized "purge" packets.  In other routing protocols such
2573 a mechanism has proven to be complex and fragile based on many years
2574 of experience.  RIFT simply issues a new, empty version of the TIE
2575 with a short lifetime and relies on each node to age out and delete
2576 such a TIE copy independently.  Abundant amounts of memory are
2577 available today even on low-end platforms and hence keeping those
2578 relatively short-lived extra copies for a while is acceptable.  The
2579 information will age out, and in the meantime all computations will
2580 deliver correct results if a node leaves the network, due to the new
2581 information distributed by its adjacent nodes breaking the bi-
2582 directional connectivity checks in the different computations.

2584 Once a RIFT node issues a TIE with an ID, it SHOULD preserve the ID
2585 as long as feasible (also when the protocol restarts), even if the
2586 TIE loses all content.  The re-advertisement of the empty TIE
2587 fulfills the purpose of purging any information advertised in
2588 previous versions.  The originator is free to not re-originate the
2589 according empty TIE again or originate an empty TIE with a relatively
2590 short lifetime to prevent a large number of long-lived empty stubs
2591 from polluting the network.  Each node MUST timeout and clean up the
2592 according empty TIEs independently.
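The following non-normative Python sketch illustrates this
purging-by-empty-TIE approach; the structure and the lifetime value
are illustrative assumptions:

   <CODE BEGINS>
   # Non-normative sketch: instead of a dedicated purge packet,
   # re-originate the TIE empty, with a higher sequence number and a
   # short lifetime.  The lifetime value is an assumption; it merely
   # needs to exceed the flooding propagation and processing delay.
   from dataclasses import dataclass, field

   @dataclass
   class Tie:
       seq_nr: int
       lifetime: int
       elements: dict = field(default_factory=dict)

   SHORT_LIFETIME = 300  # seconds, illustrative

   def purge(tie):
       tie.elements = {}              # empty content supersedes old copies
       tie.seq_nr += 1                # newer version wins everywhere
       tie.lifetime = SHORT_LIFETIME  # each node ages it out independently
       return tie                     # then flood per normal procedures
   <CODE ENDS>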
Upon restart a node MUST, as any link-state implementation, be
prepared to receive TIEs with its own system ID and supersede them
with equivalent, newly generated, empty TIEs with a higher sequence
number.  As above, the lifetime can be relatively short since it only
needs to exceed the necessary propagation and processing delay by all
the nodes that are within the TIE's flooding scope.

TIE sequence numbers are rolled over using the method described in
Appendix A.  The first sequence number of any spontaneously
originated TIE (i.e. not originated to override a detected older copy
in the network) MUST be a reasonably unpredictable random number in
the interval [0, 2^30-1], which will prevent otherwise identical TIE
headers from remaining "stuck" in the network with content different
from that of a TIE originated after reboot.  In traditional
link-state protocols this is delegated to a 16-bit checksum on packet
content.  RIFT avoids this design due to the CPU burden presented by
the computation of such checksums and the additional complications
tied to the fact that the checksum must be "patched" into the packet
after the computation, a difficult proposition in binary hand-crafted
formats already and highly incompatible with model-based, serialized
formats.  The sequence number space is hence consciously chosen to be
64 bits wide to make the occurrence of a TIE with the same sequence
number but different content as unlikely as, or even less likely
than, with the checksum method.  To emulate the "checksum behavior"
an implementation could e.g. choose to compute a 64-bit checksum over
the packet content and use that as the first sequence number after
reboot.

4.2.3.8.  Southbound Default Route Origination

Under certain conditions nodes issue a default route in their South
Prefix TIEs with costs as computed in Section 4.3.6.1.

A node X that

1.  is NOT overloaded AND

2.  has southbound or East-West adjacencies

originates in its south prefix TIE such a default route IIF (if and
only if)

1.  all other nodes at X's level are overloaded OR

2.  all other nodes at X's level have NO northbound adjacencies OR

3.  X has computed reachability to a default route during N-SPF.

The term "all other nodes at X's level" obviously describes just the
nodes at the same level in the PoD with a viable lower level
(otherwise the node South TIEs cannot be reflected and the nodes in
e.g. PoD 1 and PoD 2 are "invisible" to each other).

A node originating a southbound default route MUST install a default
discard route if it did not compute a default route during N-SPF.

4.2.3.9.  Northbound TIE Flooding Reduction

Section 1.4 of the Optimized Link State Routing Protocol [RFC3626]
(OLSR) introduces the concept of a "multipoint relay" (MPR) that
minimizes the overhead of flooding messages in the network by
reducing redundant retransmissions in the same region.

A similar technique is applied to RIFT to control northbound
flooding.  Important observations first:

1.  a node MUST flood self-originated North TIEs to all the reachable
    nodes at the level above, which we call the node's "parents";

2.  it is typically not necessary that all parents reflood the North
    TIEs to achieve a complete flooding of all the reachable nodes
    two levels above, which we choose to call the node's
    "grandparents";
3.  to control the volume of its flooding two hops North and yet keep
    it robust enough, it is advantageous for a node to select a
    subset of its parents as "Flood Repeaters" (FRs), which combined
    together deliver two or more copies of its flooding to all of its
    parents, i.e. the originating node's grandparents;

4.  nodes at the same level do NOT have to agree on a specific
    algorithm to select the FRs, but overall load balancing should be
    achieved so that different nodes at the same level should tend to
    select different parents as FRs;

5.  there are usually many solutions to the problem of finding a set
    of FRs for a given node; the problem of finding the minimal set
    is (similar to) an NP-complete problem, and a globally optimal
    set may not be the minimal one if load-balancing with other nodes
    is an important consideration;

6.  it is expected that there will often be sets of equivalent nodes
    at a level L, defined as having a common set of parents at L+1.
    Applying this observation at both L and L+1, an algorithm may
    attempt to split the larger problem into a sum of smaller
    separate problems;

7.  it is another expectation that there will be, from time to time,
    a broken link between a parent and a grandparent, and in that
    case the parent is probably a poor FR due to its lower
    reliability.  An algorithm may attempt to eliminate parents with
    broken northbound adjacencies first in order to reduce the number
    of FRs.  Albeit it could be argued that relying on higher-fanout
    FRs will slow flooding due to the higher replication load, the
    reliability of an FR's links seems to be the more pressing
    concern.

In a fully connected Clos Network, this means that a node selects one
arbitrary parent as FR and then a second one for redundancy.  The
computation can be kept relatively simple and completely distributed
without any need for synchronization amongst nodes.  In a "PoD"
structure, where the level L+2 is partitioned into silos of
equivalent grandparents that are only reachable from respective
parents, this means treating each silo as a fully connected Clos
Network and solving the problem within the silo.

In terms of signaling, a node has enough information to select its
set of FRs; this information is derived from the node's parents' Node
South TIEs, which indicate the parent's reachable northbound
adjacencies to its own parents, i.e. the node's grandparents.  A node
may send a LIE to a northbound neighbor with the optional boolean
field `you_are_flood_repeater` set to false, to indicate that the
northbound neighbor is not a flood repeater for the node that sent
the LIE.  In that case the northbound neighbor SHOULD NOT reflood
northbound TIEs received from the node that sent the LIE.  If
`you_are_flood_repeater` is absent or set to true, then the
northbound neighbor is a flood repeater for the node that sent the
LIE and MUST reflood northbound TIEs received from that node.

This specification proposes a simple default algorithm that SHOULD be
implemented and used by default on every RIFT node.  A non-normative,
executable sketch of it is given after the additional rules below.
o  let |NA(Node) be the set of Northbound adjacencies of node Node
   and CN(Node) be the cardinality of |NA(Node);

o  let |SA(Node) be the set of Southbound adjacencies of node Node
   and CS(Node) be the cardinality of |SA(Node);

o  let |P(Node) be the set of node Node's parents;

o  let |G(Node) be the set of node Node's grandparents.  Observe
   that |G(Node) = |P(|P(Node));

o  let N be the child node at level L computing a set of FRs;

o  let P be a node at level L+1 and a parent node of N, i.e. bi-
   directionally reachable over adjacency A(N, P);

o  let G be a grandparent node of N, reachable transitively via a
   parent P over adjacencies ADJ(N, P) and ADJ(P, G).  Observe that
   N does not have enough information to check bidirectional
   reachability of ADJ(P, G);

o  let R be a redundancy constant integer; a value of 2 or higher
   for R is RECOMMENDED;

o  let S be a similarity constant integer; a value in the range
   0 .. 2 for S is RECOMMENDED, and the value of 1 SHOULD be used.
   Two cardinalities are considered equivalent if their absolute
   difference is less than or equal to S, i.e. |a-b| <= S;

o  let RND be a 64-bit random number generated by the system once on
   startup.

The algorithm consists of the following steps:

1.  Derive a 64-bit number by XOR'ing N's system ID with RND.

2.  Derive a 16-bit pseudo-random unsigned integer PR(N) from the
    resulting 64-bit number by splitting it in 16-bit-long words
    W1, W2, W3, W4 (where W1 are the least significant 16 bits of
    the 64-bit number, and W4 are the most significant 16 bits) and
    then XOR'ing the circularly shifted resulting words together:

    A.  (W1<<1) xor (W2<<2) xor (W3<<3) xor (W4<<4);

        where << is the circular shift operator.

3.  Sort the parents by decreasing number of northbound adjacencies
    (using decreasing system ID of the parent as tie-breaker):
    sort |P(N) by decreasing CN(P), for all P in |P(N), as ordered
    array |A(N)

4.  Partition |A(N) in subarrays |A_k(N) of parents with equivalent
    cardinality of northbound adjacencies (in other words with an
    equivalent number of grandparents they can reach):

    A.  set k=0; // k is the ID of the subarray

    B.  set i=0;

    C.  while i < CN(N) do

        i)   set j=i;

        ii)  while i < CN(N) and CN(|A(N)[j]) - CN(|A(N)[i]) <= S

             a.  place |A(N)[i] in |A_k(N) // abstract action,
                 maybe noop

             b.  set i=i+1;

        iii) /* At this point j is the index in |A(N) of the first
             member of |A_k(N) and (i-j) is C_k(N) defined as the
             cardinality of |A_k(N) */

             set k=k+1;

    /* At this point k is the total number of subarrays, initialized
    for the shuffling operation below */

5.  Shuffle each subarray |A_k(N) of cardinality C_k(N) individually
    within |A(N), using the Durstenfeld variation of the
    Fisher-Yates algorithm that depends on N's System ID:

    A.  while k > 0 do

        i)  for i from C_k(N)-1 to 1 decrementing by 1 do

            a.  set j to PR(N) modulo i;

            b.  exchange |A_k[j] and |A_k[i];

        ii) set k=k-1;

6.  For each grandparent G, initialize a counter c(G) with the
    number of its south-bound adjacencies to elected flood repeaters
    (which is initially zero):

    A.  for each G in |G(N) set c(G) = 0;

7.  Finally keep as FRs only parents that are needed to maintain the
    number of adjacencies between the FRs and any grandparent G
    equal to or above the redundancy constant R:
    A.  for each P in reshuffled |A(N);

        i)  if there exists an adjacency ADJ(P, G) in |NA(P) such
            that c(G) < R then

            a.  place P in FR set;

            b.  for all adjacencies ADJ(P, G') in |NA(P) increment
                c(G')

    B.  if any c(G) is still < R, it was not possible to elect a set
        of FRs that covers all grandparents with redundancy R

Additional rules for flooding reduction:

1.  The algorithm MUST be re-evaluated by a node on every change of
    local adjacencies or reception of a parent South TIE with
    changed adjacencies.  A node MAY apply a hysteresis to prevent
    an excessive amount of computation during periods of network
    instability, just as in the case of reachability computation.

2.  Upon a change of the flood repeater set, a node SHOULD send out
    LIEs that grant flood repeater status to newly promoted nodes
    before it sends LIEs that revoke the status from the nodes that
    have been newly demoted.  This is done to prevent transient
    behavior where the full coverage of grandparents is not
    guaranteed.  Such a condition is sometimes unavoidable in case
    of lost LIEs, but it will correct itself at the cost of a
    possible transient hit in flooding propagation speed.

3.  A node MUST always flood its self-originated TIEs.

4.  A node receiving a TIE originated by a node for which it is not
    a flood repeater SHOULD NOT reflood such TIEs to its neighbors
    except for the rules in Paragraph 6.

5.  The indication of flood reduction capability MUST be carried in
    the node TIEs and MAY be used to optimize the algorithm to
    account for nodes that will flood regardless.

6.  A node generates TIDEs as usual, but when receiving TIREs or
    TIDEs resulting in requests for a TIE of which the newest
    received copy came on an adjacency where the node was not flood
    repeater, it SHOULD ignore such requests on the first and only
    the first request.  Normally, the nodes that received the TIEs
    as flood repeaters should satisfy the requesting node, and with
    that no further TIREs for such TIEs will be generated.
    Otherwise, the next set of TIDEs and TIREs MUST lead to flooding
    independently of the flood repeater status.  This solves a very
    difficult incast problem on nodes restarting with a very wide
    fanout, especially northbound.  To retrieve the full database
    they would otherwise end up processing many in-rushing copies,
    whereas this approach load-balances the incoming database
    between the adjacent nodes, and flood repeaters should guarantee
    that two copies are sent by different nodes to ensure against
    any losses.
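The following non-normative Python sketch transcribes the normative
steps 1-7 above.  It assumes that each parent's reachable
grandparents have already been extracted from its Node South TIEs;
the names `parents`, `cn`, `grandparents_of` and the helpers are
illustrative, not elements of the protocol schema.

    def pr(sysid: int, rnd: int) -> int:
        """16-bit pseudo-random value PR(N): XOR the system ID with RND,
        split into 16-bit words W1..W4, XOR the circularly shifted words."""
        def rotl16(w: int, s: int) -> int:
            return ((w << s) | (w >> (16 - s))) & 0xFFFF
        x = (sysid ^ rnd) & 0xFFFFFFFFFFFFFFFF
        w = [(x >> (16 * k)) & 0xFFFF for k in range(4)]  # w[0] == W1 (LSBs)
        return (rotl16(w[0], 1) ^ rotl16(w[1], 2) ^
                rotl16(w[2], 3) ^ rotl16(w[3], 4))

    def elect_frs(parents, cn, grandparents_of, sysid, rnd, R=2, S=1):
        """parents: parent system IDs; cn[p]: northbound adjacency count
        of parent p; grandparents_of[p]: grandparents reachable via p."""
        # Step 3: sort by decreasing CN(P), decreasing system ID tie-break.
        a = sorted(parents, key=lambda p: (cn[p], p), reverse=True)
        # Step 4: partition into subarrays of similar cardinality (<= S).
        subarrays, i = [], 0
        while i < len(a):
            j = i
            while i < len(a) and cn[a[j]] - cn[a[i]] <= S:
                i += 1
            subarrays.append(a[j:i])
        # Step 5: per-subarray Durstenfeld shuffle keyed on PR(N).
        p16 = pr(sysid, rnd)
        for sub in subarrays:
            for i in range(len(sub) - 1, 0, -1):
                j = p16 % i
                sub[i], sub[j] = sub[j], sub[i]
        # Steps 6-7: keep parents until every grandparent is covered R times.
        c = {g: 0 for p in parents for g in grandparents_of[p]}
        frs = []
        for p in (p for sub in subarrays for p in sub):
            if any(c[g] < R for g in grandparents_of[p]):
                frs.append(p)
                for g in grandparents_of[p]:
                    c[g] += 1
        # Second return value signals whether redundancy R was achievable.
        return frs, all(v >= R for v in c.values())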
4.2.3.10.  Special Considerations

First, due to the distributed, asynchronous nature of ZTP, it can
create temporary convergence anomalies where nodes at higher levels
of the fabric temporarily see themselves at a lower level than they
belong to.  Since flooding can begin before ZTP is "finished", and in
fact must do so given there is no global termination criterion,
information may end up in the wrong levels.  A special clause when
changing level takes care of that.

More difficult is a condition where a node (e.g. a leaf) floods a TIE
north towards its grandparent, then its parent reboots, in fact
partitioning the grandparent from the leaf directly, and then the
leaf itself reboots.  That can leave the grandparent holding the
"primary copy" of the leaf's TIE.  Normally this condition is
resolved easily by the leaf re-originating its TIE with a higher
sequence number than it sees in northbound TIEs; here however, when
the parent comes back, it won't be able to obtain the leaf's North
TIE from the grandparent easily, and with that the leaf may not issue
the TIE with a sequence number high enough to reach the grandparent
for a long time.  Flooding procedures are extended to deal with the
problem by means of special clauses that override the database of a
lower level with headers of newer TIEs seen in TIDEs coming from the
north.

4.2.4.  Reachability Computation

A node has three possible sources of relevant information for
reachability computation.  A node knows the full topology south of it
from the received North Node TIEs and, alternately, north of it from
the South Node TIEs.  A node has the set of prefixes with their
associated distances and bandwidths from the corresponding prefix
TIEs.

To compute prefix reachability, a node conceptually runs a northbound
and a southbound SPF.  We call these N-SPF and S-SPF, denoting the
direction in which the computation front is progressing.

Since neither computation can "loop", it is possible to compute non-
equal-cost or even k-shortest paths [EPPSTEIN] and "saturate" the
fabric to the extent desired, but we use simple, familiar SPF
algorithms and concepts here as an example due to their prevalence in
today's routing.

4.2.4.1.  Northbound SPF

N-SPF MUST use exclusively northbound and East-West adjacencies in
the computing node's node North TIEs (since, if the node is a leaf,
it may not have generated a node South TIE) when starting SPF.
Observe that N-SPF is really just a one-hop variety since Node South
TIEs are not re-flooded southbound beyond a single level (or East-
West) and with that the computation cannot progress beyond adjacent
nodes.

Once progressing, we are using the next higher level's node South
TIEs to find the corresponding adjacencies to verify backlink
connectivity.  Just as in the case of IS-IS or OSPF, two
unidirectional links MUST be associated together to confirm
bidirectional connectivity.  Particular care MUST be taken that the
Node TIEs contain not only the correct system IDs but matching levels
as well.

A default route found when crossing an E-W link SHOULD be used IIF

1.  the node itself does NOT have any northbound adjacencies AND

2.  the adjacent node has one or more northbound adjacencies

This rule forms a "one-hop default route split-horizon" and prevents
looping over default routes while allowing for "one-hop protection"
of nodes that lost all northbound adjacencies, except at the Top-of-
Fabric where the links are used exclusively to flood topology
information in multi-plane designs.

Other south prefixes found when crossing an E-W link MAY be used IIF

1.  no north neighbors are advertising the same or a supersuming
    non-default prefix AND

2.  the node does not originate a non-default supersuming prefix
    itself.

I.e. the E-W link can be used as a gateway of last resort for a
specific prefix only.  Using south prefixes across an E-W link can be
beneficial e.g. for automatic de-aggregation in pathological fabric
partitioning scenarios.

A detailed example can be found in Section 5.4.
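The "one-hop default route split-horizon" rule above reduces to a
simple predicate, sketched here non-normatively with illustrative
parameter names:

    def use_default_over_ew_link(own_north_adjacencies: int,
                                 neighbor_north_adjacencies: int) -> bool:
        """A default route learned across an E-W link is usable IIF this
        node lost all northbound adjacencies while the E-W neighbor still
        has at least one; this prevents default-route loops over E-W
        links."""
        return own_north_adjacencies == 0 and neighbor_north_adjacencies > 0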
4.2.4.2.  Southbound SPF

S-SPF MUST use exclusively the southbound adjacencies in the node
South TIEs, i.e. it progresses towards nodes at lower levels.
Observe that E-W adjacencies are NEVER used in this computation.
This enforces the requirement that a packet traversing in a
southbound direction must never change its direction.

S-SPF MUST use northbound adjacencies in node North TIEs to verify
backlink connectivity by checking for the presence of the link,
besides the correct SystemID and level.

4.2.4.3.  East-West Forwarding Within a non-ToF Level

Using south prefixes over horizontal links MAY occur if the N-SPF
includes East-West adjacencies in the computation.  It can protect
against pathological fabric partitioning cases that leave only paths
to destinations that would necessitate multiple changes of forwarding
direction between north and south.

4.2.4.4.  East-West Links Within ToF Level

E-W ToF links behave, in terms of the flooding scopes defined in
Section 4.2.3.4, like northbound links and MUST be used exclusively
for control plane information flooding.  Even though a ToF node could
be tempted to use those links during southbound SPF and carry traffic
over them, this MUST NOT be attempted since it may lead, e.g. in
anycast cases, to routing loops.  An implementation MAY try to
resolve the looping problem by following only strictly tie-broken
shortest paths on the ring, but the details are outside this
specification.  And even then, the problem of proper capacity
provisioning of such links when they become traffic-bearing in case
of failures is vexing.

4.2.5.  Automatic Disaggregation on Link & Node Failures

4.2.5.1.  Positive, Non-transitive Disaggregation

Under normal circumstances, a node's South TIEs contain just the
adjacencies and a default route.  However, if a node detects that its
default IP prefix covers one or more prefixes that are reachable
through it but not through one or more other nodes at the same level,
then it MUST explicitly advertise those prefixes in a South TIE.
Otherwise, some percentage of the northbound traffic for those
prefixes would be sent to nodes without the corresponding
reachability, causing it to be black-holed.  Even when not black-
holing, the resulting forwarding could 'backhaul' packets through the
higher level spines, clearly an undesirable condition affecting the
blocking probabilities of the fabric.

We refer to the process of advertising additional prefixes southbound
as 'positive de-aggregation' or 'positive dis-aggregation'.  Such
dis-aggregation is non-transitive, i.e. its effects are always
contained to a single level of the fabric only.  Naturally, multiple
node or link failures can lead to several independent instances of
positive dis-aggregation becoming necessary to prevent looping or
bow-tying the fabric.

A node determines the set of prefixes needing de-aggregation using
the following steps:

1.  A DAG computation in the southern direction is performed first,
    i.e. the North TIEs are used to find all of the prefixes it can
    reach and the set of next-hops in the lower level for each of
    them.  Such a computation can be easily performed on a fat tree
    by e.g. setting all link costs in the southern direction to 1
    and all northern directions to infinity.
    We term the set of those prefixes |R, and for each prefix, r, in
    |R, we define its set of next-hops to be |H(r).

2.  The node uses reflected South TIEs to find all nodes at the same
    level in the same PoD and the set of southbound adjacencies for
    each.  The set of nodes at the same level is termed |N, and for
    each node, n, in |N, we define its set of southbound adjacencies
    to be |A(n).

3.  For a given r, if the intersection of |H(r) and |A(n), for any
    n, is null, then that prefix r must be explicitly advertised by
    the node in a South TIE.

4.  An identical set of de-aggregated prefixes is flooded on each of
    the node's southbound adjacencies.  In accordance with the
    normal flooding rules for a South TIE, a node at the lower level
    that receives this South TIE SHOULD NOT propagate it south-bound
    or reflect the disaggregated prefixes back over its adjacencies
    to nodes at the level from which it was received.

To summarize the above in simplest terms: if a node detects that its
default route encompasses prefixes for which one of the other nodes
in its level has no possible next-hops in the level below, it has to
disaggregate those prefixes to prevent black-holing or suboptimal
routing through such nodes.  Hence a node X needs to determine if it
can reach a different set of south neighbors than other nodes at the
same level, which are connected to it via at least one common south
neighbor.  If it can, then prefix disaggregation may be required.  If
it can't, then no prefix disaggregation is needed.  An example of
disaggregation is provided in Section 5.3.

A possible algorithm is described next:

1.  Create partial_neighbors = (empty), a set of neighbors with
    partial connectivity to the node X's level from X's perspective.
    Each entry in the set is a south neighbor of X and a list of
    nodes of X.level that can't reach that neighbor.

2.  A node X determines its set of southbound neighbors
    X.south_neighbors.

3.  For each South TIE that X holds, originated by a node Y at
    X.level: if Y.south_neighbors is not the same as
    X.south_neighbors but the nodes share at least one southern
    neighbor, then for each neighbor N in X.south_neighbors but not
    in Y.south_neighbors, add (N, (Y)) to partial_neighbors if N
    isn't there, or add Y to the list for N.

4.  If partial_neighbors is empty, then node X does not disaggregate
    any prefixes.  If node X is advertising disaggregated prefixes
    in its South TIE, X SHOULD remove them and re-advertise its
    corresponding South TIEs.
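Steps 1-4 above translate directly into code.  A non-normative Python
sketch, assuming `south_neighbors` is a mapping from each same-level
node (including X itself, as learned from reflected South TIEs) to
its set of southbound neighbors:

    def compute_partial_neighbors(x, south_neighbors):
        """Returns {south neighbor N of x: [same-level nodes missing N]};
        an empty mapping means no positive disaggregation is needed."""
        partial = {}
        x_south = south_neighbors[x]
        for y, y_south in south_neighbors.items():
            if y == x:
                continue
            # Only nodes sharing at least one southern neighbor with x
            # and differing in their southbound neighbor sets matter.
            if y_south != x_south and y_south & x_south:
                for n in x_south - y_south:
                    partial.setdefault(n, []).append(y)
        return partial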
A node X computes reachability to all nodes below it based upon the
received North TIEs first.  This results in a set of routes, each
categorized by (prefix, path_distance, next-hop set).  Alternately,
for clarity in the following procedure, these can be organized by
next-hop set as ( (next-hops), {(prefix, path_distance)}).  If
partial_neighbors isn't empty, then the following procedure describes
how to identify prefixes to disaggregate.

 disaggregated_prefixes = { empty }
 nodes_same_level = { empty }
 for each South TIE
     if (South TIE.level == X.level and
         X shares at least one S-neighbor with South TIE.originator)
         add South TIE.originator to nodes_same_level
     end if
 end for

 for each next-hop-set NHS
     isolated_nodes = nodes_same_level
     for each NH in NHS
         if NH in partial_neighbors
             isolated_nodes =
                 intersection(isolated_nodes,
                              partial_neighbors[NH].nodes)
         else
             isolated_nodes = { empty } // NH is reachable by all nodes
         end if
     end for

     if isolated_nodes is not empty
         for each prefix using NHS
             add (prefix, distance) to disaggregated_prefixes
         end for
     end if
 end for

 copy disaggregated_prefixes to X's South TIE
 if X's South TIE is different
     schedule South TIE for flooding
 end if

          Figure 15: Computation of Disaggregated Prefixes

Each disaggregated prefix is sent with the corresponding
path_distance.  This allows a node to send the same South TIE to each
south neighbor.  The south neighbor which is connected to that prefix
will thus have a shorter path.

Finally, to summarize the less obvious points partially omitted in
the algorithms to keep them more tractable:

1.  all neighbor relationships MUST perform backlink checks.

2.  overload bits as introduced in Section 4.3.1 have to be
    respected during the computation.

3.  all the lower level nodes are flooded the same disaggregated
    prefixes since we don't want to build a South TIE per node and
    complicate things unnecessarily.  The lower level node that can
    compute a southbound route to the prefix will prefer it to the
    disaggregated route anyway, based on route preference rules.

4.  positively disaggregated prefixes do NOT have to propagate to
    lower levels.  With that, the disturbance in terms of new
    flooding is contained to the single level experiencing failures.

5.  disaggregated Prefix South TIEs are not "reflected" by the lower
    level, i.e. nodes within the same level do NOT need to be aware
    which node computed the need for disaggregation.

6.  the fabric still supports maximum load balancing properties
    while not trying to send traffic northbound unless necessary.

In case positive disaggregation is triggered, then due to the very
stable but un-synchronized nature of the algorithm the nodes may
issue the necessary disaggregated prefixes at different points in
time.  This can lead for a short time to an "incast" behavior where
the first advertising router will, due to the nature of longest
prefix match, attract all the traffic.  An implementation MAY hence
choose different strategies to address this behavior if needed.

To close this section it is worth observing that in a single plane
ToF this disaggregation prevents blackholing up to (K_LEAF * P) link
failures in terms of Section 4.1.2 or, in other terms, it takes at
minimum that many link failures to partition the ToF into multiple
planes.

4.2.5.2.  Negative, Transitive Disaggregation for Fallen Leaves

As explained in Section 4.1.3, failures in a multi-plane Top-of-
Fabric or more than (K_LEAF * P) links failing in a single plane
design can generate fallen leaves.  Such a scenario cannot be
addressed by positive disaggregation only and needs a further
mechanism.

4.2.5.2.1.  Cabling of Multiple Top-of-Fabric Planes

Let us return in this section to designs with multiple planes as
shown in Figure 3.
Figure 16 highlights how the ToF is cabled in case of two planes by
means of dual-rings to distribute all the North TIEs within both
planes.  For people familiar with traditional link-state routing
protocols, the ToF level can be considered equivalent to area 0 in
OSPF or level-2 in IS-IS, which needs to be "connected" as well for
the protocol to operate correctly.

. ++==========++            ++==========++
. II          II            II          II
.+----++--+ +----++--+ +----++--+ +----++--+
.|ToF   A1| |ToF   B1| |ToF   B2| |ToF   A2|
.++-+-++--+ ++-+-++--+ ++-+-++--+ ++-+-++--+
. | |  II    | |  II    | |  II    | |  II
. | |  ++==========++   | |  ++==========++
. | |                   | |
.
.  ~~~ Highlighted ToF of the previous multi-plane figure ~~

           Figure 16: Topologically Connected Planes

As described in Section 4.1.3, failures in multi-plane fabrics can
lead to blackholes which normal positive disaggregation cannot fix.
The mechanism of negative, transitive disaggregation incorporated in
RIFT provides the corresponding solution.

4.2.5.2.2.  Transitive Advertisement of Negative Disaggregates

A ToF node that discovers that it cannot reach a fallen leaf
disaggregates all the prefixes of such leaves.  For that purpose it
uses negative prefix South TIEs that are, as usual, flooded
southwards with the scope defined in Section 4.2.3.4.

Transitively, a node explicitly loses connectivity to a prefix when
none of its children advertises it and when the prefix is negatively
disaggregated by all of its parents.  When that happens, the node
originates the negative prefix further down south.  Since the
mechanism applies recursively south, the negative prefix may
propagate transitively all the way down to the leaf.  This is
necessary since leaves connected to multiple planes by means of
disjoint paths may have to choose the correct plane already at the
very bottom of the fabric to make sure that they don't send traffic
towards another leaf using a plane where it is "fallen", at which
point a blackhole is unavoidable.

When the connectivity is restored, a node that disaggregated a prefix
withdraws the negative disaggregation by the usual mechanism of re-
advertising TIEs omitting the negative prefix.

4.2.5.2.3.  Computation of Negative Disaggregates

So far the document has omitted the description of the computation
necessary to generate the correct set of negative prefixes.  Negative
prefixes can in fact be advertised due to two different triggers.  We
describe them consecutively.

The first origination reason is a computation that uses all the node
North TIEs to build the set of all reachable nodes by reachability
computation over the complete graph, including ToF links.  The
computation uses the node itself as root.  This is compared with the
result of the normal southbound SPF as described in Section 4.2.4.2.
The difference is the set of fallen leaves; all their attached
prefixes are advertised as negative prefixes southbound if the node
does not see the prefix as reachable within the southbound SPF.
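A minimal, non-normative sketch of this first trigger follows.  The
helper inputs (the node sets reachable over the complete graph and
over the southbound SPF, a prefix-to-originator mapping, and the set
of prefixes reachable southbound) are illustrative assumptions:

    def first_trigger_negatives(full_graph_reachable: set,
                                sspf_reachable: set,
                                prefixes_of: dict,
                                sspf_reachable_prefixes: set) -> set:
        """Fallen leaves are nodes reachable over the complete graph but
        not via southbound SPF; their prefixes are candidates for
        negative disaggregation unless still reachable southbound."""
        fallen = full_graph_reachable - sspf_reachable
        candidates = {p for node in fallen
                      for p in prefixes_of.get(node, ())}
        return candidates - sspf_reachable_prefixes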
The second mechanism hinges on the understanding of how the negative
prefixes are used within the computation, as described in Figure 17.
When attaching the negative prefixes, at a certain point in time the
negative prefix may find itself with all the viable nodes from the
shorter-match nexthop being pruned.  In other words, all its
northbound neighbors provided a negative prefix advertisement.  This
is the trigger to advertise this negative prefix transitively south;
it is normally caused by the node being in a plane where the prefix
belongs to a fabric leaf that has "fallen" in this plane.  Obviously,
when one of the northbound switches withdraws its negative
advertisement, the node has to withdraw its transitively provided
negative prefix as well.

4.2.6.  Attaching Prefixes

After SPF is run, it is necessary to attach the resulting
reachability information in the form of prefixes.  For S-SPF,
prefixes from a North TIE are attached to the originating node with
that node's next-hop set and a distance equal to the prefix's cost
plus the node's minimized path distance.  The RIFT route database, a
set of (prefix, prefix-type, attributes, path_distance, next-hop
set), accumulates these results.

In case of N-SPF, prefixes from each South TIE need to also be added
to the RIFT route database.  The N-SPF is really just a stub, so the
computing node needs simply to determine, for each prefix in a South
TIE that originated from an adjacent node, what next-hops to use to
reach that node.  Since there may be parallel links, the next-hops to
use can be a set; the presence of the computing node in the
associated Node South TIE is sufficient to verify that at least one
link has bidirectional connectivity.  The set of minimum cost
next-hops from the computing node X to the originating adjacent node
is determined.

Each prefix has its cost adjusted before being added into the RIFT
route database.  The cost of the prefix is set to the cost received
plus the cost of the minimum distance next-hop to that neighbor,
while taking into account its attributes such as mobility per
Section 4.3.3.  Then each prefix can be added into the RIFT route
database with the next-hop set; ties are broken based upon type
first, then distance, and further on `PrefixAttributes`, and only the
best combination is used for forwarding.  RIFT route preferences are
normalized by the corresponding Thrift [thrift] model type.

An example implementation for node X follows:

 for each South TIE
     if South TIE.level > X.level
         next_hop_set = set of minimum cost links to the
                        South TIE.originator
         next_hop_cost = minimum cost link to
                         South TIE.originator
     end if
     for each prefix P in the South TIE
         P.cost = P.cost + next_hop_cost
         if P not in route_database:
             add (P, P.cost, P.type,
                  P.attributes, next_hop_set) to route_database
         end if
         if (P in route_database):
             if route_database[P].cost > P.cost or
                route_database[P].type > P.type:
                 update route_database[P] with (P, P.type, P.cost,
                                                P.attributes,
                                                next_hop_set)
             else if route_database[P].cost == P.cost and
                     route_database[P].type == P.type:
                 update route_database[P] with (P, P.type,
                     P.cost, P.attributes,
                     merge(next_hop_set, route_database[P].next_hop_set))
             else
                 // Not preferred route so ignore
             end if
         end if
     end for
 end for

   Figure 17: Adding Routes from South TIE Positive and Negative
                             Prefixes

After the positive prefixes are attached and tie-broken, negative
prefixes are attached and used in case of northbound computation,
ideally from the shortest length to the longest.  The nexthop
adjacencies for a negative prefix are inherited from the longest
positive prefix that aggregates it, and subsequently adjacencies to
nodes that advertised negative for this prefix are removed.
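A non-normative sketch of this attachment step follows; the shape of
`positive_rib` (a mapping from positive prefixes to next-hop sets) is
an illustrative assumption:

    import ipaddress

    def attach_negative_prefix(prefix: str, positive_rib: dict,
                               negative_advertisers: set) -> set:
        """Inherit next hops from the longest positive prefix aggregating
        `prefix`, then remove ("punch holes" for) the nodes that
        advertised it negatively."""
        net = ipaddress.ip_network(prefix)
        aggregates = [p for p in positive_rib
                      if net.subnet_of(ipaddress.ip_network(p))]
        if not aggregates:
            return set()  # nothing to inherit from
        parent = max(aggregates,
                     key=lambda p: ipaddress.ip_network(p).prefixlen)
        return set(positive_rib[parent]) - negative_advertisers

For instance, with positive_rib = {"::/0": {"S1", "S2", "S3", "S4"},
"2001:db8::/32": {"S2", "S3", "S4"}} a call of
attach_negative_prefix("2001:db8:1::/48", positive_rib, {"S2"})
returns {"S3", "S4"}, matching the RIB/FIB walk-through that follows.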
The rule of inheritance MUST be maintained when the nexthop list for
a prefix is modified, as the modification may affect the entries for
matching negative prefixes of immediately longer prefix length.  For
instance, if a nexthop is added, then by inheritance it must be added
to all the negative routes of immediately longer prefix length,
unless it is pruned due to a negative advertisement for the same next
hop.  Similarly, if a nexthop is deleted for a given prefix, then it
is deleted for all the immediately aggregated negative routes.  This
will recurse in the case of nested negative prefix aggregations.

The rule of inheritance must also be maintained when a new prefix of
intermediate length is inserted, or when the immediately aggregating
prefix is deleted from the routing table, making an even shorter
aggregating prefix the one from which the negative routes now inherit
their adjacencies.  As the aggregating prefix changes, all the
negative routes must be recomputed, and then again the process may
recurse in case of nested negative prefix aggregations.

Although these operations can be computationally expensive, the
overall load on devices in the network is low because these
computations are not run very often, as positive route advertisements
are always preferred over negative ones.  This prevents recursion in
most cases because positive reachability information never inherits
next hops.

To make the negative disaggregation less abstract and provide an
example, let us consider a ToP node T1 with 4 ToF parents S1..S4 as
represented in Figure 18:

  +----+    +----+    +----+    +----+       N
  | S1 |    | S2 |    | S3 |    | S4 |       ^
  +----+    +----+    +----+    +----+    W< + >E
    |         |         |         |          v
    |+--------+         |         |          S
    ||+-----------------+         |
    |||+--------------------------+
    ||||
  +----+
  | T1 |
  +----+

             Figure 18: A ToP Node with 4 Parents

If all ToF nodes can reach all the prefixes in the network, then with
RIFT they will normally advertise a default route south.  An abstract
Routing Information Base (RIB), more commonly known as a routing
table, stores all types of maintained routes including the negative
ones and "tie-breaks" for the best one, whereas an abstract
Forwarding table (FIB) retains only the ultimately computed
"positive" routing instructions.  In T1, those tables would look as
illustrated in Figure 19:

 +---------+
 | Default |
 +---------+
     |
     |    +--------+
     +---> | Via S1 |
     |    +--------+
     |
     |    +--------+
     +---> | Via S2 |
     |    +--------+
     |
     |    +--------+
     +---> | Via S3 |
     |    +--------+
     |
     |    +--------+
     +---> | Via S4 |
          +--------+

                   Figure 19: Abstract RIB

In case T1 receives a negative advertisement for prefix 2001:db8::/32
from S1, a negative route is stored in the RIB (indicated by a "~"
sign), while the more specific routes to the complementing ToF nodes
are installed in the FIB.
RIB and FIB in T1 now look as illustrated in Figure 20 and Figure 21,
respectively:

 +---------+                +----------------+
 | Default |  <-----------  | ~2001:db8::/32 |
 +---------+                +----------------+
     |                          |
     |    +--------+            |    +--------+
     +---> | Via S1 |           +---> | Via S1 |
     |    +--------+                 +--------+
     |
     |    +--------+
     +---> | Via S2 |
     |    +--------+
     |
     |    +--------+
     +---> | Via S3 |
     |    +--------+
     |
     |    +--------+
     +---> | Via S4 |
          +--------+

   Figure 20: Abstract RIB after Negative 2001:db8::/32 from S1

The negative 2001:db8::/32 prefix entry inherits from ::/0, so the
positive more specific routes are the complements to S1 in the set of
next-hops for the default route.  That entry is composed of S2, S3,
and S4, or, in other words, it uses all entries of the default route
with a "hole punched" for S1 into them.  These are the next hops that
are still available to reach 2001:db8::/32, now that S1 advertised
that it will not forward 2001:db8::/32 anymore.  Ultimately, those
resulting next-hops are installed in the FIB for the more specific
route to 2001:db8::/32 as illustrated below:

 +---------+              +---------------+
 | Default |              | 2001:db8::/32 |
 +---------+              +---------------+
     |                        |
     |    +--------+          |
     +---> | Via S1 |         |
     |    +--------+          |
     |                        |
     |    +--------+          |    +--------+
     +---> | Via S2 |         +---> | Via S2 |
     |    +--------+          |    +--------+
     |                        |
     |    +--------+          |    +--------+
     +---> | Via S3 |         +---> | Via S3 |
     |    +--------+          |    +--------+
     |                        |
     |    +--------+          |    +--------+
     +---> | Via S4 |         +---> | Via S4 |
          +--------+               +--------+

   Figure 21: Abstract FIB after Negative 2001:db8::/32 from S1

To illustrate matters further, let us consider T1 receiving a
negative advertisement for prefix 2001:db8:1::/48 from S2, which is
again stored in the RIB.  After the update, the RIB in T1 is
illustrated in Figure 22:

 +---------+       +----------------+       +------------------+
 | Default | <---- | ~2001:db8::/32 | <---- | ~2001:db8:1::/48 |
 +---------+       +----------------+       +------------------+
     |                 |                        |
     |    +--------+   |    +--------+          |
     +---> | Via S1 |  +---> | Via S1 |         |
     |    +--------+        +--------+          |
     |                                          |
     |    +--------+                            |    +--------+
     +---> | Via S2 |                           +---> | Via S2 |
     |    +--------+                                 +--------+
     |
     |    +--------+
     +---> | Via S3 |
     |    +--------+
     |
     |    +--------+
     +---> | Via S4 |
          +--------+

   Figure 22: Abstract RIB after Negative 2001:db8:1::/48 from S2

Negative 2001:db8:1::/48 inherits from 2001:db8::/32 now, so the
positive more specific routes are the complements to S2 in the set of
next hops for 2001:db8::/32, which are S3 and S4, or, in other words,
all entries of the parent with the negative holes "punched in" again.
After the update, the FIB in T1 shows as illustrated in Figure 23:

 +---------+       +---------------+       +-----------------+
 | Default |       | 2001:db8::/32 |       | 2001:db8:1::/48 |
 +---------+       +---------------+       +-----------------+
     |                 |                       |
     |    +--------+   |                       |
     +---> | Via S1 |  |                       |
     |    +--------+   |                       |
     |                 |                       |
     |    +--------+   |    +--------+         |
     +---> | Via S2 |  +---> | Via S2 |        |
     |    +--------+   |    +--------+         |
     |                 |                       |
     |    +--------+   |    +--------+         |    +--------+
     +---> | Via S3 |  +---> | Via S3 |        +---> | Via S3 |
     |    +--------+   |    +--------+         |    +--------+
     |                 |                       |
     |    +--------+   |    +--------+         |    +--------+
     +---> | Via S4 |  +---> | Via S4 |        +---> | Via S4 |
          +--------+        +--------+              +--------+

   Figure 23: Abstract FIB after Negative 2001:db8:1::/48 from S2

Further, let us say that S3 stops advertising its service as a
default gateway.  The entry is removed from the RIB as usual.  In
order to update the FIB, it is necessary to eliminate the FIB entry
for the default route, as well as all the FIB entries that were
created for negative routes pointing to the RIB entry being removed
(::/0).  This is done recursively for 2001:db8::/32 and then for
2001:db8:1::/48.  The related FIB entries via S3 are removed, as
illustrated in Figure 24.

 +---------+       +---------------+       +-----------------+
 | Default |       | 2001:db8::/32 |       | 2001:db8:1::/48 |
 +---------+       +---------------+       +-----------------+
     |                 |                       |
     |    +--------+   |                       |
     +---> | Via S1 |  |                       |
     |    +--------+   |                       |
     |                 |                       |
     |    +--------+   |    +--------+         |
     +---> | Via S2 |  +---> | Via S2 |        |
     |    +--------+   |    +--------+         |
     |                 |                       |
     |    +--------+   |    +--------+         |    +--------+
     +---> | Via S4 |  +---> | Via S4 |        +---> | Via S4 |
          +--------+        +--------+              +--------+

             Figure 24: Abstract FIB after Loss of S3

Say that, at that time, S4 would also disaggregate prefix
2001:db8:1::/48.  This would mean that the FIB entry for
2001:db8:1::/48 becomes a discard route, and that would be the signal
for T1 to disaggregate prefix 2001:db8:1::/48 negatively in a
transitive fashion with its own children.

Finally, let us look at the case where S3 becomes available again as
a default gateway, and a negative advertisement is received from S4
about prefix 2001:db8:2::/48 as opposed to 2001:db8:1::/48.  Again, a
negative route is stored in the RIB, and the more specific routes to
the complementing ToF nodes are installed in the FIB.  Since
2001:db8:2::/48 inherits from 2001:db8::/32, the positive FIB routes
are chosen by removing S4 from S2, S3, S4.
The abstract FIB in T1 now shows as illustrated in Figure 25:

                                           +-----------------+
                                           | 2001:db8:2::/48 |
                                           +-----------------+
                                                    |
 +---------+       +---------------+       +-----------------+
 | Default |       | 2001:db8::/32 |       | 2001:db8:1::/48 |
 +---------+       +---------------+       +-----------------+
     |                 |                       |    |
     |    +--------+   |                       |    |    +--------+
     +---> | Via S1 |  |                       |    +---> | Via S2 |
     |    +--------+   |                       |    |    +--------+
     |                 |                       |    |
     |    +--------+   |    +--------+         |    |    +--------+
     +---> | Via S2 |  +---> | Via S2 |        |    +---> | Via S3 |
     |    +--------+   |    +--------+         |         +--------+
     |                 |                       |
     |    +--------+   |    +--------+         |    +--------+
     +---> | Via S3 |  +---> | Via S3 |        +---> | Via S3 |
     |    +--------+   |    +--------+         |    +--------+
     |                 |                       |
     |    +--------+   |    +--------+         |    +--------+
     +---> | Via S4 |  +---> | Via S4 |        +---> | Via S4 |
          +--------+        +--------+              +--------+

   Figure 25: Abstract FIB after Negative 2001:db8:2::/48 from S4

4.2.7.  Optional Zero Touch Provisioning (ZTP)

Each RIFT node can operate in zero touch provisioning (ZTP) mode,
i.e. it has no configuration (unless it is a ToF or it is configured
to operate in the overall topology as a leaf and/or to support
leaf-2-leaf procedures) and it will fully configure itself after
being attached to the topology.  Configured nodes and nodes operating
in ZTP can be mixed and will form a valid topology if achievable.

The derivation of the level of each node happens based on offers
received from its neighbors, whereby each node (with possible
exceptions of configured leaves) tries to attach at the highest
possible point in the fabric.  This guarantees that even if the
diffusion front reaches a node from "below" faster than from "above",
it will greedily abandon an already negotiated level derived from
nodes topologically below it and properly peer with nodes above.

The fabric is very consciously numbered from the top to allow for
PoDs of different heights and to minimize the amount of provisioning
necessary, in this case just a TOP_OF_FABRIC flag on every node at
the top of the fabric.

This section describes the necessary concepts and procedures for ZTP
operation.

4.2.7.1.  Terminology

The interdependencies between the different flags and the configured
level can be somewhat vexing at first and it may take multiple reads
of the glossary to comprehend them.

Automatic Level Derivation:  Procedures which allow nodes without a
   configured level to derive it automatically.  Only applied if
   CONFIGURED_LEVEL is undefined.

UNDEFINED_LEVEL:  A "null" value that indicates that the level has
   not been determined and has not been configured.  Schemas normally
   indicate that by a missing optional value without an available
   defined default.

LEAF_ONLY:  An optional configuration flag that can be configured on
   a node to make sure it never leaves the "bottom of the hierarchy".
   The TOP_OF_FABRIC flag and CONFIGURED_LEVEL cannot be defined at
   the same time as this flag.  It implies a CONFIGURED_LEVEL value
   of 0.

TOP_OF_FABRIC flag:  Configuration flag that MUST be provided to all
   Top-of-Fabric nodes.  LEAF_FLAG and CONFIGURED_LEVEL cannot be
   defined at the same time as this flag.  It implies a
   CONFIGURED_LEVEL value.
   In fact, it is basically a shortcut for configuring the same level
   at all Top-of-Fabric nodes, which is unavoidable since an initial
   'seed' is needed for other ZTP nodes to derive their level in the
   topology.  The flag plays an important role in fabrics with
   multiple planes to enable successful negative disaggregation
   (Section 4.2.5.2).

CONFIGURED_LEVEL:  A level value provided manually.  When this is
   defined (i.e. it is not an UNDEFINED_LEVEL) the node is not
   participating in ZTP.  The TOP_OF_FABRIC flag is ignored when this
   value is defined.  LEAF_ONLY can be set only if this value is
   undefined or set to 0.

DERIVED_LEVEL:  Level value computed via automatic level derivation
   when CONFIGURED_LEVEL is equal to UNDEFINED_LEVEL.

LEAF_2_LEAF:  An optional flag that can be configured on a node to
   make sure it supports procedures defined in Section 4.3.8.  In a
   strict sense it is a capability that implies LEAF_ONLY and the
   corresponding restrictions.  The TOP_OF_FABRIC flag is ignored
   when set at the same time as this flag.

LEVEL_VALUE:  In the ZTP case the original definition of "level" in
   Section 3.1 is both extended and relaxed.  First, level is now
   defined as LEVEL_VALUE and is the first defined value of
   CONFIGURED_LEVEL followed by DERIVED_LEVEL.  Second, it is
   possible for nodes to be more than one level apart to form
   adjacencies if any of the nodes is at least LEAF_ONLY.

Valid Offered Level (VOL):  A neighbor's level received on a valid
   LIE (i.e. passing all checks for adjacency formation while
   disregarding all clauses involving level values), persisting for
   the duration of the holdtime interval on the LIE.  Observe that
   offers from nodes offering a level value of 0 do not constitute
   VOLs (since no valid DERIVED_LEVEL can be obtained from those and
   consequently `not_a_ztp_offer` MUST be ignored).  Offers from LIEs
   with `not_a_ztp_offer` being true are not VOLs either.  If a node
   maintains parallel adjacencies to the neighbor, the VOL on each
   adjacency is considered equivalent, i.e. the newest VOL from any
   such adjacency updates the VOL received from the same node.

Highest Available Level (HAL):  Highest defined level value seen from
   all VOLs received.

Highest Available Level Systems (HALS):  Set of nodes offering HAL
   VOLs.

Highest Adjacency Three Way (HAT):  Highest neighbor level of all the
   formed three-way adjacencies for the node.

4.2.7.2.  Automatic SystemID Selection

RIFT nodes require a 64-bit SystemID which SHOULD be derived as
EUI-64 from an MA-L according to [EUI64].  The organizationally
governed portion of this ID (24 bits) can be used to generate
multiple IDs if required to indicate more than one RIFT instance.

As a matter of operational concern, the router MUST ensure that such
an identifier is not changing very frequently (or at least not
without sending all its TIEs with fairly short lifetimes) since
otherwise the network may be left with large amounts of stale TIEs in
other nodes (though this is not necessarily a serious problem if the
procedures described in Section 7 are implemented).

4.2.7.3.  Generic Fabric Example

ZTP forces us to think about miscabled or unusually cabled fabrics
and how such a topology can be forced into a "lattice" structure
which a fabric represents (with further restrictions).
Let us consider a 3708 necessary and sufficient physical cabling in Figure 26. We assume 3709 all nodes being in the same PoD. 3711 . +---+ 3712 . | A | s = TOP_OF_FABRIC 3713 . | s | l = LEAF_ONLY 3714 . ++-++ l2l = LEAF_2_LEAF 3715 . | | 3716 . +--+ +--+ 3717 . | | 3718 . +--++ ++--+ 3719 . | E | | F | 3720 . | +-+ | +-----------+ 3721 . ++--+ | ++-++ | 3722 . | | | | | 3723 . | +-------+ | | 3724 . | | | | | 3725 . | | +----+ | | 3726 . | | | | | 3727 . ++-++ ++-++ | 3728 . | I +-----+ J | | 3729 . | | | +-+ | 3730 . ++-++ +--++ | | 3731 . | | | | | 3732 . +---------+ | +------+ | 3733 . | | | | | 3734 . +-----------------+ | | 3735 . | | | | | 3736 . ++-++ ++-++ | 3737 . | X +-----+ Y +-+ 3738 . |l2l| | l | 3739 . +---+ +---+ 3741 Figure 26: Generic ZTP Cabling Considerations 3743 First, we must anchor the "top" of the cabling and that's what the 3744 TOP_OF_FABRIC flag at node A is for. Then things look smooth until 3745 we have to decide whether node Y is at the same level as I, J (and as 3746 consequence, X is south of it) or at the same level as X. This is 3747 unresolvable here until we "nail down the bottom" of the topology. 3748 To achieve that we choose to use in this example the leaf flags in X 3749 and Y. In case where Y would not have a leaf flag it will try to 3750 elect highest level offered and end up being in same level as I and 3751 J. 3753 4.2.7.4. Level Determination Procedure 3755 A node starting up with UNDEFINED_VALUE (i.e. without a 3756 CONFIGURED_LEVEL or any leaf or TOP_OF_FABRIC flag) MUST follow those 3757 additional procedures: 3759 1. It advertises its LEVEL_VALUE on all LIEs (observe that this can 3760 be UNDEFINED_LEVEL which in terms of the schema is simply an 3761 omitted optional value). 3763 2. It computes HAL as numerically highest available level in all 3764 VOLs. 3766 3. It chooses then MAX(HAL-1,0) as its DERIVED_LEVEL. The node then 3767 starts to advertise this derived level. 3769 4. A node that lost all adjacencies with HAL value MUST hold down 3770 computation of new DERIVED_LEVEL for a short period of time 3771 unless it has no VOLs from southbound adjacencies. After the 3772 holddown expired, it MUST discard all received offers, recompute 3773 DERIVED_LEVEL and announce it to all neighbors. 3775 5. A node MUST reset any adjacency that has changed the level it is 3776 offering and is in three-way state. 3778 6. A node that changed its defined level value MUST readvertise its 3779 own TIEs (since the new `PacketHeader` will contain a different 3780 level than before). Sequence number of each TIE MUST be 3781 increased. 3783 7. After a level has been derived the node MUST set the 3784 `not_a_ztp_offer` on LIEs towards all systems offering a VOL for 3785 HAL. 3787 8. A node that changed its level SHOULD flush from its link state 3788 database TIEs of all other nodes, otherwise stale information may 3789 persist on "direction reversal", i.e. nodes that seemed south 3790 are now north or east-west. This will not prevent the correct 3791 operation of the protocol but could be slightly confusing 3792 operationally. 3794 A node starting with LEVEL_VALUE being 0 (i.e. it assumes a leaf 3795 function by being configured with the appropriate flags or has a 3796 CONFIGURED_LEVEL of 0) MUST follow those additional procedures: 3798 1. It computes HAT per procedures above but does NOT use it to 3799 compute DERIVED_LEVEL. HAT is used to limit adjacency formation 3800 per Section 4.2.2. 3802 It MAY also follow modified procedures: 3804 1. 
It may pick a different strategy to choose the VOL, e.g. use the
    VOL value with the highest number of VOLs.  Such strategies are
    only possible since the node always remains "at the bottom of
    the fabric" while another layer could "invert" the fabric by
    picking its preferred VOL in a different fashion than always
    trying to achieve the highest viable level.
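The level determination procedure above (compute HAL from the VOLs,
then derive MAX(HAL-1,0)) can be sketched compactly.  This is a
non-normative illustration; the shape of `offers` is an assumption
and the holddown handling of step 4 is omitted:

    UNDEFINED_LEVEL = None

    def derive_level(configured_level, offers):
        """offers: mapping neighbor -> (offered_level, not_a_ztp_offer,
        valid).  Returns CONFIGURED_LEVEL if set, else MAX(HAL-1, 0),
        else UNDEFINED_LEVEL."""
        if configured_level is not None:
            return configured_level  # a configured node does not run ZTP
        vols = [level for (level, not_ztp, valid) in offers.values()
                if valid and not not_ztp and level not in (None, 0)]
        if not vols:
            return UNDEFINED_LEVEL  # no VOLs: level remains undefined
        return max(max(vols) - 1, 0)  # max(vols) is the HAL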
computation 3930 o LostHAT: lost HAT in computation 3931 o ComputationDone: computation performed 3933 o HoldDownExpired: holddown expired 3935 o ShortTic: one second timer tick, to be ignored if transition does 3936 not exist 3938 Actions 3940 on ShortTic in HoldingDown finishes in HoldingDown: remove expired 3941 offers and if holddown timer expired PUSH_EVENT HoldDownExpired 3943 on ShortTic in ComputeBestOffer finishes in ComputeBestOffer: 3944 remove expired offers 3946 on HoldDownExpired in HoldingDown finishes in ComputeBestOffer: 3947 PURGE_OFFERS 3949 on ChangeLocalConfiguredLevel in HoldingDown finishes in 3950 ComputeBestOffer: store configured level 3952 on ShortTic in UpdatingClients finishes in UpdatingClients: remove 3953 expired offers 3955 on BetterHAT in ComputeBestOffer finishes in ComputeBestOffer: 3956 LEVEL_COMPUTE 3958 on BetterHAL in HoldingDown finishes in HoldingDown: no action 3960 on ChangeLocalHierarchyIndications in HoldingDown finishes in 3961 ComputeBestOffer: store leaf flags 3963 on BetterHAT in UpdatingClients finishes in ComputeBestOffer: no 3964 action 3966 on BetterHAL in UpdatingClients finishes in ComputeBestOffer: no 3967 action 3969 on ChangeLocalHierarchyIndications in UpdatingClients finishes in 3970 ComputeBestOffer: store leaf flags 3972 on LostHAL in HoldingDown finishes in HoldingDown: 3974 on LostHAT in ComputeBestOffer finishes in ComputeBestOffer: 3975 LEVEL_COMPUTE 3977 on LostHAT in HoldingDown finishes in HoldingDown: no action 3978 on BetterHAT in HoldingDown finishes in HoldingDown: no action 3980 on NeighborOffer in UpdatingClients finishes in UpdatingClients: 3982 if no level offered then REMOVE_OFFER 3984 else 3986 if offered level > leaf then UPDATE_OFFER 3988 else REMOVE_OFFER 3990 on LostHAL in ComputeBestOffer finishes in HoldingDown: if any 3991 southbound adjacencies present then update holddown timer to 3992 normal duration else fire holddown timer immediately 3994 on LostHAL in UpdatingClients finishes in HoldingDown: if any 3995 southbound adjacencies present then update holddown timer to 3996 normal duration else fire holddown timer immediately 3998 on ComputationDone in ComputeBestOffer finishes in 3999 UpdatingClients: no action 4001 on LostHAT in UpdatingClients finishes in ComputeBestOffer: no 4002 action 4004 on ComputationDone in HoldingDown finishes in HoldingDown: 4006 on ChangeLocalConfiguredLevel in ComputeBestOffer finishes in 4007 ComputeBestOffer: store configured level and LEVEL_COMPUTE 4009 on ChangeLocalConfiguredLevel in UpdatingClients finishes in 4010 ComputeBestOffer: store configured level 4012 on NeighborOffer in ComputeBestOffer finishes in ComputeBestOffer: 4014 if no level offered then REMOVE_OFFER 4016 else 4018 if offered level > leaf then UPDATE_OFFER 4020 else REMOVE_OFFER 4022 on NeighborOffer in HoldingDown finishes in HoldingDown: 4024 if no level offered then REMOVE_OFFER 4025 else 4027 if offered level > leaf then UPDATE_OFFER 4029 else REMOVE_OFFER 4031 on ChangeLocalHierarchyIndications in ComputeBestOffer finishes in 4032 ComputeBestOffer: store leaf flags and LEVEL_COMPUTE 4034 on BetterHAL in ComputeBestOffer finishes in ComputeBestOffer: 4035 LEVEL_COMPUTE 4037 on Entry into UpdatingClients: update all LIE FSMs with 4038 computation results 4040 on Entry into ComputeBestOffer: LEVEL_COMPUTE 4042 Following words are used for well known procedures: 4044 1. PUSH Event: pushes an event to be executed by the FSM upon exit 4045 of this action 4047 2. 
COMPARE_OFFERS: checks, based on the current offers and the held 4048 last results, whether the events BetterHAL/LostHAL/BetterHAT/LostHAT are 4049 necessary, and returns them 4051 3. UPDATE_OFFER: store the current offer with the adjacency holdtime as 4052 lifetime and COMPARE_OFFERS, then PUSH the corresponding events 4054 4. LEVEL_COMPUTE: compute the best offered or configured level and HAL/ 4055 HAT; if anything changed, PUSH ComputationDone 4057 5. REMOVE_OFFER: remove the corresponding offer and COMPARE_OFFERS, then PUSH 4058 the corresponding events 4060 6. PURGE_OFFERS: REMOVE_OFFER for all held offers, COMPARE_OFFERS, 4061 then PUSH the corresponding events 4063 4.2.7.6. Resulting Topologies 4065 The procedures defined in Section 4.2.7.4 will lead to the RIFT 4066 topology and levels depicted in Figure 27. 4068 . +---+ 4069 . | As| 4070 . | 24| 4071 . ++-++ 4072 . | | 4073 . +--+ +--+ 4074 . | | 4075 . +--++ ++--+ 4076 . | E | | F | 4077 . | 23+-+ | 23+-----------+ 4078 . ++--+ | ++-++ | 4079 . | | | | | 4080 . | +-------+ | | 4081 . | | | | | 4082 . | | +----+ | | 4083 . | | | | | 4084 . ++-++ ++-++ | 4085 . | I +-----+ J | | 4086 . | 22| | 22| | 4087 . ++--+ +--++ | 4088 . | | | 4089 . +---------+ | | 4090 . | | | 4091 . ++-++ +---+ | 4092 . | X | | Y +-+ 4093 . | 0 | | 0 | 4094 . +---+ +---+ 4096 Figure 27: Generic ZTP Topology Autoconfigured 4098 If we imagine the LEAF_ONLY restriction on Y being removed, however, the 4099 outcome would be very different and would result in Figure 28. 4100 This basically demonstrates that autoconfiguration makes miscabling 4101 detection hard and can therefore lead to undesirable effects in cases 4102 where leaves are not "nailed down" by the appropriately configured flags and are 4103 arbitrarily cabled. 4105 A node MAY analyze the outstanding level offers on its interfaces and 4106 generate warnings when its internal ruleset flags a possible 4107 miscabling. As an example, when a node sees ZTP level offers that 4108 differ by more than one level from its chosen level (with proper 4109 accounting for leaves being at level 0), this can indicate miscabling. 4111 . +---+ 4112 . | As| 4113 . | 24| 4114 . ++-++ 4115 . | | 4116 . +--+ +--+ 4117 . | | 4118 . +--++ ++--+ 4119 . | E | | F | 4120 . | 23+-+ | 23+-------+ 4121 . ++--+ | ++-++ | 4122 . | | | | | 4123 . | +-------+ | | 4124 . | | | | | 4125 . | | +----+ | | 4126 . | | | | | 4127 . ++-++ ++-++ +-+-+ 4128 . | I +-----+ J +-----+ Y | 4129 . | 22| | 22| | 22| 4130 . ++-++ +--++ ++-++ 4131 . | | | | | 4132 . | +-----------------+ | 4133 . | | | 4134 . +---------+ | | 4135 . | | | 4136 . ++-++ | 4137 . | X +--------+ 4138 . | 0 | 4139 . +---+ 4141 Figure 28: Generic ZTP Topology Autoconfigured 4143 4.2.8. Stability Considerations 4145 The autoconfiguration mechanism computes a global maximum of levels 4146 by diffusion. The achieved equilibrium can be disturbed massively by 4147 all nodes with the highest level either leaving or entering the domain 4148 (with some finer distinctions not explained further). It is 4149 therefore recommended that each node be multi-homed towards nodes 4150 with respective HAL offerings. Fortunately, this is the natural 4151 state of things for the topology variants considered in RIFT. 4153 4.3. Further Mechanisms 4155 4.3.1. Overload Bit 4157 The overload bit MUST be respected by all necessary SPF computations. 4158 A node with the overload bit set SHOULD advertise all locally hosted 4159 prefixes both northbound and southbound; all other southbound 4160 prefixes SHOULD NOT be advertised.
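A minimal, non-normative sketch of this advertisement rule may help implementers; it is written in Python under the assumption of a simple prefix database, and the helper names (`southbound_prefixes`, `locally_hosted`) are illustrative only, not part of the RIFT schema.

<CODE BEGINS>
# Non-normative sketch: southbound prefix advertisement under the
# overload bit. 'all_prefixes' and 'locally_hosted' are illustrative
# sets of prefixes; they are not RIFT schema structures.

def southbound_prefixes(all_prefixes, locally_hosted, overload_set):
    if not overload_set:
        # Without overload, southbound advertisement is unchanged.
        return set(all_prefixes)
    # With the overload bit set, locally hosted prefixes are still
    # advertised (north- and southbound); all other southbound
    # prefixes SHOULD NOT be advertised.
    return set(all_prefixes) & set(locally_hosted)
<CODE ENDS>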
4162 Leaf nodes SHOULD set the overload bit on all originated Node TIEs. 4163 If spine nodes were to forward traffic not intended for the local 4164 node, the leaf node would not be able to prevent routing/forwarding 4165 loops as it does not have the necessary topology information to do 4166 so. 4168 4.3.2. Optimized Route Computation on Leaves 4170 Leaf nodes only have visibility to directly connected nodes and 4171 therefore are not required to run "full" SPF computations. Instead, 4172 prefixes from neighboring nodes can be gathered to run a "partial" 4173 SPF computation in order to build the routing table. 4175 Leaf nodes SHOULD only hold their own N-TIEs, and in cases of L2L 4176 implementations, the N-TIEs of their East/West neighbors. Leaf nodes 4177 MUST hold all S-TIEs from their neighbors. 4179 Normally, a node creates a full network graph based on its local N-TIEs and 4180 the remote S-TIEs it receives from neighbors, at which time 4181 the necessary SPF computations are performed. Instead, a leaf node can 4182 simply compute the minimum cost and next-hop set of each 4183 neighbor by examining its local adjacencies. Associated N-TIEs are 4184 used to determine bi-directionality and derive the next-hop set. 4185 Cost is then derived from the minimum cost of the local adjacency to 4186 the neighbor and the prefix cost. 4188 Leaf nodes would then attach the necessary prefixes as described in 4189 Section 4.2.6. 4191 4.3.3. Mobility 4193 The RIFT control plane MUST maintain the real time status of every 4194 prefix, to which port it is attached, and to which leaf node that 4195 port belongs. This is still true in cases of IP mobility where the 4196 point of attachment may change several times a second. 4198 There are two classic approaches to explicitly maintain this 4199 information: 4201 timestamp: With this method, the infrastructure SHOULD record the 4202 precise time at which the movement is observed. One key advantage 4203 of this technique is that it has no dependency on the mobile 4204 device. One drawback is that the infrastructure MUST be precisely 4205 synchronized in order to be able to compare timestamps as the 4206 points of attachment change. This could be accomplished by 4207 utilizing Precision Time Protocol (PTP) IEEE Std. 1588 4208 [IEEEstd1588] or 802.1AS [IEEEstd8021AS] which is designed for 4209 bridged LANs. Both the precision of the synchronization protocol 4210 and the resolution of the timestamp must beat the highest possible 4211 roaming time on the fabric. Another drawback is that the presence 4212 of a mobile device may only be observed asynchronously, such as 4213 when it starts using an IP protocol like ARP [RFC0826], IPv6 4214 Neighbor Discovery [RFC4861], IPv6 Stateless Address Configuration 4215 [RFC4862], DHCP [RFC2131], or DHCPv6 [RFC8415]. 4217 sequence counter: With this method, a mobile device notifies its 4218 point of attachment on arrival with a sequence counter that is 4219 incremented upon each movement. On the positive side, this method 4220 does not have a dependency on a precise sense of time, since the 4221 sequence of movements is kept in order by the mobile device. The 4222 disadvantage of this approach is the lack of support, in the protocols 4223 that the mobile device may use to register its presence with 4224 the leaf node, for providing a sequence counter. 4225 Well-known issues with sequence counters such as wrapping and 4226 comparison rules MUST be addressed properly.
Sequence numbers 4227 MUST be compared by a single homogeneous source to make operation 4228 feasible. Sequence number comparison from multiple heterogeneous 4229 sources would be extremely difficult to implement. 4231 RIFT supports a hybrid approach by using an optional 4232 'PrefixSequenceType' attribute (that we also call a 'monotonic 4233 clock') that consists of a timestamp and an optional sequence number 4234 field. When this attribute is present (observe that per the data schema 4235 the attribute itself is optional, but if it is included, the 4236 'timestamp' field is required): 4238 o The leaf node MAY advertise a timestamp of the latest sighting of 4239 a prefix, e.g., by snooping IP protocols or the node using the 4240 time at which it advertised the prefix. RIFT transports the 4241 timestamp within the desired prefix North TIEs as an 802.1AS 4242 timestamp. 4244 o RIFT MAY interoperate with "Registration Extensions for 6LoWPAN 4245 Neighbor Discovery" [RFC8505], which provides a method for 4246 registering a prefix with a sequence number called a Transaction 4247 ID (TID). In such cases, RIFT SHOULD transport the derived TID 4248 without modification. 4250 o RIFT also defines an abstract negative clock (ASNC) (also called 4251 an 'undefined' clock). ASNC MUST be considered older than any 4252 other defined clock. By default, when a node receives a prefix 4253 North TIE that does not contain a 'PrefixSequenceType' attribute, 4254 it MUST interpret the absence as ASNC. 4256 o Any prefix present on the fabric in multiple nodes that has the 4257 `same` clock is considered anycast. 4259 o The RIFT specification assumes that all nodes are synchronized 4260 to within at least 200 milliseconds of precision. This is achievable 4261 through the use of NTP [RFC5905]. An implementation MAY provide a 4262 way to reconfigure a domain to a different value; we call this 4263 variable MAXIMUM_CLOCK_DELTA. 4265 4.3.3.1. Clock Comparison 4267 All monotonic clock values MUST be compared to each other using the 4268 following rules: 4270 1. ASNC is older than any other value except ASNC AND 4272 2. Clocks with timestamps differing by more than MAXIMUM_CLOCK_DELTA 4273 are compared by using the timestamps only AND 4275 3. Clocks with timestamps differing by less than MAXIMUM_CLOCK_DELTA 4276 are compared by using their TIDs only AND 4278 4. An undefined TID is always older than any other TID AND 4280 5. TIDs are compared using the rules of [RFC8505]. 4282 4.3.3.2. Interaction between Time Stamps and Sequence Counters 4284 For attachment changes that occur less frequently (e.g. once per 4285 second), the timestamp that the RIFT infrastructure captures should 4286 be enough to determine the most current discovery. If the point of 4287 attachment changes faster than the maximum drift of the time stamping 4288 mechanism (i.e. MAXIMUM_CLOCK_DELTA), then a sequence number SHOULD 4289 be used to provide the necessary precision to determine currency. 4291 The sequence counter in [RFC8505] is encoded as one octet and wraps 4292 around using the arithmetic of Appendix A. 4294 Within the resolution of MAXIMUM_CLOCK_DELTA, sequence counter values 4295 captured during 2 sequential iterations of the same timestamp SHOULD 4296 be comparable. This means that with default values, a node may move 4297 up to 127 times in a 200 millisecond period and the clocks will 4298 remain comparable. This allows the RIFT infrastructure to explicitly 4299 assert the most up-to-date advertisement. 4301 4.3.3.3. Anycast vs.
Unicast 4303 A unicast prefix can be attached to at most one leaf, whereas an 4304 anycast prefix may be reachable via more than one leaf. 4306 If a monotonic clock attribute is provided on the prefix, then the 4307 prefix with the `newest` clock value is strictly preferred. An 4308 anycast prefix either does not carry a clock, or all its clock attributes MUST be 4309 the same under the rules of Section 4.3.3.1. 4311 Observe that in mobility events it is important that the leaf re- 4312 floods the absence of the prefix that moved away as quickly as 4313 possible. 4315 Observe further that without support for [RFC8505], movements on the 4316 fabric within intervals smaller than 100 msec will be seen as anycast. 4318 4.3.3.4. Overlays and Signaling 4320 RIFT is agnostic to any overlay technologies and their associated 4321 control and transports that run on top of it (e.g. VXLAN). It is 4322 expected that leaf nodes and possibly Top-of-Fabric nodes can perform 4323 the necessary data plane encapsulation. 4325 In the context of mobility, overlays provide another possible 4326 solution to avoid injecting mobile prefixes into the fabric as well 4327 as improving scalability of the deployment. It makes sense to 4328 consider overlays for mobility solutions in IP fabrics. As an 4329 example, a mobility protocol such as LISP [RFC6830] may inform the 4330 ingress leaf of the location of the egress leaf in real time. 4332 Another possibility is to consider mobility as an underlay 4333 service and support it in RIFT to an extent. The load on the fabric 4334 obviously increases with the amount of mobility, since a move forces 4335 flooding and computation on all nodes in the scope of the move, so 4336 tunneling from the leaf to the Top-of-Fabric may be desired to speed up 4337 convergence times. 4339 4.3.4. Key/Value Store 4341 4.3.4.1. Southbound 4343 RIFT supports the southbound distribution of key-value pairs that can 4344 be used to distribute information to facilitate higher levels of 4345 functionality (e.g. distribution of configuration information). KV 4346 South TIEs may arrive from multiple nodes, and therefore a node MUST execute 4347 the following tie-breaking rules for each key: 4349 1. Only KV TIEs received from nodes to which a bi-directional 4350 adjacency exists MUST be considered. 4352 2. For each valid KV South TIE that contains the same key, the 4353 value within the South TIE with the highest level will be 4354 preferred. If the levels are identical, the highest originating 4355 system ID will be preferred. In the case of overlapping keys in 4356 the winning South TIE, the behavior is undefined. 4358 Consider that if a node goes down, nodes south of it will lose the 4359 associated adjacencies, causing them to disregard the corresponding KVs. 4360 New KV South TIEs are advertised to prevent stale information being 4361 used by nodes that are farther south. KV advertisements southbound 4362 are not a result of independent computation by every node over the 4363 same set of South TIEs, but a diffused computation. 4365 4.3.4.2. Northbound 4367 Certain use cases necessitate distribution of essential KV 4368 information that is generated by the leaves in the northbound 4369 direction. Such information is flooded in KV North TIEs. Since the 4370 originator of the KV North TIEs is preserved during flooding, 4371 overlapping keys MAY be used. However, to avoid further protocol 4372 complexity, the same tie-breaking rules as used in southbound 4373 distribution SHOULD be used. 4375 4.3.5.
Interactions with BFD 4377 RIFT MAY incorporate BFD [RFC5881] to react quickly to link failures. 4378 In such a case, the following procedures are introduced: 4380 After RIFT three-way hello adjacency convergence, a BFD session MAY 4381 be formed automatically between the RIFT endpoints without further 4382 configuration, using the exchanged discriminators. The capability 4383 of the remote side to support BFD is carried in the LIEs. 4385 In case an established BFD session goes Down after it was Up, the RIFT 4386 adjacency SHOULD be re-initialized and subsequently restarted from 4387 Init after it sees a subsequent BFD Up. 4389 In case of parallel links between nodes, each link MAY run its own 4390 independent BFD session or they MAY share a session. 4392 If link identifiers or BFD capabilities change, both the LIE and 4393 any BFD sessions SHOULD be brought down and back up again. In 4394 case only the advertised capabilities change, the node MAY choose 4395 to persist the BFD session. 4397 Multiple RIFT instances MAY choose to share a single BFD session; 4398 in such cases, the behavior for which discriminators are used is 4399 undefined. However, RIFT MAY advertise the same link ID for the 4400 same interface in multiple instances to "share" discriminators. 4402 BFD TTL follows [RFC5082]. 4404 4.3.6. Fabric Bandwidth Balancing 4406 A well-understood problem in fabrics is that, in case of link 4407 failures, it would be ideal to rebalance how much traffic is sent to 4408 switches in the next level based on available ingress and egress 4409 bandwidth. 4411 RIFT supports a very lightweight mechanism that can deal with the 4412 problem in an approximate way based on the fact that RIFT is loop- 4413 free. 4415 4.3.6.1. Northbound Direction 4417 Every RIFT node SHOULD compute the amount of northbound bandwidth 4418 available through its neighbors at the higher level and modify the distance 4419 received on the default route from each such neighbor. Default routes with 4420 differing distances SHOULD be used to support weighted ECMP 4421 forwarding. We call such a distance Bandwidth Adjusted Distance or 4422 BAD. This is best illustrated by a simple example. 4424 . 100 x 100 100 MBits 4425 . | x | | 4426 . +-+---+-+ +-+---+-+ 4427 . | | | | 4428 . |Spin111| |Spin112| 4429 . +-+---+++ ++----+++ 4430 . |x || || || 4431 . || |+---------------+ || 4432 . || +---------------+| || 4433 . || || || || 4434 . || || || || 4435 . -----All Links 10 MBit------- 4436 . || || || || 4437 . || || || || 4438 . || +------------+| || || 4439 . || |+------------+ || || 4440 . |x || || || 4441 . +-+---+++ +--++-+++ 4442 . | | | | 4443 . |Leaf111| |Leaf112| 4444 . +-------+ +-------+ 4446 Figure 29: Balancing Bandwidth 4448 Figure 29 depicts an example topology where links between leaf and 4449 spine nodes are 10 MBit/s and links from spine nodes northbound are 4450 100 MBit/s. Consider a parallel link failure between Leaf 111 and 4451 Spine 111; as a result, Leaf 111 wants to forward more traffic 4452 toward Spine 112. Additionally, we consider an uplink failure on 4453 Spine 111. 4455 The local modification of the received default route distance from 4456 the upper level is achieved by running a relatively simple algorithm 4457 where the bandwidth is weighted exponentially, while the distance on 4458 the default route represents a multiplier for the bandwidth weight 4459 for easy operational adjustments.
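A minimal, non-normative Python sketch of the computation specified next may help; the bandwidth values and OVERSUBSCRIPTION_CONSTANT = 1 follow the example of Figure 29 and Table 5 below, and all helper names are illustrative only.

<CODE BEGINS>
# Non-normative sketch of the BAD computation defined below.
# 'neighbors' maps a northbound neighbor to (L_N_u, N_u), i.e. the
# bandwidth towards it and its own uplink bandwidth, in MBit/s.

OVERSUBSCRIPTION_CONSTANT = 1  # example value, as in Table 5

def m_of(t):
    # log_2(next_power_2(t)), i.e. ceil(log2(t)) for t >= 1
    return (t - 1).bit_length()

def bandwidth_adjusted_distances(neighbors, d=1):
    t = {n: l * OVERSUBSCRIPTION_CONSTANT + u
         for n, (l, u) in neighbors.items()}
    max_m = max(m_of(v) for v in t.values())
    return {n: d * (1 + max_m - m_of(v)) for n, v in t.items()}

# Leaf 111 after the parallel link failure and Spine 111's uplink loss:
print(bandwidth_adjusted_distances(
    {"Spine 111": (10, 100), "Spine 112": (20, 200)}))
# -> {'Spine 111': 2, 'Spine 112': 1}, matching Table 5
<CODE ENDS>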
4461 On a node L, use Node TIEs to compute for each non-overloaded 4462 northbound neighbor N the following 3 values: 4464 L_N_u: the sum of the bandwidth available to N 4466 N_u: the sum of the uplink bandwidth available on N 4468 T_N_u: the sum L_N_u * OVERSUBSCRIPTION_CONSTANT + N_u 4470 For each T_N_u determine the corresponding M_N_u as 4471 log_2(next_power_2(T_N_u)) and determine MAX_M_N_u as the maximum value 4472 of all such M_N_u values. 4474 For each default route advertised by a node N, modify the advertised 4475 distance D to BAD = D * (1 + MAX_M_N_u - M_N_u) and use BAD instead 4476 of distance D to weight-balance default forwarding towards N. 4478 For the example above, a simple table of values will help in 4479 understanding the concept. We assume that all default route 4480 distances are advertised with D=1 and that OVERSUBSCRIPTION_CONSTANT 4481 = 1. 4483 +---------+-----------+-------+-------+-----+ 4484 | Node | N | T_N_u | M_N_u | BAD | 4485 +---------+-----------+-------+-------+-----+ 4486 | Leaf111 | Spine 111 | 110 | 7 | 2 | 4487 +---------+-----------+-------+-------+-----+ 4488 | Leaf111 | Spine 112 | 220 | 8 | 1 | 4489 +---------+-----------+-------+-------+-----+ 4490 | Leaf112 | Spine 111 | 120 | 7 | 2 | 4491 +---------+-----------+-------+-------+-----+ 4492 | Leaf112 | Spine 112 | 220 | 8 | 1 | 4493 +---------+-----------+-------+-------+-----+ 4495 Table 5: BAD Computation 4497 If a calculation produces a result exceeding the range of the type, 4498 e.g. bandwidth, the result is set to the highest possible value for 4499 that type. 4501 BAD SHOULD only be computed for default routes. A node MAY compute 4502 and use BAD for any disaggregated prefixes or other RIFT routes. A 4503 node MAY use a different algorithm to weight northbound traffic based 4504 on bandwidth. If a different algorithm is used, its successful 4505 behavior MUST NOT depend on uniformity of the algorithm or 4506 synchronization of BAD computations across the fabric. E.g. it is 4507 conceivable that leaves could use real time link loads gathered by 4508 analytics to change the amount of traffic assigned to each default 4509 route next hop. 4511 Furthermore, a change in available bandwidth will only affect, at 4512 most, two levels down in the fabric, i.e. the blast radius of 4513 bandwidth adjustments is constrained no matter the fabric's height. 4515 4.3.6.2. Southbound Direction 4517 Due to its loop-free nature, during South SPF, a node MAY account for 4518 the maximum available bandwidth on nodes in lower levels and modify the 4519 amount of traffic offered to the next level's southbound nodes. It 4520 is worth considering that such computations may be more effective if 4521 standardized, but they do not have to be. As long as a packet continues 4522 to flow southbound, it will take some viable, loop-free path to reach 4523 its destination. 4525 4.3.7. Label Binding 4527 A node MAY advertise in its LIEs a locally significant, downstream- 4528 assigned, interface-specific label. One use of such a label is a 4529 hop-by-hop encapsulation allowing forwarding planes to be easily 4530 distinguished among multiple RIFT instances. 4532 4.3.8. Leaf to Leaf Procedures 4534 RIFT implementations SHOULD support special East-West adjacencies 4535 between leaf nodes.
Leaf nodes supporting these procedures MUST: 4537 advertise the LEAF_2_LEAF flag in their node capabilities AND 4539 set the overload bit on all of the leaf's node TIEs AND 4541 flood only the node's own north and south TIEs over E-W leaf 4542 adjacencies AND 4544 always use the E-W leaf adjacency in all SPF computations AND 4546 install a discard route for any advertised aggregate routes in the 4547 leaf's TIE AND 4549 never form southbound adjacencies. 4551 This will allow the E-W leaf nodes to exchange traffic strictly for 4552 the prefixes advertised in each other's north prefix TIEs (since the 4553 southbound computation will find the reverse direction in the other 4554 node's TIE and install its north prefixes). 4556 4.3.9. Address Family and Multi Topology Considerations 4558 Multi-Topology (MT) [RFC5120] and Multi-Instance (MI) [RFC8202] 4559 concepts are used today in link-state routing protocols to support 4560 several domains on the same physical topology. RIFT supports this 4561 capability by carrying transport ports in the LIE protocol exchanges. 4563 Multiplexing of LIEs can be achieved by either choosing varying 4564 multicast addresses or ports on the same address. 4566 BFD interactions in Section 4.3.5 are implementation dependent when 4567 multiple RIFT instances run on the same link. 4569 4.3.10. Reachability of Internal Nodes in the Fabric 4571 RIFT does not require that nodes have reachable addresses in the 4572 fabric, though it is clearly desirable for operational purposes. 4573 Under normal operating conditions this can be easily achieved by 4574 injecting the node's loopback address into North and South Prefix 4575 TIEs or other implementation-specific mechanisms. 4577 Special considerations arise when a node loses all northbound 4578 adjacencies, but is not at the top of the fabric. These are outside 4579 the scope of this document and could be discussed in a separate 4580 document. 4582 4.3.11. One-Hop Healing of Levels with East-West Links 4584 Based on the rules defined in Section 4.2.4, Section 4.2.3.8 and 4585 given the presence of E-W links, RIFT can provide one-hop protection 4586 for nodes that lost all their northbound links. This can also be 4587 applied to multi-plane designs where complex link set failures occur 4588 at the Top-of-Fabric when links are exclusively used for flooding 4589 topology information. Section 5.4 outlines this behavior. 4591 4.4. Security 4593 4.4.1. Security Model 4595 An inherent property of any security and ZTP architecture is the 4596 resulting trade-off in regard to integrity verification of the 4597 information distributed through the fabric vs. provisioning and auto- 4598 configuration requirements. At a minimum, the security of an 4599 established adjacency should be ensured. The stricter the security 4600 model, the more provisioning must take over the role of ZTP. 4602 RIFT supports the following security models to allow for flexible 4603 control by the operator. 4605 o The most security-conscious operators may choose to have control 4606 over which ports interconnect between a given pair of nodes; we 4607 call this the "Port-Association Model" (PAM). This is achievable 4608 by configuring each pair of directly connected ports with a 4609 designated shared key or public/private key pair. 4611 o In physically secure data center locations, operators may choose 4612 to control connectivity between entire nodes; we call this the 4613 "Node-Association Model" (NAM). A benefit of this model is that 4614 it allows for simplified port sparing.
4616 o In the most relaxed environments, an operator may choose only to 4617 control which nodes join a particular fabric. We call this the 4618 "Fabric-Association Model" (FAM). This is achievable by using a 4619 single shared secret across the entire fabric. Such flexibility 4620 makes sense when we consider servers as leaf devices, which are 4621 replaced more often than network nodes. In addition, this model 4622 allows for simplified node sparing. 4624 o These models may be mixed throughout the fabric depending upon 4625 security requirements at various levels of the fabric and 4626 willingness to accept increased provisioning complexity. 4628 In order to support the cases mentioned above, RIFT implementations 4629 support, through operator control, mechanisms that allow for: 4631 a. specification of the appropriate level in the fabric, 4633 b. discovery and reporting of missing connections, 4635 c. discovery and reporting of unexpected connections while 4636 preventing them from forming insecure adjacencies. 4638 Operators may choose only to configure the level of each node, but 4639 not explicitly configure which connections are allowed. In this 4640 case, RIFT will only allow adjacencies to establish between nodes 4641 that are in adjacent levels. Operators with the lowest security 4642 requirements may not use any configuration to specify which 4643 connections are allowed. Nodes in such fabrics could rely fully on 4644 ZTP and only establish adjacencies between nodes in adjacent 4645 levels. Figure 30 illustrates the inherent tradeoffs between the 4646 different security models. 4648 Some level of link quality verification may be required prior to an 4649 adjacency being used for forwarding. For example, an implementation 4650 may require that a BFD session comes up before advertising the 4651 adjacency. 4653 For the cases outlined above, RIFT has two approaches to enforce that 4654 a local port is connected to the correct port on the correct remote 4655 node. One approach is to piggy-back on RIFT's authentication 4656 mechanism. Assuming the provisioning model (e.g. the YANG model) is 4657 flexible enough, operators can choose to provision a unique 4658 authentication key for: 4660 a. each pair of ports in the "port-association model" or 4662 b. each pair of switches in the "node-association model" or 4664 c. each pair of levels or 4666 d. the entire fabric in the "fabric-association model". 4668 The other approach is to rely on the system-id, port-id and level 4669 fields in the LIE message to validate an adjacency against the 4670 expected cabling topology, and optionally introduce some new rules in 4671 the FSM to allow the adjacency to come up if the expectations are 4672 met. 4674 ^ /\ | 4675 /|\ / \ | 4676 | / \ | 4677 | / PAM \ | 4678 Increasing / \ Increasing 4679 Integrity +----------+ Flexibility 4680 & / NAM \ & 4681 Increasing +--------------+ Less 4682 Provisioning / FAM \ Configuration 4683 | +------------------+ | 4684 | / Level Provisioning \ | 4685 | +----------------------+ \|/ 4686 | / Zero Configuration \ v 4687 +--------------------------+ 4689 Figure 30: Security Model 4691 4.4.2. Security Mechanisms 4693 RIFT security goals are to ensure: 4695 1. authentication 4697 2. message integrity 4699 3. the prevention of replay attacks 4701 4. low processing overhead 4703 5. efficient messaging 4704 Message confidentiality is a non-goal.
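As an illustration of the second approach in Section 4.4.1 (validating an adjacency against the expected cabling topology), consider this non-normative Python sketch; the `ExpectedNeighbor` table and its field names are assumptions for illustration, not RIFT schema elements.

<CODE BEGINS>
# Non-normative sketch: checking a received LIE against a provisioned
# cabling expectation, per the system-id/port-id/level approach of
# Section 4.4.1. The provisioning table and field names are
# illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ExpectedNeighbor:
    system_id: int
    port_id: int
    level: int

def adjacency_allowed(local_port, lie, expected):
    exp = expected.get(local_port)
    if exp is None:
        # Nothing provisioned for this port: fall back to the
        # level-based adjacency rules.
        return True
    return (lie.system_id == exp.system_id
            and lie.port_id == exp.port_id
            and lie.level == exp.level)
<CODE ENDS>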
4706 The model in the previous section allows a range of security key 4707 types that are analogous to the various security association models. 4708 PAM and NAM allow security associations at the port or node level 4709 using symmetric or asymmetric keys that are pre-installed. FAM 4710 argues for security associations to be applied only at a group level 4711 or to be refined once the topology has been established. RIFT does 4712 not specify how security keys are installed or updated, though it 4713 does specify how the key can be used to achieve the security goals. 4715 The protocol has provisions for "weak" nonces to prevent replay 4716 attacks and includes authentication mechanisms comparable to 4717 [RFC5709] and [RFC7987]. 4719 4.4.3. Security Envelope 4721 RIFT packets MUST be carried in the mandatory secure envelope illustrated in 4722 Figure 31. Any value in the packet following a security fingerprint 4723 MUST be used only after the appropriate fingerprint has been 4724 validated. 4726 Local configuration MAY allow for the envelope's integrity checks to 4727 be skipped. 4729 0 1 2 3 4730 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 4732 UDP Header: 4733 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4734 | Source Port | RIFT destination port | 4735 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4736 | UDP Length | UDP Checksum | 4737 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4739 Outer Security Envelope Header: 4740 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4741 | RIFT MAGIC | Packet Number | 4742 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4743 | Reserved | RIFT Major | Outer Key ID | Fingerprint | 4744 | | Version | | Length | 4745 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4746 | | 4747 ~ Security Fingerprint covers all following content ~ 4748 | | 4749 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4750 | Weak Nonce Local | Weak Nonce Remote | 4751 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4752 | Remaining TIE Lifetime (all 1s in case of LIE) | 4753 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4755 TIE Origin Security Envelope Header: 4756 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4757 | TIE Origin Key ID | Fingerprint | 4758 | | Length | 4759 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4760 | | 4761 ~ Security Fingerprint covers all following content ~ 4762 | | 4763 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4765 Serialized RIFT Model Object 4766 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4767 | | 4768 ~ Serialized RIFT Model Object ~ 4769 | | 4770 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4772 Figure 31: Security Envelope 4774 RIFT MAGIC: 16 bits. Constant value of 0xA1F7 that allows RIFT 4775 packets to be classified independently of the UDP port used. 4777 Packet Number: 16 bits. An optional, per-packet-type, monotonically 4778 growing number rolling over using the sequence number arithmetic 4779 defined in Appendix A. A node SHOULD correctly set the number on 4780 subsequent packets or otherwise MUST set the value to 4781 `undefined_packet_number` as provided in the schema. This number 4782 can be used to detect losses and misordering in flooding for 4783 either operational purposes or in an implementation to adjust 4784 flooding behavior to the current link or buffer quality.
This number 4785 MUST NOT be used to discard or validate the correctness of 4786 packets. 4788 RIFT Major Version: 8 bits. It allows checking whether protocol 4789 versions are compatible, i.e. if the serialized object can be 4790 decoded at all. An implementation MUST drop packets with 4791 unexpected values and MAY report a problem. 4793 Outer Key ID: 8 bits to allow key rollovers. This implies the key type 4794 and algorithm. Value 0 means that no valid fingerprint was 4795 computed. This key ID scope is local to the nodes on both ends of 4796 the adjacency. 4798 TIE Origin Key ID: 24 bits. This implies the key type and used 4799 algorithm. Value 0 means that no valid fingerprint was computed. 4800 This key ID scope is global to the RIFT instance since it implies 4801 the originator of the TIE, so the contained object does not have to 4802 be de-serialized to obtain it. 4804 Length of Fingerprint: 8 bits. Length in 32-bit multiples of the 4805 following fingerprint (not including the lifetime or weak nonces). It 4806 allows the structure to be navigated when an unknown key type is 4807 present. To clarify, a common corner case is this value being set 4808 to 0, which signifies an empty (0 bytes long) security 4809 fingerprint. 4811 Security Fingerprint: 32 bits * Length of Fingerprint. This is a 4812 signature that is computed over all data following after it. If 4813 the significant bits of the fingerprint are fewer than the padded 4814 32-bit length, then the significant bits MUST be left-aligned and the 4815 remaining bits on the right padded with 0s. When using PKI, the 4816 node originating the security fingerprint uses its private key to 4817 create the signature. The original packet can then be verified 4818 provided the public key is shared and current. 4820 Remaining TIE Lifetime: 32 bits. In case of anything but TIEs, this 4821 field MUST be set to all ones and the Origin Security Envelope Header 4822 MUST NOT be present in the packet. For TIEs, this field represents 4823 the remaining lifetime of the TIE and the Origin Security Envelope 4824 Header MUST be present in the packet. The value in the serialized 4825 model object MUST be ignored. 4827 Weak Nonce Local: 16 bits. Local Weak Nonce of the adjacency as 4828 advertised in LIEs. 4830 Weak Nonce Remote: 16 bits. Remote Weak Nonce of the adjacency as 4831 received in LIEs. 4833 TIE Origin Security Envelope Header: It MUST be present if and only 4834 if the Remaining TIE Lifetime field is NOT all ones. It carries 4835 the originator's key ID and the corresponding fingerprint of the 4836 object to protect the TIE from modification during flooding. This 4837 ensures origin validation and integrity (but does not provide 4838 validation of a chain of trust). 4840 Observe that due to the schema migration rules per Appendix B, the 4841 contained model can always be decoded if the major version matches 4842 and the envelope integrity has been validated. Consequently, a 4843 description of the TIE is available to flood it properly, including 4844 unknown TIE types. 4846 4.4.4. Weak Nonces 4848 The protocol uses two 16-bit nonces to salt generated signatures. We 4849 use the term "nonce" a bit loosely since RIFT nonces are not being 4850 changed in every packet, as is common in cryptography. For efficiency 4851 purposes they are changed at a high enough frequency to dwarf 4852 practical replay attack attempts. Therefore, we call them "weak" 4853 nonces. 4855 Any implementation including RIFT security MUST generate and wrap 4856 around local nonces properly.
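A minimal, non-normative sketch of such local nonce maintenance follows; it assumes the schema's `undefined_nonce` sentinel is 0 and 16-bit wrap-around arithmetic. The precise normative rules continue below.

<CODE BEGINS>
# Non-normative sketch of local weak nonce maintenance. We assume
# the schema sentinel undefined_nonce == 0 and 16-bit nonces.

UNDEFINED_NONCE = 0  # assumed sentinel value

def next_local_nonce(nonce):
    nonce = (nonce + 1) & 0xFFFF
    if nonce == UNDEFINED_NONCE:
        # An increment that lands on the sentinel must be
        # incremented again immediately.
        nonce = (nonce + 1) & 0xFFFF
    return nonce
<CODE ENDS>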
When a nonce increment leads to the 4857 `undefined_nonce` value, the value MUST be incremented again 4858 immediately. All implementations MUST reflect the neighbor's nonces. 4859 An implementation SHOULD increment a chosen nonce on every LIE FSM 4860 transition that ends up in a different state from the previous one and 4861 MUST increment its nonce at least every 5 minutes (such 4862 considerations allow for efficient implementations without opening a 4863 significant security risk). When flooding TIEs, the implementation 4864 MUST use recent (i.e. within the allowed difference) nonces reflected in 4865 the LIE exchange. The schema specifies the maximum allowable nonce 4866 value difference on a packet compared to the reflected nonces in the 4867 LIEs. Any packet received with nonces deviating more than the 4868 allowed delta MUST be discarded without further computation of 4869 signatures to prevent computation load attacks. 4871 In cases where a secure implementation does not receive signatures or 4872 receives undefined nonces from a neighbor (indicating that it does 4873 not support or verify signatures), it is a matter of local policy as 4874 to how those packets are treated. A secure implementation MAY refuse 4875 forming an adjacency with an implementation that is not advertising 4876 signatures or valid nonces, or it MAY continue signing local packets 4877 while accepting a neighbor's packets without further security 4878 validation. 4880 As a necessary exception, an implementation MUST advertise the remote 4881 nonce value as `undefined_nonce` when the FSM is not in two-way or 4882 three-way state and accept an `undefined_nonce` for its local nonce 4883 value on packets in any other state than three-way. 4885 As an optional optimization, an implementation MAY send one LIE with the 4886 previously negotiated neighbor's nonce to try to speed up a 4887 neighbor's transition from three-way to one-way and MUST revert to 4888 sending `undefined_nonce` after that. 4890 4.4.5. Lifetime 4892 Protecting the flooding lifetime may lead to an excessive number of 4893 security fingerprint computations; to avoid this, the application 4894 generating the fingerprints for advertised TIEs MAY round the value 4895 down to the next `rounddown_lifetime_interval`. Such an optimization 4896 may not be feasible in the presence of security hashes over advancing 4897 weak nonces. 4899 4.4.6. Key Management 4901 As outlined in Section 7, either a private shared key or a 4902 public/private key pair is used to authenticate the adjacency. Both 4903 the key distribution and key synchronization methods are out of scope 4904 for this document. Both nodes in the adjacency MUST share the same 4905 keys, key type, and algorithm for a given key ID. Mismatched keys 4906 will not interoperate as their security envelopes will be 4907 unverifiable. 4909 Key roll-over while the adjacency is active MAY be supported. The 4910 specific mechanism is well documented in [RFC6518]. 4912 4.4.7. Security Association Changes 4914 There is no mechanism to convert a security envelope for the same key 4915 ID from one algorithm to another once the envelope is operational. 4916 The recommended procedure to change to a new algorithm is to take the 4917 adjacency down, make the necessary changes, and bring the adjacency 4918 back up.
Obviously, an implementation MAY choose to stop verifying the 4919 security envelope for the duration of the algorithm change to keep the 4920 adjacency up, but since this introduces a security vulnerability 4921 window, such a roll-over is NOT recommended. 4923 5. Examples 4925 5.1. Normal Operation 4927 ^ N +--------+ +--------+ 4928 Level 2 | |ToF 21| |ToF 22| 4929 E <-*-> W ++-+--+-++ ++-+--+-++ 4930 | | | | | | | | | 4931 S v P111/2 |P121/2 | | | | 4932 ^ ^ ^ ^ | | | | 4933 | | | | | | | | 4934 +--------------+ | +-----------+ | | | +---------------+ 4935 | | | | | | | | 4936 South +-----------------------------+ | | ^ 4937 | | | | | | | All TIEs 4938 0/0 0/0 0/0 +-----------------------------+ | 4939 v v v | | | | | 4940 | | +-+ +<-0/0----------+ | | 4941 | | | | | | | | 4942 +-+----++ +-+----++ ++----+-+ ++-----++ 4943 Level 1 | | | | | | | | 4944 |Spin111| |Spin112| |Spin121| |Spin122| 4945 +-+---+-+ ++----+-+ +-+---+-+ ++---+--+ 4946 | | | South | | | | 4947 | +---0/0--->-----+ 0/0 | +----------------+ | 4948 0/0 | | | | | | | 4949 | +---<-0/0-----+ | v | +--------------+ | | 4950 v | | | | | | | 4951 +-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+ 4952 Level 0 | | | | | | | | 4953 |Leaf111| |Leaf112| |Leaf121| |Leaf122| 4954 +-+-----+ +-+---+-+ +--+--+-+ +-+-----+ 4955 + + \ / + + 4956 Prefix111 Prefix112 \ / Prefix121 Prefix122 4957 multi-homed 4958 Prefix 4959 +---------- PoD 1 ---------+ +---------- PoD 2 ---------+ 4961 Figure 32: Normal Case Topology 4963 This section describes RIFT deployment in the example topology given in 4964 Figure 32 without any node or link failures. We disregard flooding 4965 reduction for simplicity's sake and compress the node names in some 4966 cases to fit them into the picture better. 4968 First, the following bi-directional adjacencies will be established: 4970 1. ToF 21 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine 122 4972 2. ToF 22 (PoD 0) to Spine 111, Spine 112, Spine 121, and Spine 122 4974 3. Spine 111 to Leaf 111, Leaf 112 4976 4. Spine 112 to Leaf 111, Leaf 112 4978 5. Spine 121 to Leaf 121, Leaf 122 4980 6. Spine 122 to Leaf 121, Leaf 122 4982 Leaf 111 and Leaf 112 originate N-TIEs for Prefix 111 and Prefix 112 4983 (respectively) to both Spine 111 and Spine 112 (Leaf 112 also 4984 originates an N-TIE for the multi-homed prefix). Spine 111 and Spine 4985 112 will then originate their own N-TIEs, as well as flood the N-TIEs 4986 received from Leaf 111 and Leaf 112 to both ToF 21 and ToF 22. 4988 Similarly, Leaf 121 and Leaf 122 originate North TIEs for Prefix 121 4989 and Prefix 122 (respectively) to Spine 121 and Spine 122 (Leaf 121 4990 also originates a North TIE for the multi-homed prefix). Spine 121 4991 and Spine 122 will then originate their own North TIEs, as well as 4992 flood the North TIEs received from Leaf 121 and Leaf 122 to both ToF 4993 21 and ToF 22. 4995 Spines hold only the North TIEs of level 0 for their PoD, while leaves 4996 hold only their own North TIEs. At this point, both ToF 21 and 4997 ToF 22 (as well as any northbound connected controllers) would have 4998 the complete network topology. 5000 ToF 21 and ToF 22 would then originate and flood South TIEs 5001 containing any established adjacencies and a default IP route to all 5002 spines. Spine 111, Spine 112, Spine 121, and Spine 122 will reflect 5003 all Node South TIEs received from ToF 21 to ToF 22, and all Node 5004 South TIEs from ToF 22 to ToF 21. South TIEs will not be re- 5005 propagated southbound.
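The reflection step just described can be sketched as follows; this is a grossly simplified, non-normative illustration covering only the behavior in this example (the complete flooding scopes are defined elsewhere in this document), and all names are illustrative.

<CODE BEGINS>
# Non-normative, heavily simplified sketch of the reflection step in
# this example: a spine reflects Node South TIEs received from one
# ToF to its other northbound neighbors and does not re-propagate
# South TIEs southbound.

def reflection_targets(tie, received_from, northbound_neighbors):
    if tie.direction == "south" and tie.tietype == "node":
        return [n for n in northbound_neighbors if n != received_from]
    return []  # South TIEs are not re-propagated southbound
<CODE ENDS>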
5007 South TIEs containing a default IP route are then originated by both 5008 Spine 111 and Spine 112 toward Leaf 111 and Leaf 112. Similarly, 5009 South TIEs containing a default IP route are originated by Spine 121 5010 and Spine 122 toward Leaf 121 and Leaf 122. 5012 At this point, IP connectivity across the maximum number of viable paths 5013 has been established for all leaves, with routing information 5014 constrained to only the minimum amount that allows for normal 5015 operation and redundancy. 5017 5.2. Leaf Link Failure 5019 . | | | | 5020 .+-+---+-+ +-+---+-+ 5021 .| | | | 5022 .|Spin111| |Spin112| 5023 .+-+---+-+ ++----+-+ 5024 . | | | | 5025 . | +---------------+ X 5026 . | | | X Failure 5027 . | +-------------+ | X 5028 . | | | | 5029 .+-+---+-+ +--+--+-+ 5030 .| | | | 5031 .|Leaf111| |Leaf112| 5032 .+-------+ +-------+ 5033 . + + 5034 . Prefix111 Prefix112 5036 Figure 33: Single Leaf Link Failure 5038 In the event of a link failure between Spine 112 and Leaf 112, both 5039 nodes will originate new Node TIEs that contain their connected 5040 adjacencies, except for the one that just failed. Leaf 112 will send 5041 a Node North TIE to Spine 111. Spine 112 will send a Node North TIE 5042 to ToF 21 and ToF 22 as well as a new Node South TIE to Leaf 111 that 5043 will be reflected to Spine 111. Necessary SPF recomputation will 5044 occur, resulting in Spine 112 no longer being in the forwarding path 5045 for Prefix 112. 5047 Spine 111 will also disaggregate Prefix 112 by sending a new Prefix 5048 South TIE to Leaf 111 and Leaf 112. Though we cover disaggregation 5049 in more detail in the following section, it is worth mentioning in 5050 this example as it further illustrates RIFT's blackhole mitigation 5051 mechanism. Consider that Leaf 111 has yet to receive the more 5052 specific (disaggregated) route from Spine 111. In such a scenario, 5053 traffic from Leaf 111 toward Prefix 112 may still use Spine 112's 5054 default route, causing it to traverse ToF 21 and ToF 22 back down via 5055 Spine 111. While this behavior is suboptimal, it is transient in 5056 nature and preferred to black-holing traffic. 5058 5.3. Partitioned Fabric 5060 +--------+ +--------+ 5061 Level 2 |ToF 21| |ToF 22| 5062 ++-+--+-++ ++-+--+-++ 5063 | | | | | | | | 5064 | | | | | | | 0/0 5065 | | | | | | | | 5066 | | | | | | | | 5067 +--------------+ | +--- XXXXXX + | | | +---------------+ 5068 | | | | | | | | 5069 | +-----------------------------+ | | | 5070 0/0 | | | | | | | 5071 | 0/0 0/0 +- XXXXXXXXXXXXXXXXXXXXXXXXX -+ | 5072 | 1.1/16 | | | | | | 5073 | | +-+ +-0/0-----------+ | | 5074 | | | 1.1/16 | | | | 5075 +-+----++ +-+-----+ ++-----0/0 ++----0/0 5076 Level 1 | | | | | 1.1/16 | 1.1/16 5077 |Spin111| |Spin112| |Spin121| |Spin122| 5078 +-+---+-+ ++----+-+ +-+---+-+ ++---+--+ 5079 | | | | | | | | 5080 | +---------------+ | | +----------------+ | 5081 | | | | | | | | 5082 | +-------------+ | | | +--------------+ | | 5083 | | | | | | | | 5084 +-+---+-+ +--+--+-+ +-+---+-+ +---+-+-+ 5085 Level 0 | | | | | | | | 5086 |Leaf111| |Leaf112| |Leaf121| |Leaf122| 5087 +-+-----+ ++------+ +-----+-+ +-+-----+ 5088 + + + + 5089 Prefix111 Prefix112 Prefix121 Prefix122 5090 1.1/16 5092 Figure 34: Fabric Partition 5094 Figure 34 shows one of the more catastrophic scenarios where ToF 21 is 5095 completely severed from access to Prefix 121 due to a double link 5096 failure. If only default routes existed, this would result in 50% of 5097 traffic from Leaf 111 and Leaf 112 toward Prefix 121 being black- 5098 holed.
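Before walking through the resolution in the following paragraphs, a compact, non-normative preview of the disaggregation trigger may help; it is a Python sketch using the |H and |A sets of Section 4.2.5, with set contents matching the computation shown at the end of this example.

<CODE BEGINS>
# Non-normative sketch of the positive disaggregation trigger
# discussed below: disaggregate a prefix r when |H(r) is disjoint
# from |A of the partitioned ToF node.

def prefixes_to_disaggregate(h_sets, a_set):
    return [r for r, h in h_sets.items() if h.isdisjoint(a_set)]

h_sets = {"Prefix 111": {"Spine 111", "Spine 112"},
          "Prefix 112": {"Spine 111", "Spine 112"},
          "Prefix 121": {"Spine 121", "Spine 122"},
          "Prefix 122": {"Spine 121", "Spine 122"}}
a_tof21 = {"Spine 111", "Spine 112"}
print(prefixes_to_disaggregate(h_sets, a_tof21))
# -> ['Prefix 121', 'Prefix 122'], the prefixes ToF 22 disaggregates
<CODE ENDS>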
5100 The mechanism to resolve this scenario hinges on ToF 21's South TIEs 5101 being reflected from Spine 111 and Spine 112 to ToF 22. Once ToF 22 5102 sees that Prefix 121 cannot be reached from ToF 21, it will begin to 5103 disaggregate Prefix 121 by advertising a more specific route (1.1/16) 5104 along with the default IP prefix route to all spines (ToF 21 still 5105 only sends a default route). The result is Spine 111 and Spine 112 5106 using the more specific route to Prefix 121 via ToF 22. All other 5107 prefixes continue to use the default IP prefix route toward both ToF 5108 21 and ToF 22. 5110 The more specific route for Prefix 121 being advertised by ToF 22 5111 does not need to be propagated further south to the leaves, as they 5112 do not benefit from this information. Spine 111 and Spine 112 are 5113 only required to reflect the new South Node TIEs received from ToF 22 5114 to ToF 21. In short, only the relevant nodes receive the relevant 5115 updates, thereby restricting the failure to only the partitioned 5116 level rather than burdening the whole fabric with the flooding and 5117 recomputation of the new topology information. 5119 To finish our example, the following shows the sets computed by ToF 5120 22 using the notation introduced in Section 4.2.5: 5122 |R = Prefix 111, Prefix 112, Prefix 121, Prefix 122 5124 |H (for r=Prefix 111) = Spine 111, Spine 112 5126 |H (for r=Prefix 112) = Spine 111, Spine 112 5128 |H (for r=Prefix 121) = Spine 121, Spine 122 5130 |H (for r=Prefix 122) = Spine 121, Spine 122 5132 |A (for ToF 21) = Spine 111, Spine 112 5134 With that, and with |H (for r=Prefix 121) and |H (for r=Prefix 122) being 5135 disjoint from |A (for ToF 21), ToF 22 will originate a South TIE 5136 with Prefix 121 and Prefix 122, which will be flooded to all spines. 5138 5.4. Northbound Partitioned Router and Optional East-West Links 5139 . + + + 5140 . X N1 | N2 | N3 5141 . X | | 5142 .+--+----+ +--+----+ +--+-----+ 5143 .| |0/0> <0/0| |0/0> <0/0| | 5144 .| A01 +----------+ A02 +----------+ A03 | Level 1 5145 .++-+-+--+ ++--+--++ +---+-+-++ 5146 . | | | | | | | | | 5147 . | | +----------------------------------+ | | | 5148 . | | | | | | | | | 5149 . | +-------------+ | | | +--------------+ | 5150 . | | | | | | | | | 5151 . | +----------------+ | +-----------------+ | 5152 . | | | | | | | | | 5153 . | | +------------------------------------+ | | 5154 . | | | | | | | | | 5155 .++-+-+--+ | +---+---+ | +-+---+-++ 5156 .| | +-+ +-+ | | 5157 .| L01 | | L02 | | L03 | Level 0 5158 .+-------+ +-------+ +--------+ 5160 Figure 35: North Partitioned Router 5162 Figure 35 shows a part of a fabric where level 1 is horizontally 5163 connected and A01 lost its only northbound adjacency. Based on the N-SPF 5164 rules in Section 4.2.4.1, A01 will compute northbound reachability by 5165 using the link A01 to A02. A02, however, will NOT use this link 5166 during N-SPF. The result is A01 utilizing the horizontal link for 5167 default route advertisement and unidirectional routing. 5169 Furthermore, if A02 also loses its only northbound adjacency (N2), 5170 the situation evolves. A01 will no longer have northbound 5171 reachability while it sees A03's northbound adjacencies in South Node 5172 TIEs reflected by nodes south of it. As a result, A01 will no longer 5173 advertise its default route in accordance with Section 4.2.3.8. 5175 6. Implementation and Operation: Further Details 5177 6.1.
Considerations for Leaf-Only Implementation 5179 RIFT can be, and is intended to be, stretched to the lowest level in the 5180 IP fabric to integrate ToRs or even servers. Since those entities 5181 would run as leaves only, it is worth observing that a leaf-only 5182 version is significantly simpler to implement and requires far fewer 5183 resources: 5185 1. Leaf nodes only need to maintain a multipath default route under 5186 normal circumstances. However, in cases of catastrophic 5187 partitioning, leaf nodes SHOULD be capable of accommodating all 5188 the leaf routes in their own PoD to prevent black-holing. 5190 2. Leaf nodes hold only their own North TIEs and the South TIEs of the Level 5191 1 nodes they are connected to. 5193 3. Leaf nodes do not have to support any type of de-aggregation 5194 computation or propagation. 5196 4. Leaf nodes are not required to support the overload bit. 5198 5. Leaf nodes do not need to originate S-TIEs unless optional leaf- 5199 2-leaf features are desired. 5201 6.2. Considerations for Spine Implementation 5203 Spine nodes will never act as Top of Fabric, and are therefore not 5204 required to run a full RIFT implementation. Specifically, spines do 5205 not need to perform negative disaggregation computation other than 5206 respecting northbound disaggregation advertised from the north. 5208 6.3. Adaptations to Other Proposed Data Center Topologies 5210 . +-----+ +-----+ 5211 . | | | | 5212 .+-+ S0 | | S1 | 5213 .| ++---++ ++---++ 5214 .| | | | | 5215 .| | +------------+ | 5216 .| | | +------------+ | 5217 .| | | | | 5218 .| ++-+--+ +--+-++ 5219 .| | | | | 5220 .| | A0 | | A1 | 5221 .| +-+--++ ++---++ 5222 .| | | | | 5223 .| | +------------+ | 5224 .| | +-----------+ | | 5225 .| | | | | 5226 .| +-+-+-+ +--+-++ 5227 .+-+ | | | 5228 . | L0 | | L1 | 5229 . +-----+ +-----+ 5231 Figure 36: Level Shortcut 5233 RIFT is not strictly limited to Clos topologies. The protocol only 5234 requires a sense of "compass rose directionality" either achieved 5235 through configuration or derivation of levels. So, conceptually, 5236 leaf-2-leaf links and even shortcuts between levels could be 5237 included. Figure 36 depicts an example of a shortcut between levels. 5238 In this example, sub-optimal routing will occur when traffic is sent 5239 from L0 to L1 via S0's default route and back down through A0 or A1. 5240 In order to ensure that only default routes from A0 or A1 are used, 5241 all leaves would be required to install each other's routes. 5243 While various technical and operational challenges may require the 5244 use of such modifications, discussion of those topics is outside the 5245 scope of this document. 5247 6.4. Originating Non-Default Route Southbound 5249 An implementation MAY choose to originate more specific prefixes (P') 5250 southbound instead of only the default route (as described in 5251 Section 4.2.3.8). In such a scenario, all addresses carried within 5252 the RIFT domain MUST be contained within P'. 5254 7. Security Considerations 5256 7.1. General 5258 One can consider attack vectors where a router may reboot many times 5259 while changing its system ID and pollute the network with many stale 5260 TIEs, or where TIEs are sent with very long lifetimes and not cleaned up 5261 when the routes vanish. Those attack vectors are not unique to RIFT. 5262 Given the large memory footprints available today, those attacks should be 5263 relatively benign.
Otherwise, a node SHOULD implement a strategy of 5264 discarding the contents of all TIEs that were not present in the SPF tree 5265 over a certain, configurable period of time. Since the protocol, 5266 like all modern link-state protocols, is self-stabilizing and will 5267 advertise the presence of such TIEs to its neighbors, they can be re- 5268 requested if a computation finds that it sees an adjacency 5269 formed towards the system ID of the discarded TIEs. 5271 7.2. ZTP 5273 Section 4.2.7 presents many attack vectors in untrusted environments, 5274 starting with nodes that oscillate their level offers, up to the 5275 possibility of a node offering a three-way adjacency with the highest 5276 possible level value and a very long holdtime, trying to put itself 5277 "on top of the lattice" and thereby allowing it to gain access to the 5278 whole southbound topology. Session authentication mechanisms are 5279 necessary in environments where this is possible and RIFT provides 5280 the security envelope to ensure this if so desired. 5282 7.3. Lifetime 5284 Traditional IGP protocols are vulnerable to lifetime modification and 5285 replay attacks that can be somewhat mitigated by using techniques 5286 like [RFC7987]. RIFT removes this attack vector by protecting the 5287 lifetime behind a signature computed over it and an additional nonce 5288 combination, which makes even the replay attack window very small and, 5289 for practical purposes, irrelevant since the lifetime cannot be 5290 artificially shortened by the attacker. 5292 7.4. Packet Number 5294 The optional packet number is carried in the security envelope without 5295 any encryption protection and is hence vulnerable to replay and 5296 modification attacks. Contrary to nonces, this number must change on 5297 every packet and would present a very high cryptographic load if 5298 signed. The attack vector the packet number presents is relatively 5299 benign. Changing the packet number by a man-in-the-middle attack 5300 will only affect operational validation tools and possibly some 5301 performance optimizations on flooding. It is expected that an 5302 implementation detecting too many "fake losses" or "misorderings" due 5303 to the attack on the packet number would simply suppress its further 5304 processing. 5306 7.5. Outer Fingerprint Attacks 5308 A node observing a conversation on the wire can try to inject LIE packets 5309 by using the outer key ID, albeit it cannot generate valid hashes 5310 if it changes the integrity of the message, so the only possible 5311 attack is DoS due to excessive LIE validation. 5313 A node can try to replay previous LIEs with changed state that it 5314 recorded, but the attack is hard to replicate since the nonce 5315 combination must match the ongoing exchange and is then limited to a 5316 single flap only since both nodes will advance their nonces when 5317 the adjacency state changes. Even in the most unlikely case, the 5318 attack length is limited due to both sides periodically increasing 5319 their nonces. 5321 7.6. TIE Origin Fingerprint DoS Attacks 5323 A compromised node can attempt to generate "fake TIEs" using other 5324 nodes' TIE origin key identifiers. Albeit the ultimate validation of 5325 the origin fingerprint will fail in such scenarios and not progress 5326 further than the immediately peering nodes, the resulting denial of 5327 service attack seems unavoidable since the TIE origin key ID is only 5328 protected by the, here assumed to be compromised, node. 5330 7.7.
5306 7.5. Outer Fingerprint Attacks 5308 A node observing a conversation on the wire can try to inject LIE packets 5309 by using the outer key ID, although it cannot generate valid hashes 5310 if it changes the contents of the message, so the only possible 5311 attack is DoS due to excessive LIE validation. 5313 A node can try to replay previous LIEs with changed state that it 5314 recorded, but the attack is hard to replicate since the nonce 5315 combination must match the ongoing exchange and is then limited to a 5316 single flap only, since both nodes will advance their nonces in case 5317 the adjacency state changed. Even in the most unlikely case the 5318 attack length is limited due to both sides periodically increasing 5319 their nonces. 5321 7.6. TIE Origin Fingerprint DoS Attacks 5323 A compromised node can attempt to generate "fake TIEs" using other 5324 nodes' TIE origin key identifiers. Although the ultimate validation of 5325 the origin fingerprint will fail in such scenarios and not progress 5326 further than the immediately peering nodes, the resulting denial of 5327 service attack seems unavoidable since the TIE origin key ID is only 5328 protected by the, here assumed to be compromised, node. 5330 7.7. Host Implementations 5332 It can be reasonably expected that, with the proliferation of RotH, 5333 servers rather than dedicated networking devices will represent a 5334 significant share of RIFT devices. Given their normally far wider 5335 software envelope and the access granted to them, such servers are also 5336 far more likely to be compromised and present an attack vector on the 5337 protocol. Hijacking of prefixes to attract traffic is a trust 5338 problem and cannot be easily addressed within the protocol if the 5339 trust model is breached, i.e. the server presents valid credentials 5340 to form an adjacency and issue TIEs. In an even more devious way, 5341 the servers can present DoS (or even DDoS) vectors by issuing too 5342 many LIE packets, flooding large amounts of North TIEs and attempting 5343 similar resource overrun attacks. A prudent implementation forming 5344 adjacencies to leaves should implement according threshold 5345 mechanisms and raise warnings when e.g. a leaf is advertising an 5346 excessive number of TIEs or prefixes. Additionally, such an implementation 5347 could refuse any topology information except the node's own TIEs and 5348 authenticated, reflected South Node TIEs at its own level. 5350 To isolate possible attack vectors on the leaf to the largest 5351 possible extent, a dedicated leaf-only implementation could run 5352 without any configuration by hard-coding a well-known adjacency key 5353 (which can always be rolled over by means of e.g. a well-known key- 5354 value pair distributed from the top of the fabric) and the leaf level 5355 value, and by always setting the overload bit. All other values can be derived by 5356 automatic means as described earlier in the protocol specification.
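A non-normative sketch of such threshold mechanisms; all limits are illustrative local policy, not protocol constants:

   # Non-normative sketch: warn when a leaf advertises excessive state.
   MAX_LEAF_TIES = 100        # illustrative policy limit
   MAX_LEAF_PREFIXES = 500    # illustrative policy limit

   def check_leaf_advertisements(tie_count: int, prefix_count: int) -> list:
       """Return warnings when a leaf advertises excessive state."""
       warnings = []
       if tie_count > MAX_LEAF_TIES:
           warnings.append("leaf advertises an excessive number of TIEs")
       if prefix_count > MAX_LEAF_PREFIXES:
           warnings.append("leaf advertises an excessive number of prefixes")
       return warnings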
5358 8. IANA Considerations 5360 This specification requests multicast address assignments and 5361 standard port numbers. Additionally, registries for the schema are 5362 requested and suggested values provided that reflect the numbers 5363 allocated in the given schema. 5365 8.1. Requested Multicast and Port Numbers 5367 This document requests allocation in the 'IPv4 Multicast Address 5368 Space' registry of the suggested value of 224.0.0.120 as 5369 'ALL_V4_RIFT_ROUTERS' and in the 'IPv6 Multicast Address Space' 5370 registry of the suggested value of FF02::A1F7 as 'ALL_V6_RIFT_ROUTERS'. 5372 This document requests allocation in the 'Service Name and Transport 5373 Protocol Port Number Registry' of a suggested value of 5374 914 on UDP for 'RIFT_LIES_PORT' and a suggested value of 915 for 5375 'RIFT_TIES_PORT'. 5377 8.2. Requested Registries with Suggested Values 5379 This section requests registries that help govern the schema via 5380 usual IANA registry procedures. A top level 'RIFT' registry should 5381 hold the according registries requested in the following sections with 5382 their pre-defined values. IANA is requested to store the schema 5383 version introducing the allocated value as well as, optionally, its 5384 description when present. This will allow assigning different values 5385 to an entry depending on the schema version. Alternately, IANA is 5386 requested to consider a root RIFT/3 registry to store RIFT schema 5387 major version 3 values and may be requested in the future to create a 5388 RIFT/4 registry under that. In any case, IANA is requested to store 5389 the schema version in the entries since that will allow 5390 distinguishing between minor versions in the same major schema version. 5391 All values not suggested are to be considered `Unassigned`. The range 5392 of every registry is a 16-bit integer. Allocation of new values is 5393 always performed via `Expert Review` action. 5395 8.2.1. Registry RIFT_v4/common/AddressFamilyType 5397 Address family type. 5399 8.2.1.1. Requested Entries 5401 Name Value Schema Version Description 5402 Illegal 0 4.1 5403 AddressFamilyMinValue 1 4.1 5404 IPv4 2 4.1 5405 IPv6 3 4.1 5406 AddressFamilyMaxValue 4 4.1 5408 8.2.2. Registry RIFT_v4/common/HierarchyIndications 5410 Flags indicating node configuration in case of ZTP. 5412 8.2.2.1. Requested Entries 5414 Name Value Schema Version Description 5415 leaf_only 0 4.1 5416 leaf_only_and_leaf_2_leaf_procedures 1 4.1 5417 top_of_fabric 2 4.1 5419 8.2.3. Registry RIFT_v4/common/IEEE802_1ASTimeStampType 5421 Timestamp per IEEE 802.1AS, all values MUST be interpreted in 5422 implementation as unsigned. 5424 8.2.3.1. Requested Entries 5426 Name Value Schema Version Description 5427 AS_sec 1 4.1 5428 AS_nsec 2 4.1 5430 8.2.4. Registry RIFT_v4/common/IPAddressType 5432 IP address type. 5434 8.2.4.1. Requested Entries 5436 Name Value Schema Version Description 5437 ipv4address 1 4.1 Content is IPv4 5438 ipv6address 2 4.1 Content is IPv6 5440 8.2.5. Registry RIFT_v4/common/IPPrefixType 5442 Prefix advertisement. 5444 @note: for interface addresses the protocol can propagate the address 5445 part beyond the subnet mask and on reachability computation that has 5446 to be normalized. The non-significant bits can be used for 5447 operational purposes. 5449 8.2.5.1. Requested Entries 5451 Name Value Schema Version Description 5452 ipv4prefix 1 4.1 5453 ipv6prefix 2 4.1 5455 8.2.6. Registry RIFT_v4/common/IPv4PrefixType 5457 IPv4 prefix type. 5459 8.2.6.1. Requested Entries 5461 Name Value Schema Version Description 5462 address 1 4.1 5463 prefixlen 2 4.1 5465 8.2.7. Registry RIFT_v4/common/IPv6PrefixType 5467 IPv6 prefix type. 5469 8.2.7.1. Requested Entries 5471 Name Value Schema Version Description 5472 address 1 4.1 5473 prefixlen 2 4.1 5475 8.2.8. Registry RIFT_v4/common/PrefixSequenceType 5477 Sequence of a prefix in case of a move. 5479 8.2.8.1. Requested Entries 5481 Name Value Schema Description 5482 Version 5483 timestamp 1 4.1 5484 transactionid 2 4.1 Transaction ID set by client, e.g. 5485 in 6LoWPAN. 5487 8.2.9. Registry RIFT_v4/common/RouteType 5489 RIFT route types. 5491 @note: route types MUST be ordered on their preference. PGP 5492 prefixes are most preferred, attracting traffic north (towards the spine) 5493 and then south; normal prefixes are attracting traffic south (towards the 5494 leaves), i.e. a prefix in a NORTH PREFIX TIE is preferred over one in a SOUTH 5495 PREFIX TIE. 5497 @note: The only purpose of those values is to introduce an ordering, 5498 whereas an implementation can internally choose any other values as 5499 long as the ordering is preserved. 5501 8.2.9.1. Requested Entries 5503 Name Value Schema Version Description 5504 Illegal 0 4.1 5505 RouteTypeMinValue 1 4.1 5506 Discard 2 4.1 5507 LocalPrefix 3 4.1 5508 SouthPGPPrefix 4 4.1 5509 NorthPGPPrefix 5 4.1 5510 NorthPrefix 6 4.1 5511 NorthExternalPrefix 7 4.1 5512 SouthPrefix 8 4.1 5513 SouthExternalPrefix 9 4.1 5514 NegativeSouthPrefix 10 4.1 5515 RouteTypeMaxValue 11 4.1
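As a non-normative illustration of the note above: an implementation is free to use any internal representation, as long as the relative order (lower value, higher preference) is preserved:

   # Non-normative sketch: the registry values only fix a preference
   # order; Discard is most preferred, NegativeSouthPrefix least.
   from enum import IntEnum

   class RouteType(IntEnum):
       Discard = 2
       LocalPrefix = 3
       SouthPGPPrefix = 4
       NorthPGPPrefix = 5
       NorthPrefix = 6
       NorthExternalPrefix = 7
       SouthPrefix = 8
       SouthExternalPrefix = 9
       NegativeSouthPrefix = 10

   def more_preferred(a: RouteType, b: RouteType) -> bool:
       """True if route type a is preferred over route type b."""
       return a < b

   # A prefix in a North Prefix TIE wins over one in a South Prefix TIE.
   assert more_preferred(RouteType.NorthPrefix, RouteType.SouthPrefix)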
5517 8.2.10. Registry RIFT_v4/common/TIETypeType 5519 Type of TIE. 5521 This enum indicates what TIE type the TIE is carrying. In case the 5522 value is not known to the receiver, the TIE MUST be re-flooded. This 5523 allows for future extensions of the protocol within the same major 5524 schema with types opaque to some nodes, UNLESS the flooding scope is 5525 not the same as for a prefix TIE, in which case a major version revision MUST be 5526 performed. 5528 8.2.10.1. Requested Entries 5530 Name Value Schema Description 5531 Version 5532 Illegal 0 4.1 5533 TIETypeMinValue 1 4.1 5534 NodeTIEType 2 4.1 5535 PrefixTIEType 3 4.1 5536 PositiveDisaggregationPrefixTIEType 4 4.1 5537 NegativeDisaggregationPrefixTIEType 5 4.1 5538 PGPrefixTIEType 6 4.1 5539 KeyValueTIEType 7 4.1 5540 ExternalPrefixTIEType 8 4.1 5541 PositiveExternalDisaggregationPrefixTIEType 9 4.1 5542 TIETypeMaxValue 10 4.1 5544 8.2.11. Registry RIFT_v4/common/TieDirectionType 5546 Direction of TIEs. 5548 8.2.11.1. Requested Entries 5550 Name Value Schema Version Description 5551 Illegal 0 4.1 5552 South 1 4.1 5553 North 2 4.1 5554 DirectionMaxValue 3 4.1 5556 8.2.12. Registry RIFT_v4/encoding/Community 5558 Prefix community. 5560 8.2.12.1. Requested Entries 5562 Name Value Schema Version Description 5563 top 1 4.1 Higher order bits 5564 bottom 2 4.1 Lower order bits 5566 8.2.13. Registry RIFT_v4/encoding/KeyValueTIEElement 5568 Generic key value pairs. 5570 8.2.13.1. Requested Entries 5572 Name Value Schema Version Description 5573 keyvalues 1 4.1 5575 8.2.14. Registry RIFT_v4/encoding/LIEPacket 5577 RIFT LIE Packet. 5579 @note: this node's level is already included in the packet header 5581 8.2.14.1. Requested Entries 5583 Name Value Schema Description 5584 Version 5585 name 1 4.1 Node or adjacency name. 5586 local_id 2 4.1 Local link ID. 5587 flood_port 3 4.1 UDP port on which we can 5588 receive flooded TIEs. 5589 link_mtu_size 4 4.1 Layer 3 MTU, used to 5590 discover MTU mismatch. 5591 link_bandwidth 5 4.1 Local link bandwidth on 5592 the interface. 5593 neighbor 6 4.1 Reflects the neighbor once 5594 received to provide 5595 3-way connectivity. 5596 pod 7 4.1 Node's PoD. 5597 node_capabilities 10 4.1 Node capabilities shown in 5598 the LIE. The capabilities 5599 MUST match the capabilities 5600 shown in the Node TIEs, 5601 otherwise the behavior 5602 is unspecified. A node 5603 detecting the mismatch 5604 SHOULD generate an according 5605 error. 5606 link_capabilities 11 4.1 Capabilities of this link. 5607 holdtime 12 4.1 Required holdtime of the 5608 adjacency, i.e. how much 5609 time MUST expire 5610 without LIEs for the 5611 adjacency to drop. 5612 label 13 4.1 Unsolicited, downstream 5613 assigned locally 5614 significant label value 5615 for the adjacency. 5616 not_a_ztp_offer 21 4.1 Indicates that the level 5617 on the LIE MUST NOT be used 5618 to derive a ZTP level by 5619 the receiving node. 5620 you_are_flood_repeater 22 4.1 Indicates to the northbound 5621 neighbor that it should 5622 be reflooding this node's 5623 N-TIEs to achieve flood 5624 reduction and balancing 5625 for northbound flooding. To 5626 be ignored if received 5627 from a northbound 5628 adjacency. 5629 you_are_sending_too_quickly 23 4.1 Can be optionally set to 5630 indicate to the neighbor that 5631 packet losses are seen 5632 on reception based on 5633 packet numbers or the rate 5634 is too high. The 5635 receiver SHOULD temporarily 5636 slow down flooding 5637 rates. 5638 instance_name 24 4.1 Instance name in case 5639 multiple RIFT instances are 5640 running on the same 5641 interface. 5643 8.2.15. Registry RIFT_v4/encoding/LinkCapabilities 5645 Link capabilities. 5647 8.2.15.1.
Requested Entries 5649 Name Value Schema Description 5650 Version 5651 bfd 1 4.1 Indicates that the link is 5652 supporting BFD. 5653 v4_forwarding_capable 2 4.1 Indicates whether the interface 5654 will support v4 forwarding. 5656 8.2.16. Registry RIFT_v4/encoding/LinkIDPair 5658 A LinkID pair describes one of the parallel links between two nodes. 5660 8.2.16.1. Requested Entries 5662 Name Value Schema Description 5663 Version 5664 local_id 1 4.1 Node-wide unique value for 5665 the local link. 5666 remote_id 2 4.1 Received remote link ID for 5667 this link. 5668 platform_interface_index 10 4.1 Describes the local 5669 interface index of the link. 5670 platform_interface_name 11 4.1 Describes the local 5671 interface name. 5672 trusted_outer_security_key 12 4.1 Indication whether the link 5673 is secured, i.e. protected 5674 by an outer key; absence of 5675 this element means no 5676 indication, an undefined 5677 outer key means not secured. 5678 bfd_up 13 4.1 Indication whether the link 5679 is protected by an established 5680 BFD session. 5681 address_families 14 4.1 Optional indication which 5682 address families are up on 5683 the interface. 5685 8.2.17. Registry RIFT_v4/encoding/Neighbor 5687 Neighbor structure. 5689 8.2.17.1. Requested Entries 5691 Name Value Schema Version Description 5692 originator 1 4.1 System ID of the originator. 5693 remote_id 2 4.1 ID of the remote side of the link. 5695 8.2.18. Registry RIFT_v4/encoding/NodeCapabilities 5697 Capabilities the node supports. 5699 @note: The schema may add to this field future capabilities to 5700 indicate whether it will support interpretation of future schema 5701 extensions on the same major revision. Such fields MUST be optional 5702 and have an implicit or explicit false default value. If a future 5703 capability changes route selection or generates blackholes if some 5704 nodes are not supporting it, then a major version increment is 5705 unavoidable. 5707 8.2.18.1. Requested Entries 5709 Name Value Schema Description 5710 Version 5711 protocol_minor_version 1 4.1 Must advertise the supported minor 5712 version dialect this way. 5713 flood_reduction 2 4.1 Can this node participate in 5714 flood reduction. 5715 hierarchy_indications 3 4.1 Does this node restrict itself 5716 to be top-of-fabric or leaf 5717 only (in ZTP) and does it 5718 support leaf-2-leaf 5719 procedures. 5721 8.2.19. Registry RIFT_v4/encoding/NodeFlags 5723 Indication flags of the node. 5725 8.2.19.1. Requested Entries 5727 Name Value Schema Description 5728 Version 5729 overload 1 4.1 Indicates that the node is in overload; do not 5730 transit traffic through it. 5732 8.2.20. Registry RIFT_v4/encoding/NodeNeighborsTIEElement 5734 Neighbor of a node. 5736 8.2.20.1. Requested Entries 5738 Name Value Schema Description 5739 Version 5740 level 1 4.1 Level of the neighbor. 5741 cost 3 4.1 Cost to the neighbor. 5742 link_ids 4 4.1 Can carry descriptions of multiple parallel 5743 links in a TIE. 5744 bandwidth 5 4.1 Total bandwidth to the neighbor; this will 5745 normally be the sum of the bandwidths of all the 5746 parallel links. 5748 8.2.21. Registry RIFT_v4/encoding/NodeTIEElement 5750 Description of a node. 5752 It may occur multiple times in different TIEs but if either 5754 capabilities values do not match or 5755 flags values do not match or 5757 neighbors repeat with different values 5759 the behavior is undefined and a warning SHOULD be generated. 5760 Neighbors can however be distributed across multiple TIEs as long as the sets 5761 are disjoint.
Miscablings SHOULD be repeated in every node TIE, 5762 otherwise the behavior is undefined. 5764 @note: Observe that absence of fields implies defined defaults. 5766 8.2.21.1. Requested Entries 5768 Name Value Schema Description 5769 Version 5770 level 1 4.1 Level of the node. 5771 neighbors 2 4.1 Node's neighbors. If a neighbor systemID 5772 repeats in other node TIEs of the same 5773 node the behavior is undefined. 5774 capabilities 3 4.1 Capabilities of the node. 5775 flags 4 4.1 Flags of the node. 5776 name 5 4.1 Optional node name for easier 5777 operations. 5778 pod 6 4.1 PoD to which the node belongs. 5779 startup_time 7 4.1 Optional startup time of the node. 5780 miscabled_links 10 4.1 If any local links are miscabled, the 5781 indication is flooded. 5783 8.2.22. Registry RIFT_v4/encoding/PacketContent 5785 Content of a RIFT packet. 5787 8.2.22.1. Requested Entries 5789 Name Value Schema Version Description 5790 lie 1 4.1 5791 tide 2 4.1 5792 tire 3 4.1 5793 tie 4 4.1 5795 8.2.23. Registry RIFT_v4/encoding/PacketHeader 5797 Common RIFT packet header. 5799 8.2.23.1. Requested Entries 5801 Name Value Schema Description 5802 Version 5803 major_version 1 4.1 Major version of the protocol. 5804 minor_version 2 4.1 Minor version of the protocol. 5805 sender 3 4.1 Node sending the packet, in case of 5806 LIE/TIRE/TIDE also the originator of 5807 it. 5808 level 4 4.1 Level of the node sending the packet, 5809 required on everything except LIEs. 5810 Lack of presence on LIEs indicates 5811 UNDEFINED_LEVEL and is used in ZTP 5812 procedures. 5814 8.2.24. Registry RIFT_v4/encoding/PrefixAttributes 5816 Attributes of a prefix. 5818 8.2.24.1. Requested Entries 5820 Name Value Schema Description 5821 Version 5822 metric 2 4.1 Distance of the prefix. 5823 tags 3 4.1 Generic unordered set of route tags, 5824 can be redistributed to other 5825 protocols or used within the context 5826 of real time analytics. 5827 monotonic_clock 4 4.1 Monotonic clock for mobile 5828 addresses. 5829 loopback 6 4.1 Indicates if the interface is a node 5830 loopback. 5831 directly_attached 7 4.1 Indicates that the prefix is 5832 directly attached, i.e. should be 5833 routed to even if the node is in 5834 overload. 5835 from_link 10 4.1 In case of locally originated 5836 prefixes, i.e. interface 5837 addresses, this can describe which 5838 link the address belongs to. 5840 8.2.25. Registry RIFT_v4/encoding/PrefixTIEElement 5842 TIE carrying prefixes. 5844 8.2.25.1. Requested Entries 5846 Name Value Schema Description 5847 Version 5848 prefixes 1 4.1 Prefixes with the associated attributes. 5849 If the same prefix repeats in multiple TIEs of the 5850 same node the behavior is unspecified. 5852 8.2.26. Registry RIFT_v4/encoding/ProtocolPacket 5854 RIFT packet structure. 5856 8.2.26.1. Requested Entries 5858 Name Value Schema Version Description 5859 header 1 4.1 5860 content 2 4.1 5862 8.2.27. Registry RIFT_v4/encoding/TIDEPacket 5864 TIDE with sorted TIE headers; if headers are unsorted, behavior is 5865 undefined. 5867 8.2.27.1. Requested Entries 5869 Name Value Schema Version Description 5870 start_range 1 4.1 First TIE header in the TIDE 5871 packet. 5872 end_range 2 4.1 Last TIE header in the TIDE packet. 5873 headers 3 4.1 _Sorted_ list of headers. 5875 8.2.28. Registry RIFT_v4/encoding/TIEElement 5877 Single element in a TIE. 5879 Schema enum `common.TIETypeType` in the TIEID indicates which elements 5880 MUST be present in the TIEElement. In case of mismatch the 5881 unexpected elements MUST be ignored.
In case an expected 5882 element is missing, an error MUST be reported and the TIE MUST be 5883 ignored. 5885 This type can be extended with new optional elements for new 5886 `common.TIETypeType` values without breaking the major version, but if it is 5887 necessary to understand whether all nodes support the new type, a node 5888 capability must be added as well. 5890 8.2.28.1. Requested Entries

   Name  Value  Schema Version  Description
   node  1  4.1  Used in case of enum common.TIETypeType.NodeTIEType.
   prefixes  2  4.1  Used in case of enum common.TIETypeType.PrefixTIEType.
   positive_disaggregation_prefixes  3  4.1  Positive prefixes (always
      southbound). It MUST NOT be advertised within a North TIE and MUST
      be ignored otherwise.
   negative_disaggregation_prefixes  5  4.1  Transitive, negative
      prefixes (always southbound) which MUST be aggregated and
      propagated according to the specification southwards towards lower
      levels to heal pathological upper level partitioning, otherwise
      blackholes may occur in multiplane fabrics. It MUST NOT be
      advertised within a North TIE.
   external_prefixes  6  4.1  Externally reimported prefixes.
   positive_external_disaggregation_prefixes  7  4.1  Positive external
      disaggregated prefixes (always southbound). It MUST NOT be
      advertised within a North TIE and MUST be ignored otherwise.
   keyvalues  9  4.1  Key-Value store elements.

5930 8.2.29. Registry RIFT_v4/encoding/TIEHeader 5932 Header of a TIE. 5934 @note: TIEID space is a total order achieved by comparing the 5935 elements in the sequence defined and comparing each value as an unsigned 5936 integer of according length. 5938 @note: After the sequence number the lifetime received on the envelope 5939 must be used for comparison before further fields. 5941 @note: `origination_time` and `origination_lifetime` are normally 5942 disregarded for comparison purposes and carried purely for debugging/ 5943 security purposes if present. They may be used for comparison of 5944 last resort to differentiate otherwise equal TIEs. 5946 8.2.29.1. Requested Entries 5948 Name Value Schema Description 5949 Version 5950 tieid 2 4.1 ID of the TIE. 5951 seq_nr 3 4.1 Sequence number of the TIE. 5952 origination_time 10 4.1 Absolute timestamp when the TIE 5953 was generated. This can be used on 5954 fabrics with synchronized 5955 clock to prevent lifetime 5956 modification attacks. 5957 origination_lifetime 12 4.1 Original lifetime when the TIE 5958 was generated. This can be used on 5959 fabrics with synchronized 5960 clock to prevent lifetime 5961 modification attacks. 5963 8.2.30. Registry RIFT_v4/encoding/TIEHeaderWithLifeTime 5965 Header of a TIE as described in TIRE/TIDE. 5967 8.2.30.1. Requested Entries 5969 Name Value Schema Description 5970 Version 5971 header 1 4.1 5972 remaining_lifetime 2 4.1 Remaining lifetime that expires 5973 down to 0 just like in ISIS. 5974 TIEs with lifetimes differing by 5975 less than `lifetime_diff2ignore` 5976 MUST be considered EQUAL.
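A non-normative Python sketch of this comparison order (TIEID fields as unsigned integers in the defined sequence, then the sequence number, then the remaining lifetime); the dataclass is an illustrative stand-in for the Thrift-generated types, and the rolling-over sequence number arithmetic of Appendix A is omitted for brevity:

   # Non-normative sketch of TIE header comparison.
   from dataclasses import dataclass

   LIFETIME_DIFF2IGNORE = 400  # `lifetime_diff2ignore` schema constant

   @dataclass
   class TIEHeaderWithLifeTime:
       direction: int
       originator: int
       tietype: int
       tie_nr: int
       seq_nr: int
       remaining_lifetime: int

   def compare(a: TIEHeaderWithLifeTime, b: TIEHeaderWithLifeTime) -> int:
       """Return <0, 0 or >0 when a sorts before, equal to or after b."""
       key_a = (a.direction, a.originator, a.tietype, a.tie_nr, a.seq_nr)
       key_b = (b.direction, b.originator, b.tietype, b.tie_nr, b.seq_nr)
       if key_a != key_b:
           return -1 if key_a < key_b else 1
       # Lifetimes differing by less than `lifetime_diff2ignore`
       # MUST be considered EQUAL.
       if abs(a.remaining_lifetime - b.remaining_lifetime) < \
               LIFETIME_DIFF2IGNORE:
           return 0
       return -1 if a.remaining_lifetime < b.remaining_lifetime else 1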
5978 8.2.31. Registry RIFT_v4/encoding/TIEID 5980 ID of a TIE. 5982 @note: TIEID space is a total order achieved by comparing the 5983 elements in the sequence defined and comparing each value as an unsigned 5984 integer of according length. 5986 8.2.31.1. Requested Entries 5988 Name Value Schema Version Description 5989 direction 1 4.1 Direction of the TIE. 5990 originator 2 4.1 Indicates the originator of the TIE. 5991 tietype 3 4.1 Type of the TIE. 5992 tie_nr 4 4.1 Number of the TIE. 5994 8.2.32. Registry RIFT_v4/encoding/TIEPacket 5996 TIE packet. 5998 8.2.32.1. Requested Entries 6000 Name Value Schema Version Description 6001 header 1 4.1 6002 element 2 4.1 6004 8.2.33. Registry RIFT_v4/encoding/TIREPacket 6006 TIRE packet. 6008 8.2.33.1. Requested Entries 6010 Name Value Schema Version Description 6011 headers 1 4.1 6013 9. Acknowledgments 6015 A new routing protocol in its complexity is not a product of a single 6016 parent but of a village, as the author list already shows. However, many 6017 more people provided input and fine-combed the specification based on 6018 their experience in the design, implementation or application of 6019 protocols in IP fabrics. This section makes an inevitably inadequate 6020 attempt at recording their contributions. 6022 Many thanks to Naiming Shen for some of the early discussions around 6023 the topic of using IGPs for routing in topologies related to Clos. 6024 Russ White is to be especially acknowledged for the key conversation on 6025 epistemology that allowed tying current asynchronous distributed 6026 systems theory results to the modern protocol design presented in this 6027 scope. Adrian Farrel, Joel Halpern, Jeffrey Zhang, Krzysztof 6028 Szarkowicz, Nagendra Kumar, Melchior Aelmans, Kaushal Tank, Will 6029 Jones, Moin Ahmed, Sandy Zhang and Jordan Head (in no particular 6030 order) provided thoughtful comments that improved the readability of 6031 the document and found a good amount of corners where the light failed 6032 to shine. Kris Price was the first to mention single router, single arm 6033 default considerations. Jeff Tantsura helped out with some initial 6034 thoughts on BFD interactions while Jeff Haas corrected several 6035 misconceptions about BFD's finer points and helped to improve the 6036 security section around leaf considerations. Artur Makutunowicz 6037 pointed out many possible improvements and acted as a sounding board in 6038 regard to the modern protocol implementation techniques RIFT is 6039 exploring. Barak Gafni was the first to clearly formalize the problem of a 6040 partitioned spine and fallen leaves on a (clean) napkin in Singapore, 6041 which led to the very important part of the specification centered 6042 around multiple Top-of-Fabric planes and negative disaggregation. 6043 Igor Gashinsky and others shared many thoughts on problems 6044 encountered in the design and operation of large-scale data center 6045 fabrics. Xu Benchong found a delicate error in the flooding 6046 procedures and a schema datatype size mismatch. 6048 Last but not least, Alvaro Retana guided the undertaking by asking 6049 many necessary procedural and technical questions which not only 6050 improved the content but also laid out the track towards 6051 publication. 6053 10. References 6055 10.1. Normative References 6057 [EUI64] IEEE, "Guidelines for Use of Extended Unique Identifier 6058 (EUI), Organizationally Unique Identifier (OUI), and 6059 Company ID (CID)", IEEE EUI, 6060 . 6062 [ISO10589] 6063 ISO "International Organization for Standardization", 6064 "Intermediate system to Intermediate system intra-domain 6065 routeing information exchange protocol for use in 6066 conjunction with the protocol for providing the 6067 connectionless-mode Network Service (ISO 8473), ISO/IEC 6068 10589:2002, Second Edition.", Nov 2002.
6070 [RFC1982] Elz, R. and R. Bush, "Serial Number Arithmetic", RFC 1982, 6071 DOI 10.17487/RFC1982, August 1996, 6072 . 6074 [RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328, 6075 DOI 10.17487/RFC2328, April 1998, 6076 . 6078 [RFC2365] Meyer, D., "Administratively Scoped IP Multicast", BCP 23, 6079 RFC 2365, DOI 10.17487/RFC2365, July 1998, 6080 . 6082 [RFC4271] Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A 6083 Border Gateway Protocol 4 (BGP-4)", RFC 4271, 6084 DOI 10.17487/RFC4271, January 2006, 6085 . 6087 [RFC4291] Hinden, R. and S. Deering, "IP Version 6 Addressing 6088 Architecture", RFC 4291, DOI 10.17487/RFC4291, February 6089 2006, . 6091 [RFC5082] Gill, V., Heasley, J., Meyer, D., Savola, P., Ed., and C. 6092 Pignataro, "The Generalized TTL Security Mechanism 6093 (GTSM)", RFC 5082, DOI 10.17487/RFC5082, October 2007, 6094 . 6096 [RFC5120] Przygienda, T., Shen, N., and N. Sheth, "M-ISIS: Multi 6097 Topology (MT) Routing in Intermediate System to 6098 Intermediate Systems (IS-ISs)", RFC 5120, 6099 DOI 10.17487/RFC5120, February 2008, 6100 . 6102 [RFC5303] Katz, D., Saluja, R., and D. Eastlake 3rd, "Three-Way 6103 Handshake for IS-IS Point-to-Point Adjacencies", RFC 5303, 6104 DOI 10.17487/RFC5303, October 2008, 6105 . 6107 [RFC5549] Le Faucheur, F. and E. Rosen, "Advertising IPv4 Network 6108 Layer Reachability Information with an IPv6 Next Hop", 6109 RFC 5549, DOI 10.17487/RFC5549, May 2009, 6110 . 6112 [RFC5709] Bhatia, M., Manral, V., Fanto, M., White, R., Barnes, M., 6113 Li, T., and R. Atkinson, "OSPFv2 HMAC-SHA Cryptographic 6114 Authentication", RFC 5709, DOI 10.17487/RFC5709, October 6115 2009, . 6117 [RFC5881] Katz, D. and D. Ward, "Bidirectional Forwarding Detection 6118 (BFD) for IPv4 and IPv6 (Single Hop)", RFC 5881, 6119 DOI 10.17487/RFC5881, June 2010, 6120 . 6122 [RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, 6123 "Network Time Protocol Version 4: Protocol and Algorithms 6124 Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010, 6125 . 6127 [RFC6830] Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, "The 6128 Locator/ID Separation Protocol (LISP)", RFC 6830, 6129 DOI 10.17487/RFC6830, January 2013, 6130 . 6132 [RFC7752] Gredler, H., Ed., Medved, J., Previdi, S., Farrel, A., and 6133 S. Ray, "North-Bound Distribution of Link-State and 6134 Traffic Engineering (TE) Information Using BGP", RFC 7752, 6135 DOI 10.17487/RFC7752, March 2016, 6136 . 6138 [RFC7987] Ginsberg, L., Wells, P., Decraene, B., Przygienda, T., and 6139 H. Gredler, "IS-IS Minimum Remaining Lifetime", RFC 7987, 6140 DOI 10.17487/RFC7987, October 2016, 6141 . 6143 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 6144 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 6145 May 2017, . 6147 [RFC8200] Deering, S. and R. Hinden, "Internet Protocol, Version 6 6148 (IPv6) Specification", STD 86, RFC 8200, 6149 DOI 10.17487/RFC8200, July 2017, 6150 . 6152 [RFC8202] Ginsberg, L., Previdi, S., and W. Henderickx, "IS-IS 6153 Multi-Instance", RFC 8202, DOI 10.17487/RFC8202, June 6154 2017, . 6156 [RFC8505] Thubert, P., Ed., Nordmark, E., Chakrabarti, S., and C. 6157 Perkins, "Registration Extensions for IPv6 over Low-Power 6158 Wireless Personal Area Network (6LoWPAN) Neighbor 6159 Discovery", RFC 8505, DOI 10.17487/RFC8505, November 2018, 6160 . 6162 [thrift] Apache Software Foundation, "Thrift Interface Description 6163 Language", . 6165 10.2. 
Informative References 6167 [CLOS] Yuan, X., "On Nonblocking Folded-Clos Networks in Computer 6168 Communication Environments", IEEE International Parallel & 6169 Distributed Processing Symposium, 2011. 6171 [DIJKSTRA] 6172 Dijkstra, E., "A Note on Two Problems in Connexion with 6173 Graphs", Journal Numer. Math. , 1959. 6175 [DOT] Ellson, J. and L. Koutsofios, "Graphviz: open source graph 6176 drawing tools", Springer-Verlag , 2001. 6178 [DYNAMO] De Candia et al., G., "Dynamo: amazon's highly available 6179 key-value store", ACM SIGOPS symposium on Operating 6180 systems principles (SOSP '07), 2007. 6182 [EPPSTEIN] 6183 Eppstein, D., "Finding the k-Shortest Paths", 1997. 6185 [FATTREE] Leiserson, C., "Fat-Trees: Universal Networks for 6186 Hardware-Efficient Supercomputing", 1985. 6188 [IEEEstd1588] 6189 IEEE, "IEEE Standard for a Precision Clock Synchronization 6190 Protocol for Networked Measurement and Control Systems", 6191 IEEE Standard 1588, 6192 . 6194 [IEEEstd8021AS] 6195 IEEE, "IEEE Standard for Local and Metropolitan Area 6196 Networks - Timing and Synchronization for Time-Sensitive 6197 Applications in Bridged Local Area Networks", 6198 IEEE Standard 802.1AS, 6199 . 6201 [ISO10589-Second-Edition] 6202 International Organization for Standardization, 6203 "Intermediate system to Intermediate system intra-domain 6204 routeing information exchange protocol for use in 6205 conjunction with the protocol for providing the 6206 connectionless-mode Network Service (ISO 8473)", Nov 2002. 6208 [RFC0826] Plummer, D., "An Ethernet Address Resolution Protocol: Or 6209 Converting Network Protocol Addresses to 48.bit Ethernet 6210 Address for Transmission on Ethernet Hardware", STD 37, 6211 RFC 826, DOI 10.17487/RFC0826, November 1982, 6212 . 6214 [RFC2131] Droms, R., "Dynamic Host Configuration Protocol", 6215 RFC 2131, DOI 10.17487/RFC2131, March 1997, 6216 . 6218 [RFC3626] Clausen, T., Ed. and P. Jacquet, Ed., "Optimized Link 6219 State Routing Protocol (OLSR)", RFC 3626, 6220 DOI 10.17487/RFC3626, October 2003, 6221 . 6223 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 6224 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 6225 DOI 10.17487/RFC4861, September 2007, 6226 . 6228 [RFC4862] Thomson, S., Narten, T., and T. Jinmei, "IPv6 Stateless 6229 Address Autoconfiguration", RFC 4862, 6230 DOI 10.17487/RFC4862, September 2007, 6231 . 6233 [RFC6518] Lebovitz, G. and M. Bhatia, "Keying and Authentication for 6234 Routing Protocols (KARP) Design Guidelines", RFC 6518, 6235 DOI 10.17487/RFC6518, February 2012, 6236 . 6238 [RFC7938] Lapukhov, P., Premji, A., and J. Mitchell, Ed., "Use of 6239 BGP for Routing in Large-Scale Data Centers", RFC 7938, 6240 DOI 10.17487/RFC7938, August 2016, 6241 . 6243 [RFC8415] Mrugalski, T., Siodelski, M., Volz, B., Yourtchenko, A., 6244 Richardson, M., Jiang, S., Lemon, T., and T. Winters, 6245 "Dynamic Host Configuration Protocol for IPv6 (DHCPv6)", 6246 RFC 8415, DOI 10.17487/RFC8415, November 2018, 6247 . 6249 [VAHDAT08] 6250 Al-Fares, M., Loukissas, A., and A. Vahdat, "A Scalable, 6251 Commodity Data Center Network Architecture", SIGCOMM , 6252 2008. 6254 [Wikipedia] 6255 Wikipedia, 6256 "https://en.wikipedia.org/wiki/Serial_number_arithmetic", 6257 2016. 6259 Appendix A. Sequence Number Binary Arithmetic 6261 The only reasonable reference to a cleaner sequence number solution than 6262 [RFC1982] is given in [Wikipedia]. It basically converts the 6263 problem into two's complement arithmetic.
Assuming straight two's 6264 complement subtraction on the bit-width of the sequence number, the 6265 according >: and =: relations are defined as: 6267 U_1, U_2 are 12-bit aligned unsigned version numbers 6269 D_f is ( U_1 - U_2 ) interpreted as two's complement signed 12-bits 6270 D_b is ( U_2 - U_1 ) interpreted as two's complement signed 12-bits 6272 U_1 >: U_2 IIF D_f > 0 AND D_b < 0 6273 U_1 =: U_2 IIF D_f = 0 6275 The >: relationship is anti-symmetric but not transitive. Observe 6276 that this leaves >: of the numbers having maximum two's complement 6277 distance, e.g. ( 0 and 0x800 ), undefined in our 12-bits case since 6278 D_f and D_b are both -0x800. 6280 A simple example of the relationship in case of 3-bit arithmetic 6281 follows as table indicating D_f/D_b values and then the relationship 6282 of U_1 to U_2: 6284 U2 / U1 0 1 2 3 4 5 6 7 6285 0 +/+ +/- +/- +/- -/- -/+ -/+ -/+ 6286 1 -/+ +/+ +/- +/- +/- -/- -/+ -/+ 6287 2 -/+ -/+ +/+ +/- +/- +/- -/- -/+ 6288 3 -/+ -/+ -/+ +/+ +/- +/- +/- -/- 6289 4 -/- -/+ -/+ -/+ +/+ +/- +/- +/- 6290 5 +/- -/- -/+ -/+ -/+ +/+ +/- +/- 6291 6 +/- +/- -/- -/+ -/+ -/+ +/+ +/- 6292 7 +/- +/- +/- -/- -/+ -/+ -/+ +/+ 6294 U2 / U1 0 1 2 3 4 5 6 7 6295 0 = > > > ? < < < 6296 1 < = > > > ? < < 6297 2 < < = > > > ? < 6298 3 < < < = > > > ? 6299 4 ? < < < = > > > 6300 5 > ? < < < = > > 6301 6 > > ? < < < = > 6302 7 > > > ? < < < = 6304 Appendix B. Information Elements Schema 6306 This section introduces the schema for information elements. The IDL 6307 is Thrift [thrift]. 6309 On schema changes that 6311 1. change field numbers or 6313 2. add new *required* fields or 6314 3. remove any fields or 6316 4. change lists into sets, unions into structures or 6318 5. change multiplicity of fields or 6320 6. change names of any field or type or 6322 7. change data types of any field or 6324 8. add, change or remove a default value of any *existing* field 6325 or 6327 9. remove or change any defined constant or constant value or 6329 10. change any enumeration type except extending 6330 `common.TIETypeType` (use of enumeration types is generally 6331 discouraged) 6333 the major version of the schema MUST increase. All other changes MUST 6334 increase the minor version within the same major version. 6336 The above set of rules guarantees that every decoder can process 6337 serialized content generated by a higher minor version of the schema 6338 and with that the protocol can progress without a 'fork-lift'. 6339 Additionally, based on the propagated minor version in encoded 6340 content and added optional node capabilities, new TIE types or even 6341 de-facto mandatory fields can be introduced without progressing the 6342 major version, albeit only nodes supporting such new extensions would 6343 decode them. Given the model is encoded at the source and never re- 6344 encoded, flooding through nodes not understanding any new extensions 6345 will preserve the according fields. 6347 Content serialized using a major version X is NOT expected to be 6348 decodable by any implementation using a decoder for a model with a 6349 major version lower than X. 6351 Observe especially that introducing an optional field does not cause 6352 a major version increase even if the fields inside the structure are 6353 optional with defaults. 6355 All signed integers, as forced by Thrift [thrift] support, must be cast 6356 for internal purposes to equivalent unsigned values without 6357 discarding the signedness bit. An implementation SHOULD try to avoid 6358 using the signedness bit when generating values.
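The relations of Appendix A translate directly into code; a non-normative Python sketch for the 12-bit case follows, where masking to the bit-width also demonstrates the unsigned reinterpretation required above for Thrift-carried signed integers:

   # Non-normative sketch of the >: relation for 12-bit sequence numbers.
   BITS = 12
   MASK = (1 << BITS) - 1   # 0xFFF
   SIGN = 1 << (BITS - 1)   # 0x800

   def as_signed(value: int) -> int:
       """Interpret an unsigned 12-bit value as two's complement."""
       value &= MASK
       return value - (1 << BITS) if value & SIGN else value

   def serial_gt(u1: int, u2: int) -> bool:
       """U_1 >: U_2 IIF D_f > 0 AND D_b < 0."""
       d_f = as_signed(u1 - u2)
       d_b = as_signed(u2 - u1)
       return d_f > 0 and d_b < 0

   assert serial_gt(1, 0)
   assert serial_gt(0, 0xFFF)      # wrap-around case
   assert not serial_gt(0, 0x800)  # maximum distance stays undefined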
6360 The schema is normative. 6362 B.1. common.thrift 6364 /** 6365 Thrift file with common definitions for RIFT 6366 */ 6368 /** @note MUST be interpreted in implementation as unsigned 64 bits. 6369 * The implementation SHOULD NOT use the MSB. 6370 */ 6371 typedef i64 SystemIDType 6372 typedef i32 IPv4Address 6373 /** this has to be long enough to accommodate prefix */ 6374 typedef binary IPv6Address 6375 /** @note MUST be interpreted in implementation as unsigned */ 6376 typedef i16 UDPPortType 6377 /** @note MUST be interpreted in implementation as unsigned */ 6378 typedef i32 TIENrType 6379 /** @note MUST be interpreted in implementation as unsigned */ 6380 typedef i32 MTUSizeType 6381 /** @note MUST be interpreted in implementation as unsigned 6382 rolling over number */ 6383 typedef i64 SeqNrType 6384 /** @note MUST be interpreted in implementation as unsigned */ 6385 typedef i32 LifeTimeInSecType 6386 /** @note MUST be interpreted in implementation as unsigned */ 6387 typedef i8 LevelType 6388 /** optional, recommended monotonically increasing number 6389 _per packet type per adjacency_ 6390 that can be used to detect losses/misordering/restarts. 6391 @note MUST be interpreted in implementation as unsigned 6392 rolling over number */ 6393 typedef i16 PacketNumberType 6394 /** @note MUST be interpreted in implementation as unsigned */ 6395 typedef i32 PodType 6396 /** @note MUST be interpreted in implementation as unsigned. 6397 This is carried in the 6398 security envelope and MUST fit into 8 bits. */ 6399 typedef i8 VersionType 6400 /** @note MUST be interpreted in implementation as unsigned */ 6401 typedef i16 MinorVersionType 6402 /** @note MUST be interpreted in implementation as unsigned */ 6403 typedef i32 MetricType 6404 /** @note MUST be interpreted in implementation as unsigned 6405 and unstructured */ 6406 typedef i64 RouteTagType 6407 /** @note MUST be interpreted in implementation as unstructured 6408 label value */ 6409 typedef i32 LabelType 6410 /** @note MUST be interpreted in implementation as unsigned */ 6411 typedef i32 BandwithInMegaBitsType 6412 /** @note Key Value key ID type */ 6413 typedef string KeyIDType 6414 /** node local, unique identification for a link (interface/tunnel 6415 * etc. Basically anything RIFT runs on). This is kept 6416 * at 32 bits so it aligns with BFD [RFC5880] discriminator size. 6417 */ 6418 typedef i32 LinkIDType 6419 typedef string KeyNameType 6420 typedef i8 PrefixLenType 6421 /** timestamp in seconds since the epoch */ 6422 typedef i64 TimestampInSecsType 6423 /** security nonce. 6424 @note MUST be interpreted in implementation as rolling 6425 over unsigned value */ 6426 typedef i16 NonceType 6427 /** LIE FSM holdtime type */ 6428 typedef i16 TimeIntervalInSecType 6429 /** Transaction ID type for prefix mobility as specified by RFC6550, 6430 value MUST be interpreted in implementation as unsigned */ 6431 typedef i8 PrefixTransactionIDType 6432 /** Timestamp per IEEE 802.1AS, all values MUST be interpreted in 6433 implementation as unsigned. */ 6434 struct IEEE802_1ASTimeStampType { 6435 1: required i64 AS_sec; 6436 2: optional i32 AS_nsec; 6437 } 6438 /** generic counter type */ 6439 typedef i64 CounterType 6440 /** Platform Interface Index type, i.e. index of interface on hardware, 6441 can be used e.g. with RFC5837 */ 6442 typedef i32 PlatformInterfaceIndex 6444 /** Flags indicating node configuration in case of ZTP.
6445 */ 6446 enum HierarchyIndications { 6447 /** forces level to `leaf_level` and enables according procedures */ 6448 leaf_only = 0, 6449 /** forces level to `leaf_level` and enables according procedures */ 6450 leaf_only_and_leaf_2_leaf_procedures = 1, 6451 /** forces level to `top_of_fabric` and enables according 6452 procedures */ 6453 top_of_fabric = 2, 6454 } 6455 const PacketNumberType undefined_packet_number = 0 6456 /** This MUST be used when the node is configured as top of fabric in ZTP. 6457 This is kept reasonably low to allow for fast ZTP convergence on 6458 failures. */ 6459 const LevelType top_of_fabric_level = 24 6460 /** default bandwidth on a link */ 6461 const BandwithInMegaBitsType default_bandwidth = 100 6462 /** fixed leaf level when ZTP is not used */ 6463 const LevelType leaf_level = 0 6464 const LevelType default_level = leaf_level 6465 const PodType default_pod = 0 6466 const LinkIDType undefined_linkid = 0 6468 /** default distance used */ 6469 const MetricType default_distance = 1 6470 /** any distance larger than this will be considered infinity */ 6471 const MetricType infinite_distance = 0x7FFFFFFF 6472 /** represents invalid distance */ 6473 const MetricType invalid_distance = 0 6474 const bool overload_default = false 6475 const bool flood_reduction_default = true 6476 /** default LIE FSM holddown time */ 6477 const TimeIntervalInSecType default_lie_holdtime = 3 6478 /** default ZTP FSM holddown time */ 6479 const TimeIntervalInSecType default_ztp_holdtime = 1 6480 /** by default LIE levels are ZTP offers */ 6481 const bool default_not_a_ztp_offer = false 6482 /** by default everyone is repeating flooding */ 6483 const bool default_you_are_flood_repeater = true 6484 /** 0 is illegal for SystemID */ 6485 const SystemIDType IllegalSystemID = 0 6486 /** empty set of nodes */ 6487 const set<SystemIDType> empty_set_of_nodeids = {} 6488 /** default lifetime of TIE is one week */ 6489 const LifeTimeInSecType default_lifetime = 604800 6490 /** default lifetime when TIEs are purged is 5 minutes */ 6491 const LifeTimeInSecType purge_lifetime = 300 6492 /** round down interval when TIEs are sent with security hashes 6493 to prevent excessive computation. */ 6494 const LifeTimeInSecType rounddown_lifetime_interval = 60 6495 /** any `TieHeader` that has a smaller lifetime difference 6496 than this constant is equal (if other fields equal). This 6497 constant MUST be larger than `purge_lifetime` to avoid 6498 retransmissions */ 6499 const LifeTimeInSecType lifetime_diff2ignore = 400 6501 /** default UDP port to run LIEs on */ 6502 const UDPPortType default_lie_udp_port = 914 6503 /** default UDP port to receive TIEs on, that can be peer specific */ 6504 const UDPPortType default_tie_udp_flood_port = 915 6506 /** default MTU link size to use */ 6507 const MTUSizeType default_mtu_size = 1400 6508 /** default link being BFD capable */ 6509 const bool bfd_default = true 6511 /** undefined nonce, equivalent to missing nonce */ 6512 const NonceType undefined_nonce = 0; 6513 /** outer security key id, MUST be interpreted in implementation 6514 as unsigned */ 6515 typedef i8 OuterSecurityKeyID 6516 /** security key id, MUST be interpreted in implementation 6517 as unsigned */ 6518 typedef i32 TIESecurityKeyID 6519 /** undefined key */ 6520 const TIESecurityKeyID undefined_securitykey_id = 0; 6521 /** Maximum delta (negative or positive) that a mirrored nonce can 6522 deviate from local value to be considered valid.
If nonces are 6523 changed every minute on both sides, this statistically opens 6524 a `maximum_valid_nonce_delta` minute window of identical LIE, 6525 TIE, TI(x)E replays. 6526 The interval cannot be too small since the LIE FSM may change 6527 states fairly quickly during ZTP without sending LIEs */ 6528 const i16 maximum_valid_nonce_delta = 5; 6530 /** Direction of TIEs. */ 6531 enum TieDirectionType { 6532 Illegal = 0, 6533 South = 1, 6534 North = 2, 6535 DirectionMaxValue = 3, 6536 } 6538 /** Address family type. */ 6539 enum AddressFamilyType { 6540 Illegal = 0, 6541 AddressFamilyMinValue = 1, 6542 IPv4 = 2, 6543 IPv6 = 3, 6544 AddressFamilyMaxValue = 4, 6545 } 6547 /** IPv4 prefix type. */ 6548 struct IPv4PrefixType { 6549 1: required IPv4Address address; 6550 2: required PrefixLenType prefixlen; 6552 } 6554 /** IPv6 prefix type. */ 6555 struct IPv6PrefixType { 6556 1: required IPv6Address address; 6557 2: required PrefixLenType prefixlen; 6558 } 6560 /** IP address type. */ 6561 union IPAddressType { 6562 /** Content is IPv4 */ 6563 1: optional IPv4Address ipv4address; 6564 /** Content is IPv6 */ 6565 2: optional IPv6Address ipv6address; 6566 } 6568 /** Prefix advertisement. 6570 @note: for interface 6571 addresses the protocol can propagate the address part beyond 6572 the subnet mask and on reachability computation that has to 6573 be normalized. The non-significant bits can be used 6574 for operational purposes. 6575 */ 6576 union IPPrefixType { 6577 1: optional IPv4PrefixType ipv4prefix; 6578 2: optional IPv6PrefixType ipv6prefix; 6579 } 6581 /** Sequence of a prefix in case of a move. 6582 */ 6583 struct PrefixSequenceType { 6584 1: required IEEE802_1ASTimeStampType timestamp; 6585 /** Transaction ID set by client, e.g. in 6LoWPAN. */ 6586 2: optional PrefixTransactionIDType transactionid; 6587 } 6589 /** Type of TIE. 6591 This enum indicates what TIE type the TIE is carrying. 6592 In case the value is not known to the receiver, 6593 the TIE MUST be re-flooded. This allows for 6594 future extensions of the protocol within the same major schema 6595 with types opaque to some nodes, UNLESS the flooding scope is not 6596 the same as for a prefix TIE, in which case a major version revision MUST 6597 be performed. 6598 */ 6599 enum TIETypeType { 6600 Illegal = 0, 6601 TIETypeMinValue = 1, 6602 /** first legal value */ 6603 NodeTIEType = 2, 6604 PrefixTIEType = 3, 6605 PositiveDisaggregationPrefixTIEType = 4, 6606 NegativeDisaggregationPrefixTIEType = 5, 6607 PGPrefixTIEType = 6, 6608 KeyValueTIEType = 7, 6609 ExternalPrefixTIEType = 8, 6610 PositiveExternalDisaggregationPrefixTIEType = 9, 6611 TIETypeMaxValue = 10, 6612 } 6614 /** RIFT route types. 6616 @note: route types MUST be ordered on their preference. 6617 PGP prefixes are most preferred, attracting 6618 traffic north (towards the spine) and then south; 6619 normal prefixes are attracting traffic south 6620 (towards the leaves), i.e. a prefix in a NORTH PREFIX TIE 6621 is preferred over one in a SOUTH PREFIX TIE. 6623 @note: The only purpose of those values is to introduce an 6624 ordering, whereas an implementation can internally choose 6625 any other values as long as the ordering is preserved 6626 */ 6627 enum RouteType { 6628 Illegal = 0, 6629 RouteTypeMinValue = 1, 6630 /** First legal value. */ 6631 /** Discard routes are most preferred */ 6632 Discard = 2, 6634 /** Local prefixes are directly attached prefixes on the 6635 * system such as e.g. interface routes.
6636 */ 6637 LocalPrefix = 3, 6638 /** Advertised in S-TIEs */ 6639 SouthPGPPrefix = 4, 6640 /** Advertised in N-TIEs */ 6641 NorthPGPPrefix = 5, 6642 /** Advertised in N-TIEs */ 6643 NorthPrefix = 6, 6644 /** Externally imported north */ 6645 NorthExternalPrefix = 7, 6646 /** Advertised in S-TIEs, either normal prefix or positive 6647 disaggregation */ 6649 SouthPrefix = 8, 6650 /** Externally imported south */ 6651 SouthExternalPrefix = 9, 6652 /** Negative, transitive prefixes are least preferred */ 6653 NegativeSouthPrefix = 10, 6654 RouteTypeMaxValue = 11, 6655 } 6657 B.2. encoding.thrift 6659 /** 6660 Thrift file for packet encodings for RIFT 6661 */ 6663 include "common.thrift" 6665 namespace rs models 6666 namespace py encoding 6668 /** Represents protocol encoding schema major version */ 6669 const common.VersionType protocol_major_version = 4 6670 /** Represents protocol encoding schema minor version */ 6671 const common.MinorVersionType protocol_minor_version = 1 6673 /** Common RIFT packet header. */ 6674 struct PacketHeader { 6675 /** Major version of protocol. */ 6676 1: required common.VersionType major_version = 6677 protocol_major_version; 6678 /** Minor version of protocol. */ 6679 2: required common.MinorVersionType minor_version = 6680 protocol_minor_version; 6681 /** Node sending the packet, in case of LIE/TIRE/TIDE 6682 also the originator of it. */ 6683 3: required common.SystemIDType sender; 6684 /** Level of the node sending the packet, required on everything 6685 except LIEs. Lack of presence on LIEs indicates UNDEFINED_LEVEL 6686 and is used in ZTP procedures. 6687 */ 6688 4: optional common.LevelType level; 6689 } 6691 /** Prefix community. */ 6692 struct Community { 6693 /** Higher order bits */ 6694 1: required i32 top; 6695 /** Lower order bits */ 6696 2: required i32 bottom; 6697 } 6699 /** Neighbor structure. */ 6700 struct Neighbor { 6701 /** System ID of the originator. */ 6702 1: required common.SystemIDType originator; 6703 /** ID of remote side of the link. */ 6704 2: required common.LinkIDType remote_id; 6705 } 6707 /** Capabilities the node supports. 6709 @note: The schema may add to this 6710 field future capabilities to indicate whether it will support 6711 interpretation of future schema extensions on the same major 6712 revision. Such fields MUST be optional and have an implicit or 6713 explicit false default value. If a future capability changes route 6714 selection or generates blackholes if some nodes are not supporting 6715 it then a major version increment is unavoidable. 6716 */ 6717 struct NodeCapabilities { 6718 /** Must advertise the supported minor version dialect this way. */ 6719 1: required common.MinorVersionType protocol_minor_version = 6720 protocol_minor_version; 6721 /** Can this node participate in flood reduction. */ 6722 2: optional bool flood_reduction = 6723 common.flood_reduction_default; 6724 /** Does this node restrict itself to be top-of-fabric or 6725 leaf only (in ZTP) and does it support leaf-2-leaf 6726 procedures. */ 6727 3: optional common.HierarchyIndications hierarchy_indications; 6728 } 6730 /** Link capabilities. */ 6731 struct LinkCapabilities { 6732 /** Indicates that the link is supporting BFD. */ 6733 1: optional bool bfd = 6734 common.bfd_default; 6735 /** Indicates whether the interface will support v4 forwarding. 6737 @note: This MUST be set to true when LIEs from a v4 address are 6738 sent and MAY be set to true in LIEs on a v6 address.
If v4 6739 and v6 LIEs indicate contradicting information the 6740 behavior is unspecified. */ 6742 2: optional bool v4_forwarding_capable = 6743 true; 6744 } 6746 /** RIFT LIE Packet. 6748 @note: this node's level is already included in the packet header 6749 */ 6750 struct LIEPacket { 6751 /** Node or adjacency name. */ 6752 1: optional string name; 6753 /** Local link ID. */ 6754 2: required common.LinkIDType local_id; 6755 /** UDP port on which we can receive flooded TIEs. */ 6756 3: required common.UDPPortType flood_port = 6757 common.default_tie_udp_flood_port; 6758 /** Layer 3 MTU, used to discover MTU mismatch. */ 6759 4: optional common.MTUSizeType link_mtu_size = 6760 common.default_mtu_size; 6761 /** Local link bandwidth on the interface. */ 6762 5: optional common.BandwithInMegaBitsType 6763 link_bandwidth = common.default_bandwidth; 6764 /** Reflects the neighbor once received to provide 6765 3-way connectivity. */ 6766 6: optional Neighbor neighbor; 6767 /** Node's PoD. */ 6768 7: optional common.PodType pod = 6769 common.default_pod; 6770 /** Node capabilities shown in the LIE. The capabilities 6771 MUST match the capabilities shown in the Node TIEs, otherwise 6772 the behavior is unspecified. A node detecting the mismatch 6773 SHOULD generate an according error. */ 6774 10: required NodeCapabilities node_capabilities; 6775 /** Capabilities of this link. */ 6776 11: optional LinkCapabilities link_capabilities; 6777 /** Required holdtime of the adjacency, i.e. how much time 6778 MUST expire without LIEs for the adjacency to drop. */ 6779 12: required common.TimeIntervalInSecType 6780 holdtime = common.default_lie_holdtime; 6781 /** Unsolicited, downstream assigned locally significant label 6782 value for the adjacency. */ 6783 13: optional common.LabelType label; 6784 /** Indicates that the level on the LIE MUST NOT be used 6785 to derive a ZTP level by the receiving node. */ 6786 21: optional bool not_a_ztp_offer = 6787 common.default_not_a_ztp_offer; 6788 /** Indicates to the northbound neighbor that it should 6789 be reflooding this node's N-TIEs to achieve flood reduction and 6790 balancing for northbound flooding. To be ignored if received 6791 from a northbound adjacency. */ 6792 22: optional bool you_are_flood_repeater = 6793 common.default_you_are_flood_repeater; 6794 /** Can be optionally set to indicate to the neighbor that packet losses 6795 are seen on reception based on packet numbers or the rate is 6796 too high. The receiver SHOULD temporarily slow down 6797 flooding rates. 6798 */ 6799 23: optional bool you_are_sending_too_quickly = 6800 false; 6801 /** Instance name in case multiple RIFT instances are running on the same 6802 interface. */ 6803 24: optional string instance_name; 6804 } 6806 /** A LinkID pair describes one of the parallel links between two nodes. */ 6807 struct LinkIDPair { 6808 /** Node-wide unique value for the local link. */ 6809 1: required common.LinkIDType local_id; 6810 /** Received remote link ID for this link. */ 6811 2: required common.LinkIDType remote_id; 6813 /** Describes the local interface index of the link. */ 6814 10: optional common.PlatformInterfaceIndex platform_interface_index; 6815 /** Describes the local interface name. */ 6816 11: optional string platform_interface_name; 6817 /** Indication whether the link is secured, i.e. protected by 6818 outer key, absence of this element means no indication, 6819 undefined outer key means not secured.
*/ 6820 12: optional common.OuterSecurityKeyID 6821 trusted_outer_security_key; 6822 /** Indication whether the link is protected by established 6823 BFD session. */ 6824 13: optional bool bfd_up; 6825 /** Optional indication which address families are up on the 6826 interface */ 6827 14: optional set<common.AddressFamilyType> address_families; 6828 } 6830 /** ID of a TIE. 6832 @note: TIEID space is a total order achieved by comparing 6833 the elements in the sequence defined and comparing each 6834 value as an unsigned integer of according length. 6835 */ 6836 struct TIEID { 6837 /** direction of TIE */ 6838 1: required common.TieDirectionType direction; 6839 /** indicates originator of the TIE */ 6840 2: required common.SystemIDType originator; 6841 /** type of the TIE */ 6842 3: required common.TIETypeType tietype; 6843 /** number of the TIE */ 6844 4: required common.TIENrType tie_nr; 6845 } 6847 /** Header of a TIE. 6849 @note: TIEID space is a total order achieved by comparing 6850 the elements in the sequence defined and comparing each 6851 value as an unsigned integer of according length. 6853 @note: After the sequence number the lifetime received on the envelope 6854 must be used for comparison before further fields. 6856 @note: `origination_time` and `origination_lifetime` are 6857 normally disregarded for comparison purposes and carried 6858 purely for debugging/security purposes if present. 6859 They may be used for comparison of last resort to 6860 differentiate otherwise equal TIEs 6861 */ 6862 struct TIEHeader { 6863 /** ID of the TIE. */ 6864 2: required TIEID tieid; 6865 /** Sequence number of the TIE. */ 6866 3: required common.SeqNrType seq_nr; 6868 /** Absolute timestamp when the TIE 6869 was generated. This can be used on fabrics with 6870 synchronized clock to prevent lifetime modification attacks. */ 6871 10: optional common.IEEE802_1ASTimeStampType origination_time; 6872 /** Original lifetime when the TIE 6873 was generated. This can be used on fabrics with 6874 synchronized clock to prevent lifetime modification attacks. */ 6875 12: optional common.LifeTimeInSecType origination_lifetime; 6876 } 6878 /** Header of a TIE as described in TIRE/TIDE. 6879 */ 6880 struct TIEHeaderWithLifeTime { 6881 1: required TIEHeader header; 6882 /** Remaining lifetime that expires down to 0 just like in ISIS. 6883 TIEs with lifetimes differing by less than 6884 `lifetime_diff2ignore` MUST be considered EQUAL. */ 6885 2: required common.LifeTimeInSecType remaining_lifetime; 6887 } 6889 /** TIDE with sorted TIE headers, if headers are unsorted, behavior 6890 is undefined. */ 6891 struct TIDEPacket { 6892 /** First TIE header in the tide packet. */ 6893 1: required TIEID start_range; 6894 /** Last TIE header in the tide packet. */ 6895 2: required TIEID end_range; 6896 /** _Sorted_ list of headers. */ 6897 3: required list<TIEHeaderWithLifeTime> headers; 6898 } 6900 /** TIRE packet */ 6901 struct TIREPacket { 6902 1: required set<TIEHeaderWithLifeTime> headers; 6903 } 6905 /** neighbor of a node */ 6906 struct NodeNeighborsTIEElement { 6907 /** level of neighbor */ 6908 1: required common.LevelType level; 6909 /** Cost to neighbor. 6911 @note: All parallel links to the same node 6912 incur the same cost; in case the neighbor has multiple 6913 parallel links at different cost, the largest distance 6914 (highest numerical value) MUST be advertised.
6916 @note: any neighbor with cost <= 0 MUST be ignored 6917 in computations */ 6918 3: optional common.MetricType cost 6919 = common.default_distance; 6920 /** can carry description of multiple parallel links in a TIE */ 6921 4: optional set<LinkIDPair> link_ids; 6923 /** total bandwidth to the neighbor; this will normally be the sum of the 6924 bandwidths of all the parallel links. */ 6925 5: optional common.BandwithInMegaBitsType 6926 bandwidth = common.default_bandwidth; 6927 } 6929 /** Indication flags of the node. */ 6930 struct NodeFlags { 6931 /** Indicates that node is in overload, do not transit traffic 6932 through it. */ 6933 1: optional bool overload = common.overload_default; 6934 } 6935 /** Description of a node. 6937 It may occur multiple times in different TIEs but if either 6939 capabilities values do not match or 6940 flags values do not match or 6941 neighbors repeat with different values 6944 the behavior is undefined and a warning SHOULD be generated. 6945 Neighbors can however be distributed across multiple TIEs as long as 6946 the sets are disjoint. Miscablings SHOULD be repeated in every 6947 node TIE, otherwise the behavior is undefined. 6949 @note: Observe that absence of fields implies defined defaults. 6950 */ 6951 struct NodeTIEElement { 6952 /** Level of the node. */ 6953 1: required common.LevelType level; 6954 /** Node's neighbors. If a neighbor systemID repeats in other 6955 node TIEs of the same node the behavior is undefined. */ 6956 2: required map<common.SystemIDType, NodeNeighborsTIEElement> neighbors; 6958 /** Capabilities of the node. */ 6959 3: required NodeCapabilities capabilities; 6960 /** Flags of the node. */ 6961 4: optional NodeFlags flags; 6962 /** Optional node name for easier operations. */ 6963 5: optional string name; 6964 /** PoD to which the node belongs. */ 6965 6: optional common.PodType pod; 6966 /** optional startup time of the node */ 6967 7: optional common.TimestampInSecsType startup_time; 6969 /** If any local links are miscabled, the indication is flooded. */ 6970 10: optional set<common.LinkIDType> miscabled_links; 6972 } 6974 /** Attributes of a prefix. */ 6975 struct PrefixAttributes { 6976 /** Distance of the prefix. */ 6977 2: required common.MetricType metric 6978 = common.default_distance; 6979 /** Generic unordered set of route tags, can be redistributed 6980 to other protocols or used within the context of real time 6981 analytics. */ 6982 3: optional set<common.RouteTagType> tags; 6983 /** Monotonic clock for mobile addresses. */ 6984 4: optional common.PrefixSequenceType monotonic_clock; 6985 /** Indicates if the interface is a node loopback. */ 6986 6: optional bool loopback = false; 6987 /** Indicates that the prefix is directly attached, i.e. should be 6988 routed to even if the node is in overload. */ 6989 7: optional bool directly_attached = true; 6991 /** In case of locally originated prefixes, i.e. interface 6992 addresses, this can describe which link the address 6993 belongs to. */ 6994 10: optional common.LinkIDType from_link; 6995 } 6997 /** TIE carrying prefixes */ 6998 struct PrefixTIEElement { 6999 /** Prefixes with the associated attributes. 7000 If the same prefix repeats in multiple TIEs of the same node 7001 behavior is unspecified. */ 7002 1: required map<common.IPPrefixType, PrefixAttributes> prefixes; 7003 } 7005 /** Generic key value pairs. */ 7006 struct KeyValueTIEElement { 7007 /** @note: if the same key repeats in multiple TIEs of the same node 7008 or with different values, the behavior is unspecified */ 7009 1: required map<common.KeyIDType, string> keyvalues; 7010 } 7012 /** Single element in a TIE.
    /** Single element in a TIE.

        Schema enum `common.TIETypeType` in TIEID indicates which
        elements MUST be present in the TIEElement.  In case of a
        mismatch the unexpected elements MUST be ignored.  In case
        an expected element is missing, an error MUST be reported
        and the TIE MUST be ignored.

        This type can be extended with new optional elements for
        new `common.TIETypeType` values without breaking the major
        version, but if it is necessary to understand whether all
        nodes support the new type, a node capability must be added
        as well.
    */
    union TIEElement {
        /** Used in case of enum common.TIETypeType.NodeTIEType. */
        1: optional NodeTIEElement                node;
        /** Used in case of enum
            common.TIETypeType.PrefixTIEType. */
        2: optional PrefixTIEElement              prefixes;
        /** Positive prefixes (always southbound).
            It MUST NOT be advertised within a North TIE and MUST
            be ignored otherwise.
        */
        3: optional PrefixTIEElement
                positive_disaggregation_prefixes;
        /** Transitive, negative prefixes (always southbound) which
            MUST be aggregated and propagated according to the
            specification southwards towards lower levels to heal
            pathological upper level partitioning, otherwise
            blackholes may occur in multiplane fabrics.
            It MUST NOT be advertised within a North TIE.
        */
        5: optional PrefixTIEElement
                negative_disaggregation_prefixes;
        /** Externally reimported prefixes. */
        6: optional PrefixTIEElement              external_prefixes;
        /** Positive external disaggregated prefixes (always
            southbound).
            It MUST NOT be advertised within a North TIE and MUST
            be ignored otherwise.
        */
        7: optional PrefixTIEElement
                positive_external_disaggregation_prefixes;
        /** Key-Value store elements. */
        9: optional KeyValueTIEElement            keyvalues;
    }

    /** TIE packet */
    struct TIEPacket {
        1: required TIEHeader                     header;
        2: required TIEElement                    element;
    }

    /** Content of a RIFT packet. */
    union PacketContent {
        1: optional LIEPacket                     lie;
        2: optional TIDEPacket                    tide;
        3: optional TIREPacket                    tire;
        4: optional TIEPacket                     tie;
    }

    /** RIFT packet structure. */
    struct ProtocolPacket {
        1: required PacketHeader                  header;
        2: required PacketContent                 content;
    }
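 The coupling between `common.TIETypeType` and the `TIEElement`
 union members can be expressed as a simple dispatch table.  In the
 following non-normative Python sketch only `NodeTIEType` and
 `PrefixTIEType` appear literally in the schema comments above; the
 remaining enum member names are assumptions modeled on the common
 schema:

    # Assumed mapping of TIE type enum names to the union member
    # that MUST be present for that type.
    EXPECTED_ELEMENT = {
        "NodeTIEType": "node",
        "PrefixTIEType": "prefixes",
        "PositiveDisaggregationPrefixTIEType":
            "positive_disaggregation_prefixes",
        "NegativeDisaggregationPrefixTIEType":
            "negative_disaggregation_prefixes",
        "ExternalPrefixTIEType": "external_prefixes",
        "PositiveExternalDisaggregationPrefixTIEType":
            "positive_external_disaggregation_prefixes",
        "KeyValueTIEType": "keyvalues",
    }

    def element_of(tie):
        """Return the element matching the TIE's type.  Unexpected
        members are simply ignored; a missing expected member is an
        error and the whole TIE MUST be ignored."""
        wanted = EXPECTED_ELEMENT[tie.header.tieid.tietype]
        value = getattr(tie.element, wanted, None)
        if value is None:
            raise ValueError("expected element missing, TIE ignored")
        return value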
Appendix C.  Constants

C.1.  Configurable Protocol Constants

   This section gathers constants that are provided in the schema
   files and in the document.

   +----------------+--------------+-----------------------------------+
   | Constant       | Type         | Value                             |
   +----------------+--------------+-----------------------------------+
   | LIE IPv4       | Default      | 224.0.0.120 or all-rift-routers   |
   | Multicast      | Value,       | to be assigned in IPv4 Multicast  |
   | Address        | Configurable | Address Space Registry in Local   |
   |                |              | Network Control Block             |
   +----------------+--------------+-----------------------------------+
   | LIE IPv6       | Default      | FF02::A1F7 or all-rift-routers to |
   | Multicast      | Value,       | be assigned in IPv6 Multicast     |
   | Address        | Configurable | Address Assignments               |
   +----------------+--------------+-----------------------------------+
   | LIE            | Default      | 914                               |
   | Destination    | Value,       |                                   |
   | Port           | Configurable |                                   |
   +----------------+--------------+-----------------------------------+
   | Level value    | Constant     | 24                                |
   | for            |              |                                   |
   | TOP_OF_FABRIC  |              |                                   |
   | flag           |              |                                   |
   +----------------+--------------+-----------------------------------+
   | Default LIE    | Default      | 3 seconds                         |
   | Holdtime       | Value,       |                                   |
   |                | Configurable |                                   |
   +----------------+--------------+-----------------------------------+
   | TIE            | Default      | 1 second                          |
   | Retransmission | Value        |                                   |
   | Interval       |              |                                   |
   +----------------+--------------+-----------------------------------+
   | TIDE           | Default      | 5 seconds                         |
   | Generation     | Value,       |                                   |
   | Interval       | Configurable |                                   |
   +----------------+--------------+-----------------------------------+
   | MIN_TIEID      | Constant     | TIE Key with minimal values:      |
   | signifies      |              | TIEID(originator=0,               |
   | start of TIDEs |              | tietype=TIETypeMinValue,          |
   |                |              | tie_nr=0, direction=South)        |
   +----------------+--------------+-----------------------------------+
   | MAX_TIEID      | Constant     | TIE Key with maximal values:      |
   | signifies end  |              | TIEID(originator=MAX_UINT64,      |
   | of TIDEs       |              | tietype=TIETypeMaxValue,          |
   |                |              | tie_nr=MAX_UINT64,                |
   |                |              | direction=North)                  |
   +----------------+--------------+-----------------------------------+

                        Table 6: all_constants

Authors' Addresses

   Tony Przygienda (editor)
   Juniper
   1137 Innovation Way
   Sunnyvale, CA
   USA

   Email: prz@juniper.net

   Alankar Sharma
   Comcast
   1800 Bishops Gate Blvd
   Mount Laurel, NJ  08054
   US

   Email: Alankar_Sharma@comcast.com

   Pascal Thubert
   Cisco Systems, Inc
   Building D
   45 Allee des Ormes - BP1200
   MOUGINS - Sophia Antipolis  06254
   FRANCE

   Phone: +33 497 23 26 34
   Email: pthubert@cisco.com

   Bruno Rijsman
   Individual

   Email: brunorijsman@gmail.com

   Dmitry Afanasiev
   Yandex

   Email: fl0w@yandex-team.ru