RIFT WG                                                  Yuehua Wei, Ed.
Internet-Draft                                               Zheng Zhang
Intended status: Informational                           ZTE Corporation
Expires: 16 April 2021                                  Dmitry Afanasiev
                                                                  Yandex
                                                             Tom Verhaeg
                                                        Juniper Networks
                                                      Jaroslaw Kowalczyk
                                                           Orange Polska
                                                              P. Thubert
                                                           Cisco Systems
                                                         13 October 2020

                           RIFT Applicability
                   draft-ietf-rift-applicability-03

Abstract

   This document discusses the properties, applicability, and
   operational considerations of RIFT in different network scenarios.
   It is intended to provide a rough guide to how RIFT can be deployed
   to simplify routing operations in Clos topologies and their
   variations.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 16 April 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents (https://trustee.ietf.org/
   license-info) in effect on the date of publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Problem Statement of Routing in Modern IP Fabric Fat Tree
       Networks
   3.  Applicability of RIFT to Clos IP Fabrics
     3.1.  Overview of RIFT
     3.2.  Applicable Topologies
       3.2.1.  Horizontal Links
       3.2.2.  Vertical Shortcuts
       3.2.3.  Generalizing to any Directed Acyclic Graph
     3.3.  Use Cases
       3.3.1.  DC Fabrics
       3.3.2.  Metro Fabrics
       3.3.3.  Building Cabling
       3.3.4.  Internal Router Switching Fabrics
       3.3.5.  CloudCO
   4.  Deployment Considerations
     4.1.  South Reflection
     4.2.  Suboptimal Routing on Link Failures
     4.3.  Black-Holing on Link Failures
     4.4.  Zero Touch Provisioning (ZTP)
     4.5.  Miscabling Examples
     4.6.  Positive vs. Negative Disaggregation
     4.7.  Mobile Edge and Anycast
     4.8.  IPv4 over IPv6
     4.9.  In-Band Reachability of Nodes
     4.10. Dual Homing Servers
     4.11. Fabric With A Controller
       4.11.1.  Controller Attached to ToFs
       4.11.2.  Controller Attached to Leaf
     4.12. Internet Connectivity With Underlay
       4.12.1.  Internet Default on the Leaf
       4.12.2.  Internet Default on the ToFs
     4.13. Subnet Mismatch and Address Families
     4.14. Anycast Considerations
   5.  Acknowledgements
   6.  Contributors
   7.  Normative References
   8.  Informative References
   Authors' Addresses

1.  Introduction

   This document explains the properties and applicability of "Routing
   in Fat Trees" [RIFT] in different deployment scenarios and
   highlights the operational simplicity of the technology compared to
   traditional routing solutions.  It also documents special
   considerations that apply when RIFT is used with or without
   overlays and controllers, and describes how RIFT corrects topology
   miscablings and deals with node and link failures.

2.  Problem Statement of Routing in Modern IP Fabric Fat Tree Networks

   Clos and Fat-Tree topologies have gained prominence in today's
   networking, primarily as a result of the paradigm shift towards a
   centralized, data-center-based architecture that is poised to
   deliver a majority of computation and storage services in the
   future.

   Today's routing protocols were originally geared towards networks
   with an irregular topology and a low degree of connectivity.  When
   they are applied to Fat-Tree topologies:

   *  they tend to need extensive configuration or provisioning during
      bring-up and re-dimensioning.

   *  spine and leaf nodes carry the entire network topology and
      routing information, which is in fact not needed on the leaf
      nodes during normal operation.
   *  significant Link State PDU (LSP) flooding duplication between
      spine and leaf nodes occurs during network bring-up and topology
      updates.  It consumes CPU and link bandwidth on both spine and
      leaf nodes and with that limits protocol scalability.

3.  Applicability of RIFT to Clos IP Fabrics

   The remainder of this document assumes that the reader is familiar
   with the terms and concepts used in the OSPF [RFC2328] and IS-IS
   [ISO10589-Second-Edition] link-state protocols, and with at least
   the sections of [RIFT] outlining the requirements of routing in IP
   fabrics and the RIFT protocol concepts.

3.1.  Overview of RIFT

   RIFT is a dynamic routing protocol for Clos and fat-tree network
   topologies.  It behaves as a link-state protocol when "pointing
   north" and as a path-vector protocol when "pointing south".

   It floods flat link-state information northbound only, so that each
   level obtains the full topology of the levels south of it.  That
   information is never flooded east-west or back south again.  A top-
   tier node therefore holds the full set of prefixes resulting from
   the SPF calculation.

   In the southbound direction the protocol operates like a "fully
   summarizing, unidirectional" path-vector protocol, or rather a
   distance-vector protocol with implicit split horizon, where the
   information propagates one hop south and is 're-advertised' by
   nodes at the next lower level, normally just as the default route.
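   The directional asymmetry described above can be illustrated with a
   short Python sketch (a toy model for illustration only, not the
   RIFT encoding; all names are invented): northbound a node floods
   everything it knows, southbound it fully summarizes, normally into
   just a default route.

```python
# Toy model of RIFT's directional advertisement rules (illustrative only).
DEFAULT = "0.0.0.0/0"

def advertise_north(known_prefixes):
    """Northbound: flood the full link-state information unchanged."""
    return set(known_prefixes)

def advertise_south(known_prefixes):
    """Southbound: fully summarize, normally into just a default route."""
    return {DEFAULT} if known_prefixes else set()

leaf_prefixes = {"10.1.1.0/24", "10.1.2.0/24"}
assert advertise_north(leaf_prefixes) == leaf_prefixes
assert advertise_south(leaf_prefixes) == {DEFAULT}
```

   The consequence of this split is visible in the table sizes: a ToF
   node learns every southern prefix, while a leaf normally holds
   little more than the default.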
                     +-----------+        +-----------+
                     |    ToF    |        |    ToF    |          LEVEL 2
                   + +-----+--+--+        +-+--+------+
                   |    |  |  |             |  |  |  |           ^
                   +    |  |  |  +-------------------------+     |
            Distance    |  +-------------------+  |  |  |  |     |
            Vector      |     |  |             |  |  |  |        +
            South       |     |  |  +--------+ |  |  |  |    Link-state
                   +    |     |  |  |        | |  |  |  |     Flooding
                   |    |     |  +-------------+  |  |  |      North
                   v    |     |     |        | |  |  |  |        +
                     +-+--+-+ +------+  +-------+  +--+--+-+     |
                     |SPINE | |SPINE |  | SPINE |  | SPINE |     |  LEVEL 1
                   + ++----++ ++---+-+  +--+--+-+  ++----+-+     |
                   + | |  |    |   |       |  |     |    |       ^  N
            Distance | +-------+   |  |  +--------+ |    |       |  E
            Vector   | |   |   |   |  |  |        | |    |       +------>
            South    | +-------+   |  |  |  +-------+    |       |
                   + | |   |   |   |  |  |  |     | |    |       +
                   v ++--++  +-+-++   ++-+-+    +-+--++          +
                     |LEAF|  |LEAF|   |LEAF|    |LEAF |              LEVEL 0
                     +----+  +----+   +----+    +-----+

                          Figure 1: RIFT overview

   A middle-tier node has only the information necessary for its
   level: all destinations south of the node based on the SPF
   calculation, the default route, and potential disaggregated routes.

   RIFT combines the advantages of both link-state and distance
   vector:

   *  Fastest Possible Convergence

   *  Automatic Detection of Topology

   *  Minimal Routes/Info on TORs

   *  High Degree of ECMP

   *  Fast De-commissioning of Nodes

   *  Maximum Propagation Speed with Flexible Prefixes in an Update

   At the same time, RIFT eliminates the disadvantages of link-state
   and distance vector:

   *  Reduced and Balanced Flooding

   *  Automatic Neighbor Detection

   Consequently, there are two types of link-state databases: the
   "north representation" N-TIEs and the "south representation"
   S-TIEs.  The N-TIEs contain a link-state topology description of
   the lower levels, while the S-TIEs simply carry default routes for
   the lower levels.

   RIFT has a number of further unique advantages, listed below and
   explained in detail in [RIFT]:
   *  True ZTP

   *  Minimal Blast Radius on Failures

   *  Can Utilize All Paths Through the Fabric Without Looping

   *  Automatic Disaggregation on Failures

   *  Simple Leaf Implementation that Can Scale Down to Servers

   *  Key-Value Store

   *  Horizontal Links Used for Protection Only

   *  Supports Non-Equal-Cost Multipath and Can Replace MC-LAG

   *  Optimal Flooding Reduction and Load-Balancing

3.2.  Applicable Topologies

   Although RIFT is specified primarily for "proper" Clos or "fat-
   tree" structures, it already supports PoD concepts, which are,
   strictly speaking, not found in the original Clos concept.

   Further, the specification explains and supports the operation of
   multi-plane Clos variants, where the protocol relies on a set of
   rings to allow the reconciliation of the topology views of the
   different planes as the most desirable solution, making proper
   disaggregation viable in case of failures.  These observations hold
   not only in the case of RIFT but in the generic case of dynamic
   routing on Clos variants with multiple planes and failures in bi-
   sectional bandwidth, especially on the leafs.

3.2.1.  Horizontal Links

   RIFT is not limited to a pure Clos divided into PoDs and multiple
   planes but also supports horizontal links below the top-of-fabric
   level.  Those links are, however, used only as routes of last
   resort northbound, when a spine loses all its northbound links or
   cannot compute a default route through them.

   A possible configuration is a "ring" of horizontal links at a
   level.  In the presence of such a "ring" at any level (except the
   ToF level), neither N-SPF nor S-SPF will provide a "ring-based
   protection" scheme, since such a computation would necessarily have
   to deal with the breaking of "loops" in the Dijkstra sense; an
   application for which RIFT is not intended.
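   The "route of last resort" behaviour of horizontal links can be
   sketched as follows (a hypothetical simplification, not the
   specified SPF computation; all names are illustrative): a spine
   prefers any surviving northbound default and falls back to a
   horizontal sibling only when every northbound adjacency is gone.

```python
def next_hops_for_default(north_links_up, horizontal_links_up):
    """Return the links a spine uses for its northbound default route.

    Horizontal (east-west) links are used only as a last resort,
    when no northbound link can provide a default.
    """
    if north_links_up:
        # Normal case: ECMP over all surviving northbound links.
        return set(north_links_up)
    # Last resort: route northbound traffic via a sibling.
    return set(horizontal_links_up)

assert next_hops_for_default({"ToF21", "ToF22"}, {"Spine112"}) == {"ToF21", "ToF22"}
assert next_hops_for_default(set(), {"Spine112"}) == {"Spine112"}
```

   Note how the horizontal link never participates in normal ECMP;
   this matches the "protection only" role listed above.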
   A full-mesh connectivity between nodes on the same level can be
   employed; this allows N-SPF to provide for any node losing all its
   northbound adjacencies (as long as any of the other nodes at the
   level are northbound connected) to still participate in northbound
   forwarding.

3.2.2.  Vertical Shortcuts

   Through relaxations of the specified adjacency-forming rules, RIFT
   implementations can be extended to support vertical "shortcuts" as
   proposed by, e.g., [I-D.white-distoptflood].  The RIFT
   specification itself does not provide the exact details, since the
   resulting solution suffers either from a much larger blast radius
   with increased flooding volumes or, in the case of maximum
   aggregation, from routing bow-tie problems.

3.2.3.  Generalizing to any Directed Acyclic Graph

   RIFT is an anisotropic routing protocol, meaning that it has a
   sense of direction (northbound, southbound, east-west) and that it
   operates differently depending on the direction.

   *  Northbound, RIFT operates as a link-state IGP, whereby the
      control packets are reflooded first all the way north and only
      interpreted later.  All the individual fine-grained routes are
      advertised.

   *  Southbound, RIFT operates as a distance-vector IGP, whereby the
      control packets are flooded only one hop, interpreted, and the
      consequence of that computation is what gets flooded one more
      hop south.  In the most common use cases, a ToF node can reach
      most of the prefixes in the fabric.  If that is the case, the
      ToF node advertises the fabric default and disaggregates the
      prefixes that it cannot reach.  On the other hand, a ToF node
      that can reach only a small subset of the prefixes in the fabric
      will preferably advertise those prefixes and refrain from
      aggregating.

      In the general case, what gets advertised south is, in more
      detail:

      1.  A fabric default that aggregates all the prefixes that are
          reachable within the fabric; this could be a default route
          or a prefix that is dedicated to this particular fabric.

      2.  The loopback addresses of the northbound nodes, e.g., for
          in-band management.

      3.  The disaggregated prefixes for the dynamic exceptions to the
          fabric default, advertised to route around the black holes
          that may form.

   *  East-west routing can optionally be used, with specific
      restrictions.  It is useful in particular when a sibling has
      access to the fabric default but this node does not.

   A Directed Acyclic Graph (DAG) provides a sense of north (the
   direction of the DAG) and of south (the reverse), which can be used
   to apply RIFT.  For the purpose of RIFT, a vertex of the DAG that
   has only incoming edges is a ToF node.

   There are a number of caveats though:

   *  The DAG structure must exist before RIFT starts, so there is a
      need for a companion protocol to establish the logical DAG
      structure.

   *  A generic DAG does not have a sense of east and west.  The
      operation specified for east-west links and the southbound
      reflection between nodes are not applicable.

   *  In order to aggregate and disaggregate routes, RIFT requires
      that all the ToF nodes share full knowledge of the prefixes in
      the fabric.  This can be achieved with a ring as suggested by
      the RIFT main specification, by some preconfiguration, or by
      using synchronization with a common repository where all the
      active prefixes are registered.

3.3.  Use Cases

3.3.1.  DC Fabrics

   RIFT is largely driven by data center demands and hence ideally
   suited for application in the underlay of data center IP fabrics,
   the vast majority of which seem to be currently (and for the
   foreseeable future) Clos architectures.
   It significantly simplifies the operation and deployment of such
   fabrics, as described in Section 4, compared to extensive
   proprietary provisioning and operational solutions.

3.3.2.  Metro Fabrics

   The demand for bandwidth is increasing steadily, driven primarily
   by environments close to content producers (server farms connected
   via DC fabrics) but also by proximity to content consumers.
   Consumers are often clustered in metro areas with their own network
   architectures that can benefit from simplified, regular Clos
   structures and hence from RIFT.

3.3.3.  Building Cabling

   Commercial edifices are often cabled in topologies that are either
   Clos or its isomorphic equivalents.  With many floors, the Clos can
   grow rather high and with that present a challenge for traditional
   routing protocols (except BGP and the by now largely phased-out
   PNNI), which do not support an arbitrary number of levels, as RIFT
   naturally does.  Moreover, given the limited sizes of forwarding
   tables in the active elements of building cabling, the minimal FIB
   size RIFT maintains under normal conditions can prove particularly
   cost-effective in terms of hardware and operational costs.

3.3.4.  Internal Router Switching Fabrics

   It is common in high-speed communications switching and routing
   devices to use fabrics when a crossbar is not feasible due to cost,
   head-of-line blocking, or size trade-offs.  Normally such fabrics
   are not self-healing or rely on 1:1 or 1+1 protection schemes, but
   it is conceivable to use RIFT to operate Clos fabrics that can deal
   effectively with interconnection or subsystem failures in such
   modules.  RIFT is not IP-specific, and hence any link addressing
   connecting internal device subnets is conceivable.

3.3.5.  CloudCO

   The Cloud Central Office (CloudCO) is a new stage in the evolution
   of the telecom Central Office.
   It takes advantage of Software Defined Networking (SDN) and Network
   Function Virtualization (NFV) in conjunction with general-purpose
   hardware to optimize current networks.  The following figure
   illustrates this architecture at a high level.  It describes a
   single instance, or macro-node, of CloudCO.  An Access I/O module
   faces a CloudCO Access Node and the CPEs behind it.  A Network I/O
   module faces the core network.  The two I/O modules are
   interconnected by a leaf-and-spine fabric [TR-384].

   +---------------------+          +----------------------+
   |        Spine        |          |        Spine         |
   |        Switch       |          |        Switch        |
   +------+---+------+-+-+          +--+-+-+-+-----+-------+
      |   |   |      | |               | | | |     |
      |   |   |      | +---------------------------------+
      |   |   |      |                 | | | |     |     |
      |   |   |      +---------------------------+ |     |
      |   |   |                        | | | |   | |     |
      |   +----------------------+     | | | |   | |     |
      |       |                  |     | | | |   | |     |
      +---------------------------------+ | |    | |     |
      |       |                  |     |  | |    | |     |
      |       |   +-----------------------------+| |     |
      |       |   |              |     |  |     || |     |
      |       |   |   +--------------------+    || |     |
      |       |   |   |          |     |  |     || |     |
   +--+ +-+---+--+ +-+---+--+ +--+----+--+ +-+--+--+ +--+
   |L | | Leaf   | | Leaf   | |  Leaf    | | Leaf  | |L |
   |S | | Switch | | Switch | |  Switch  | | Switch| |S |
   ++-+ +-+-+-+--+ +-+-+-+--+ +--+-+--+--+ ++-+--+-+ +-++
    |     | | |      | | |       | |  |     | |  |    |
    |   +-+-+-+--+ +-+-+-+--+ +--+-+--+--+ ++-+--+-+  |
    |   |Compute | |Compute | | Compute  | |Compute|  |
    |   |Node    | |Node    | | Node     | |Node   |  |
    |   +--------+ +--------+ +----------+ +-------+  |
    |   || VAS5 || || vDHCP|| || vRouter|| ||VAS1 ||  |
    |   |--------| |--------| |----------| |-------|  |
    |   |--------| |--------| |----------| |-------|  |
    |   || VAS6 || || VAS3 || || v802.1x|| ||VAS2 ||  |
    |   |--------| |--------| |----------| |-------|  |
    |   |--------| |--------| |----------| |-------|  |
    |   || VAS7 || || VAS4 || || vIGMP  || ||BAA  ||  |
    |   |--------| |--------| |----------| |-------|  |
    |   +--------+ +--------+ +----------+ +-------+  |
    |                                                 |
   ++-----------+                         +---------++
   |Network I/O |                         |Access I/O|
   +------------+                         +----------+

              Figure 2: An example of CloudCO architecture

   The spine-leaf architecture deployed inside a CloudCO meets the
   network requirements of being adaptable, agile, scalable, and
   dynamic.

4.  Deployment Considerations

   RIFT presents the opportunity for organizations building and
   operating IP fabrics to simplify their operations and deployments
   while achieving many desirable properties of dynamic routing on
   such a substrate:

   *  RIFT's design follows a philosophy of minimum blast radius and
      minimum necessary epistemological scope, which leads to very
      good scaling properties while delivering maximum reactiveness.

   *  RIFT allows for extensive Zero Touch Provisioning within the
      protocol.  In its most extreme version, RIFT does not rely on
      any specific addressing and, for an IP fabric, can operate using
      IPv6 ND [RFC4861] only.

   *  RIFT has provisions to detect common IP fabric miscabling
      scenarios.

   *  RIFT automatically negotiates BFD per link, allowing IP and
      micro-BFD [RFC7130] to replace LAGs, which hide bandwidth
      imbalances in case of constituent failures.  Further automatic
      link validation techniques similar to [RFC5357] could be
      supported as well.

   *  RIFT inherently solves many difficult problems associated with
      the use of traditional routing topologies with dense meshes and
      high degrees of ECMP by including automatic bandwidth balancing,
      flood reduction, and automatic disaggregation on failures, while
      providing maximum aggregation of prefixes in default scenarios.
   *  RIFT reduces the FIB size towards the bottom of the IP fabric,
      where most nodes reside, and with that allows for cheaper
      hardware on the edges and for the introduction of modern IP
      fabric architectures that encompass, e.g., server multi-homing.

   *  RIFT provides valley-free routing and with that is loop-free.
      This allows the use of any valley-free path in the bi-sectional
      fabric bandwidth between two destinations, irrespective of their
      metrics, which can be used to balance load on the fabric in
      different ways.

   *  RIFT includes a key-value distribution mechanism, which allows
      for many future applications such as automatic provisioning of
      basic overlay services or automatic key roll-overs over whole
      fabrics.

   *  RIFT is designed for minimum delay in case of prefix mobility on
      the fabric.

   *  Many further operational and design points collected over many
      years of routing protocol deployments have been incorporated
      into RIFT, such as fast flooding rates, protection of
      information lifetimes, and operationally easily recognizable
      remote ends of links and node names.

4.1.  South Reflection

   South reflection is a mechanism whereby South Node TIEs are
   "reflected" back up north to allow nodes at the same level without
   E-W links to "see" each other.

   For example, Spine111, Spine112, Spine121, and Spine122 each
   reflect the Node S-TIEs from ToF21 to ToF22 and, respectively, the
   Node S-TIEs from ToF22 to ToF21.  Thus, ToF21 and ToF22 see each
   other's node information as level 2 nodes.

   In an equivalent fashion, as the result of the south reflection
   between Spine121-Leaf121-Spine122 and Spine121-Leaf122-Spine122,
   Spine121 and Spine122 know about each other at level 1.

4.2.  Suboptimal Routing on Link Failures

          +--------+                +--------+
          | ToF21  |                | ToF22  |                 LEVEL 2
          ++--+-+-++                ++-+--+-++
           |  | | |                  | |  | |                     +
           |  | | |                  | |  | |                  linkTS8
  +-----------+ |  +-+linkTS3+-+     | |  | +---------------------+
  |           |                |     | |  +------+                |
  |  +----------------------------+  |         linkTS7            |
  |  |        |                |  |  |           |                |
  |  |        |   +-------+linkTS4+------------+ |                |
  |  |        |   +            +  |  |         | |                |
  |  |        |   +------------+--+  |         | |                |
  |  |        |   |            | linkTS6       | |                |
 +-+----++  ++-----++       ++------+        ++-----++
 |Spin111|  |Spin112|       |Spin121|        |Spin122|         LEVEL 1
 +-+---+-+  ++----+-+       +-+---+-+        ++---+--+
   |   |     |    |           |   |           |   |
   |   +--------------+       | + ++XX+linkSL6+---+   +
   |         |        |  linkSL5  |               | linkSL8
   |   +------------+ |       | + +---+linkSL7+-+ |   +
   |   |     |      | |       |                 | |   |
 +-+---+-+  +--+--+-+       +-+---+-+        +--+-+--+
 |Leaf111|  |Leaf112|       |Leaf121|        |Leaf122|         LEVEL 0
 +-+-----+  ++------+       +-----+-+        +-+-----+
   +         +                    +            +
 Prefix111  Prefix112        Prefix121      Prefix122

        Figure 3: Suboptimal routing upon link failure use case

   As shown in Figure 3, as the result of the south reflection between
   Spine121-Leaf121-Spine122 and Spine121-Leaf122-Spine122, Spine121
   and Spine122 know about each other at level 1.

   Without the disaggregation mechanism, when linkSL6 fails, a packet
   from Leaf121 to Prefix122, based on the pure default route, will
   probably go up through linkSL5 to linkTS3 and then down through
   linkTS4 to linkSL8 to Leaf122, or go up through linkSL5 to linkTS6
   and then down through linkTS4 and linkSL8 to Leaf122.  This is a
   case of suboptimal routing, or bow-tieing.

   With the disaggregation mechanism, when linkSL6 fails, Spine122
   will detect the failure based on the node S-TIE reflected from
   Spine121.  Based on the disaggregation procedure provided by RIFT,
   Spine122 will explicitly advertise Prefix122 in a Disaggregated
   Prefix S-TIE PrefixesElement(prefix122, cost 1).
   A packet from Leaf121 to Prefix122 will then be sent only to
   linkSL7, following a longest-prefix match to Prefix122 directly,
   and go down through linkSL8 to Leaf122.

4.3.  Black-Holing on Link Failures

          +--------+                +--------+
          | ToF 21 |                | ToF 22 |                 LEVEL 2
          ++-+--+-++                ++-+--+-++
           | |  | |                  | |  | |
           | |  | |                  | |  | |                  linkTS8
  +----------+  |  +--linkTS3-X+     | |  | +---------------------+
  linkTS1    |  |              |     | |  |                       |
  |  +-----------------------------+ |  |        linkTS7          |
  |  |       |  |              |   | |  |          |              |
  |  |  linkTS2 +--------linkTS4-X-----------+     |              |
  |  |       |                 |   | |       |     |              |
  |  linkTS5 +-+   +---------------+ |       |     |              |
  |  |         |   |           | linkTS6     |     |              |
 +-+----++  +-+-----+       ++----+-+      ++-----++
 |Spin111|  |Spin112|       |Spin121|      |Spin122|           LEVEL 1
 +-+---+-+  ++----+-+       +-+---+-+      ++---+--+
   |   |     |    |           |   |         |   |
   |   +---------------+      |   +----linkSL6----+   |
 linkSL1     |         |  linkSL5 |               | linkSL8
   |   +---linkSL3---+ |      |   +----linkSL7--+ |   |
   |   |     |       | |      |                 | |   |
 +-+---+-+  +--+--+-+       +-+---+-+        +--+-+--+
 |Leaf111|  |Leaf112|       |Leaf121|        |Leaf122|         LEVEL 0
 +-+-----+  ++------+       +-----+-+        +-+-----+
   +         +                    +            +
 Prefix111  Prefix112        Prefix121      Prefix122

           Figure 4: Black-holing upon link failure use case

   This scenario illustrates a case where a double link failure occurs
   and with that black-holing can happen.

   Without the disaggregation mechanism, when linkTS3 and linkTS4 both
   fail, packets from Leaf111 to Prefix122 would suffer 50% black-
   holing based on the pure default route.  Packets that would go up
   through linkSL1 to linkTS1 and then down through linkTS3 or linkTS4
   will be dropped, and packets that would go up through linkSL3 to
   linkTS2 and then down through linkTS3 or linkTS4 will be dropped as
   well.  This is a case of black-holing.

   With the disaggregation mechanism, when linkTS3 and linkTS4 both
   fail, ToF22 will detect the failure based on the node S-TIE of
   ToF21 reflected from Spine111 and Spine112.
   Based on the disaggregation procedure provided by RIFT, ToF22 will
   explicitly originate an S-TIE with Prefix121 and Prefix122, which
   is flooded to Spines 111, 112, 121, and 122.

   Packets from Leaf111 to Prefix122 will then not be routed to
   linkTS1 or linkTS2; they will only be routed to linkTS5 or linkTS7,
   following a longest-prefix match to Prefix122.

4.4.  Zero Touch Provisioning (ZTP)

   Each RIFT node may operate in Zero Touch Provisioning (ZTP) mode,
   i.e., with no configuration (unless it is a Top-of-Fabric node at
   the top of the topology, or it is desired to confine it to the leaf
   role without leaf-2-leaf procedures).  In such a case, RIFT will
   fully configure the node's level after it is attached to the
   topology.

   The most important component of ZTP is the automatic level
   derivation procedure.  All Top-of-Fabric nodes are explicitly
   marked with the TOP_OF_FABRIC flag; they are the initial "seeds"
   needed for the other ZTP nodes to derive their levels in the
   topology.  The level of each node is then derived from the LIEs
   received from its neighbors, whereby each node (with the possible
   exception of configured leafs) tries to attach at the highest
   possible point in the fabric.  This guarantees that even if the
   diffusion front reaches a node from "below" faster than from
   "above", the node will greedily abandon an already negotiated level
   derived from nodes topologically below it and properly peer with
   the nodes above.

4.5.  Miscabling Examples

 +----------------+        +-----------------+
 |     ToF21      |  +-----+      ToF22      |                 LEVEL 2
 +-------+----+---+  |     +----+---+--------+
    |    |    |      |       |  |      |   |
    |    |    |  +----------------------------+  |
    |  +---------------------------+ |     |  |  |
    |  |      |      |       |     | |     |  |  |
    |  |      |      |  +-----------------------+  |  |
    |  |  +------------------------+ |     |  |  |
    |  |  |          |       |     | |     |  |  |
 +-+---+-+  +-+---+-+  |   +-+---+-+    +-+---+-+
 |Spin111|  |Spin112|  |   |Spin121|    |Spin122|              LEVEL 1
 +-+---+-+  ++----+-+  |   +-+---+-+    ++----+-+
   |   |     |    |  link-M  |   |       |    |
   |   +---------+ |   |     |   +---------+  |
   |   |     |   | |   |     |   |       | |  |
   |   +-------+ | |   |     |   +-----+ | |  |
   |   |     | | | |   |     |   |     | | |  |
 +-+---+-+  +--+--+-+  |   +-+---+-+    +--+--+-+
 |Leaf111|  |Leaf112+--+   |Leaf121|    |Leaf122|              LEVEL 0
 +-------+  +-------+      +-------+    +-------+

               Figure 5: A single plane miscabling example

   Figure 5 shows a single-plane miscabling example.  It is a perfect
   fat-tree fabric except for link-M connecting Leaf112 to ToF22.

   The RIFT control protocol can discover the physical links
   automatically and is able to detect cabling that violates fat-tree
   topology constraints.  It reacts accordingly to such miscabling
   attempts, at a minimum preventing adjacencies between nodes from
   being formed and traffic from being forwarded on those miscabled
   links.  In such a scenario, Leaf112 will use link-M to derive its
   level (unless it is a leaf) and can report its links to Spines 111
   and 112 as miscabled, unless the implementation allows horizontal
   links.

   Figure 6 shows a multiple-plane miscabling example.  Since Leaf112
   and Spine121 belong to two different PoDs, the adjacency between
   Leaf112 and Spine121 cannot be formed.  link-W would be detected
   and prevented.
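   A first-order sketch of the level-based adjacency check is shown
   below (a deliberate simplification; the full rules in [RIFT] also
   cover ToF, leaf, and east-west special cases): a vertical adjacency
   is only accepted between nodes whose levels differ by exactly one,
   so link-M between Leaf112 (level 0) and ToF22 (level 2) is
   rejected.

```python
def vertical_adjacency_ok(my_level, neighbor_level):
    """Simplified RIFT miscabling check: vertical adjacencies form
    only between nodes at adjacent levels (difference of exactly 1)."""
    return abs(my_level - neighbor_level) == 1

assert vertical_adjacency_ok(0, 1)        # Leaf112 - Spine112: accepted
assert not vertical_adjacency_ok(0, 2)    # Leaf112 - ToF22 (link-M): miscabled
```

   A real implementation would additionally compare PoD membership
   before accepting an adjacency, which is what rejects link-W in the
   multiple-plane example.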
 +-------+   +-------+    +-------+   +-------+
 |ToF A1 |   |ToF A2 |    |ToF B1 |   |ToF B2 |                LEVEL 2
 +-------+   +-------+    +-------+   +-------+
    |  |        |  |         |  |        |  |
    |  |        |  +-------------------+ |  |
    |  +--------------------------+    | |  |
    |  |        |  |         |    |    | |  |
    |  +------+ |  |         |    |  +------+
    |  |      +-----------------+ |  | |  | |
    |  |      | |  +--------------------------+
    | A|      |B|  |         | A  | || B|   | |
 +-----+-+   +-+---+-+     +-+---+-+  +-+-----+
 |Spin111|   |Spin112|  +--+Spin121|  |Spin122|                LEVEL 1
 +-+---+-+   ++----+-+  |  +-+---+-+  ++----+-+
   |   |      |    |    |    |   |     |    |
   |   +----------+|  link-W |   +---------+|
   |   |      |   ||    |    |   |     |   ||
   |   +--------+ ||    |    |   +---+ |   ||
   |   |      | | ||    |    |   |   | |   ||
 +-+---+-+   +--+--+-+  |  +-+---+-+  +--+--+-+
 |Leaf111|   |Leaf112+--+  |Leaf121|  |Leaf122|                LEVEL 0
 +-------+   +-------+     +-------+  +-------+
 +--------PoD#1------+     +--------PoD#2-----+

              Figure 6: A multiple plane miscabling example

   RIFT provides an optional level determination procedure as part of
   its Zero Touch Provisioning mode.  Nodes in the fabric without a
   configured level determine it automatically.  This can, however,
   have possibly counter-intuitive consequences.  One extreme failure
   scenario is depicted in Figure 7: if all northbound links of
   Spine11 fail at the same time, Spine11 negotiates a lower level
   than Leaf111 and Leaf112.

   To prevent such a scenario, where leafs would be expected to act as
   switches, the LEAF_ONLY flag can be set for Leaf111 and Leaf112.
   Since level -1 is invalid, Spine11 would then not derive a valid
   level from the topology in Figure 7.  It will be isolated from the
   whole fabric, and it would be up to the leafs to declare the links
   towards such a spine as miscabled.
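   The fallen-spine outcome can be reproduced with a toy version of
   the level-derivation rule (a strong simplification of the real ZTP
   finite-state machine specified in [RIFT]; the function and its
   behaviour here are illustrative): an unconfigured node takes the
   highest level offered by its neighbors minus one, and a LEAF_ONLY
   node never derives anything but level 0.

```python
def derive_level(offers, leaf_only=False):
    """Toy ZTP level derivation: attach as high as possible.

    `offers` are the levels advertised in neighbors' LIEs.
    Returns the derived level, or None if no valid level exists.
    """
    if leaf_only:
        return 0                    # LEAF_ONLY nodes stay at level 0
    if not offers:
        return None                 # no seeds reachable yet
    level = max(offers) - 1         # attach just below the highest offer
    return level if level >= 0 else None

# Normal case: a spine below two ToF nodes (level 2) derives level 1.
assert derive_level([2, 2]) == 1
# Fallen spine with LEAF_ONLY leaves: only level-0 offers remain,
# level -1 would be needed and is invalid, so the spine is isolated.
assert derive_level([0, 0]) is None
```

   Without the LEAF_ONLY restriction, the leaves themselves would
   rederive levels from above and the spine would attach below them,
   which is the counter-intuitive topology on the right of Figure 7.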
   +-------+ +-------+          +-------+ +-------+
   |ToF A1| |ToF A2|            |ToF A1| |ToF A2|
   +-------+ +-------+          +-------+ +-------+
     | |       | |                  |       |
     | +-------+ |                  |       |
     +   +       |        ====>     |       |
     X   X   +------+ |         +------+    |
     +   +   |        |         |           |
   +----+--+ +-+-----+          +-+-----+
   |Spine11| |Spine12|          |Spine12|
   +-+---+-+ ++----+-+          ++----+-+
     |   |    |    |              |    |
     |   +---------+ |            |    |
     |   |    |    | |            |    |
     |   +-------+ | |            |  +-------+ |
     |   |       | | |            |  |       | |
   +-+---+-+ +--+--+-+          +-----+-+ +-----+-+
   |Leaf111| |Leaf112|          |Leaf111| |Leaf112|
   +-------+ +-------+          +-+-----+ +-+-----+
                                  |         |
                                  | +--------+
                                  | |
                                +-+---+-+
                                |Spine11|
                                +-------+

        Figure 7: Fallen spine

4.6.  Positive vs. Negative Disaggregation

Disaggregation is the procedure whereby [RIFT] advertises a more
specific route southwards as an exception to the aggregated fabric
default advertised northwards.  Disaggregation is useful when a
prefix within the aggregation is reachable via some of the parents
but not the others at the same level of the fabric.  It is mandatory
at the ToF level, since a ToF node that cannot reach a prefix becomes
a black hole for that prefix.  The hard problem is to know which
prefixes are reachable by whom.

In the general case, [RIFT] solves that problem by interconnecting
the ToF nodes so they can exchange the full list of prefixes that
exist in the fabric and figure out when a ToF node lacks reachability
to an existing prefix.  This requires additional ports at the ToF,
typically 2 ports per ToF node to form a ToF-spanning ring.  [RIFT]
also defines the southbound reflection procedure that enables a
parent to explore the direct connectivity of its peers, i.e., their
own parents and children; based on the advertisements received from
the shared parents and children, the parent may infer the prefixes
its peers can reach.
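The inference just described can be sketched as follows (Python
syntax; the function and data-structure names are assumptions made
for illustration and are not part of the [RIFT] specification):

```python
# Hypothetical sketch of inferring, via southbound reflection, which
# prefixes a peer parent can no longer reach; the names and
# structures here are assumptions for illustration only.
def needs_positive_disaggregation(my_children, peer_children,
                                  prefixes_via):
    """prefixes_via: child -> set of prefixes reachable through it.

    Returns the prefixes this parent should advertise as more
    specifics southward, because the peer (as seen through the
    shared children) lost its adjacency to the children serving
    them."""
    reachable_by_peer = set()
    for child in my_children & peer_children:
        reachable_by_peer |= prefixes_via[child]
    lost = set()
    for child in my_children - peer_children:
        lost |= prefixes_via[child] - reachable_by_peer
    return lost

via = {"Leaf1": {"10.1.0.0/24"}, "Leaf2": {"10.2.0.0/24"}}
# The peer lost its link to Leaf1: this parent disaggregates
# 10.1.0.0/24 positively on behalf of the black-holing peer.
assert needs_positive_disaggregation(
    {"Leaf1", "Leaf2"}, {"Leaf2"}, via) == {"10.1.0.0/24"}
```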
When a parent lacks reachability to a prefix, it may disaggregate the
prefix negatively, i.e., advertise that this parent can be used to
reach any prefix in the aggregation except that one.  The Negative
Disaggregation signaling is simple and functions transitively from
ToF to ToP and then from ToP to Leaf.  But it is hard for a parent to
figure out which prefixes it needs to disaggregate, because it does
not know what it does not know; as a result, a spanning ring at the
ToF is required to operate Negative Disaggregation.  Also, though it
is only an implementation problem, the programming of the FIB is
complex compared to normal routes and may incur recursions.

The more classical alternative is for the parents that can reach a
prefix, which their peers at the same level cannot, to advertise a
more specific route to that prefix.  This leverages the normal
longest prefix match in the FIB and does not require a special
implementation.  But as opposed to Negative Disaggregation, Positive
Disaggregation is difficult and inefficient to operate transitively.

Transitivity is not needed for a grandchild if all its parents
received the Positive Disaggregation, meaning that they shall all
avoid the black hole; when that is the case, they collectively build
a ceiling that protects the grandchild.  But until then, a parent
that received a Positive Disaggregation may believe that some peers
lack the reachability and readvertise too early, or defer and
maintain a black hole situation longer than necessary.

In a non-partitioned fabric, all the ToF nodes see one another
through the reflection and can figure out if one is missing a child.
In that case it is possible to compute the prefixes that the peer
cannot reach and disaggregate positively without a ToF-spanning ring.
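The recursive FIB resolution that Negative Disaggregation requires
can be modeled with a short sketch (Python syntax; this is an
illustrative model only, not RIFT's actual FIB programming):

```python
import ipaddress

# Illustrative model of why negative disaggregation incurs
# recursion: the next hops of a negative route are those of the
# covering positive route minus the excluded parents, so they must
# be re-derived whenever the covering route changes.
def resolve(fib, neg, dest):
    """fib: positive prefix -> set of next hops.
    neg: negatively disaggregated prefix -> excluded next hops."""
    addr = ipaddress.ip_address(dest)
    matches = [p for p in fib if addr in ipaddress.ip_network(p)]
    if not matches:
        return set()
    best = max(matches,
               key=lambda p: ipaddress.ip_network(p).prefixlen)
    nhops = set(fib[best])
    for prefix, excluded in neg.items():
        net = ipaddress.ip_network(prefix)
        if (addr in net
                and net.prefixlen > ipaddress.ip_network(best).prefixlen):
            nhops -= excluded    # recursion into the covering route
    return nhops

fib = {"0.0.0.0/0": {"ToF1", "ToF2"}}
neg = {"10.1.1.0/24": {"ToF2"}}   # ToF2 cannot reach 10.1.1.0/24
assert resolve(fib, neg, "10.1.1.5") == {"ToF1"}
assert resolve(fib, neg, "10.2.0.1") == {"ToF1", "ToF2"}
```

Positive Disaggregation, by contrast, installs a plain more specific
route and needs no such exclusion step.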
The ToF nodes can also ascertain that the ToP nodes are each
connected to at least one ToF node that can still reach the prefix,
meaning that the transitive operation is not required.

The bottom line is that in a fabric that is partitioned (e.g., using
multiple planes) and/or where the ToP nodes are not guaranteed to
always form a ceiling for their children, it is mandatory to use
Negative Disaggregation.  On the other hand, in a highly symmetrical
and fully connected fabric (e.g., a canonical Clos network), the
Positive Disaggregation method avoids the complexity and cost
associated with the ToF-spanning ring.

Note that in the case of Positive Disaggregation, the first ToF
node(s) that announces a more specific route attracts all the traffic
for that route and may suffer from a transient incast.  A ToP node
that defers injecting the longer prefix into the FIB, in order to
receive more advertisements and spread the packets better, also keeps
sending a portion of the traffic to the black hole in the meantime.
In the case of Negative Disaggregation, the last ToF node(s) that
injects the route may also incur an incast issue; this problem would
occur if a prefix that becomes totally unreachable were
disaggregated, but doing so is mostly useless and is not recommended.

4.7.  Mobile Edge and Anycast

When a physical or a virtual node changes its point of attachment in
the fabric from a previous-leaf to a next-leaf, new routes must be
installed that supersede the old ones.  Since the flooding flows
Northwards, the nodes (if any) between the previous-leaf and the
common parent are not immediately aware that the path via the
previous-leaf is obsolete, and a stale route may exist for a while.
The common parent needs to select the freshest route advertisement in
order to install the correct route via the next-leaf.
This requires that the fabric determine the sequence of the movements
of the mobile node.

On the one hand, a classical sequence counter provides a total order
for a while, but it will eventually wrap.  On the other hand, a
timestamp provides a permanent order, but it may miss a movement that
happens too quickly relative to the granularity of the timing
information.  It is not envisioned in the short term that the average
fabric supports a Precision Time Protocol, and the precision that may
be available with the Network Time Protocol [RFC5905], on the order
of 100 to 200 ms, may not necessarily be enough to cover, e.g., the
fast mobility of a Virtual Machine.

Section 4.3.3 "Mobility" of [RIFT] specifies a hybrid method that
combines a sequence counter from the mobile node and a timestamp from
the network taken at the leaf when the route is injected.  If the
timestamps of the concurrent advertisements are comparable (i.e.,
more distant than the precision of the timing protocol), then the
timestamp alone is used to determine the relative freshness of the
routes.  Otherwise, the sequence counter from the mobile node, if
available, is used.  One caveat is that the sequence counter must not
wrap within the precision of the timing protocol.  Another is that
the mobile node may not even provide a sequence counter, in which
case the mobility itself must be slower than the precision of the
timing.

Mobility must not be confused with anycast.  In both cases, the same
address is injected into RIFT at different leaves.  In the case of
mobility, only the freshest route must be conserved, since the mobile
node changed its point of attachment from one leaf to the next.
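The hybrid freshness comparison described above can be sketched as
follows (Python syntax; the tuple layout and the TICK precision are
assumptions for illustration, not the [RIFT] TIE encoding):

```python
# Sketch of the hybrid freshness comparison; the tuple layout and
# the TICK precision are assumptions for illustration only.
TICK = 0.2   # assumed NTP-class timing precision, in seconds

def fresher(a, b):
    """a, b: (timestamp, sequence counter or None); return the
    fresher advertisement."""
    ts_a, seq_a = a
    ts_b, seq_b = b
    if abs(ts_a - ts_b) > TICK:
        # Timestamps are comparable: they alone decide.
        return a if ts_a > ts_b else b
    if seq_a is not None and seq_b is not None:
        # Within the timing precision: fall back to the mobile
        # node's counter (assumed not to wrap within TICK).
        return a if seq_a > seq_b else b
    return a   # no counter: the order within TICK is undecidable

# A move visible in the timestamps alone:
assert fresher((10.0, None), (11.0, None)) == (11.0, None)
# A move within the timing precision, decided by the counter:
assert fresher((10.0, 5), (10.1, 6)) == (10.1, 6)
```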
In the case of anycast, the node may be either multihomed (attached
to multiple leaves in parallel) or reachable beyond the fabric via
multiple routes that are redistributed to different leaves; either
way, in the case of anycast, the multiple routes are equally valid
and should be conserved.  Without further information from the
redistributed routing protocol, it is impossible to distinguish a
movement from a redistribution that happens asynchronously on
different leaves.  [RIFT] expects that anycast addresses are
advertised within the timing precision, which is typically the case
with low-precision timing and a multihomed node.  Beyond that time
interval, RIFT interprets the lag as mobility, and only the freshest
route is retained.

When using IPv6 [RFC8200], RIFT suggests leveraging "Registration
Extensions for IPv6 over Low-Power Wireless Personal Area Network
(6LoWPAN) Neighbor Discovery (ND)" [RFC8505] as the IPv6 ND
interaction between the mobile node and the leaf.  This provides not
only a sequence counter but also a lifetime and a security token that
may be used to protect the ownership of an address.  When using
[RFC8505], the parallel registration of an anycast address to
multiple leaves is done with the same sequence counter, whereas the
sequence counter is incremented when the point of attachment changes.
This way, it is possible to differentiate a mobile node from a
multihomed node, even when the mobility happens within the timing
precision.  It is also possible for a mobile node to be multihomed as
well, e.g., to change only one of its points of attachment.

4.8.  IPv4 over IPv6

RIFT allows advertising IPv4 prefixes over an IPv6 RIFT network.  The
IPv6 address family is configured via the usual ND mechanisms, and
IPv4 prefixes can then use IPv6 next hops, analogous to RFC 5549.
It is expected that the whole fabric supports the same type of
forwarding for the address families on all the links.  RIFT provides
an indication of whether a node is IPv4 forwarding capable, and
implementations are possible where different routing tables are
computed per address family, as long as the computation remains
loop-free.

                +-----+       +-----+
   +---+---+    | ToF |       | ToF |
       ^        +--+--+       +-----+
       |          |  |          |  |
       |          |  +-------------+ |
       |          |  +--------+ |    |
       |          |  |        | |
       V6       +-----+     +-+---+
   Forwarding   |SPINE|     |SPINE|
       |        +--+--+     +-----+
       |          |  |        |  |
       |          |  +-------------+ |
       |          |  +--------+ |    |
       |          |  |        | |
       v        +-----+     +-+---+
   +---+---+    |LEAF |     | LEAF|
                +--+--+     +--+--+
                   |           |
      IPv4 prefixes|           |IPv4 prefixes
                   |           |
               +---+----+  +---+----+
               |   V4   |  |   V4   |
               | subnet |  | subnet |
               +--------+  +--------+

            Figure 8: IPv4 over IPv6

4.9.  In-Band Reachability of Nodes

RIFT does not require that the nodes of the fabric have reachable
addresses, but there may be operational reasons to reach internal
nodes.  Figure 9 shows an example in which an NMS attaches to LEAF1.

   +-------+     +-------+
   | ToF1  |     | ToF2  |
   ++---- ++     ++-----++
    |     |       |     |
    |     +----------+  |
    |  +--------+ |     |
    |  |        | |
   ++-----++  +--+---++
   |SPINE1 |  |SPINE2 |
   ++-----++  ++-----++
    |     |    |     |
    |     +----------+
    |  +--------+ |  |
    |  |        | |
   ++-----++  +--+---++
   | LEAF1 |  | LEAF2 |
   +---+---+  +-------+
       |
       |NMS

        Figure 9: In-Band reachability of node

If the NMS wants to access LEAF2, it simply works, because the
loopback address of LEAF2 is flooded in its Prefix North TIE.

If the NMS wants to access SPINE2, it simply works too, because a
spine node always advertises its loopback address in its Prefix North
TIE.  The NMS may reach SPINE2 via LEAF1-SPINE2 or via
LEAF1-SPINE1-ToF1/ToF2-SPINE2.
If the NMS wants to access ToF2, ToF2's loopback address needs to be
injected into its Prefix South TIE.  Otherwise, the traffic from the
NMS may be sent to ToF1.

In case of a failure between ToF2 and the spine nodes, ToF2's
loopback address must be sent all the way down to the leaves.

4.10.  Dual Homing Servers

Each RIFT node may operate in Zero Touch Provisioning (ZTP) mode.  It
has no configuration (unless it is a Top-of-Fabric at the top of the
topology, or it must operate in the topology as a leaf and/or support
leaf-2-leaf procedures) and it will fully configure itself after
being attached to the topology.

   +---+     +---+     +---+
   |ToF|     |ToF|     |ToF|
   +---+     +---+     +---+
    | |       | |       | |
    | +----------------+ | |
    | |       | |       | |
    | +----------------+ |
    | |       | |       | |
   +----------+--+   +--+----------+
   | Spine|ToR1  |   | Spine|ToR2  |
   +--+------+---+   +--+-------+--+
   +---+ | | |  |       | | | | +---+
    |  | | | |  |       | | | |
    |  +-----------------+ | | |
    |  | | | +-------------+ | |
    +  | + |  |  |-----------------+ |
    X  | X |  +--------x-----+ | X |
    +  | + |  |  |       + |
   +---+  +---+  +---+  +---+
   |   |  |   |  |   |  |   |
   +---+  +---+ ...............+---+  +---+
   SV(1)  SV(2)        SV(n+1)  SV(n)

        Figure 10: Dual-homing servers

In the single plane, the worst condition is disaggregation of every
other server at the same level.  Suppose the links from ToR1 to all
the leaves become unavailable.  All the servers' routes are
disaggregated, and the FIB of the servers will be expanded with n-1
more specific routes.

Sometimes, operators may prefer to disaggregate from the ToRs to the
servers from the start, i.e., the servers hold a couple tens of more
specific routes in their FIBs, beside the default routes, to avoid
breakages at rack level.  Such full disaggregation of the fabric can
be achieved by configuration, which RIFT supports.

4.11.
Fabric With A Controller

There are many different ways to deploy a controller.  One
possibility is attaching the controller to the RIFT domain at the
ToF, and another is attaching it at a leaf.

                +------------+
                | Controller |
                ++----------++
                 |          |
                 |          |
              +----++     ++----+
   -------    | ToF |     | ToF |
      |       +--+--+     +-----+
      |          |  |       |  |
      |          |  +-------------+ |
      |          |  +--------+ |    |
      |          |  |        | |
              +-----+     +-+---+
   RIFT domain|SPINE|     |SPINE|
              +--+--+     +-----+
      |          |  |       |  |
      |          |  +-------------+ |
      |          |  +--------+ |    |
      |          |  |        | |
      |       +-----+     +-+---+
   -------    |LEAF |     | LEAF|
              +-----+     +-----+

        Figure 11: Fabric with a controller

4.11.1.  Controller Attached to ToFs

If a controller is attached to the RIFT domain at the ToF, it usually
uses dual-homed connections.  The loopback prefix of the controller
should be advertised down by the ToF and spine nodes to the leaves.
If the controller loses its link to a ToF node, that ToF node must
withdraw the controller's prefix (different mechanisms can be used to
achieve this).

4.11.2.  Controller Attached to Leaf

If the controller is attached to the fabric at a leaf, no special
provisions are needed.

4.12.  Internet Connectivity With Underlay

If global addressing is used without an overlay, an external default
route needs to be advertised through the RIFT fabric to achieve
Internet connectivity.  For forwarding across the entire RIFT fabric,
an internal fabric prefix needs to be advertised in the Prefix South
TIE by the ToF and spine nodes.

4.12.1.  Internet Default on the Leaf

If Internet access is required from a leaf and the Internet gateway
is another leaf, the gateway leaf needs to advertise a default route
in its Prefix North TIE.

4.12.2.
Internet Default on the ToFs

If Internet access is required from a leaf and the Internet gateway
is a ToF node, the ToF and spine nodes need to advertise a default
route in their Prefix South TIEs.

4.13.  Subnet Mismatch and Address Families

   +--------+                       +--------+
   |        | LIE           LIE     |        |
   |   A    | +---->       <----+   |   B    |
   |        +---------------------+ |        |
   +--------+                       +--------+
     X/24                             Y/24

        Figure 12: Subnet mismatch

LIEs are exchanged over all links running RIFT to perform Link
(Neighbor) Discovery.  A node MUST NOT originate LIEs on an address
family if it does not process received LIEs on that family.  LIEs on
the same link are considered part of the same negotiation independent
of the address family they arrive on.  An implementation MUST be
ready to accept TIEs on all addresses it used as source of LIE
frames.

As shown in the figure above, without further checks an adjacency
between node A and node B may form, but the forwarding between them
may fail because subnet X mismatches subnet Y.

To prevent this, a RIFT implementation should check for subnet
mismatches just like, e.g., IS-IS does.  This can lead to scenarios
where, despite the exchange of LIEs in both address families, an
adjacency may end up being formed in a single address family only.
This is a consideration especially in the scenarios of Section 4.8.

4.14.
Anycast Considerations

            + traffic
            |
            v
   +------+------+
   |     ToF     |
   +---+-----+---+
    | |       | |
   +------------+ |       | +------------+
    |         |       |            |
   +---+---+ +-------+   +-------+ +---+---+
   |       | |       |   |       | |       |
   |Spine11| |Spine12|   |Spine21| |Spine22|  LEVEL 1
   +-+---+-+ ++----+-+   +-+---+-+ ++----+-+
     |   |    |    |       |   |    |    |
     |   +---------+ |     |   +---------+ |
     |   |    |    | |     |   |    |    | |
     |   +-------+ | |     |   +-------+ | |
     |   |       | | |     |   |       | | |
   +-+---+-+ +--+--+-+   +-+---+-+ +--+--+-+
   |       | |       |   |       | |       |
   |Leaf111| |Leaf112|   |Leaf121| |Leaf122|  LEVEL 0
   +-+-----+ ++------+   +-----+-+ +-----+-+
     +        +                +  ^    |
   PrefixA  PrefixB      PrefixA  |  PrefixC
                                  |
                                  + traffic

        Figure 13: Anycast

RIFT deals well with traffic that comes from the ToF towards Leaf111
or Leaf121, both of which advertise the anycast prefix PrefixA.  But
if the traffic comes from Leaf122, it arrives at Spine21 or Spine22
at level 1, and neither Spine21 nor Spine22 knows about the other
instance of PrefixA attached to Leaf111.  So the traffic will always
reach Leaf121 and never Leaf111.  If the intention is that the
traffic should be offloaded to Leaf111, policy guided prefixes
[PGP reference] can be used.

5.  Acknowledgements

6.  Contributors

The following people (listed in alphabetical order) contributed
significantly to the content of this document and should be
considered co-authors:

Tony Przygienda
Juniper Networks
1194 N. Mathilda Ave
Sunnyvale, CA 94089
US
Email: prz@juniper.net

7.  Normative References

[ISO10589-Second-Edition]
          International Organization for Standardization,
          "Intermediate system to Intermediate system intra-domain
          routeing information exchange protocol for use in
          conjunction with the protocol for providing the
          connectionless-mode Network Service (ISO 8473)", November
          2002.
[TR-384]  Broadband Forum Technical Report, "TR-384 Cloud Central
          Office Reference Architectural Framework", January 2018.

[RFC2328] Moy, J., "OSPF Version 2", STD 54, RFC 2328,
          DOI 10.17487/RFC2328, April 1998.

[RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman,
          "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861,
          DOI 10.17487/RFC4861, September 2007.

[RFC5357] Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J.
          Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)",
          RFC 5357, DOI 10.17487/RFC5357, October 2008.

[RFC7130] Bhatia, M., Ed., Chen, M., Ed., Boutros, S., Ed.,
          Binderberger, M., Ed., and J. Haas, Ed., "Bidirectional
          Forwarding Detection (BFD) on Link Aggregation Group (LAG)
          Interfaces", RFC 7130, DOI 10.17487/RFC7130, February 2014.

[RIFT]    Przygienda, T., Sharma, A., Thubert, P., Rijsman, B., and
          D. Afanasiev, "RIFT: Routing in Fat Trees", Work in
          Progress, Internet-Draft, draft-ietf-rift-rift-12, 26 May
          2020.

[I-D.white-distoptflood]
          White, R., Hegde, S., and S. Zandi, "IS-IS Optimal
          Distributed Flooding for Dense Topologies", Work in
          Progress, Internet-Draft, draft-white-distoptflood-04, 27
          July 2020.

8.  Informative References

[RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch,
          "Network Time Protocol Version 4: Protocol and Algorithms
          Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010.

[RFC8200] Deering, S. and R. Hinden, "Internet Protocol, Version 6
          (IPv6) Specification", STD 86, RFC 8200,
          DOI 10.17487/RFC8200, July 2017.

[RFC8505] Thubert, P., Ed., Nordmark, E., Chakrabarti, S., and C.
          Perkins, "Registration Extensions for IPv6 over Low-Power
          Wireless Personal Area Network (6LoWPAN) Neighbor
          Discovery", RFC 8505, DOI 10.17487/RFC8505, November 2018.
Authors' Addresses

Yuehua Wei (editor)
ZTE Corporation
No.50, Software Avenue
Nanjing
210012
China
Email: wei.yuehua@zte.com.cn

Zheng Zhang
ZTE Corporation
No.50, Software Avenue
Nanjing
210012
China
Email: zhang.zheng@zte.com.cn

Dmitry Afanasiev
Yandex
Email: fl0w@yandex-team.ru

Tom Verhaeg
Juniper Networks
Email: tverhaeg@juniper.net

Jaroslaw Kowalczyk
Orange Polska
Email: jaroslaw.kowalczyk2@orange.com

Pascal Thubert
Cisco Systems, Inc
Building D
45 Allee des Ormes - BP1200
06254 MOUGINS - Sophia Antipolis
France
Phone: +33 497 23 26 34
Email: pthubert@cisco.com