idnits 2.17.1 draft-ietf-trill-rbridge-multilevel-07.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- == There are 1 instance of lines with non-RFC6890-compliant IPv4 addresses in the document. If these are example addresses, they should be changed. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (July 3, 2017) is 2482 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- No issues found here. Summary: 0 errors (**), 0 flaws (~~), 2 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 TRILL Working Group Radia Perlman 2 INTERNET-DRAFT EMC 3 Intended status: Informational Donald Eastlake 4 Mingui Zhang 5 Huawei 6 Anoop Ghanwani 7 Dell 8 Hongjun Zhai 9 JIT 10 Expires: January 3, 2018 July 3, 2017 12 Alternatives for Multilevel TRILL 13 (Transparent Interconnection of Lots of Links) 14 16 Abstract 18 Although TRILL is based on IS-IS, which supports multilevel unicast 19 routing, extending TRILL to multiple levels has challenges that are 20 not addressed by the already-existing capabilities of IS-IS. 
One 21 issue is with the handling of multi-destination packet distribution 22 trees. Other issues are with TRILL switch nicknames. How are such 23 nicknames allocated across a multilevel TRILL network? Do nicknames 24 need to be unique across an entire multilevel TRILL network or can 25 they merely be unique within each multilevel area? 27 This informational document enumerates and examines alternatives 28 based on a number of factors including backward compatibility, 29 simplicity, and scalability and makes recommendations in some cases. 31 Status of This Memo 33 This Internet-Draft is submitted to IETF in full conformance with the 34 provisions of BCP 78 and BCP 79. Distribution of this document is 35 unlimited. Comments should be sent to the TRILL working group 36 mailing list . 38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF), its areas, and its working groups. Note that 40 other groups may also distribute working documents as Internet- 41 Drafts. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 47 The list of current Internet-Drafts can be accessed at 48 http://www.ietf.org/1id-abstracts.html. The list of Internet-Draft 49 Shadow Directories can be accessed at 50 http://www.ietf.org/shadow.html. 52 Table of Contents 54 1. Introduction............................................4 55 1.1 The Motivation for Multilevel..........................4 56 1.2 Improvements Due to Multilevel.........................5 57 1.2.1. The Routing Computation Load........................5 58 1.2.2. LSDB Volatility Creating Too Much Control Traffic...5 59 1.2.3. LSDB Volatility Causing Too Much Time Unconverged...6 60 1.2.4.
The Size Of The LSDB................................6 61 1.2.5 Nickname Limit.......................................6 62 1.2.6 Multi-Destination Traffic............................7 63 1.3 Unique and Aggregated Nicknames........................7 64 1.4 More on Areas..........................................8 65 1.5 Terminology and Acronyms...............................8 67 2. Multilevel TRILL Issues................................10 68 2.1 Non-zero Area Addresses...............................11 69 2.2 Aggregated versus Unique Nicknames....................11 70 2.2.1 More Details on Unique Nicknames....................12 71 2.2.2 More Details on Aggregated Nicknames................13 72 2.2.2.1 Border Learning Aggregated Nicknames..............14 73 2.2.2.2 Swap Nickname Field Aggregated Nicknames..........16 74 2.2.2.3 Comparison........................................17 75 2.3 Building Multi-Area Trees.............................17 76 2.4 The RPF Check for Trees...............................18 77 2.5 Area Nickname Acquisition.............................18 78 2.6 Link State Representation of Areas....................19 80 3. Area Partition.........................................20 82 4. Multi-Destination Scope................................21 83 4.1 Unicast to Multi-destination Conversions..............21 84 4.1.1 New Tree Encoding...................................22 85 4.2 Selective Broadcast Domain Reduction..................22 87 5. Co-Existence with Old TRILL switches...................24 88 6. Multi-Access Links with End Stations...................25 90 7. Summary................................................27 92 8. Security Considerations................................28 93 9. 
IANA Considerations....................................28 95 Normative References......................................29 96 Informative References....................................29 97 Acknowledgements..........................................31 98 Authors' Addresses........................................32 100 1. Introduction 102 The IETF TRILL (Transparent Interconnection of Lots of Links) protocol 103 [RFC6325] [RFC7177] [RFC7780] provides optimal pair-wise data routing 104 without configuration, safe forwarding even during periods of 105 temporary loops, and support for multipathing of both unicast and 106 multicast traffic in networks with arbitrary topology and link 107 technology, including multi-access links. TRILL accomplishes this by 108 using IS-IS (Intermediate System to Intermediate System [IS-IS] 109 [RFC7176]) link state routing in conjunction with a header that 110 includes a hop count. The design supports data labels (VLANs and Fine 111 Grained Labels [RFC7172]) and optimization of the distribution of 112 multi-destination data based on data label and multicast group. 113 Devices that implement TRILL are called TRILL Switches or RBridges. 115 Familiarity with [IS-IS], [RFC6325], and [RFC7780] is assumed in this 116 document. 118 1.1 The Motivation for Multilevel 120 The primary motivation for multilevel TRILL is to improve 121 scalability. The following issues might limit the scalability of a 122 TRILL-based network: 124 1. The routing computation load 125 2. The volatility of the link state database (LSDB) creating too much 126 control traffic 127 3. The volatility of the LSDB causing the TRILL network to be in an 128 unconverged state too much of the time 129 4. The size of the LSDB 130 5. The limit on the number of TRILL switches, due to the 16-bit 131 nickname space (for further information on why this might be a 132 problem, see Section 1.2.5) 133 6. The traffic due to upper layer protocols' use of broadcast and 134 multicast 135 7.
The size of the end node learning table (the table that remembers 136 (egress TRILL switch, label/MAC) pairs) 138 As discussed below, extending TRILL IS-IS to be multilevel 139 (hierarchical) can help with all of these issues except issue 7. 141 IS-IS was designed to be multilevel [IS-IS]. A network can be 142 partitioned into "areas". Routing within an area is known as "Level 143 1 routing". Routing between areas is known as "Level 2 routing". 144 The Level 2 IS-IS network consists of Level 2 routers and links 145 between the Level 2 routers. Level 2 routers may participate in one 146 or more Level 1 areas, in addition to their role as Level 2 routers. 148 Each area is connected to Level 2 through one or more "border 149 routers", each of which participates both as a router inside the area and as 150 a router in the Level 2 "area". When transitioning multi-destination packets between Level 2 151 and a Level 1 area in either direction, care must be taken that 152 exactly one border TRILL 153 switch transitions a particular data packet between the levels; 154 otherwise, traffic can be duplicated or lost. 156 1.2 Improvements Due to Multilevel 158 Partitioning the network into areas directly solves the first four 159 scalability issues listed above as described in Sections 1.2.1 160 through 1.2.4. Multilevel also contributes to solving issues 5 and 6 161 as discussed in Sections 1.2.5 and 1.2.6 respectively. 163 In the subsections below, N indicates the number of TRILL switches in 164 a TRILL campus. As a simplifying assumption, it is assumed that each 165 TRILL switch has k links to other TRILL switches. An "optimized" 166 multilevel campus is assumed to have Level 1 areas each containing sqrt(N) 167 switches. 169 1.2.1.
The Routing Computation Load 171 The Dijkstra algorithm uses computational effort on the order of the 172 number of links in a network (N*k) times the log of the number of 173 nodes to calculate least cost routes at a router (Section 12.3.3 174 [InterCon]). Thus, in a single level TRILL campus, it is on the order 175 of N*k*log(N). In an optimized multilevel campus, it is on the order 176 of sqrt(N)*k*log(N). So, for example, assuming N is 3,000, the level 177 of computational effort would be reduced by about a factor of 50. 179 1.2.2. LSDB Volatility Creating Too Much Control Traffic 181 The rate of LSDB changes is assumed to be approximately proportional 182 to the number of routers and links in the TRILL campus or N*(1+k) for 183 a single level campus. With an optimized multilevel campus, each area 184 would have about sqrt(N) routers and proportionately fewer links, 185 reducing the rate of LSDB changes by about a factor of sqrt(N). 187 1.2.3. LSDB Volatility Causing Too Much Time Unconverged 189 With the simplifying assumption that routing converges after each 190 topology change before the next such change, the fraction of time 191 that routing is unconverged is proportional to the product of the 192 rate of change occurrence and the convergence time. The rate of 193 topology changes per some arbitrary unit of time will be roughly 194 proportional to the number of routers and links (Section 1.2.2). The 195 convergence time is approximately proportional to the computation 196 involved at each router (Section 1.2.1). Thus, based on these 197 simplifying assumptions, the time spent unconverged in a single level 198 network is proportional to (N*(1+k))*(N*k*log(N)) while that time for 199 an optimized multilevel network would be proportional to 200 (sqrt(N)*(1+k))*(sqrt(N)*k*log(N)). Hence, in changing to multilevel, 201 the time spent unconverged, using these simplifying assumptions, is 202 improved by about a factor of N. 204 1.2.4.
The Size Of The LSDB 206 The size of the LSDB, which consists primarily of information about 207 routers (TRILL switches) and links, is also approximately 208 proportional to the number of routers and links. So, as with item 2 209 in Section 1.2.2 above, it should improve by about a factor of 210 sqrt(N) in going from single to multilevel. 212 1.2.5 Nickname Limit 214 For many TRILL protocol purposes, RBridges are designated by 16-bit 215 nicknames. While some values are reserved, this appears to provide 216 enough nicknames to designate over 65,000 RBridges. However, this 217 number is effectively reduced by the following two factors: 219 - Nicknames are consumed when pseudo-nicknames are used for the 220 active-active connection of end stations. Using the techniques in 221 [RFC7781], for example, could double the nickname consumption if 222 there are extensive active-active edge groups connected to 223 different sets of edge TRILL switch ports. 225 - There might be problems with campus-wide contention for 226 nickname allocation if nicknames were allocated 227 individually from a single pool for the entire campus. Thus it 228 seems likely that a hierarchical method would be chosen where 229 blocks of nicknames are allocated at Level 2 to Level 1 areas and 230 contention for a nickname by an RBridge in such a Level 1 area 231 would be only within that area. Such hierarchical allocation leads 232 to further effective loss of nicknames, similar to the situation 233 with IP addresses discussed in [RFC3194]. 235 Even without the above effective reductions in nickname space, a very 236 large multilevel TRILL campus, say one with 200 areas each containing 237 500 TRILL switches, could require 100,000 or more nicknames if all 238 nicknames in the campus must be unique, which is clearly impossible 239 with 16-bit nicknames. 241 This scaling limit, namely the 16-bit nickname space, will only be 242 addressed with the aggregated nickname approach.
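The scaling estimates in Sections 1.2.1 and 1.2.5 can be sanity-checked in a few lines. The sketch below is not part of the protocol; N = 3,000 and k = 10 are the illustrative values used above (k cancels out of the ratio):

```python
import math

# Section 1.2.1: Dijkstra cost ~ (links)*log(nodes) = N*k*log(N) for a
# single-level campus, versus sqrt(N)*k*log(N) per area when each of the
# sqrt(N) areas holds sqrt(N) switches.  The ratio reduces to sqrt(N).
N = 3000          # TRILL switches in the campus (illustrative)
k = 10            # links per switch (illustrative; cancels out)
single = N * k * math.log2(N)
multi = math.sqrt(N) * k * math.log2(N)
print(round(single / multi))   # 55, i.e. "about a factor of 50"

# Section 1.2.5: a large campus with unique nicknames does not fit in
# the 16-bit nickname space.
areas, per_area = 200, 500
print(areas * per_area > 2**16)   # True: 100,000 needed, 65,536 available
```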
Since the aggregated 243 nickname approach requires some complexity in the border TRILL 244 switches (for rewriting the nicknames in the TRILL header), the 245 suggested design in this document allows a campus with a mixture of 246 unique-nickname areas and aggregated-nickname areas. Thus a TRILL 247 network could start using multilevel with the simpler unique nickname 248 method and add aggregated areas at a later stage of network 249 growth. 251 With this design, nicknames must be unique across all Level 2 and 252 unique-nickname area TRILL switches taken together, whereas nicknames 253 inside an aggregated-nickname area are visible only inside that area. 254 Nicknames inside an aggregated-nickname area must still not conflict 255 with nicknames visible in Level 2 (which includes all nicknames 256 inside unique nickname areas), but the nicknames inside an 257 aggregated-nickname area may be the same as nicknames used within one 258 or more other aggregated-nickname areas. 260 With the design suggested in this document, TRILL switches within an 261 area need not be aware of whether they are in an aggregated nickname 262 area or a unique nickname area. The border TRILL switches in area A1 263 will indicate, in their LSP inside area A1, which nicknames (or 264 nickname ranges) are available, or alternatively which nicknames are 265 not available, for choosing as nicknames by area A1 TRILL switches. 267 1.2.6 Multi-Destination Traffic 269 Scaling limits due to protocol use of broadcast and multicast can be 270 addressed in many cases in a multilevel campus by introducing 271 locally-scoped multi-destination delivery, limited to an area or a 272 single link. See further discussion of this issue in Section 4.2. 274 1.3 Unique and Aggregated Nicknames 276 We describe two alternatives for hierarchical or multilevel TRILL. 277 One we call the "unique nickname" alternative. The other we call the 278 "aggregated nickname" alternative.
In the aggregated nickname 279 alternative, border TRILL switches replace either the ingress or 280 egress nickname field in the TRILL header of unicast packets with an 281 aggregated nickname representing an entire area. 283 The unique nickname alternative has the advantage that border TRILL 284 switches are simpler and do not need to do TRILL Header nickname 285 modification. It also simplifies testing and maintenance operations 286 that originate in one area and terminate in a different area. 288 The aggregated nickname alternative has the following advantages: 290 o it solves scaling problem #5 above, the 16-bit nickname limit, 291 in a simple way, 292 o it lessens the amount of inter-area routing information that 293 must be passed in IS-IS, and 294 o it logically reduces the RPF (Reverse Path Forwarding) Check 295 information (since only the area nickname needs to appear, 296 rather than all the ingress TRILL switches in that area). 298 In both cases, it is possible and advantageous to compute multi- 299 destination data packet distribution trees such that the portion 300 computed within a given area is rooted within that area. 302 For further discussion of the unique and aggregated nickname 303 alternatives, see Section 2.2. 305 1.4 More on Areas 307 Each area is configured with an "area address", which is advertised 308 in IS-IS messages, so as to avoid accidentally interconnecting areas. 309 For TRILL, the only purpose of the area address would be to avoid 310 accidentally interconnecting areas, although the area address had 311 other purposes in CLNP (Connectionless Network Layer Protocol) and DECnet, 312 for which IS-IS was originally designed. 314 Currently, the TRILL specification says that the area address must be 315 zero. If we change the specification so that the area address value 316 of zero is just a default, then most of the IS-IS multilevel machinery 317 works as originally designed.
However, there are TRILL-specific 318 issues, which we address below in Section 2.1. 320 1.5 Terminology and Acronyms 322 This document generally uses the acronyms defined in [RFC6325] plus 323 the additional acronym DBRB. However, for ease of reference, most 324 acronyms used are listed here: 326 CLNP - ConnectionLess Network Protocol 328 DECnet - a proprietary routing protocol that was used by Digital 329 Equipment Corporation. "DECnet Phase 5" was the origin of IS-IS. 331 Data Label - VLAN or Fine Grained Label [RFC7172] 333 DBRB - Designated Border RBridge 335 ESADI - End Station Address Distribution Information 337 IS-IS - Intermediate System to Intermediate System [IS-IS] 339 LSDB - Link State Data Base 341 LSP - Link State PDU 343 PDU - Protocol Data Unit 345 RBridge - Routing Bridge, an alternative name for a TRILL switch 347 RPF - Reverse Path Forwarding 349 TLV - Type Length Value 351 TRILL - Transparent Interconnection of Lots of Links or Tunneled 352 Routing in the Link Layer [RFC6325] [RFC7780] 354 TRILL switch - a device that implements the TRILL protocol 355 [RFC6325] [RFC7780], sometimes called an RBridge 357 VLAN - Virtual Local Area Network 359 2. Multilevel TRILL Issues 361 The TRILL-specific issues introduced by multilevel include the 362 following: 364 a. Configuration of non-zero area addresses, encoding them in IS-IS 365 PDUs, and possibly interworking with old TRILL switches that do 366 not understand non-zero area addresses. 368 See Section 2.1. 370 b. Nickname management. 372 See Sections 2.5 and 2.2. 374 c. Advertisement of pruning information (Data Label reachability, IP 375 multicast addresses) across areas. 377 Distribution tree pruning information is only an optimization, 378 as long as multi-destination packets are not prematurely 379 pruned. For instance, border TRILL switches could advertise 380 they can reach all possible Data Labels, and have an IP 381 multicast router attached. 
This would cause all multi- 382 destination traffic to be transmitted to border TRILL switches, 383 and possibly pruned there, when the traffic could have been 384 pruned earlier based on Data Label or multicast group if border 385 TRILL switches advertised more detailed Data Label and/or 386 multicast listener and multicast router attachment information. 388 d. Computation of distribution trees across areas for multi- 389 destination data. 391 See Section 2.3. 393 e. Computation of RPF information for those distribution trees. 395 See Section 2.4. 397 f. Computation of pruning information across areas. 399 See Sections 2.3 and 2.6. 401 g. Compatibility, as much as practical, with existing, unmodified 402 TRILL switches. 404 The most important form of compatibility is with existing TRILL 405 fast path hardware. Changes that require upgrade to the slow 406 path firmware/software are more tolerable. Compatibility for 407 the relatively small number of border TRILL switches is less 408 important than compatibility for non-border TRILL switches. 410 See Section 5. 412 2.1 Non-zero Area Addresses 414 The current TRILL base protocol specification [RFC6325] [RFC7177] 415 [RFC7780] says that the area address in IS-IS must be zero. The 416 purpose of the area address is to ensure that different areas are not 417 accidentally merged. Furthermore, zero is an invalid area address 418 for layer 3 IS-IS, so it was chosen as an additional safety mechanism 419 to ensure that layer 3 IS-IS packets would not be confused with TRILL 420 IS-IS packets. However, TRILL uses other techniques to avoid 421 confusion on a link, such as different multicast addresses and 422 Ethertypes on Ethernet [RFC6325], different PPP (Point-to-Point 423 Protocol) code points on PPP [RFC6361], and the like. Thus, using an 424 area address in TRILL that might be used in layer 3 IS-IS is not a 425 problem. 
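To make the merge-prevention role of the area address concrete, the sketch below shows the usual IS-IS Level 1 adjacency rule in Python. The function name and the set representation are illustrative only, not from any specification:

```python
def l1_adjacency_allowed(my_area_addresses, neighbor_area_addresses):
    """Standard IS-IS Level 1 rule: form a Level 1 adjacency (and so
    merge link state databases) only if the two routers share at least
    one area address.  Because today's TRILL fixes the area address at
    zero, every pair of TRILL switches passes this check, so two areas
    connected in error would silently merge."""
    return bool(set(my_area_addresses) & set(neighbor_area_addresses))

print(l1_adjacency_allowed({0x0001}, {0x0002}))  # False: distinct areas stay apart
print(l1_adjacency_allowed({0}, {0}))            # True: current all-zero TRILL rule
```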
427 Since current TRILL switches will reject any IS-IS messages with non- 428 zero area addresses, the choices are as follows: 430 a.1 upgrade all TRILL switches that are to interoperate in a 431 potentially multilevel environment to understand non-zero area 432 addresses, 433 a.2 neighbors of old TRILL switches must remove the area address from 434 IS-IS messages when talking to an old TRILL switch (which might 435 break IS-IS security and/or cause inadvertent merging of areas), 436 a.3 ignore the problem of accidentally merging areas entirely, or 437 a.4 keep the fixed "area address" field as 0 in TRILL, and add a new, 438 optional TLV for "area name" to Hellos that, if present, could be 439 compared, by new TRILL switches, to prevent accidental area 440 merging. 442 In principle, different solutions could be used in different areas, 443 but it would be much simpler to adopt one of these choices uniformly. 444 A simple solution would be a.1 above with each TRILL switch using a 445 dominant area nickname as its area address. For the unique nickname 446 alternative, the dominant nickname could be the lowest value nickname 447 held by any border RBridge of the area. For the aggregated nickname 448 alternative, it could be the lowest nickname held by a border RBridge 449 of the area or a nickname representing the area. 451 2.2 Aggregated versus Unique Nicknames 453 In the unique nickname alternative, all nicknames across the campus 454 must be unique. In the aggregated nickname alternative, TRILL switch 455 nicknames within an aggregated area are only of local significance, 456 and the only nickname externally (outside that area) visible is the 457 "area nickname" (or nicknames), which aggregates all the internal 458 nicknames. 460 The unique nickname approach simplifies border TRILL switches.
462 The aggregated nickname approach eliminates the potential problem of 463 nickname exhaustion, minimizes the amount of nickname information 464 that would need to be forwarded between areas, minimizes the size of 465 the forwarding table, and simplifies RPF calculation and RPF 466 information. 468 2.2.1 More Details on Unique Nicknames 470 With unique cross-area nicknames, it would be intractable to have a 471 flat nickname space with TRILL switches in different areas contending 472 for the same nicknames. Instead, each area would need to be 473 configured with, or allocate, one or more blocks of nicknames. Either 474 some TRILL switches would need to announce that all the nicknames 475 other than those in the blocks available to the area are taken (to prevent 476 the TRILL switches inside the area from choosing nicknames outside 477 the area's nickname blocks), or a new TLV would be needed to announce 478 the allowable or the prohibited nicknames, and all TRILL switches in 479 the area would need to understand that new TLV. 481 Currently, the encoding of nickname information in TLVs is by listing 482 of individual nicknames; this would make it painful for a border 483 TRILL switch to announce into an area that it is holding all other 484 nicknames to limit the nicknames available within that area. Painful 485 means tens of thousands of individual nickname entries in the Level 1 486 LSDB. The information could be encoded as ranges of nicknames to make 487 this manageable by specifying a new TLV similar to the Nickname Flags 488 APPsub-TLV specified in [RFC7780] but providing flags for blocks of 489 nicknames rather than single nicknames. Although this would require 490 updating software, such a new TLV is the preferred method.
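As a rough illustration of the savings from range encoding, the sketch below compares listing individual 2-octet nicknames against a hypothetical (start, count, flags) block entry loosely modeled on the Nickname Flags APPsub-TLV of [RFC7780]. The field layout is invented for illustration and is not a specified format:

```python
def encode_individual(nicknames):
    # One 2-octet entry per nickname, as with today's per-nickname listing.
    return b"".join(n.to_bytes(2, "big") for n in nicknames)

def encode_blocks(blocks):
    # Hypothetical block entry: 2-octet start, 2-octet count, 1-octet flags.
    return b"".join(
        start.to_bytes(2, "big") + count.to_bytes(2, "big") + bytes([flags])
        for start, count, flags in blocks
    )

# A border RBridge telling its area that only nicknames 0x1000-0x2FFF are
# available, i.e. everything else below the reserved values starting at
# 0xFFC0 is taken:
taken = list(range(0x0001, 0x1000)) + list(range(0x3000, 0xFFC0))
print(len(encode_individual(taken)))                  # 114558 octets
print(len(encode_blocks([(0x0001, 0x0FFF, 0x1),
                         (0x3000, 0xCFC0, 0x1)])))    # 10 octets
```

The two-block form carries the same "all other nicknames are taken" statement in ten octets instead of roughly 115,000, which is the difference between a trivial LSP fragment and tens of thousands of entries in the Level 1 LSDB.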
492 There is also an issue with the unique nicknames approach in building 493 distribution trees, as follows: 495 With unique nicknames in the TRILL campus and TRILL header 496 nicknames not rewritten by the border TRILL switches, there would 497 have to be globally known nicknames for the trees. Suppose there 498 are k trees. For all of the trees with nicknames located outside 499 an area, the local trees would be rooted at a border TRILL switch 500 or switches. Therefore, there would be either no splitting of 501 multi-destination traffic within the area or restricted splitting 502 of multi-destination traffic between trees rooted at a highly 503 restricted set of TRILL switches. 505 As an alternative, just the "egress nickname" field of multi- 506 destination TRILL Data packets could be mapped at the border, 507 leaving known unicast packets un-mapped. However, this surrenders 508 much of the unique nickname advantage of simpler border TRILL 509 switches. 511 Scaling to a very large campus with unique nicknames might exhaust 512 the 16-bit TRILL nickname space, particularly if (1) additional 513 nicknames are consumed to support active-active end station groups at 514 the TRILL edge using the techniques standardized in [RFC7781] and (2) 515 use of the nickname space is less efficient due to the allocation of, 516 for example, power-of-two size blocks of nicknames to areas in the 517 same way that use of the IP address space is made less efficient by 518 hierarchical allocation (see [RFC3194]). One method to avoid nickname 519 exhaustion might be to expand nicknames to 24 bits; however, that 520 technique would require TRILL message format and fast path processing 521 changes and that all TRILL switches in the campus understand larger 522 nicknames. 524 2.2.2 More Details on Aggregated Nicknames 526 The aggregated nickname approach enables passing far less nickname 527 information.
It works as follows, assuming both the source and 528 destination areas are using aggregated nicknames: 530 There are at least two ways areas could be identified. 532 One method would be to assign each area a 16-bit nickname. This 533 would not be the nickname of any actual TRILL switch. Instead, it 534 would be the nickname of the area itself. Border TRILL switches 535 would know the area nickname for their own area(s). For an 536 example of a more specific multilevel proposal using unique 537 nicknames, see [DraftUnique]. 539 Alternatively, areas could be identified by the set of nicknames 540 that identify the border routers for that area. (See [SingleName] 541 for a multilevel proposal using such a set of nicknames.) 543 The TRILL Header nickname fields in TRILL Data packets being 544 transported through a multilevel TRILL campus with aggregated 545 nicknames are as follows: 547 - When both the ingress and egress TRILL switches are in the same 548 area, there need be no change from the existing base TRILL 549 protocol standard in the TRILL Header nickname fields. 551 - When being transported between different Level 1 areas in Level 552 2, the ingress nickname is a nickname of the ingress TRILL 553 switch's area while the egress nickname is either a nickname of 554 the egress TRILL switch's area or a tree nickname. 556 - When being transported from Level 1 to Level 2, the ingress 557 nickname is the nickname of the ingress TRILL switch itself 558 while the egress nickname is either a nickname for the area of 559 the egress TRILL switch or a tree nickname. 561 - When being transported from Level 2 to Level 1, the ingress 562 nickname is a nickname for the ingress TRILL switch's area while 563 the egress nickname is either the nickname of the egress TRILL 564 switch itself or a tree nickname. 566 There are two variations of the aggregated nickname approach. The 567 first is the Border Learning approach, which is described in Section 568 2.2.2.1. 
The second is the Swap Nickname Field approach, which is 569 described in Section 2.2.2.2. Section 2.2.2.3 compares the advantages 570 and disadvantages of these two variations of the aggregated nickname 571 approach. 573 2.2.2.1 Border Learning Aggregated Nicknames 575 This section provides an illustrative example and description of the 576 border learning variation of aggregated nicknames where a single 577 nickname is used to identify an area. 579 In the following picture, RB2 and RB3 are area border TRILL switches 580 (RBridges). A source S is attached to RB1. The two areas have 581 nicknames 15961 and 15918, respectively. RB1 has a nickname, say 27, 582 and RB4 has a nickname, say 44 (and in fact, they could even have the 583 same nickname, since the TRILL switch nickname will not be visible 584 outside these aggregated areas).

586            Area 15961           level 2          Area 15918
587    +-------------------+  +-----------------+  +--------------+
588    |                   |  |                 |  |              |
589    | S--RB1---Rx--Rz----RB2---Rb---Rc--Rd---Re--RB3---Rk--RB4---D |
590    |     27            |  |                 |  |  44          |
591    |                   |  |                 |  |              |
592    +-------------------+  +-----------------+  +--------------+

594 Let's say that S transmits a frame to destination D, which is 595 connected to RB4, and let's say that D's location has already been 596 learned by the relevant TRILL switches. These relevant switches have 597 learned the following: 599 1) RB1 has learned that D is connected to nickname 15918. 600 2) RB3 has learned that D is attached to nickname 44. 602 The following sequence of events will occur: 604 - S transmits an Ethernet frame with source MAC = S and destination 605 MAC = D. 607 - RB1 encapsulates with a TRILL header with ingress RBridge = 27, 608 and egress = 15918, producing a TRILL Data packet. 610 - RB2 has announced, in the Level 1 IS-IS instance in area 15961, 611 that it is attached to all the area nicknames, including 15918. 612 Therefore, IS-IS routes the packet to RB2.
Alternatively, if a 613 distinguished range of nicknames is used for Level 2, Level 1 614 TRILL switches seeing such an egress nickname will know to route 615 to the nearest border router, which can be indicated by the IS-IS 616 attached bit. 618 - RB2, when transitioning the packet from Level 1 to Level 2, 619 replaces the ingress TRILL switch nickname with the area nickname, 620 so replaces 27 with 15961. Within Level 2, the ingress RBridge 621 field in the TRILL header will therefore be 15961, and the egress 622 RBridge field will be 15918. Also RB2 learns that S is attached to 623 nickname 27 in area 15961 to accommodate return traffic. 625 - The packet is forwarded through Level 2, to RB3, which has 626 advertised, in Level 2, reachability to the nickname 15918. 628 - RB3, when forwarding into area 15918, replaces the egress nickname 629 in the TRILL header with RB4's nickname (44). So, within the 630 destination area, the ingress nickname will be 15961 and the 631 egress nickname will be 44. 633 - RB4, when decapsulating, learns that S is attached to nickname 634 15961, which is the area nickname of the ingress. 636 Now suppose that D's location has not been learned by RB1 and/or RB3. 637 What will happen, as it would in TRILL today, is that RB1 will 638 forward the packet as multi-destination, choosing a tree. As the 639 multi-destination packet transitions into Level 2, RB2 replaces the 640 ingress nickname with the area nickname. If RB1 does not know the 641 location of D, the packet must be flooded, subject to possible 642 pruning, in Level 2 and, subject to possible pruning, from Level 2 643 into every Level 1 area that it reaches on the Level 2 distribution 644 tree. 646 Now suppose that RB1 has learned the location of D (attached to 647 nickname 15918), but RB3 does not know where D is. In that case, RB3 648 must turn the packet into a multi-destination packet within area 649 15918. 
When a border TRILL switch such as RB3 converts a unicast packet into 650 a multi-destination packet in this way, care must be taken: if RB3 is 651 not the designated transitioner between Level 2 and its area for that 652 multi-destination packet but was on the unicast path, the border 653 TRILL switch that is the designated transitioner must not forward the 654 now multi-destination packet back into Level 2. Therefore, it would 655 be desirable to have a marking, somehow, that indicates the scope of 656 this packet's distribution to be "only this area" (see also Section 4). 658 In cases where there are multiple transitioners for unicast packets, 659 the border learning mode of operation requires that the address 660 learning between them be shared by some protocol such as running 661 ESADI [RFC7357] for all Data Labels of interest to avoid excessive 662 unknown unicast flooding. 664 The potential issue described at the end of Section 2.2.1 with trees 665 in the unique nickname alternative is eliminated with aggregated 666 nicknames. With aggregated nicknames, each border TRILL switch that 667 will transition multi-destination packets can have a mapping between 668 Level 2 tree nicknames and Level 1 tree nicknames. There need not 669 even be agreement about the total number of trees; it is enough that 670 the border TRILL switch have some mapping and replace the egress TRILL 671 switch nickname (the tree nickname) when transitioning levels. 673 2.2.2.2 Swap Nickname Field Aggregated Nicknames 675 There is a variant possibility where two additional fields could 676 exist in TRILL Data packets that could be called the "ingress swap 677 nickname field" and the "egress swap nickname field". This variant is 678 described below for completeness but would require fast path hardware 679 changes from the existing TRILL protocol. The changes in the example 680 above would be as follows: 682 - RB1 will have learned the area nickname of D and the TRILL switch 683 nickname of RB4 to which D is attached.
In encapsulating a frame 684 to D, it puts the area nickname of D (15918) in the egress nickname 685 field of the TRILL Header and puts the nickname of RB4 (44) in the 686 egress swap nickname field. 688 - RB2 moves the ingress nickname to the ingress swap nickname field 689 and inserts 15961, an area nickname for S, into the ingress 690 nickname field. 692 - RB3 swaps the egress nickname and the egress swap nickname fields, 693 which sets the egress nickname to 44. 695 - RB4 learns the correspondence between the source MAC/VLAN of S and 696 the { ingress nickname, ingress swap nickname field } pair as it 697 decapsulates and egresses the frame. 699 See [DraftAggregated] for a multilevel proposal using aggregated swap 700 nicknames with a single nickname representing an area. 702 2.2.2.3 Comparison 704 The Border Learning variant described in Section 2.2.2.1 above 705 minimizes the change in non-border TRILL switches but imposes the 706 burden on border TRILL switches of learning and doing lookups in all 707 the end station MAC addresses within their area(s) that are used for 708 communication outside the area. This burden could be reduced by 709 decreasing the area size and increasing the number of areas. 711 The Swap Nickname Field variant described in Section 2.2.2.2 712 eliminates the extra address learning burden on border TRILL switches 713 but requires changes to the TRILL data packet header and more 714 extensive changes to non-border TRILL switches. In particular, with 715 this alternative, non-border TRILL switches must learn to associate 716 both a TRILL switch nickname and an area nickname with end station 717 MAC/label pairs (except for addresses that are local to their area). 719 The Swap Nickname Field alternative is more scalable but less 720 backward compatible for non-border TRILL switches.
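The per-hop manipulations of the Swap Nickname Field variant described in Section 2.2.2.2 can be sketched similarly. Again this is illustrative only: the field names are informal stand-ins for the two hypothetical header fields, and the values are the ones from the running example.

```python
# Sketch of the Swap Nickname Field variant using the example values:
# S behind RB1 (27) in area 15961, D behind RB4 (44) in area 15918.

def rb1_encapsulate():
    # RB1 knows both D's area nickname and RB4's own nickname; the
    # latter goes in the egress swap nickname field.
    return {"ingress": 27, "ingress_swap": 0,
            "egress": 15918, "egress_swap": 44}

def rb2_l1_to_l2(hdr):
    # Preserve the real ingress nickname in the swap field and put
    # the area nickname for S in the ingress field.
    return dict(hdr, ingress=15961, ingress_swap=hdr["ingress"])

def rb3_l2_to_l1(hdr):
    # Swap the egress and egress-swap fields, setting egress to 44.
    return dict(hdr, egress=hdr["egress_swap"],
                egress_swap=hdr["egress"])

hdr = rb3_l2_to_l1(rb2_l1_to_l2(rb1_encapsulate()))
# RB4 then learns S against the {ingress, ingress_swap} pair (15961, 27).
print(hdr)
```

Because the original ingress and egress nicknames are carried along rather than overwritten, no border switch needs to learn end station MAC addresses, which is the scalability advantage weighed in the comparison above.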
It would be 721 possible for border and other Level 2 TRILL switches to support both 722 Border Learning, for support of legacy Level 1 TRILL switches, and 723 Swap Nickname, to support Level 1 TRILL switches that understood the 724 Swap Nickname method, based on variations in the TRILL header, but 725 this would be even more complex. 727 The requirement to change the TRILL header and fast path processing 728 to support the Swap Nickname Field variant makes it impractical for 729 the foreseeable future. 731 2.3 Building Multi-Area Trees 733 It is easy to build a multi-area tree by building a tree in each area 734 separately (including the Level 2 "area") and then having only a 735 single border TRILL switch, say RBx, in each area attach to the 736 Level 2 area. RBx would forward all multi-destination packets 737 between that area and Level 2. 739 People might find this unacceptable, however, because of the desire 740 to split paths (not always sending all multi-destination traffic 741 through the same border TRILL switch). 743 This is the same issue as with multiple ingress TRILL switches 744 injecting traffic from a pseudonode, and can be solved with the 745 mechanism that was adopted for that purpose: the Affinity TLV 747 [RFC7783]. For each tree in the area, at most one border RBridge 748 announces itself in an Affinity TLV with that tree name. 750 2.4 The RPF Check for Trees 752 For multi-destination data originating locally in RBx's area, 753 computation of the RPF check is done as today. For multi-destination 754 packets originating outside RBx's area, computation of the RPF check 755 must be done based on which one of the border TRILL switches (say 756 RB1, RB2, or RB3) injected the packet into the area. 758 A TRILL switch, say RB4, located inside an area, must be able to know 759 which of RB1, RB2, or RB3 transitioned the packet into the area from 760 Level 2 (or into Level 2 from an area).
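For instance, RB4's reverse path forwarding filter might be keyed by distribution tree and by whether the packet originated inside or outside the area. The data structures below are purely hypothetical illustrations of that idea; tree and port names are invented.

```python
# Hypothetical sketch: for each distribution tree, RB4 records the
# expected arrival port, with externally originated traffic accepted
# only via the border switch assigned to transition that tree.

rpf_table = {
    # (tree nickname, origin) -> expected previous-hop port on RB4
    ("tree-A", "intra-area"): "port-3",
    ("tree-A", "from-L2"):    "port-1",   # toward assigned border RB1
}

def rpf_check(tree, origin, arrival_port):
    """Accept a multi-destination packet only if it arrived on the
    expected port for this tree and origin; otherwise drop it."""
    return rpf_table.get((tree, origin)) == arrival_port

print(rpf_check("tree-A", "from-L2", "port-1"))  # accepted: via RB1
print(rpf_check("tree-A", "from-L2", "port-2"))  # failed: dropped
```

Populating the "from-L2" entries requires RB4 to learn the transitioner assignments; options for distributing that information are discussed next.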
762 This could be done based on having the DBRB announce the transitioner 763 assignments to all the TRILL switches in the area, or the Affinity 764 TLV mechanism given in [RFC7783], or a New Tree Encoding mechanism 765 discussed in Section 4.1.1. 767 2.5 Area Nickname Acquisition 769 In the aggregated nickname alternative, each area must either acquire 770 a unique area nickname or be identified by the set of its border TRILL 771 switches. It is probably simpler to allocate a block of nicknames 772 (say, the top 4000) to either (1) represent areas and not specific 773 TRILL switches or (2) be used by border TRILL switches if the set of 774 such border TRILL switches represents the area. 776 The nicknames used for area identification need to be advertised and 777 acquired through Level 2. 779 Within an area, all the border TRILL switches can discover each other 780 through the Level 1 link state database, by using the IS-IS attached 781 bit or by explicitly advertising in their LSP "I am a border 782 RBridge". 784 Of the border TRILL switches, one will have the highest priority (say 785 RB7). RB7 can dynamically participate, in Level 2, to acquire a 786 nickname for identifying the area. Alternatively, RB7 could give the 787 area a pseudonode IS-IS ID, such as RB7.5, within Level 2. So an 788 area would appear, in Level 2, as a pseudonode and the pseudonode 789 could participate, in Level 2, to acquire a nickname for the area. 791 Within Level 2, all the border TRILL switches for an area can 792 advertise reachability to the area, which would mean connectivity to 793 a nickname identifying the area. 795 2.6 Link State Representation of Areas 797 Within an area, say area A1, there is an election for the DBRB 798 (Designated Border RBridge), say RB1. This can be done through LSPs 799 within area A1. The border TRILL switches announce themselves, 800 together with their DBRB priority.
(Note that the election of the 801 DBRB cannot be done based on Hello messages, because the border TRILL 802 switches are not necessarily physical neighbors of each other. They 803 can, however, reach each other through connectivity within the area, 804 which is why they can find each other through Level 1 LSPs.) 806 RB1 can acquire an area nickname (in the aggregated nickname 807 approach) and may give the area a pseudonode IS-IS ID (just like the 808 DRB would give a pseudonode IS-IS ID to a link) depending on how the 809 area nickname is handled. RB1 advertises, in area A1, an area 810 nickname that RB1 has acquired (and what the pseudonode IS-IS ID for 811 the area is, if needed). 813 Level 1 LSPs (possibly pseudonode) initiated by RB1 for the area 814 include any information external to area A1 that should be input into 815 area A1 (such as nicknames of external areas, or perhaps (in the 816 unique nickname variant) all the nicknames of external TRILL switches 817 in the TRILL campus and pruning information such as multicast 818 listeners and labels). All the other border TRILL switches for the 819 area announce (in their LSP) attachment to that area. 821 Within Level 2, RB1 generates a Level 2 LSP on behalf of the area. 822 The same pseudonode ID could be used within Level 1 and Level 2 for 823 the area. (There does not seem to be any reason why it would be useful 824 for it to be different, but there is also no reason why it would need 825 to be the same.) Likewise, all the area A1 border TRILL switches would 826 announce, in their Level 2 LSPs, connection to the area. 828 3. Area Partition 830 It is possible for an area to become partitioned, so that there is 831 still a path from one section of the area to the other, but that path 832 is via the Level 2 area. 834 With multilevel TRILL, an area will naturally break into two areas in 835 this case. 837 Area addresses might be configured to ensure two areas are not 838 inadvertently connected.
Area addresses appear in Hellos and LSPs 839 within the area. If two sections, connected only via Level 2, were 840 configured with the same area address, this would not cause any 841 problems. (They would just operate as separate Level 1 areas.) 843 A more serious problem occurs if the Level 2 area is partitioned in 844 such a way that it could be healed by using a path through a Level 1 845 area. TRILL will not attempt to solve this problem. Within the Level 846 1 area, a single border RBridge will be the DBRB, and will be in 847 charge of deciding which (single) RBridge will transition any 848 particular multi-destination packet between that area and Level 2. 849 If the Level 2 area is partitioned, this will result in multi- 850 destination data only reaching the portion of the TRILL campus 851 reachable through the partition attached to the TRILL switch that 852 transitions that packet. It will not cause a loop. 854 4. Multi-Destination Scope 856 There are at least two reasons it would be desirable to be able to 857 mark a multi-destination packet with a scope that indicates the 858 packet should not exit the area, as follows: 860 1. To address an issue in the border learning variant of the 861 aggregated nickname alternative, when a unicast packet turns into 862 a multi-destination packet when transitioning from Level 2 to 863 Level 1, as discussed in Section 4.1. 865 2. To constrain the broadcast domain for certain discovery, 866 directory, or service protocols as discussed in Section 4.2. 868 Multi-destination packet distribution scope restriction could be done 869 in a number of ways. For example, there could be a flag in the packet 870 that means "for this area only". However, the technique that might 871 require the least change to TRILL switch fast path logic would be to 872 indicate this in the egress nickname that designates the distribution 873 tree being used.
There could be two general tree nicknames for each 874 tree, one being for distribution restricted to the area and the other 875 being for multi-area trees. Or there could be a set of N (perhaps 16) 876 special, currently reserved nicknames used to specify the N highest 877 priority trees, with the variation that if the special nickname is 878 used for the tree, the packet is not transitioned between areas. Or 879 one or more special trees could be built that were restricted to the 880 local area. 882 4.1 Unicast to Multi-destination Conversions 884 In the border learning variant of the aggregated nickname 885 alternative, the following situation may occur: 886 - a unicast packet's destination might be known at the Level 1 to 887 Level 2 transition, so it is forwarded as a unicast packet to the 888 least cost border TRILL switch advertising connectivity to the 889 destination area, but 890 - upon arriving at the border TRILL switch, it turns out to have an 891 unknown destination { MAC, Data Label } pair. 893 In this case, the packet must be converted into a multi-destination 894 packet and flooded in the destination area. However, if the border 895 TRILL switch doing the conversion is not the border TRILL switch 896 designated to transition the resulting multi-destination packet, 897 there is the danger that the designated transitioner may pick up the 898 packet and flood it back into Level 2, from which it may be flooded 899 into multiple areas. This danger can be avoided by restricting any 900 multi-destination packet that results from such a conversion to the 901 destination area as described above. 903 Alternatively, a multi-destination packet intended only for the area 904 could be tunneled (within the area) to the RBridge, say RBx, that is 905 the appointed transitioner for that form of packet (say, based on VLAN 906 or FGL), with instructions that RBx only transmit the packet within 907 the area, and RBx could initiate the multi-destination packet within 908 the area.
Since RBx introduced the packet, and is the only one allowed 909 to transition that packet to Level 2, this would accomplish scoping 910 of the packet to within the area. Since this only occurs in the 911 unusual case when unicast packets need to be turned into multi- 912 destination packets as described above, the suboptimality of tunneling 913 between the border TRILL switch that receives the unicast packet and 914 the appointed level transitioner for that packet might not be an 915 issue. 917 4.1.1 New Tree Encoding 919 The current encoding of a distribution tree, in a TRILL header, is the 920 nickname of the tree root. This requires all 16 bits of the egress 921 nickname field. TRILL could instead, for example, use the bottom 6 922 bits to encode the tree number (allowing 64 trees), leaving 10 bits 923 to encode information such as: 925 o scope: a flag indicating whether it should be single area only, or 926 entire campus 927 o border injector: an indicator of which of the k border TRILL 928 switches injected this packet 930 If TRILL were to adopt this new encoding, any of the TRILL switches 931 in an edge group could inject a multi-destination packet. This would 932 require all TRILL switches to be changed to understand the new 933 encoding for a tree, and it would require a TLV in the LSP to 934 indicate which number each of the TRILL switches in an edge group 935 would be. 937 While there are a number of advantages to this technique, it requires 938 fast path logic changes and thus its deployment is not practical at 939 this time. It is included here for completeness. 941 4.2 Selective Broadcast Domain Reduction 943 There are a number of service, discovery, and directory protocols 944 that, for convenience, are accessed via multicast or broadcast 945 frames. Examples are DHCP (Dynamic Host Configuration Protocol), the 946 NetBIOS Service Location Protocol, and multicast DNS (Domain Name 947 Service).
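(As a brief aside on Section 4.1.1: the suggested split of the 16-bit egress nickname field into a 6-bit tree number plus scope and border-injector information could be sketched as below. The field widths are the ones suggested in the text; the exact bit positions chosen here are hypothetical.)

```python
TREE_BITS = 6              # bottom 6 bits: tree number (64 trees)
SCOPE_SHIFT = TREE_BITS    # one flag bit: single area only vs entire campus
INJ_SHIFT = TREE_BITS + 1  # remaining bits: which border switch injected

def encode_tree(tree_num, area_only, injector):
    """Pack a tree number, scope flag, and injector index into 16 bits."""
    assert 0 <= tree_num < 64 and 0 <= injector < 512
    return (injector << INJ_SHIFT) | (int(area_only) << SCOPE_SHIFT) | tree_num

def decode_tree(egress):
    """Unpack the 16-bit egress nickname field."""
    return {"tree": egress & 0x3F,
            "area_only": bool((egress >> SCOPE_SHIFT) & 1),
            "injector": egress >> INJ_SHIFT}

value = encode_tree(5, True, 3)
print(decode_tree(value))  # {'tree': 5, 'area_only': True, 'injector': 3}
```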
949 Some such protocols provide means to restrict distribution to an IP 950 subnet or equivalent to reduce the size of the broadcast domain they 951 are using and then provide a proxy that can be placed in that subnet 952 to use unicast to access a service elsewhere. In cases where a proxy 953 mechanism is not currently defined, it may be possible to create one 954 that references a central server or cache. With multilevel TRILL, it 955 is possible to construct very large IP subnets that could become 956 saturated with multi-destination traffic of this type unless packets 957 can be further restricted in their distribution. Such restricted 958 distribution can be accomplished for some protocols, say protocol P, 959 in a variety of ways including the following: 961 - Either (1) at all ingress TRILL switches in an area, place all 962 protocol P multi-destination packets on a distribution tree in 963 such a way that the packets are restricted to the area, or (2) at 964 all border TRILL switches between that area and Level 2, detect 965 protocol P multi-destination packets and do not transition them. 967 - Then place one, or a few for redundancy, protocol P proxies inside 968 each area where protocol P may be in use. These proxies unicast 969 protocol P requests or other messages to the actual campus 970 server(s) for P. They also receive unicast responses or other 971 messages from those servers and deliver them within the area via 972 unicast, multicast, or broadcast as appropriate. (Such proxies 973 would not be needed if it were acceptable for all protocol P 974 traffic to be restricted to an area.) 976 While it might seem logical to connect the campus servers to TRILL 977 switches in Level 2, they could be placed within one or more areas so 978 that, in some cases, those areas might not require a local proxy 979 server. 981 5.
Co-Existence with Old TRILL switches 983 TRILL switches that are not multilevel aware may have a problem with 984 calculating RPF Check and filtering information, since they would not 985 be aware of the assignment of border TRILL switch transitioning. 987 A possible solution, as long as any old TRILL switches exist within 988 an area, is to have the border TRILL switches elect a single DBRB 989 (Designated Border RBridge), and have all inter-area traffic go 990 through the DBRB (unicast as well as multi-destination). If that 991 DBRB goes down, a new one will be elected, but at any one time, all 992 inter-area traffic (unicast as well as multi-destination) would go 993 through that one DBRB. However, this eliminates load splitting at 994 level transition. 996 6. Multi-Access Links with End Stations 998 Care must be taken in the case where there are multiple TRILL 999 switches on a link with one or more end stations, keeping in mind 1000 that end stations are TRILL ignorant. In particular, it is essential 1001 that only one TRILL switch ingress/egress any given data packet 1002 from/to an end station, so that connectivity is provided to that end 1003 station without duplicating end station data and so that loops are 1004 not formed due to one TRILL switch egressing data in native form 1005 (i.e., with no TRILL header) and having that data re-ingressed by 1006 another TRILL switch on the link. 1008 With existing, single level TRILL, this is done by electing a single 1009 Designated RBridge per link, which appoints a single Appointed 1010 Forwarder per VLAN [RFC7177] [RFC8139]. This mechanism depends on the 1011 RBridges establishing adjacency. But suppose there are two (or more) 1012 TRILL switches on a link in different areas, say RB1 in area A1 and 1013 RB2 in area A2, as shown below, and that the link also has one or 1014 more end stations attached.
If RB1 and RB2 ignore each other's 1015 Hellos because they are in different areas, as they are required to 1016 do under normal IS-IS PDU processing rules, then they will not form 1017 an adjacency. If they are not adjacent, they will ignore each other 1018 for the Appointed Forwarder mechanism and will both ingress/egress 1019 end station traffic on the link causing loops and duplication. 1021 The problem is not avoiding adjacency or avoiding TRILL Data packet 1022 transfer between RB1 and RB2. The area address mechanism of IS-IS or 1023 possibly the use of topology constraints or the like does that quite 1024 well. The problem stems from end stations being TRILL ignorant so 1025 care must be taken that multiple RBridges on a link do not ingress 1026 the same frame originated by an end station and so that an RBridge 1027 does not ingress a native frame egressed by a different RBridge 1028 because the RBridge mistakes the frame for a frame originated by an 1029 end station. 1031 +--------------------------------------------+ 1032 | Level 2 | 1033 +----------+---------------------+-----------+ 1034 | Area A1 | | Area A2 | 1035 | +---+ | | +---+ | 1036 | |RB1| | | |RB2| | 1037 | +-+-+ | | +-+-+ | 1038 | | | | | | 1039 +-----|----+ +-----|-----+ 1040 | | 1041 --+---------+-------------+--------+-- Link 1042 | | 1043 +------+------+ +--+----------+ 1044 | End Station | | End Station | 1045 +-------------+ +-------------+ 1047 A simple rule, which is preferred, is to use the TRILL switch or 1048 switches having the lowest numbered area, comparing area numbers as 1049 unsigned integers, to handle all native traffic to/from end stations 1050 on the link. This would automatically give multilevel-ignorant legacy 1051 TRILL switches, that would be using area number zero, highest 1052 priority for handling end station traffic, which they would try to do 1053 anyway. 1055 Other methods are possible. 
For example, the selection of 1055 Appointed Forwarders, and of the TRILL switch in charge of that 1056 selection, could be done across all TRILL switches on the link 1057 regardless of area. However, a special case would then have to be 1058 made for legacy TRILL 1059 switches using area number zero. 1061 These techniques require multilevel aware TRILL switches to take 1062 actions based on Hellos from RBridges in other areas even though they 1063 will not form an adjacency with such RBridges. However, the action is 1064 quite simple in the preferred case: if a TRILL switch sees Hellos 1065 from lower numbered areas, it would not act as an Appointed 1066 Forwarder on the link until the Hello timer for such Hellos had 1067 expired. 1069 7. Summary 1071 This document describes potential scaling issues in TRILL and discusses 1072 possible approaches to multilevel TRILL as a solution or element of a 1073 solution to most of them. 1075 The alternative using aggregated areas in multilevel TRILL has 1076 significant advantages in terms of scalability over using campus wide 1077 unique nicknames, not just in avoiding nickname exhaustion, but also 1078 in allowing RPF Checks to be aggregated based on an entire area. 1079 However, the alternative of using unique nicknames is simpler and 1080 avoids the changes in border TRILL switches required to support 1081 aggregated nicknames. It is possible to support both. For example, a 1082 TRILL campus could use simpler unique nicknames until scaling begins 1083 to cause problems and then start to introduce areas with aggregated 1084 nicknames. 1086 Some multilevel TRILL issues are not difficult, such as dealing with 1087 partitioned areas. Other issues are more difficult, especially 1088 dealing with old TRILL switches that are multilevel ignorant. 1090 8. Security Considerations 1092 This informational document explores alternatives for the design of 1093 multilevel IS-IS in TRILL and generally does not consider security 1094 issues.
1096 If aggregated nicknames are used in two areas that have the same area 1097 address and those areas merge, there is a possibility of a transient 1098 nickname collision that would not occur with unique nicknames. Such a 1099 collision could cause a data packet to be delivered to the wrong 1100 egress TRILL switch but it would still not be delivered to any end 1101 station in the wrong Data Label; thus such delivery would still 1102 conform to security policies. 1104 For general TRILL Security Considerations, see [RFC6325]. 1106 9. IANA Considerations 1108 This document requires no IANA actions. RFC Editor: Please remove 1109 this section before publication. 1111 Normative References 1113 [IS-IS] - ISO/IEC 10589:2002, Second Edition, "Intermediate System to 1114 Intermediate System Intra-Domain Routing Exchange Protocol for 1115 use in Conjunction with the Protocol for Providing the 1116 Connectionless-mode Network Service (ISO 8473)", 2002. 1118 [RFC6325] - Perlman, R., Eastlake 3rd, D., Dutt, D., Gai, S., and A. 1119 Ghanwani, "Routing Bridges (RBridges): Base Protocol 1120 Specification", RFC 6325, July 2011. 1122 [RFC7177] - Eastlake 3rd, D., Perlman, R., Ghanwani, A., Yang, H., 1123 and V. Manral, "Transparent Interconnection of Lots of Links 1124 (TRILL): Adjacency", RFC 7177, May 2014. 1127 [RFC7780] - Eastlake 3rd, D., Zhang, M., Perlman, R., Banerjee, A., 1128 Ghanwani, A., and S. Gupta, "Transparent Interconnection of 1129 Lots of Links (TRILL): Clarifications, Corrections, and 1130 Updates", RFC 7780, DOI 10.17487/RFC7780, February 2016. 1133 [RFC8139] - Eastlake, D., Li, Y., Umair, M., Banerjee, A., and F. Hu, 1134 "Transparent Interconnection of Lots of Links (TRILL): 1135 Appointed Forwarders", RFC 8139, DOI 10.17487/RFC8139, June 1136 2017.
1138 Informative References 1140 [InterCon] - Perlman, R., "Interconnections, Second Edition; Bridges, 1141 Routers, Switches, and Internetworking Protocols", Addison 1142 Wesley, ISBN 0-201-63448-1, September 1999. 1144 [RFC3194] - Durand, A. and C. Huitema, "The H-Density Ratio for 1145 Address Assignment Efficiency An Update on the H ratio", RFC 1146 3194, DOI 10.17487/RFC3194, November 2001. 1149 [RFC6361] - Carlson, J. and D. Eastlake 3rd, "PPP Transparent 1150 Interconnection of Lots of Links (TRILL) Protocol Control 1151 Protocol", RFC 6361, August 2011. 1153 [RFC7172] - Eastlake 3rd, D., Zhang, M., Agarwal, P., Perlman, R., 1154 and D. Dutt, "Transparent Interconnection of Lots of Links 1155 (TRILL): Fine-Grained Labeling", RFC 7172, May 2014. 1157 [RFC7176] - Eastlake 3rd, D., Senevirathne, T., Ghanwani, A., Dutt, 1158 D., and A. Banerjee, "Transparent Interconnection of Lots of 1159 Links (TRILL) Use of IS-IS", RFC 7176, May 2014. 1161 [RFC7357] - Zhai, H., Hu, F., Perlman, R., Eastlake 3rd, D., and O. 1162 Stokes, "Transparent Interconnection of Lots of Links (TRILL): 1163 End Station Address Distribution Information (ESADI) Protocol", 1164 RFC 7357, September 2014. 1167 [RFC7781] - Zhai, H., Senevirathne, T., Perlman, R., Zhang, M., and 1168 Y. Li, "Transparent Interconnection of Lots of Links (TRILL): 1169 Pseudo-Nickname for Active-Active Access", RFC 7781, DOI 1170 10.17487/RFC7781, February 2016. 1173 [RFC7783] - Senevirathne, T., Pathangi, J., and J. Hudson, 1174 "Coordinated Multicast Trees (CMT) for Transparent 1175 Interconnection of Lots of Links (TRILL)", RFC 7783, DOI 1176 10.17487/RFC7783, February 2016. 1179 [DraftAggregated] - Bhargav Bhikkaji, Balaji Venkat Venkataswami, 1180 Narayana Perumal Swamy, "Connecting Disparate Data 1181 Center/PBB/Campus TRILL sites using BGP", 1182 draft-balaji-trill-over-ip-multi-level, Work in Progress. 1184 [DraftUnique] - M. Zhang, D. Eastlake, R. Perlman, M. Cullen, H. 1185 Zhai, D.
Liu, "TRILL Multilevel Using Unique Nicknames", 1186 draft-ietf-trill-multilevel-unique-nickname, Work in Progress. 1188 [SingleName] - Mingui Zhang, et al., "Single Area Border RBridge 1189 Nickname for TRILL Multilevel", 1190 draft-ietf-trill-multilevel-single-nickname, Work in Progress. 1192 Acknowledgements 1194 The helpful comments and contributions of the following are hereby 1195 acknowledged: 1197 Alia Atlas, David Michael Bond, Dino Farinacci, Sue Hares, Gayle 1198 Noble, Alexander Vainshtein, and Stig Venaas. 1200 The document was prepared in raw nroff. All macros used were defined 1201 within the source file. 1203 Authors' Addresses 1205 Radia Perlman 1206 EMC 1207 2010 256th Avenue NE, #200 1208 Bellevue, WA 98007 USA 1210 EMail: radia@alum.mit.edu 1212 Donald Eastlake 1213 Huawei Technologies 1214 155 Beaver Street 1215 Milford, MA 01757 USA 1217 Phone: +1-508-333-2270 1218 EMail: d3e3e3@gmail.com 1220 Mingui Zhang 1221 Huawei Technologies 1222 No.156 Beiqing Rd. Haidian District, 1223 Beijing 100095 P.R. China 1225 EMail: zhangmingui@huawei.com 1227 Anoop Ghanwani 1228 Dell 1229 5450 Great America Parkway 1230 Santa Clara, CA 95054 USA 1232 EMail: anoop@alumni.duke.edu 1234 Hongjun Zhai 1235 Jinling Institute of Technology 1236 99 Hongjing Avenue, Jiangning District 1237 Nanjing, Jiangsu 211169 China 1239 EMail: honjun.zhai@tom.com 1241 Copyright and IPR Provisions 1243 Copyright (c) 2017 IETF Trust and the persons identified as the 1244 document authors. All rights reserved. 1246 This document is subject to BCP 78 and the IETF Trust's Legal 1247 Provisions Relating to IETF Documents 1248 (http://trustee.ietf.org/license-info) in effect on the date of 1249 publication of this document. Please review these documents 1250 carefully, as they describe your rights and restrictions with respect 1251 to this document.
Code Components extracted from this document must 1252 include Simplified BSD License text as described in Section 4.e of 1253 the Trust Legal Provisions and are provided without warranty as 1254 described in the Simplified BSD License. The definitive version of 1255 an IETF Document is that published by, or under the auspices of, the 1256 IETF. Versions of IETF Documents that are published by third parties, 1257 including those that are translated into other languages, should not 1258 be considered to be definitive versions of IETF Documents. The 1259 definitive version of these Legal Provisions is that published by, or 1260 under the auspices of, the IETF. Versions of these Legal Provisions 1261 that are published by third parties, including those that are 1262 translated into other languages, should not be considered to be 1263 definitive versions of these Legal Provisions. For the avoidance of 1264 doubt, each Contributor to the IETF Standards Process licenses each 1265 Contribution that he or she makes as part of the IETF Standards 1266 Process to the IETF Trust pursuant to the provisions of RFC 5378. No 1267 language to the contrary, or terms, conditions or rights that differ 1268 from or are inconsistent with the rights and licenses granted under 1269 RFC 5378, shall have any effect and shall be null and void, whether 1270 published or posted by such Contributor, or included with or in such 1271 Contribution.