TRILL Working Group                                        Radia Perlman
INTERNET-DRAFT                                                       EMC
Intended status: Informational                          Donald Eastlake
                                                           Mingui Zhang
                                                                 Huawei
                                                         Anoop Ghanwani
                                                                   Dell
                                                           Hongjun Zhai
                                                                    JIT
Expires: December 15, 2017                                June 16, 2017


                   Alternatives for Multilevel TRILL
            (Transparent Interconnection of Lots of Links)


Abstract

   Although TRILL is based on IS-IS, which supports multilevel unicast
   routing, extending TRILL to multiple levels has challenges that are
   not addressed by the already-existing capabilities of IS-IS. One
   issue is the handling of multi-destination packet distribution
   trees. Other issues are with TRILL switch nicknames. How are such
   nicknames allocated across a multilevel TRILL network? Do nicknames
   need to be unique across an entire multilevel TRILL network or can
   they merely be unique within each multilevel area?

   This informational document enumerates and examines alternatives
   based on a number of factors including backward compatibility,
   simplicity, and scalability, and makes recommendations in some
   cases.

Status of This Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79. Distribution of this document
   is unlimited. Comments should be sent to the TRILL working group
   mailing list.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time. It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html. The list of Internet-Draft
   Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Table of Contents

   1. Introduction............................................4
   1.1 The Motivation for Multilevel..........................4
   1.2 Improvements Due to Multilevel.........................5
   1.2.1 The Routing Computation Load.........................5
   1.2.2 LSDB Volatility Creating Too Much Control Traffic....5
   1.2.3 LSDB Volatility Causing Too Much Time Unconverged....5
   1.2.4 The Size Of The LSDB.................................6
   1.2.5 Nickname Limit.......................................6
   1.2.6 Multi-Destination Traffic............................7
   1.3 Unique and Aggregated Nicknames........................7
   1.4 More on Areas..........................................8
   1.5 Terminology and Acronyms...............................8

   2. Multilevel TRILL Issues................................10
   2.1 Non-zero Area Addresses...............................11
   2.2 Aggregated versus Unique Nicknames....................11
   2.2.1 More Details on Unique Nicknames....................12
   2.2.2 More Details on Aggregated Nicknames................13
   2.2.2.1 Border Learning Aggregated Nicknames..............14
   2.2.2.2 Swap Nickname Field Aggregated Nicknames..........16
   2.2.2.3 Comparison........................................17
   2.3 Building Multi-Area Trees.............................17
   2.4 The RPF Check for Trees...............................18
   2.5 Area Nickname Acquisition.............................18
   2.6 Link State Representation of Areas....................19

   3. Area Partition.........................................20

   4. Multi-Destination Scope................................21
   4.1 Unicast to Multi-destination Conversions..............21
   4.1.1 New Tree Encoding...................................22
   4.2 Selective Broadcast Domain Reduction..................22

   5. Co-Existence with Old TRILL switches...................24
   6. Multi-Access Links with End Stations...................25
   7. Summary................................................27

   8. Security Considerations................................28
   9. IANA Considerations....................................28

   Normative References......................................29
   Informative References....................................29

   Acknowledgements..........................................31
   Authors' Addresses........................................32

1. Introduction

   The IETF TRILL (Transparent Interconnection of Lots of Links)
   protocol [RFC6325] [RFC7177] [RFC7780] provides optimal pair-wise
   data routing without configuration, safe forwarding even during
   periods of temporary loops, and support for multipathing of both
   unicast and multicast traffic in networks with arbitrary topology
   and link technology, including multi-access links. TRILL
   accomplishes this by using IS-IS (Intermediate System to
   Intermediate System [IS-IS] [RFC7176]) link state routing in
   conjunction with a header that includes a hop count. The design
   supports data labels (VLANs and Fine Grained Labels [RFC7172]) and
   optimization of the distribution of multi-destination data based on
   data label and multicast group. Devices that implement TRILL are
   called TRILL Switches or RBridges.

   Familiarity with [IS-IS], [RFC6325], and [RFC7780] is assumed in
   this document.

1.1 The Motivation for Multilevel

   The primary motivation for multilevel TRILL is to improve
   scalability. The following issues might limit the scalability of a
   TRILL-based network:

   1. The routing computation load
   2. The volatility of the link state database (LSDB) creating too
      much control traffic
   3. The volatility of the LSDB causing the TRILL network to be in an
      unconverged state too much of the time
   4. The size of the LSDB
   5. The limit on the number of TRILL switches, due to the 16-bit
      nickname space (for further information on why this might be a
      problem, see Section 1.2.5)
   6. The traffic due to upper layer protocols' use of broadcast and
      multicast
   7. The size of the end node learning table (the table that
      remembers (egress TRILL switch, label/MAC) pairs)

   As discussed below, extending TRILL IS-IS to be multilevel
   (hierarchical) can help with all of these issues except issue 7.

   IS-IS was designed to be multilevel [IS-IS]. A network can be
   partitioned into "areas". Routing within an area is known as "Level
   1 routing". Routing between areas is known as "Level 2 routing".
   The Level 2 IS-IS network consists of Level 2 routers and links
   between the Level 2 routers. Level 2 routers may participate in one
   or more Level 1 areas, in addition to their role as Level 2
   routers.

   Each area is connected to Level 2 through one or more "border
   routers", which participate both as a router inside the area and as
   a router inside the Level 2 "area". When transitioning multi-
   destination packets between Level 2 and a Level 1 area in either
   direction, care must be taken that exactly one border TRILL switch
   transitions a particular data packet between the levels; otherwise
   duplication or loss of traffic can occur.

1.2 Improvements Due to Multilevel

   Partitioning the network into areas directly solves the first four
   scalability issues listed above, as described in Sections 1.2.1
   through 1.2.4. Multilevel also contributes to solving issues 5 and
   6, as discussed in Sections 1.2.5 and 1.2.6 respectively. In the
   subsections below, N indicates the number of TRILL switches in a
   TRILL campus.

1.2.1 The Routing Computation Load

   The optimized computational effort to calculate least cost routes
   at a TRILL switch in a single level campus is on the order of
   N*log(N). In an optimized multilevel campus, it is on the order of
   sqrt(N)*log(N). So, for example, assuming N is 3,000, the level of
   computational effort would be reduced by about a factor of 50.

1.2.2 LSDB Volatility Creating Too Much Control Traffic

   The rate of LSDB changes would be approximately proportional to the
   number of routers/links in the TRILL campus for a single level
   campus. With an optimized multilevel campus, each area would have
   about sqrt(N) routers, reducing volatility by about a factor of
   sqrt(N).

1.2.3 LSDB Volatility Causing Too Much Time Unconverged

   With the simplifying assumption that routing converges after each
   change before the next change, the fraction of time that routing is
   unconverged is proportional to the product of the volatility and
   the convergence time. The convergence time is approximately
   proportional to the computation involved at each router. Thus,
   based on these simplifying assumptions, the fraction of time
   routing at a router is not converged with the network would
   improve, in going from single to multilevel, by about a factor of
   N.

1.2.4 The Size Of The LSDB

   The size of the LSDB is also approximately proportional to the
   number of routers/links and so, as with item 2 above, should
   improve by about a factor of sqrt(N) in going from single to
   multilevel.

1.2.5 Nickname Limit

   For many TRILL protocol purposes, RBridges are designated by 16-bit
   nicknames. While some values are reserved, this appears to provide
   enough nicknames to designate over 65,000 RBridges. However, this
   number is effectively reduced by the following two factors:

   - Nicknames are consumed when pseudo-nicknames are used for the
     active-active connection of end stations. Using the techniques in
     [RFC7781], for example, could double the nickname consumption if
     there are extensive active-active edge groups connected to
     different sets of edge TRILL switch ports.

   - There might be problems with campus-wide contention for nicknames
     if nicknames were allocated individually from a single pool for
     the entire campus. Thus it seems likely that a hierarchical
     method would be chosen, where blocks of nicknames are allocated
     at Level 2 to Level 1 areas, and contention for a nickname by an
     RBridge in such a Level 1 area would be only within that area.
     Such hierarchical allocation leads to further effective loss of
     nicknames, similar to the situation with IP addresses discussed
     in [RFC3194].

   Even without the above effective reductions in nickname space, a
   very large multilevel TRILL campus, say one with 200 areas each
   containing 500 TRILL switches, could require 100,000 or more
   nicknames if all nicknames in the campus must be unique, which is
   clearly impossible with 16-bit nicknames.

   This scaling limit, namely the 16-bit nickname space, can only be
   addressed with the aggregated nickname approach. Since the
   aggregated nickname approach requires some complexity in the border
   TRILL switches (for rewriting the nicknames in the TRILL header),
   the suggested design in this document allows a campus with a
   mixture of unique-nickname areas and aggregated-nickname areas.
   Thus a TRILL network could start using multilevel with the simpler
   unique nickname method and add aggregated areas at a later stage of
   network growth.

   With this design, nicknames must be unique across all Level 2 and
   unique-nickname area TRILL switches taken together, whereas
   nicknames inside an aggregated-nickname area are visible only
   inside that area. Nicknames inside an aggregated-nickname area must
   still not conflict with nicknames visible in Level 2 (which
   includes all nicknames inside unique-nickname areas), but the
   nicknames inside an aggregated-nickname area may be the same as
   nicknames used within one or more other aggregated-nickname areas.
   With the design suggested in this document, TRILL switches within
   an area need not be aware of whether they are in an aggregated-
   nickname area or a unique-nickname area. The border TRILL switches
   in area A1 will indicate, in their LSP inside area A1, which
   nicknames (or nickname ranges) are available, or alternatively
   which nicknames are not available, for choosing as nicknames by
   area A1 TRILL switches.

1.2.6 Multi-Destination Traffic

   Scaling limits due to protocol use of broadcast and multicast can
   be addressed in many cases in a multilevel campus by introducing
   locally-scoped multi-destination delivery, limited to an area or a
   single link. See further discussion of this issue in Section 4.2.

1.3 Unique and Aggregated Nicknames

   We describe two alternatives for hierarchical or multilevel TRILL.
   One we call the "unique nickname" alternative. The other we call
   the "aggregated nickname" alternative. In the aggregated nickname
   alternative, border TRILL switches replace either the ingress or
   egress nickname field in the TRILL header of unicast packets with
   an aggregated nickname representing an entire area.

   The unique nickname alternative has the advantage that border TRILL
   switches are simpler and do not need to do TRILL Header nickname
   modification. It also simplifies testing and maintenance operations
   that originate in one area and terminate in a different area.

   The aggregated nickname alternative has the following advantages:

   o  it solves scaling problem #5 above, the 16-bit nickname limit,
      in a simple way,

   o  it lessens the amount of inter-area routing information that
      must be passed in IS-IS, and

   o  it logically reduces the RPF (Reverse Path Forwarding) Check
      information (since only the area nickname needs to appear,
      rather than all the ingress TRILL switches in that area).
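The third advantage can be quantified with a toy calculation. The numbers below are illustrative only (reusing the 200-area, 500-switch example from Section 1.2.5), not derived from any implementation:

```python
# Toy sizing of RPF check state for one distribution tree at one
# TRILL switch. Numbers are illustrative only.
areas, switches_per_area = 200, 500

# Unique nicknames: any ingress switch in the campus may appear in
# the ingress nickname field of a multi-destination packet.
unique_entries = areas * switches_per_area

# Aggregated nicknames: traffic from outside the area carries an
# area nickname, so only areas plus local switches need entries.
aggregated_entries = areas + switches_per_area

print(unique_entries, aggregated_entries)  # 100000 vs 700
```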
   In both cases, it is possible and advantageous to compute multi-
   destination data packet distribution trees such that the portion
   computed within a given area is rooted within that area.

   For further discussion of the unique and aggregated nickname
   alternatives, see Section 2.2.

1.4 More on Areas

   Each area is configured with an "area address", which is advertised
   in IS-IS messages so as to avoid accidentally interconnecting
   areas. For TRILL, the only purpose of the area address would be to
   avoid accidentally interconnecting areas, although the area address
   had other purposes in CLNP (Connectionless Network Layer Protocol)
   and DECnet, for which IS-IS was originally designed.

   Currently, the TRILL specification says that the area address must
   be zero. If we change the specification so that the area address
   value of zero is just a default, then most of the IS-IS multilevel
   machinery works as originally designed. However, there are TRILL-
   specific issues, which we address below in Section 2.1.

1.5 Terminology and Acronyms

   This document generally uses the acronyms defined in [RFC6325] plus
   the additional acronym DBRB. However, for ease of reference, most
   acronyms used are listed here:

   CLNP - ConnectionLess Network Protocol

   DECnet - a proprietary routing protocol that was used by Digital
      Equipment Corporation. "DECnet Phase 5" was the origin of IS-IS.
   Data Label - VLAN or Fine Grained Label [RFC7172]

   DBRB - Designated Border RBridge

   ESADI - End Station Address Distribution Information

   IS-IS - Intermediate System to Intermediate System [IS-IS]

   LSDB - Link State Data Base

   LSP - Link State PDU

   PDU - Protocol Data Unit

   RBridge - Routing Bridge, an alternative name for a TRILL switch

   RPF - Reverse Path Forwarding

   TLV - Type Length Value

   TRILL - Transparent Interconnection of Lots of Links or Tunneled
      Routing in the Link Layer [RFC6325] [RFC7780]

   TRILL switch - a device that implements the TRILL protocol
      [RFC6325] [RFC7780], sometimes called an RBridge

   VLAN - Virtual Local Area Network

2. Multilevel TRILL Issues

   The TRILL-specific issues introduced by multilevel include the
   following:

   a. Configuration of non-zero area addresses, encoding them in IS-IS
      PDUs, and possibly interworking with old TRILL switches that do
      not understand non-zero area addresses.

      See Section 2.1.

   b. Nickname management.

      See Sections 2.5 and 2.2.

   c. Advertisement of pruning information (Data Label reachability,
      IP multicast addresses) across areas.

      Distribution tree pruning information is only an optimization,
      as long as multi-destination packets are not prematurely pruned.
      For instance, border TRILL switches could advertise that they
      can reach all possible Data Labels and have an IP multicast
      router attached. This would cause all multi-destination traffic
      to be transmitted to border TRILL switches, and possibly pruned
      there, when the traffic could have been pruned earlier based on
      Data Label or multicast group if border TRILL switches
      advertised more detailed Data Label and/or multicast listener
      and multicast router attachment information.

   d. Computation of distribution trees across areas for multi-
      destination data.

      See Section 2.3.

   e. Computation of RPF information for those distribution trees.

      See Section 2.4.

   f. Computation of pruning information across areas.

      See Sections 2.3 and 2.6.

   g. Compatibility, as much as practical, with existing, unmodified
      TRILL switches.

      The most important form of compatibility is with existing TRILL
      fast path hardware. Changes that require upgrade to the slow
      path firmware/software are more tolerable. Compatibility for the
      relatively small number of border TRILL switches is less
      important than compatibility for non-border TRILL switches.

      See Section 5.

2.1 Non-zero Area Addresses

   The current TRILL base protocol specification [RFC6325] [RFC7177]
   [RFC7780] says that the area address in IS-IS must be zero. The
   purpose of the area address is to ensure that different areas are
   not accidentally merged. Furthermore, zero is an invalid area
   address for layer 3 IS-IS, so it was chosen as an additional safety
   mechanism to ensure that layer 3 IS-IS packets would not be
   confused with TRILL IS-IS packets. However, TRILL uses other
   techniques to avoid confusion on a link, such as different
   multicast addresses and Ethertypes on Ethernet [RFC6325], different
   PPP (Point-to-Point Protocol) code points on PPP [RFC6361], and the
   like. Thus, using an area address in TRILL that might be used in
   layer 3 IS-IS is not a problem.
   Since current TRILL switches will reject any IS-IS messages with
   non-zero area addresses, the choices are as follows:

   a.1 upgrade all TRILL switches that are to interoperate in a
       potentially multilevel environment to understand non-zero area
       addresses,

   a.2 have neighbors of old TRILL switches remove the area address
       from IS-IS messages when talking to an old TRILL switch (which
       might break IS-IS security and/or cause inadvertent merging of
       areas),

   a.3 ignore the problem of accidentally merging areas entirely, or

   a.4 keep the fixed "area address" field as 0 in TRILL, and add a
       new, optional TLV for "area name" to Hellos that, if present,
       could be compared, by new TRILL switches, to prevent accidental
       area merging.

   In principle, different solutions could be used in different areas,
   but it would be much simpler to adopt one of these choices
   uniformly. A simple solution would be a.1 above with each TRILL
   switch using a dominant area nickname as its area address. For the
   unique nickname alternative, the dominant nickname could be the
   lowest value nickname held by any border RBridge of the area. For
   the aggregated nickname alternative, it could be the lowest
   nickname held by a border RBridge of the area or a nickname
   representing the area.

2.2 Aggregated versus Unique Nicknames

   In the unique nickname alternative, all nicknames across the
   campus must be unique. In the aggregated nickname alternative,
   TRILL switch nicknames within an aggregated area are only of local
   significance, and the only nickname externally (outside that area)
   visible is the "area nickname" (or nicknames), which aggregates all
   the internal nicknames.

   The unique nickname approach simplifies border TRILL switches.
   The aggregated nickname approach eliminates the potential problem
   of nickname exhaustion, minimizes the amount of nickname
   information that would need to be forwarded between areas,
   minimizes the size of the forwarding table, and simplifies RPF
   calculation and RPF information.

2.2.1 More Details on Unique Nicknames

   With unique cross-area nicknames, it would be intractable to have a
   flat nickname space with TRILL switches in different areas
   contending for the same nicknames. Instead, each area would need to
   be configured with, or allocate, one or more blocks of nicknames.
   Either some TRILL switches would need to announce that all the
   nicknames other than those in the blocks available to the area are
   taken (to prevent the TRILL switches inside the area from choosing
   nicknames outside the area's nickname blocks), or a new TLV would
   be needed to announce the allowable or the prohibited nicknames,
   and all TRILL switches in the area would need to understand that
   new TLV.

   Currently the encoding of nickname information in TLVs is by
   listing individual nicknames; this would make it painful for a
   border TRILL switch to announce into an area that it is holding all
   other nicknames to limit the nicknames available within that area.
   "Painful" means tens of thousands of individual nickname entries in
   the Level 1 LSDB. The information could be encoded as ranges of
   nicknames to make this manageable, by specifying a new TLV similar
   to the Nickname Flags APPsubTLV specified in [RFC7780] but
   providing flags for blocks of nicknames rather than single
   nicknames. Although this would require updating software, such a
   new TLV is the preferred method.
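The size difference between per-nickname and block-based announcements can be illustrated with a minimal sketch. The record layout, field sizes, and flag values below are invented for illustration; they are not the on-the-wire format of the Nickname Flags APPsubTLV or of any specified TLV:

```python
import struct

# Hypothetical record: 16-bit start nickname, 16-bit block size,
# 8-bit flags (bit 0 = "taken"). Illustrative encoding only.
def encode_blocks(blocks):
    """blocks: iterable of (start, size, flags) tuples."""
    return b"".join(struct.pack("!HHB", s, n, f) for s, n, f in blocks)

# Suppose area A1 may use nicknames 0x4000-0x43FF; all other
# non-reserved nicknames are announced as taken.
area_block = (0x4000, 0x0400, 0x00)          # available to A1
taken = [(0x0001, 0x4000 - 0x0001, 0x01),    # below the block
         (0x4400, 0xFFC0 - 0x4400, 0x01)]    # above it
msg = encode_blocks([area_block] + taken)
print(len(msg))        # 15 bytes cover the whole nickname space

# Listing each taken nickname individually at 2 bytes apiece would
# instead consume well over 100 kilobytes of Level 1 LSDB space.
individual = 2 * ((0x4000 - 0x0001) + (0xFFC0 - 0x4400))
print(individual)
```

Three five-byte records describe the entire campus nickname space, versus tens of thousands of individual entries otherwise.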
   There is also an issue with the unique nicknames approach in
   building distribution trees, as follows:

      With unique nicknames in the TRILL campus and TRILL header
      nicknames not rewritten by the border TRILL switches, there
      would have to be globally known nicknames for the trees. Suppose
      there are k trees. For all of the trees with nicknames located
      outside an area, the local trees would be rooted at a border
      TRILL switch or switches. Therefore, there would be either no
      splitting of multi-destination traffic within the area or
      restricted splitting of multi-destination traffic between trees
      rooted at a highly restricted set of TRILL switches.

      As an alternative, just the "egress nickname" field of multi-
      destination TRILL Data packets could be mapped at the border,
      leaving known unicast packets un-mapped. However, this
      surrenders much of the unique nickname advantage of simpler
      border TRILL switches.

   Scaling to a very large campus with unique nicknames might exhaust
   the 16-bit TRILL nickname space, particularly if (1) additional
   nicknames are consumed to support active-active end station groups
   at the TRILL edge using the techniques standardized in [RFC7781]
   and (2) use of the nickname space is less efficient due to the
   allocation of, for example, power-of-two size blocks of nicknames
   to areas, in the same way that use of the IP address space is made
   less efficient by hierarchical allocation (see [RFC3194]). One
   method to avoid nickname exhaustion might be to expand nicknames to
   24 bits; however, that technique would require TRILL message format
   and fast path processing changes, and would require that all TRILL
   switches in the campus understand larger nicknames.

2.2.2 More Details on Aggregated Nicknames

   The aggregated nickname approach enables passing far less nickname
   information.
   It works as follows, assuming both the source and destination areas
   are using aggregated nicknames:

   There are at least two ways areas could be identified.

      One method would be to assign each area a 16-bit nickname. This
      would not be the nickname of any actual TRILL switch. Instead,
      it would be the nickname of the area itself. Border TRILL
      switches would know the area nickname for their own area(s). For
      an example of a more specific multilevel proposal using unique
      nicknames, see [DraftUnique].

      Alternatively, areas could be identified by the set of nicknames
      that identify the border routers for that area. (See
      [SingleName] for a multilevel proposal using such a set of
      nicknames.)

   The TRILL Header nickname fields in TRILL Data packets being
   transported through a multilevel TRILL campus with aggregated
   nicknames are as follows:

   - When both the ingress and egress TRILL switches are in the same
     area, there need be no change from the existing base TRILL
     protocol standard in the TRILL Header nickname fields.

   - When being transported between different Level 1 areas in Level
     2, the ingress nickname is a nickname of the ingress TRILL
     switch's area, while the egress nickname is either a nickname of
     the egress TRILL switch's area or a tree nickname.

   - When being transported from Level 1 to Level 2, the ingress
     nickname is the nickname of the ingress TRILL switch itself,
     while the egress nickname is either a nickname for the area of
     the egress TRILL switch or a tree nickname.

   - When being transported from Level 2 to Level 1, the ingress
     nickname is a nickname for the ingress TRILL switch's area, while
     the egress nickname is either the nickname of the egress TRILL
     switch itself or a tree nickname.

   There are two variations of the aggregated nickname approach. The
   first is the Border Learning approach, which is described in
   Section 2.2.2.1.
   The second is the Swap Nickname Field approach, which is described
   in Section 2.2.2.2. Section 2.2.2.3 compares the advantages and
   disadvantages of these two variations of the aggregated nickname
   approach.

2.2.2.1 Border Learning Aggregated Nicknames

   This section provides an illustrative example and description of
   the border learning variation of aggregated nicknames, where a
   single nickname is used to identify an area.

   In the following picture, RB2 and RB3 are area border TRILL
   switches (RBridges). A source S is attached to RB1. The two areas
   have nicknames 15961 and 15918, respectively. RB1 has a nickname,
   say 27, and RB4 has a nickname, say 44 (and in fact, they could
   even have the same nickname, since the TRILL switch nickname will
   not be visible outside these aggregated areas).

         Area 15961             level 2            Area 15918
   +-------------------+  +-----------------+  +----------------+
   |                   |  |                 |  |                |
   | S--RB1---Rx--Rz---RB2---Rb---Rc--Rd---Re--RB3---Rk--RB4--D |
   |     27            |  |                 |  |          44    |
   |                   |  |                 |  |                |
   +-------------------+  +-----------------+  +----------------+

   Let's say that S transmits a frame to destination D, which is
   connected to RB4, and let's say that D's location has already been
   learned by the relevant TRILL switches. These relevant switches
   have learned the following:

   1) RB1 has learned that D is connected to nickname 15918.
   2) RB3 has learned that D is attached to nickname 44.

   The following sequence of events will occur:

   - S transmits an Ethernet frame with source MAC = S and destination
     MAC = D.

   - RB1 encapsulates with a TRILL header with ingress RBridge = 27
     and egress = 15918, producing a TRILL Data packet.

   - RB2 has announced, in the Level 1 IS-IS instance in area 15961,
     that it is attached to all the area nicknames, including 15918.
     Therefore, IS-IS routes the packet to RB2.
     Alternatively, if a distinguished range of nicknames is used for
     Level 2, Level 1 TRILL switches seeing such an egress nickname
     will know to route to the nearest border router, which can be
     indicated by the IS-IS attached bit.

   - RB2, when transitioning the packet from Level 1 to Level 2,
     replaces the ingress TRILL switch nickname with the area
     nickname, so it replaces 27 with 15961. Within Level 2, the
     ingress RBridge field in the TRILL header will therefore be
     15961, and the egress RBridge field will be 15918. Also, RB2
     learns that S is attached to nickname 27 in area 15961, to
     accommodate return traffic.

   - The packet is forwarded through Level 2 to RB3, which has
     advertised, in Level 2, reachability to the nickname 15918.

   - RB3, when forwarding into area 15918, replaces the egress
     nickname in the TRILL header with RB4's nickname (44). So, within
     the destination area, the ingress nickname will be 15961 and the
     egress nickname will be 44.

   - RB4, when decapsulating, learns that S is attached to nickname
     15961, which is the area nickname of the ingress.

   Now suppose that D's location has not been learned by RB1 and/or
   RB3. What will happen, as it would in TRILL today, is that RB1 will
   forward the packet as multi-destination, choosing a tree. As the
   multi-destination packet transitions into Level 2, RB2 replaces the
   ingress nickname with the area nickname. If RB1 does not know the
   location of D, the packet must be flooded, subject to possible
   pruning, in Level 2 and, subject to possible pruning, from Level 2
   into every Level 1 area that it reaches on the Level 2 distribution
   tree.

   Now suppose that RB1 has learned the location of D (attached to
   nickname 15918), but RB3 does not know where D is. In that case,
   RB3 must turn the packet into a multi-destination packet within
   area 15918.
   In this case, care must be taken: if RB3 was on the unicast path
   but is not the Designated transitioner between Level 2 and its area
   for that multi-destination packet, the border TRILL switch that is
   the Designated transitioner in that area must not forward the now
   multi-destination packet back into Level 2. Therefore, it would be
   desirable to have some marking that indicates the scope of this
   packet's distribution to be "only this area" (see also Section 4).

   In cases where there are multiple transitioners for unicast
   packets, the border learning mode of operation requires that the
   address learning between them be shared by some protocol, such as
   running ESADI [RFC7357] for all Data Labels of interest, to avoid
   excessive unknown unicast flooding.

   The potential issue described at the end of Section 2.2.1 with
   trees in the unique nickname alternative is eliminated with
   aggregated nicknames. With aggregated nicknames, each border TRILL
   switch that will transition multi-destination packets can have a
   mapping between Level 2 tree nicknames and Level 1 tree nicknames.
   There need not even be agreement about the total number of trees;
   it is sufficient that the border TRILL switch have some mapping and
   replace the egress TRILL switch nickname (the tree name) when
   transitioning levels.

2.2.2.2 Swap Nickname Field Aggregated Nicknames

   There is a variant possibility where two additional fields could
   exist in TRILL Data packets, which could be called the "ingress
   swap nickname field" and the "egress swap nickname field". This
   variant is described below for completeness but would require fast
   path hardware changes from the existing TRILL protocol. The changes
   in the example above would be as follows:

   - RB1 will have learned the area nickname of D and the TRILL switch
     nickname of RB4 to which D is attached.
In encapsulating a frame 669 to D, it puts the area nickname of D (15918) in the egress nickname 670 field of the TRILL Header and puts the nickname of RB4 (44) in the 671 egress swap nickname field. 673 - RB2 moves the ingress nickname to the ingress swap nickname field 674 and inserts 15961, an area nickname for S, into the ingress 675 nickname field. 677 - RB3 swaps the egress nickname and the egress swap nickname fields, 678 which sets the egress nickname to 44. 680 - RB4 learns the correspondence between the source MAC/VLAN of S and 681 the { ingress nickname, ingress swap nickname field } pair as it 682 decapsulates and egresses the frame. 684 See [DraftAggregated] for a multilevel proposal using aggregated swap 685 nicknames with a single nickname representing an area. 687 2.2.2.3 Comparison 689 The Border Learning variant described in Section 2.2.2.1 above 690 minimizes the change in non-border TRILL switches but imposes the 691 burden on border TRILL switches of learning and doing lookups in all 692 the end station MAC addresses within their area(s) that are used for 693 communication outside the area. This burden could be reduced by 694 decreasing the area size and increasing the number of areas. 696 The Swap Nickname Field variant described in Section 2.2.2.2 697 eliminates the extra address learning burden on border TRILL switches 698 but requires changes to the TRILL data packet header and more 699 extensive changes to non-border TRILL switches. In particular, with 700 this alternative, non-border TRILL switches must learn to associate 701 both a TRILL switch nickname and an area nickname with end station 702 MAC/label pairs (except for addresses that are local to their area). 704 The Swap Nickname Field alternative is more scalable but less 705 backward compatible for non-border TRILL switches.
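For comparison, the Swap Nickname Field rewrites of Section 2.2.2.2 might be sketched as below. The two swap fields do not exist in the current TRILL header, so the structure and field names here are hypothetical, and the example reuses the nickname values from the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SwapHeader:
    ingress: int
    egress: int
    ingress_swap: Optional[int] = None  # hypothetical ingress swap nickname field
    egress_swap: Optional[int] = None   # hypothetical egress swap nickname field

# RB1 encapsulates: area nickname of D as egress, RB4's nickname in
# the egress swap field.
hdr = SwapHeader(ingress=27, egress=15918, egress_swap=44)

# RB2 (Level 1 -> Level 2): move the ingress nickname into the swap
# field and insert the area nickname for S.
hdr.ingress_swap, hdr.ingress = hdr.ingress, 15961

# RB3 (Level 2 -> destination area): swap the egress and egress swap fields.
hdr.egress, hdr.egress_swap = hdr.egress_swap, hdr.egress

# RB4 decapsulates, learning S against the { 15961, 27 } pair.
assert (hdr.ingress, hdr.ingress_swap) == (15961, 27)
assert (hdr.egress, hdr.egress_swap) == (44, 15918)
```

Unlike border learning, no border switch in this sketch needs to look up end station MAC addresses; each rewrite is a fixed field move or swap, which is why the variant scales better at the cost of header changes.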
It would be 706 possible for border and other Level 2 TRILL switches to support both 707 Border Learning, for legacy Level 1 TRILL switches, and 708 Swap Nickname, for Level 1 TRILL switches that understood the 709 Swap Nickname method, based on variations in the TRILL header, but 710 this would be even more complex. 712 The requirement to change the TRILL header and fast path processing 713 to support the Swap Nickname Field variant makes it impractical for 714 the foreseeable future. 716 2.3 Building Multi-Area Trees 718 It is easy to build a multi-area tree by building a tree in each area 719 separately (including the Level 2 "area") and then having only a 720 single border TRILL switch, say RBx, in each area attach to the 721 Level 2 area. RBx would forward all multi-destination packets 722 between that area and Level 2. 724 People might find this unacceptable, however, because of the desire 725 to path split (not always sending all multi-destination traffic 726 through the same border TRILL switch). 728 This is the same issue as with multiple ingress TRILL switches 729 injecting traffic from a pseudonode, and can be solved with the 730 mechanism that was adopted for that purpose: the Affinity TLV 732 [RFC7783]. For each tree in the area, at most one border RBridge 733 announces itself in an Affinity TLV with that tree name. 735 2.4 The RPF Check for Trees 737 For multi-destination data originating locally in RBx's area, 738 computation of the RPF check is done as today. For multi-destination 739 packets originating outside RBx's area, computation of the RPF check 740 must be done based on which one of the border TRILL switches (say 741 RB1, RB2, or RB3) injected the packet into the area. 743 A TRILL switch, say RB4, located inside an area, must be able to know 744 which of RB1, RB2, or RB3 transitioned the packet into the area from 745 Level 2 (or into Level 2 from an area).
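One illustrative way RB4 might organize such a check, assuming it has learned (by whatever mechanism) which border switch is assigned to transition each tree, is a table keyed by tree and expected arrival direction. All names and values below are hypothetical; this is a sketch of the idea, not a TRILL data structure.

```python
# Hypothetical sketch of an area-internal RPF check for multi-destination
# packets that entered the area from Level 2: accept a packet on a given
# tree only if it arrived from the direction of the border switch
# assigned to transition that tree.

# Learned assignment: tree root nickname -> assigned border switch.
transitioner_for_tree = {
    101: "RB1",
    102: "RB3",
}

# RB4's precomputed arrival port for each (tree, border switch) pair.
expected_port = {
    (101, "RB1"): "port7",
    (102, "RB3"): "port2",
}

def rpf_accept(tree: int, arrival_port: str) -> bool:
    border = transitioner_for_tree.get(tree)
    return expected_port.get((tree, border)) == arrival_port

assert rpf_accept(101, "port7")      # arrived from RB1's direction: pass
assert not rpf_accept(101, "port2")  # wrong direction for tree 101: drop
```

The point of the sketch is that the RPF entry for externally originated traffic depends on the transitioner assignment, so RB4 must recompute these entries whenever that assignment changes.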
747 This could be done by having the DBRB announce the transitioner 748 assignments to all the TRILL switches in the area, by using the Affinity 749 TLV mechanism given in [RFC7783], or by using the New Tree Encoding 750 mechanism discussed in Section 4.1.1. 752 2.5 Area Nickname Acquisition 754 In the aggregated nickname alternative, each area must either acquire a 755 unique area nickname or be identified by the set of its border TRILL 756 switches. It is probably simpler to allocate a block of nicknames 757 (say, the top 4000) to either (1) represent areas rather than specific 758 TRILL switches or (2) be used by border TRILL switches if the set of 759 such border TRILL switches represents the area. 761 The nicknames used for area identification need to be advertised and 762 acquired through Level 2. 764 Within an area, all the border TRILL switches can discover each other 765 through the Level 1 link state database, by using the IS-IS attached 766 bit or by explicitly advertising in their LSP "I am a border 767 RBridge". 769 Of the border TRILL switches, one will have the highest priority (say 770 RB7). RB7 can dynamically participate, in Level 2, to acquire a 771 nickname for identifying the area. Alternatively, RB7 could give the 772 area a pseudonode IS-IS ID, such as RB7.5, within Level 2. So an 773 area would appear, in Level 2, as a pseudonode and the pseudonode 774 could participate, in Level 2, to acquire a nickname for the area. 776 Within Level 2, all the border TRILL switches for an area can 777 advertise reachability to the area, which would mean connectivity to 778 a nickname identifying the area. 780 2.6 Link State Representation of Areas 782 Within an area, say area A1, there is an election for the DBRB 783 (Designated Border RBridge), say RB1. This can be done through LSPs 784 within area A1. The border TRILL switches announce themselves, 785 together with their DBRB priority.
(Note that the election of the 786 DBRB cannot be done based on Hello messages, because the border TRILL 787 switches are not necessarily physical neighbors of each other. They 788 can, however, reach each other through connectivity within the area, 789 which is why discovery through Level 1 LSPs will work.) 791 RB1 can acquire an area nickname (in the aggregated nickname 792 approach) and may give the area a pseudonode IS-IS ID (just like the 793 DRB would give a pseudonode IS-IS ID to a link) depending on how the 794 area nickname is handled. RB1 advertises, in area A1, an area 795 nickname that RB1 has acquired (and what the pseudonode IS-IS ID for 796 the area is if needed). 798 Level 1 LSPs (possibly pseudonode) initiated by RB1 for the area 799 include any information external to area A1 that should be input into 800 area A1 (such as nicknames of external areas, or perhaps (in the 801 unique nickname variant) all the nicknames of external TRILL switches 802 in the TRILL campus and pruning information such as multicast 803 listeners and labels). All the other border TRILL switches for the 804 area announce (in their LSP) attachment to that area. 806 Within Level 2, RB1 generates a Level 2 LSP on behalf of the area. 807 The same pseudonode ID could be used within Level 1 and Level 2, for 808 the area. (There does not seem to be any reason why it would be useful 809 for them to be different, but there is also no reason why they would 810 need to be the same.) Likewise, all the area A1 border TRILL switches would 811 announce, in their Level 2 LSPs, connection to the area. 813 3. Area Partition 815 It is possible for an area to become partitioned, so that there is 816 still a path from one section of the area to the other, but that path 817 is via the Level 2 area. 819 With multilevel TRILL, an area will naturally break into two areas in 820 this case. 822 Area addresses might be configured to ensure two areas are not 823 inadvertently connected.
Area addresses appear in Hellos and LSPs 824 within the area. If two chunks, connected only via Level 2, were 825 configured with the same area address, this would not cause any 826 problems. (They would just operate as separate Level 1 areas.) 828 A more serious problem occurs if the Level 2 area is partitioned in 829 such a way that it could be healed by using a path through a Level 1 830 area. TRILL will not attempt to solve this problem. Within the Level 831 1 area, a single border RBridge will be the DBRB, and will be in 832 charge of deciding which (single) RBridge will transition any 833 particular multi-destination packet between that area and Level 2. 834 If the Level 2 area is partitioned, this will result in multi- 835 destination data only reaching the portion of the TRILL campus 836 reachable through the partition attached to the TRILL switch that 837 transitions that packet. It will not cause a loop. 839 4. Multi-Destination Scope 841 There are at least two reasons it would be desirable to be able to 842 mark a multi-destination packet with a scope indicating that the 843 packet should not exit the area, as follows: 845 1. To address an issue in the border learning variant of the 846 aggregated nickname alternative, when a unicast packet turns into 847 a multi-destination packet on transitioning from Level 2 to 848 Level 1, as discussed in Section 4.1. 850 2. To constrain the broadcast domain for certain discovery, 851 directory, or service protocols as discussed in Section 4.2. 853 Multi-destination packet distribution scope restriction could be done 854 in a number of ways. For example, there could be a flag in the packet 855 that means "for this area only". However, the technique that might 856 require the least change to TRILL switch fast path logic would be to 857 indicate this in the egress nickname that designates the distribution 858 tree being used.
There could be two general tree nicknames for each 859 tree, one being for distribution restricted to the area and the other 860 being for multi-area trees. Or there could be a set of N (perhaps 16) 861 special currently reserved nicknames used to specify the N highest- 862 priority trees, with the variation that if the special nickname is 863 used for the tree, the packet is not transitioned between areas. Or 864 one or more special trees could be built that were restricted to the 865 local area. 867 4.1 Unicast to Multi-destination Conversions 869 In the border learning variant of the aggregated nickname 870 alternative, the following situation may occur: 871 - a unicast packet might be known at the Level 1 to Level 2 872 transition and be forwarded as a unicast packet to the least-cost 873 border TRILL switch advertising connectivity to the destination 874 area, but 875 - upon arriving at the border TRILL switch, it turns out to have an 876 unknown destination { MAC, Data Label } pair. 878 In this case, the packet must be converted into a multi-destination 879 packet and flooded in the destination area. However, if the border 880 TRILL switch doing the conversion is not the border TRILL switch 881 designated to transition the resulting multi-destination packet, 882 there is the danger that the designated transitioner may pick up the 883 packet and flood it back into Level 2, from which it may be flooded 884 into multiple areas. This danger can be avoided by restricting any 885 multi-destination packet that results from such a conversion to the 886 destination area as described above. 888 Alternatively, a multi-destination packet intended only for the area 889 could be tunneled (within the area) to the RBridge, say RBx, that is the 890 appointed transitioner for that form of packet (say, based on VLAN or 891 FGL), with instructions that RBx only transmit the packet within the 892 area, and RBx could initiate the multi-destination packet within the 893 area.
Since RBx introduced the packet, and is the only one allowed 894 to transition that packet to Level 2, this would accomplish scoping 895 of the packet to within the area. Since this only occurs in the 896 unusual case where unicast packets need to be turned into multi- 897 destination packets as described above, the suboptimality of tunneling 898 between the border TRILL switch that receives the unicast packet and 899 the appointed level transitioner for that packet might not be an 900 issue. 902 4.1.1 New Tree Encoding 904 The current encoding of a tree, in a TRILL header, is the 905 nickname of the tree root. This requires all 16 bits of the egress 906 nickname field. TRILL could instead, for example, use the bottom 6 907 bits to encode the tree number (allowing 64 trees), leaving 10 bits 908 to encode information such as: 910 o scope: a flag indicating whether it should be single area only, or 911 entire campus 912 o border injector: an indicator of which of the k border TRILL 913 switches injected this packet 915 If TRILL were to adopt this new encoding, any of the TRILL switches 916 in an edge group could inject a multi-destination packet. This would 917 require all TRILL switches to be changed to understand the new 918 encoding for a tree, and it would require a TLV in the LSP to 919 indicate which number each of the TRILL switches in an edge group 920 would be. 922 While there are a number of advantages to this technique, it requires 923 fast path logic changes and thus its deployment is not practical at 924 this time. It is included here for completeness. 926 4.2 Selective Broadcast Domain Reduction 928 There are a number of service, discovery, and directory protocols 929 that, for convenience, are accessed via multicast or broadcast 930 frames. Examples are DHCP (Dynamic Host Configuration Protocol), the 931 NetBIOS Service Location Protocol, and multicast DNS (Domain Name 932 System).
934 Some such protocols provide means to restrict distribution to an IP 935 subnet or equivalent to reduce the size of the broadcast domain they are 936 using and then provide a proxy that can be placed in that subnet to 937 use unicast to access a service elsewhere. In cases where a proxy 938 mechanism is not currently defined, it may be possible to create one 939 that references a central server or cache. With multilevel TRILL, it 940 is possible to construct very large IP subnets that could become 941 saturated with multi-destination traffic of this type unless packets 942 can be further restricted in their distribution. Such restricted 943 distribution can be accomplished for some protocols, say protocol P, 944 in a variety of ways, including the following: 946 - Either (1) at all ingress TRILL switches in an area, place all 947 protocol P multi-destination packets on a distribution tree in 948 such a way that the packets are restricted to the area or (2) at 949 all border TRILL switches between that area and Level 2, detect 950 protocol P multi-destination packets and do not transition them. 952 - Then place one, or a few for redundancy, protocol P proxies inside 953 each area where protocol P may be in use. These proxies unicast 954 protocol P requests or other messages to the actual campus 955 server(s) for P. They also receive unicast responses or other 956 messages from those servers and deliver them within the area via 957 unicast, multicast, or broadcast as appropriate. (Such proxies 958 would not be needed if it were acceptable for all protocol P 959 traffic to be restricted to an area.) 961 While it might seem logical to connect the campus servers to TRILL 962 switches in Level 2, they could be placed within one or more areas so 963 that, in some cases, those areas might not require a local proxy 964 server. 966 5.
Co-Existence with Old TRILL switches 968 TRILL switches that are not multilevel aware may have a problem with 969 calculating RPF Check and filtering information, since they would not 970 be aware of the assignment of border TRILL switch transitioning. 972 A possible solution, as long as any old TRILL switches exist within 973 an area, is to have the border TRILL switches elect a single DBRB 974 (Designated Border RBridge), and have all inter-area traffic go 975 through the DBRB (unicast as well as multi-destination). If that 976 DBRB goes down, a new one will be elected, but at any one time, all 977 inter-area traffic (unicast as well as multi-destination) would go 978 through that one DBRB. However, this eliminates load splitting at 979 the level transition. 981 6. Multi-Access Links with End Stations 983 Care must be taken in the case where there are multiple TRILL 984 switches on a link with one or more end stations, keeping in mind 985 that end stations are TRILL ignorant. In particular, it is essential 986 that only one TRILL switch ingress/egress any given data packet 987 from/to an end station so that connectivity is provided to that end 988 station without duplicating end station data and that loops are not 989 formed due to one TRILL switch egressing data in native form (i.e., 990 with no TRILL header) and having that data re-ingressed by another 991 TRILL switch on the link. 993 With existing, single level TRILL, this is done by electing a single 994 Designated RBridge per link, which appoints a single Appointed 995 Forwarder per VLAN [RFC7177] [RFC8139]. This mechanism depends on the 996 RBridges establishing adjacency. But suppose there are two (or more) 997 TRILL switches on a link in different areas, say RB1 in area A1 and 998 RB2 in area A2, as shown below, and that the link also has one or 999 more end stations attached.
If RB1 and RB2 ignore each other's 1000 Hellos because they are in different areas, as they are required to 1001 do under normal IS-IS PDU processing rules, then they will not form 1002 an adjacency. If they are not adjacent, they will ignore each other 1003 for the Appointed Forwarder mechanism and will both ingress/egress 1004 end station traffic on the link causing loops and duplication. 1006 The problem is not avoiding adjacency or avoiding TRILL Data packet 1007 transfer between RB1 and RB2. The area address mechanism of IS-IS or 1008 possibly the use of topology constraints or the like does that quite 1009 well. The problem stems from end stations being TRILL ignorant so 1010 care must be taken that multiple RBridges on a link do not ingress 1011 the same frame originated by an end station and so that an RBridge 1012 does not ingress a native frame egressed by a different RBridge 1013 because the RBridge mistakes the frame for a frame originated by an 1014 end station. 1016 +--------------------------------------------+ 1017 | Level 2 | 1018 +----------+---------------------+-----------+ 1019 | Area A1 | | Area A2 | 1020 | +---+ | | +---+ | 1021 | |RB1| | | |RB2| | 1022 | +-+-+ | | +-+-+ | 1023 | | | | | | 1024 +-----|----+ +-----|-----+ 1025 | | 1026 --+---------+-------------+--------+-- Link 1027 | | 1028 +------+------+ +--+----------+ 1029 | End Station | | End Station | 1030 +-------------+ +-------------+ 1032 A simple rule, which is preferred, is to use the TRILL switch or 1033 switches having the lowest numbered area, comparing area numbers as 1034 unsigned integers, to handle all native traffic to/from end stations 1035 on the link. This would automatically give multilevel-ignorant legacy 1036 TRILL switches, that would be using area number zero, highest 1037 priority for handling end station traffic, which they would try to do 1038 anyway. 1040 Other methods are possible. 
For example, the selection of 1041 Appointed Forwarders, and of the TRILL switch in charge of that 1042 selection, could be done across all TRILL switches on the link regardless of area. 1043 However, a special case would then have to be made for legacy TRILL 1044 switches using area number zero. 1046 These techniques require multilevel aware TRILL switches to take 1047 actions based on Hellos from RBridges in other areas even though they 1048 will not form an adjacency with such RBridges. However, the action is 1049 quite simple in the preferred case: if a TRILL switch sees Hellos 1050 from lower numbered areas, then it would not act as an Appointed 1051 Forwarder on the link until the Hello timer for such Hellos had 1052 expired. 1054 7. Summary 1056 This draft describes potential scaling issues in TRILL and discusses 1057 possible approaches to multilevel TRILL as a solution or element of a 1058 solution to most of them. 1060 The alternative using aggregated areas in multilevel TRILL has 1061 significant advantages in terms of scalability over using campus-wide 1062 unique nicknames, not just in avoiding nickname exhaustion but also in 1063 allowing RPF Checks to be aggregated based on an entire area. 1064 However, the alternative of using unique nicknames is simpler and 1065 avoids the changes in border TRILL switches required to support 1066 aggregated nicknames. It is possible to support both. For example, a 1067 TRILL campus could use simpler unique nicknames until scaling begins 1068 to cause problems and then start to introduce areas with aggregated 1069 nicknames. 1071 Some multilevel TRILL issues are not difficult, such as dealing with 1072 partitioned areas. Other issues are more difficult, especially 1073 dealing with old TRILL switches that are multilevel ignorant. 1075 8. Security Considerations 1077 This informational document explores alternatives for the design of 1078 multilevel IS-IS in TRILL and generally does not consider security 1079 issues.
1081 If aggregated nicknames are used in two areas that have the same area 1082 address and those areas merge, there is a possibility of a transient 1083 nickname collision that would not occur with unique nicknames. Such a 1084 collision could cause a data packet to be delivered to the wrong 1085 egress TRILL switch, but it would still not be delivered to any end 1086 station in the wrong Data Label; thus such delivery would still 1087 conform to security policies. 1089 For general TRILL Security Considerations, see [RFC6325]. 1091 9. IANA Considerations 1093 This document requires no IANA actions. RFC Editor: Please remove 1094 this section before publication. 1096 Normative References 1098 [IS-IS] - ISO/IEC 10589:2002, Second Edition, "Intermediate System to 1099 Intermediate System Intra-Domain Routing Exchange Protocol for 1100 use in Conjunction with the Protocol for Providing the 1101 Connectionless-mode Network Service (ISO 8473)", 2002. 1103 [RFC6325] - Perlman, R., Eastlake 3rd, D., Dutt, D., Gai, S., and A. 1104 Ghanwani, "Routing Bridges (RBridges): Base Protocol 1105 Specification", RFC 6325, July 2011. 1107 [RFC7177] - Eastlake 3rd, D., Perlman, R., Ghanwani, A., Yang, H., 1108 and V. Manral, "Transparent Interconnection of Lots of Links 1109 (TRILL): Adjacency", RFC 7177, May 2014. 1112 [RFC7780] - Eastlake 3rd, D., Zhang, M., Perlman, R., Banerjee, A., 1113 Ghanwani, A., and S. Gupta, "Transparent Interconnection of 1114 Lots of Links (TRILL): Clarifications, Corrections, and 1115 Updates", RFC 7780, DOI 10.17487/RFC7780, February 2016. 1118 [RFC8139] - Eastlake, D., Li, Y., Umair, M., Banerjee, A., and F. Hu, 1119 "Transparent Interconnection of Lots of Links (TRILL): 1120 Appointed Forwarders", RFC 8139, DOI 10.17487/RFC8139, June 1121 2017. 1123 Informative References 1125 [RFC3194] - Durand, A. and C.
Huitema, "The Host-Density Ratio for 1126 Address Assignment Efficiency: An Update on the H ratio", RFC 1127 3194, DOI 10.17487/RFC3194, November 2001. 1130 [RFC6361] - Carlson, J. and D. Eastlake 3rd, "PPP Transparent 1131 Interconnection of Lots of Links (TRILL) Protocol Control 1132 Protocol", RFC 6361, August 2011. 1134 [RFC7172] - Eastlake 3rd, D., Zhang, M., Agarwal, P., Perlman, R., 1135 and D. Dutt, "Transparent Interconnection of Lots of Links 1136 (TRILL): Fine-Grained Labeling", RFC 7172, May 2014. 1138 [RFC7176] - Eastlake 3rd, D., Senevirathne, T., Ghanwani, A., Dutt, 1139 D., and A. Banerjee, "Transparent Interconnection of Lots of 1140 Links (TRILL) Use of IS-IS", RFC 7176, May 2014. 1142 [RFC7357] - Zhai, H., Hu, F., Perlman, R., Eastlake 3rd, D., and O. 1144 Stokes, "Transparent Interconnection of Lots of Links (TRILL): 1145 End Station Address Distribution Information (ESADI) Protocol", 1146 RFC 7357, September 2014. 1149 [RFC7781] - Zhai, H., Senevirathne, T., Perlman, R., Zhang, M., and 1150 Y. Li, "Transparent Interconnection of Lots of Links (TRILL): 1151 Pseudo-Nickname for Active-Active Access", RFC 7781, DOI 1152 10.17487/RFC7781, February 2016. 1155 [RFC7783] - Senevirathne, T., Pathangi, J., and J. Hudson, 1156 "Coordinated Multicast Trees (CMT) for Transparent 1157 Interconnection of Lots of Links (TRILL)", RFC 7783, DOI 1158 10.17487/RFC7783, February 2016. 1161 [DraftAggregated] - Bhargav Bhikkaji, Balaji Venkat Venkataswami, 1162 Narayana Perumal Swamy, "Connecting Disparate Data 1163 Center/PBB/Campus TRILL sites using BGP", draft-balaji-trill- 1164 over-ip-multi-level, Work In Progress. 1166 [DraftUnique] - M. Zhang, D. Eastlake, R. Perlman, M. Cullen, H. 1167 Zhai, D. Liu, "TRILL Multilevel Using Unique Nicknames", draft- 1168 ietf-trill-multilevel-unique-nickname, Work In Progress. 1170 [SingleName] - Mingui Zhang, et.
al, "Single Area Border RBridge 1171 Nickname for TRILL Multilevel", draft-ietf-trill-multilevel- 1172 single-nickname, Work in Progress. 1174 Acknowledgements 1176 The helpful comments and contributions of the following are hereby 1177 acknowledged: 1179 Alia Atlas, David Michael Bond, Dino Farinacci, Sue Hares, Gayle 1180 Noble, Alexander Vainshtein, and Stig Venaas. 1182 The document was prepared in raw nroff. All macros used were defined 1183 within the source file. 1185 Authors' Addresses 1187 Radia Perlman 1188 EMC 1189 2010 256th Avenue NE, #200 1190 Bellevue, WA 98007 USA 1192 EMail: radia@alum.mit.edu 1194 Donald Eastlake 1195 Huawei Technologies 1196 155 Beaver Street 1197 Milford, MA 01757 USA 1199 Phone: +1-508-333-2270 1200 Email: d3e3e3@gmail.com 1202 Mingui Zhang 1203 Huawei Technologies 1204 No.156 Beiqing Rd. Haidian District, 1205 Beijing 100095 P.R. China 1207 EMail: zhangmingui@huawei.com 1209 Anoop Ghanwani 1210 Dell 1211 5450 Great America Parkway 1212 Santa Clara, CA 95054 USA 1214 EMail: anoop@alumni.duke.edu 1216 Hongjun Zhai 1217 Jinling Institute of Technology 1218 99 Hongjing Avenue, Jiangning District 1219 Nanjing, Jiangsu 211169 China 1221 EMail: honjun.zhai@tom.com 1223 Copyright and IPR Provisions 1225 Copyright (c) 2017 IETF Trust and the persons identified as the 1226 document authors. All rights reserved. 1228 This document is subject to BCP 78 and the IETF Trust's Legal 1229 Provisions Relating to IETF Documents 1230 (http://trustee.ietf.org/license-info) in effect on the date of 1231 publication of this document. Please review these documents 1232 carefully, as they describe your rights and restrictions with respect 1233 to this document. Code Components extracted from this document must 1234 include Simplified BSD License text as described in Section 4.e of 1235 the Trust Legal Provisions and are provided without warranty as 1236 described in the Simplified BSD License. 
The definitive version of 1237 an IETF Document is that published by, or under the auspices of, the 1238 IETF. Versions of IETF Documents that are published by third parties, 1239 including those that are translated into other languages, should not 1240 be considered to be definitive versions of IETF Documents. The 1241 definitive version of these Legal Provisions is that published by, or 1242 under the auspices of, the IETF. Versions of these Legal Provisions 1243 that are published by third parties, including those that are 1244 translated into other languages, should not be considered to be 1245 definitive versions of these Legal Provisions. For the avoidance of 1246 doubt, each Contributor to the IETF Standards Process licenses each 1247 Contribution that he or she makes as part of the IETF Standards 1248 Process to the IETF Trust pursuant to the provisions of RFC 5378. No 1249 language to the contrary, or terms, conditions or rights that differ 1250 from or are inconsistent with the rights and licenses granted under 1251 RFC 5378, shall have any effect and shall be null and void, whether 1252 published or posted by such Contributor, or included with or in such 1253 Contribution.