idnits 2.17.1 draft-ietf-trill-rbridge-multilevel-05.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- == There are 1 instance of lines with non-RFC6890-compliant IPv4 addresses in the document. If these are example addresses, they should be changed. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (April 27, 2017) is 2549 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- No issues found here. Summary: 0 errors (**), 0 flaws (~~), 2 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 TRILL Working Group Radia Perlman 2 INTERNET-DRAFT EMC 3 Intended status: Informational Donald Eastlake 4 Mingui Zhang 5 Huawei 6 Anoop Ghanwani 7 Dell 8 Hongjun Zhai 9 JIT 10 Expires: October 26, 2017 April 27, 2017 12 Alternatives for Multilevel TRILL 13 (Transparent Interconnection of Lots of Links) 14 16 Abstract 18 Although TRILL is based on IS-IS, which supports multilevel unicast 19 routing, extending TRILL to multiple levels has challenges that are 20 not addressed by the already-existing capabilities of IS-IS. 
One 21 issue is with the handling of multi-destination packet distribution 22 trees. Other issues are with TRILL switch nicknames. How are such 23 nicknames allocated across a multilevel TRILL network? Do nicknames 24 need to be unique across an entire multilevel TRILL network or can 25 they merely be unique within each multilevel area? 27 This informational document enumerates and examines alternatives 28 based on a number of factors including backward compatibility, 29 simplicity, and scalability and makes recommendations in some cases. 31 Status of This Memo 33 This Internet-Draft is submitted to IETF in full conformance with the 34 provisions of BCP 78 and BCP 79. Distribution of this document is 35 unlimited. Comments should be sent to the TRILL working group 36 mailing list . 38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF), its areas, and its working groups. Note that 40 other groups may also distribute working documents as Internet- 41 Drafts. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 47 The list of current Internet-Drafts can be accessed at 48 http://www.ietf.org/1id-abstracts.html. The list of Internet-Draft 49 Shadow Directories can be accessed at 50 http://www.ietf.org/shadow.html. 52 Table of Contents 54 1. Introduction............................................4 55 1.1 The Motivation for Multilevel..........................4 56 1.2 Improvements Due to Multilevel.........................5 57 1.2.1. The Routing Computation Load........................5 58 1.2.2. LSDB Volatility Creating Too Much Control Traffic...5 59 1.2.3. LSDB Volatility Causing Too Much Time Unconverged...5 60 1.2.4.
The Size Of The LSDB................................6 61 1.2.5 Nickname Limit.......................................6 62 1.2.6 Multi-Destination Traffic............................7 63 1.3 Unique and Aggregated Nicknames........................7 64 1.4 More on Areas..........................................8 65 1.5 Terminology and Acronyms...............................8 67 2. Multilevel TRILL Issues................................10 68 2.1 Non-zero Area Addresses...............................11 69 2.2 Aggregated versus Unique Nicknames....................11 70 2.2.1 More Details on Unique Nicknames....................12 71 2.2.2 More Details on Aggregated Nicknames................13 72 2.2.2.1 Border Learning Aggregated Nicknames..............14 73 2.2.2.2 Swap Nickname Field Aggregated Nicknames..........16 74 2.2.2.3 Comparison........................................17 75 2.3 Building Multi-Area Trees.............................17 76 2.4 The RPF Check for Trees...............................18 77 2.5 Area Nickname Acquisition.............................18 78 2.6 Link State Representation of Areas....................19 80 3. Area Partition.........................................20 82 4. Multi-Destination Scope................................21 83 4.1 Unicast to Multi-destination Conversions..............21 84 4.1.1 New Tree Encoding...................................22 85 4.2 Selective Broadcast Domain Reduction..................22 87 5. Co-Existence with Old TRILL switches...................24 88 6. Multi-Access Links with End Stations...................25 89 7. Summary................................................27 90 8. Security Considerations................................28 91 9. IANA Considerations....................................28 93 Normative References......................................29 94 Informative References....................................29 96 1. 
Introduction 98 The IETF TRILL (Transparent Interconnection of Lots of Links) protocol 99 [RFC6325] [RFC7177] [RFC7780] provides optimal pair-wise data routing 100 without configuration, safe forwarding even during periods of 101 temporary loops, and support for multipathing of both unicast and 102 multicast traffic in networks with arbitrary topology and link 103 technology, including multi-access links. TRILL accomplishes this by 104 using IS-IS (Intermediate System to Intermediate System [IS-IS] 105 [RFC7176]) link state routing in conjunction with a header that 106 includes a hop count. The design supports data labels (VLANs and Fine 107 Grained Labels [RFC7172]) and optimization of the distribution of 108 multi-destination data based on data label and multicast group. 109 Devices that implement TRILL are called TRILL Switches or RBridges. 111 Familiarity with [IS-IS], [RFC6325], and [RFC7780] is assumed in this 112 document. 114 1.1 The Motivation for Multilevel 116 The primary motivation for multilevel TRILL is to improve 117 scalability. The following issues might limit the scalability of a 118 TRILL-based network: 120 1. The routing computation load 121 2. The volatility of the link state database (LSDB) creating too much 122 control traffic 123 3. The volatility of the LSDB causing the TRILL network to be in an 124 unconverged state too much of the time 125 4. The size of the LSDB 126 5. The limit on the number of TRILL switches, due to the 16-bit 127 nickname space (for further information on why this might be a 128 problem, see Section 1.2.5) 129 6. The traffic due to upper layer protocols' use of broadcast and 130 multicast 131 7. The size of the end node learning table (the table that remembers 132 (egress TRILL switch, label/MAC) pairs) 134 As discussed below, extending TRILL IS-IS to be multilevel 135 (hierarchical) can help with all of these issues except issue 7. 137 IS-IS was designed to be multilevel [IS-IS].
A network can be 138 partitioned into "areas". Routing within an area is known as "Level 139 1 routing". Routing between areas is known as "Level 2 routing". 140 The Level 2 IS-IS network consists of Level 2 routers and links 141 between the Level 2 routers. Level 2 routers may participate in one 142 or more Level 1 areas, in addition to their role as Level 2 routers. 144 Each area is connected to Level 2 through one or more "border 145 routers", which participate both as a router inside the area and as 146 a router inside the Level 2 "area". Care must be taken that, when 147 multi-destination packets transition between Level 2 148 and a Level 1 area in either direction, exactly one border TRILL 149 switch transitions a particular data packet between the levels; 150 otherwise duplication or loss of traffic can occur. 152 1.2 Improvements Due to Multilevel 154 Partitioning the network into areas directly solves the first four 155 scalability issues listed above as described in Sections 1.2.1 156 through 1.2.4. Multilevel also contributes to solving issues 5 and 6 157 as discussed in Sections 1.2.5 and 1.2.6 respectively. In the 158 subsections below, N indicates the number of TRILL switches in a 159 TRILL campus. 161 1.2.1. The Routing Computation Load 163 The optimized computational effort to calculate least cost routes at 164 a TRILL switch in a single level campus is on the order of N*log(N). 165 In an optimized multi-level campus, it is on the order of 166 sqrt(N)*log(N). So, for example, assuming N is 3,000, the level of 167 computational effort would be reduced by about a factor of 50. 169 1.2.2. LSDB Volatility Creating Too Much Control Traffic 171 The rate of LSDB changes would be approximately proportional to the 172 number of routers/links in the TRILL campus for a single level 173 campus. With an optimized multi-level campus, each area would have 174 about sqrt(N) routers, reducing volatility by about a factor of 175 sqrt(N).
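The scaling estimates in Sections 1.2.1 and 1.2.2 can be sanity-checked with a few lines of arithmetic. This sketch simply evaluates the formulas given above for the N = 3,000 example; it is an illustration, not part of the protocol:

```python
import math

# Evaluate the scaling formulas from Sections 1.2.1 and 1.2.2 for N = 3,000.
N = 3000

# Route computation: ~N*log(N) single level, ~sqrt(N)*log(N) multilevel.
single_level = N * math.log2(N)
multi_level = math.sqrt(N) * math.log2(N)

# The ratio reduces to sqrt(N), about 55, i.e. "about a factor of 50".
print(f"computation reduced by ~{single_level / multi_level:.0f}x")

# LSDB volatility: ~sqrt(N) routers per area, so ~sqrt(N) less volatility.
print(f"volatility reduced by ~{math.sqrt(N):.0f}x")
```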
177 1.2.3. LSDB Volatility Causing Too Much Time Unconverged 179 With the simplifying assumption that routing converges after each 180 change before the next change, the fraction of time that routing is 181 unconverged is proportional to the product of the volatility and the 182 convergence time. The convergence time is approximately proportional 183 to the computation involved at each router. Thus, based on these 184 simplifying assumptions, the fraction of time routing at a router is 185 not converged with the network would improve, in going from single to 186 multi-level, by about a factor of N. 188 1.2.4. The Size Of The LSDB 190 The size of the LSDB is also approximately proportional to the number 191 of routers/links and so, as with item 2 above, should improve by 192 about a factor of sqrt(N) in going from single to multi-level. 194 1.2.5 Nickname Limit 196 For many TRILL protocol purposes, RBridges are designated by 16-bit 197 nicknames. While some values are reserved, this appears to provide 198 enough nicknames to designate over 65,000 RBridges. However, this 199 number is effectively reduced by the following two factors: 201 - Nicknames are consumed when pseudo-nicknames are used for the 202 active-active connection of end stations. Using the techniques in 203 [RFC7781], for example, could double the nickname consumption if 204 there are extensive active-active edge groups connected to 205 different sets of edge TRILL switch ports. 207 - There might be problems with campus-wide contention for individual 208 nicknames if nicknames were allocated 209 individually from a single pool for the entire campus. Thus it 210 seems likely that a hierarchical method would be chosen where 211 blocks of nicknames are allocated at Level 2 to Level 1 areas and 212 contention for a nickname by an RBridge in such a Level 1 area 213 would be only within that area.
Such hierarchical allocation leads 214 to further effective loss of nicknames similar to the situation 215 with IP addresses discussed in [RFC3194]. 217 Even without the above effective reductions in nickname space, a very 218 large multi-level TRILL campus, say one with 200 areas each 219 containing 500 TRILL switches, could require 100,000 or more 220 nicknames if all nicknames in the campus must be unique, which is 221 clearly impossible with 16-bit nicknames. 223 This scaling limit, namely the 16-bit nickname space, can only be 224 addressed with the aggregated nickname approach. Since the aggregated 225 nickname approach requires some complexity in the border TRILL 226 switches (for rewriting the nicknames in the TRILL header), the 227 suggested design in this document allows a campus with a mixture of 228 unique-nickname areas and aggregated-nickname areas. Thus a TRILL 229 network could start using multilevel with the simpler unique nickname 230 method and add aggregated areas at a later stage of network 231 growth. 233 With this design, nicknames must be unique across all Level 2 and 234 unique-nickname area TRILL switches taken together, whereas nicknames 235 inside an aggregated-nickname area are visible only inside that area. 236 Nicknames inside an aggregated-nickname area must still not conflict 237 with nicknames visible in Level 2 (which includes all nicknames 238 inside unique nickname areas), but the nicknames inside an 239 aggregated-nickname area may be the same as nicknames used within one 240 or more other aggregated-nickname areas. 242 With the design suggested in this document, TRILL switches within an 243 area need not be aware of whether they are in an aggregated nickname 244 area or a unique nickname area.
The border TRILL switches in area A1 245 will indicate, in their LSP inside area A1, which nicknames (or 246 nickname ranges) are available, or alternatively which nicknames are 247 not available, for choosing as nicknames by area A1 TRILL switches. 249 1.2.6 Multi-Destination Traffic 251 Scaling limits due to protocol use of broadcast and multicast can be 252 addressed in many cases in a multilevel campus by introducing 253 locally-scoped multi-destination delivery, limited to an area or a 254 single link. See further discussion of this issue in Section 4.2. 256 1.3 Unique and Aggregated Nicknames 258 We describe two alternatives for hierarchical or multilevel TRILL. 259 One we call the "unique nickname" alternative. The other we call the 260 "aggregated nickname" alternative. In the aggregated nickname 261 alternative, border TRILL switches replace either the ingress or 262 egress nickname field in the TRILL header of unicast packets with an 263 aggregated nickname representing an entire area. 265 The unique nickname alternative has the advantage that border TRILL 266 switches are simpler and do not need to do TRILL Header nickname 267 modification. It also simplifies testing and maintenance operations 268 that originate in one area and terminate in a different area. 270 The aggregated nickname alternative has the following advantages: 272 o it solves scaling problem #5 above, the 16-bit nickname limit, 273 in a simple way, 274 o it lessens the amount of inter-area routing information that 275 must be passed in IS-IS, and 276 o it logically reduces the RPF (Reverse Path Forwarding) Check 277 information (since only the area nickname needs to appear, 278 rather than all the ingress TRILL switches in that area). 280 In both cases, it is possible and advantageous to compute multi- 281 destination data packet distribution trees such that the portion 282 computed within a given area is rooted within that area.
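The RPF-information reduction in the last bullet above can be illustrated with rough numbers, reusing the 200-area, 500-switch campus from Section 1.2.5 and an assumed 16 distribution trees (an illustrative figure, not from this document):

```python
# Rough RPF-state comparison for an assumed campus: 200 areas of 500 TRILL
# switches each (the Section 1.2.5 example) and an assumed 16 trees.
areas, per_area, trees = 200, 500, 16

rpf_unique = trees * areas * per_area  # one check entry per (tree, ingress switch)
rpf_aggregated = trees * areas         # one check entry per (tree, ingress area)

print(rpf_unique // rpf_aggregated)    # 500: reduced by the per-area switch count
```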
284 For further discussion of the unique and aggregated nickname 285 alternatives, see Section 2.2. 287 1.4 More on Areas 289 Each area is configured with an "area address", which is advertised 290 in IS-IS messages, so as to avoid accidentally interconnecting areas. 291 For TRILL, the only purpose of the area address would be to avoid 292 accidentally interconnecting areas, although the area address had 293 other purposes in CLNP (Connectionless Network Layer Protocol), for 294 which IS-IS was originally designed. 296 Currently, the TRILL specification says that the area address must be 297 zero. If we change the specification so that the area address value 298 of zero is just a default, then most of the IS-IS multilevel machinery 299 works as originally designed. However, there are TRILL-specific 300 issues, which we address below in Section 2.1. 302 1.5 Terminology and Acronyms 304 This document generally uses the acronyms defined in [RFC6325] plus 305 the additional acronym DBRB. However, for ease of reference, most 306 acronyms used are listed here: 308 CLNP - ConnectionLess Network Protocol 310 DECnet - a proprietary routing protocol that was used by Digital 311 Equipment Corporation. "DECnet Phase 5" was the origin of IS-IS. 313 Data Label - VLAN or Fine Grained Label [RFC7172] 315 DBRB - Designated Border RBridge 317 ESADI - End Station Address Distribution Information 319 IS-IS - Intermediate System to Intermediate System [IS-IS] 321 LSDB - Link State Data Base 323 LSP - Link State PDU 325 PDU - Protocol Data Unit 326 RBridge - Routing Bridge, an alternative name for a TRILL switch 328 RPF - Reverse Path Forwarding 330 TLV - Type Length Value 332 TRILL - Transparent Interconnection of Lots of Links or Tunneled 333 Routing in the Link Layer [RFC6325] [RFC7780] 335 TRILL switch - a device that implements the TRILL protocol 336 [RFC6325] [RFC7780], sometimes called an RBridge 338 VLAN - Virtual Local Area Network 340 2.
Multilevel TRILL Issues 342 The TRILL-specific issues introduced by multilevel include the 343 following: 345 a. Configuration of non-zero area addresses, encoding them in IS-IS 346 PDUs, and possibly interworking with old TRILL switches that do 347 not understand non-zero area addresses. 349 See Section 2.1. 351 b. Nickname management. 353 See Sections 2.5 and 2.2. 355 c. Advertisement of pruning information (Data Label reachability, IP 356 multicast addresses) across areas. 358 Distribution tree pruning information is only an optimization, 359 as long as multi-destination packets are not prematurely 360 pruned. For instance, border TRILL switches could advertise 361 they can reach all possible Data Labels, and have an IP 362 multicast router attached. This would cause all multi- 363 destination traffic to be transmitted to border TRILL switches, 364 and possibly pruned there, when the traffic could have been 365 pruned earlier based on Data Label or multicast group if border 366 TRILL switches advertised more detailed Data Label and/or 367 multicast listener and multicast router attachment information. 369 d. Computation of distribution trees across areas for multi- 370 destination data. 372 See Section 2.3. 374 e. Computation of RPF information for those distribution trees. 376 See Section 2.4. 378 f. Computation of pruning information across areas. 380 See Sections 2.3 and 2.6. 382 g. Compatibility, as much as practical, with existing, unmodified 383 TRILL switches. 385 The most important form of compatibility is with existing TRILL 386 fast path hardware. Changes that require upgrade to the slow 387 path firmware/software are more tolerable. Compatibility for 388 the relatively small number of border TRILL switches is less 389 important than compatibility for non-border TRILL switches. 391 See Section 5. 
393 2.1 Non-zero Area Addresses 395 The current TRILL base protocol specification [RFC6325] [RFC7177] 396 [RFC7780] says that the area address in IS-IS must be zero. The 397 purpose of the area address is to ensure that different areas are not 398 accidentally merged. Furthermore, zero is an invalid area address 399 for layer 3 IS-IS, so it was chosen as an additional safety mechanism 400 to ensure that layer 3 IS-IS packets would not be confused with TRILL 401 IS-IS packets. However, TRILL uses other techniques to avoid 402 confusion on a link, such as different multicast addresses and 403 Ethertypes on Ethernet [RFC6325], different PPP (Point-to-Point 404 Protocol) code points on PPP [RFC6361], and the like. Thus, using an 405 area address in TRILL that might be used in layer 3 IS-IS is not a 406 problem. 408 Since current TRILL switches will reject any IS-IS messages with non- 409 zero area addresses, the choices are as follows: 411 a.1 upgrade all TRILL switches that are to interoperate in a 412 potentially multilevel environment to understand non-zero area 413 addresses, 414 a.2 neighbors of old TRILL switches must remove the area address from 415 IS-IS messages when talking to an old TRILL switch (which might 416 break IS-IS security and/or cause inadvertent merging of areas), 417 a.3 ignore the problem of accidentally merging areas entirely, or 418 a.4 keep the fixed "area address" field as 0 in TRILL, and add a new, 419 optional TLV for "area name" to Hellos that, if present, could be 420 compared, by new TRILL switches, to prevent accidental area 421 merging. 423 In principle, different solutions could be used in different areas 424 but it would be much simpler to adopt one of these choices uniformly. 425 A simple solution would be a.1 above with each TRILL switch using a 426 dominant area nickname as its area address.
For the unique nickname 427 alternative, the dominant nickname could be the lowest value nickname 428 held by any border RBridge of the area. For the aggregated nickname 429 alternative, it could be the lowest nickname held by a border RBridge 430 of the area or a nickname representing the area. 432 2.2 Aggregated versus Unique Nicknames 434 In the unique nickname alternative, all nicknames across the campus 435 must be unique. In the aggregated nickname alternative, TRILL switch 436 nicknames within an aggregated area are only of local significance, 437 and the only nickname externally (outside that area) visible is the 438 "area nickname" (or nicknames), which aggregates all the internal 439 nicknames. 441 The unique nickname approach simplifies border TRILL switches. 443 The aggregated nickname approach eliminates the potential problem of 444 nickname exhaustion, minimizes the amount of nickname information 445 that would need to be forwarded between areas, minimizes the size of 446 the forwarding table, and simplifies RPF calculation and RPF 447 information. 449 2.2.1 More Details on Unique Nicknames 451 With unique cross-area nicknames, it would be intractable to have a 452 flat nickname space with TRILL switches in different areas contending 453 for the same nicknames. Instead, each area would need to be 454 configured with or allocate one or more blocks of nicknames. Either 455 some TRILL switches would need to announce that all the nicknames 456 other than those in the blocks available to the area are taken (to prevent 457 the TRILL switches inside the area from choosing nicknames outside 458 the area's nickname block), or a new TLV would be needed to announce 459 the allowable or the prohibited nicknames, and all TRILL switches in 460 the area would need to understand that new TLV.
462 Currently the encoding of nickname information in TLVs is by listing 463 of individual nicknames; this would make it painful for a border 464 TRILL switch to announce into an area that it is holding all other 465 nicknames to limit the nicknames available within that area. Painful 466 means tens of thousands of individual nickname entries in the Level 1 467 LSDB. The information could be encoded as ranges of nicknames to 468 make this manageable by specifying a new TLV similar to the Nickname 469 Flags APPsubTLV specified in [RFC7780] but providing flags for blocks 470 of nicknames rather than single nicknames. Although this would 471 require updating software, such a new TLV is the preferred method. 473 There is also an issue with the unique nicknames approach in building 474 distribution trees, as follows: 476 With unique nicknames in the TRILL campus and TRILL header 477 nicknames not rewritten by the border TRILL switches, there would 478 have to be globally known nicknames for the trees. Suppose there 479 are k trees. For all of the trees with nicknames located outside 480 an area, the local trees would be rooted at a border TRILL switch 481 or switches. Therefore, there would be either no splitting of 482 multi-destination traffic within the area or restricted splitting 483 of multi-destination traffic between trees rooted at a highly 484 restricted set of TRILL switches. 486 As an alternative, just the "egress nickname" field of multi- 487 destination TRILL Data packets could be mapped at the border, 488 leaving known unicast packets un-mapped. However, this surrenders 489 much of the unique nickname advantage of simpler border TRILL 490 switches. 
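A range-based announcement like the one suggested in Section 2.2.1 above (flags for blocks of nicknames rather than individual entries) keeps the Level 1 LSDB small. The sketch below, with made-up block boundaries, shows how an RBridge inside an area might test a candidate nickname against announced available blocks:

```python
# Hypothetical available-nickname blocks announced by a border RBridge into
# its area; the values are illustrative, not an allocation from this document.
AREA_BLOCKS = [(0x2000, 0x2FFF), (0x6000, 0x63FF)]  # inclusive ranges

def nickname_available(nickname: int, blocks) -> bool:
    """True if the candidate nickname lies inside an announced block."""
    return any(lo <= nickname <= hi for lo, hi in blocks)

print(nickname_available(0x2100, AREA_BLOCKS))  # True: inside the first block
print(nickname_available(0x5000, AREA_BLOCKS))  # False: held outside the area
```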
492 Scaling to a very large campus with unique nicknames might exhaust 493 the 16-bit TRILL nickname space, particularly if (1) additional 494 nicknames are consumed to support active-active end station groups at 495 the TRILL edge using the techniques standardized in [RFC7781] and (2) 496 use of the nickname space is less efficient due to the allocation of, 497 for example, power-of-two size blocks of nicknames to areas in the 498 same way that use of the IP address space is made less efficient by 499 hierarchical allocation (see [RFC3194]). One method to avoid nickname 500 exhaustion might be to expand nicknames to 24 bits; however, that 501 technique would require TRILL message format and fast path processing 502 changes and that all TRILL switches in the campus understand larger 503 nicknames. 505 2.2.2 More Details on Aggregated Nicknames 507 The aggregated nickname approach enables passing far less nickname 508 information. It works as follows, assuming both the source and 509 destination areas are using aggregated nicknames: 511 There are at least two ways areas could be identified. 513 One method would be to assign each area a 16-bit nickname. This 514 would not be the nickname of any actual TRILL switch. Instead, it 515 would be the nickname of the area itself. Border TRILL switches 516 would know the area nickname for their own area(s). For an 517 example of a more specific multilevel proposal using unique 518 nicknames, see [DraftUnique]. 520 Alternatively, areas could be identified by the set of nicknames 521 that identify the border routers for that area. (See [SingleName] 522 for a multilevel proposal using such a set of nicknames.)
524 The TRILL Header nickname fields in TRILL Data packets being 525 transported through a multilevel TRILL campus with aggregated 526 nicknames are as follows: 528 - When both the ingress and egress TRILL switches are in the same 529 area, there need be no change from the existing base TRILL 530 protocol standard in the TRILL Header nickname fields. 532 - When being transported between different Level 1 areas in Level 533 2, the ingress nickname is a nickname of the ingress TRILL 534 switch's area while the egress nickname is either a nickname of 535 the egress TRILL switch's area or a tree nickname. 537 - When being transported from Level 1 to Level 2, the ingress 538 nickname is the nickname of the ingress TRILL switch itself 539 while the egress nickname is either a nickname for the area of 540 the egress TRILL switch or a tree nickname. 542 - When being transported from Level 2 to Level 1, the ingress 543 nickname is a nickname for the ingress TRILL switch's area while 544 the egress nickname is either the nickname of the egress TRILL 545 switch itself or a tree nickname. 547 There are two variations of the aggregated nickname approach. The 548 first is the Border Learning approach, which is described in Section 549 2.2.2.1. The second is the Swap Nickname Field approach, which is 550 described in Section 2.2.2.2. Section 2.2.2.3 compares the advantages 551 and disadvantages of these two variations of the aggregated nickname 552 approach. 554 2.2.2.1 Border Learning Aggregated Nicknames 556 This section provides an illustrative example and description of the 557 border learning variation of aggregated nicknames where a single 558 nickname is used to identify an area. 560 In the following picture, RB2 and RB3 are area border TRILL switches 561 (RBridges). A source S is attached to RB1. The two areas have 562 nicknames 15961 and 15918, respectively. 
RB1 has a nickname, say 27, 563 and RB4 has a nickname, say 44 (and in fact, they could even have the 564 same nickname, since the TRILL switch nickname will not be visible 565 outside these aggregated areas). 567 Area 15961 level 2 Area 15918 568 +-------------------+ +-----------------+ +--------------+ 569 | | | | | | 570 | S--RB1---Rx--Rz----RB2---Rb---Rc--Rd---Re--RB3---Rk--RB4---D | 571 | 27 | | | | 44 | 572 | | | | | | 573 +-------------------+ +-----------------+ +--------------+ 575 Let's say that S transmits a frame to destination D, which is 576 connected to RB4, and let's say that D's location has already been 577 learned by the relevant TRILL switches. These relevant switches have 578 learned the following: 580 1) RB1 has learned that D is connected to nickname 15918 581 2) RB3 has learned that D is attached to nickname 44. 583 The following sequence of events will occur: 585 - S transmits an Ethernet frame with source MAC = S and destination 586 MAC = D. 588 - RB1 encapsulates with a TRILL header with ingress RBridge = 27, 589 and egress = 15918 producing a TRILL Data packet. 591 - RB2 has announced in the Level 1 IS-IS instance in area 15961, 592 that it is attached to all the area nicknames, including 15918. 593 Therefore, IS-IS routes the packet to RB2. Alternatively, if a 594 distinguished range of nicknames is used for Level 2, Level 1 595 TRILL switches seeing such an egress nickname will know to route 596 to the nearest border router, which can be indicated by the IS-IS 597 attached bit. 599 - RB2, when transitioning the packet from Level 1 to Level 2, 600 replaces the ingress TRILL switch nickname with the area nickname, 601 so replaces 27 with 15961. Within Level 2, the ingress RBridge 602 field in the TRILL header will therefore be 15961, and the egress 603 RBridge field will be 15918. Also RB2 learns that S is attached to 604 nickname 27 in area 15961 to accommodate return traffic. 
606 - The packet is forwarded through Level 2, to RB3, which has 607 advertised, in Level 2, reachability to the nickname 15918. 609 - RB3, when forwarding into area 15918, replaces the egress nickname 610 in the TRILL header with RB4's nickname (44). So, within the 611 destination area, the ingress nickname will be 15961 and the 612 egress nickname will be 44. 614 - RB4, when decapsulating, learns that S is attached to nickname 615 15961, which is the area nickname of the ingress. 617 Now suppose that D's location has not been learned by RB1 and/or RB3. 618 What will happen, as it would in TRILL today, is that RB1 will 619 forward the packet as multi-destination, choosing a tree. As the 620 multi-destination packet transitions into Level 2, RB2 replaces the 621 ingress nickname with the area nickname. If RB1 does not know the 622 location of D, the packet must be flooded, subject to possible 623 pruning, in Level 2 and, subject to possible pruning, from Level 2 624 into every Level 1 area that it reaches on the Level 2 distribution 625 tree. 627 Now suppose that RB1 has learned the location of D (attached to 628 nickname 15918), but RB3 does not know where D is. In that case, RB3 629 must turn the packet into a multi-destination packet within area 630 15918. Care must be taken that, if 631 RB3 is not the Designated transitioner between Level 2 and its area 632 for that multi-destination packet but was on the unicast path, the 633 border TRILL switch in that area that is the Designated transitioner 634 does not forward the now multi-destination packet back into Level 2. Therefore, it would be 635 desirable to have a marking, somehow, that indicates the scope of 636 this packet's distribution to be "only this area" (see also Section 637 4).
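The known-unicast walkthrough above can be condensed into a small simulation. All function names and the dictionary representation of the TRILL Header nickname fields are illustrative only:

```python
AREA_S, AREA_D = 15961, 15918  # area nicknames from the example
RB1_NICK, RB4_NICK = 27, 44    # switch nicknames, local to their areas

def rb1_encapsulate():
    # RB1 learned that D is reachable via area nickname 15918.
    return {"ingress": RB1_NICK, "egress": AREA_D}

def rb2_level1_to_level2(hdr):
    # Border RB2 hides the real ingress behind its area nickname
    # (and would learn S -> nickname 27 in area 15961 for return traffic).
    return {**hdr, "ingress": AREA_S}

def rb3_level2_to_level1(hdr):
    # Border RB3 learned that D is attached to nickname 44.
    return {**hdr, "egress": RB4_NICK}

hdr = rb3_level2_to_level1(rb2_level1_to_level2(rb1_encapsulate()))
print(hdr)  # {'ingress': 15961, 'egress': 44}
```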
639 In cases where there are multiple transitioners for unicast packets, 640 the border learning mode of operation requires that the address 641 learning between them be shared by some protocol such as running 642 ESADI [RFC7357] for all Data Labels of interest to avoid excessive 643 unknown unicast flooding. 645 The potential issue described at the end of Section 2.2.1 with trees 646 in the unique nickname alternative is eliminated with aggregated 647 nicknames. With aggregated nicknames, each border TRILL switch that 648 will transition multi-destination packets can have a mapping between 649 Level 2 tree nicknames and Level 1 tree nicknames. There need not 650 even be agreement about the total number of trees; just that the 651 border TRILL switch have some mapping, and replace the egress TRILL 652 switch nickname (the tree name) when transitioning levels. 654 2.2.2.2 Swap Nickname Field Aggregated Nicknames 656 There is a variant possibility where two additional fields could 657 exist in TRILL Data packets that could be called the "ingress swap 658 nickname field" and the "egress swap nickname field". This variant is 659 described below for completeness but would require fast path hardware 660 changes from the existing TRILL protocol. The changes in the example 661 above would be as follows: 663 - RB1 will have learned the area nickname of D and the TRILL switch 664 nickname of RB4 to which D is attached. In encapsulating a frame 665 to D, it puts an area nickname of D (15918) in the egress nickname 666 field of the TRILL Header and puts the nickname of RB4 (44) in the 667 egress swap nickname field. 669 - RB2 moves the ingress nickname to the ingress swap nickname field 670 and inserts 15961, an area nickname for S, into the ingress 671 nickname field. 673 - RB3 swaps the egress nickname and the egress swap nickname fields, 674 which sets the egress nickname to 44.
676 - RB4 learns the correspondence between the source MAC/VLAN of S and 677 the { ingress nickname, ingress swap nickname field } pair as it 678 decapsulates and egresses the frame. 680 See [DraftAggregated] for a multilevel proposal using aggregated swap 681 nicknames with a single nickname representing an area. 683 2.2.2.3 Comparison 685 The Border Learning variant described in Section 2.2.2.1 above 686 minimizes the change in non-border TRILL switches but imposes the 687 burden on border TRILL switches of learning and doing lookups in all 688 the end station MAC addresses within their area(s) that are used for 689 communication outside the area. This burden could be reduced by 690 decreasing the area size and increasing the number of areas. 692 The Swap Nickname Field variant described in Section 2.2.2.2 693 eliminates the extra address learning burden on border TRILL switches 694 but requires changes to the TRILL data packet header and more 695 extensive changes to non-border TRILL switches. In particular, with 696 this alternative, non-border TRILL switches must learn to associate 697 both a TRILL switch nickname and an area nickname with end station 698 MAC/label pairs (except for addresses that are local to their area). 700 The Swap Nickname Field alternative is more scalable but less 701 backward compatible for non-border TRILL switches. It would be 702 possible for border and other level 2 TRILL switches to support both 703 Border Learning, for support of legacy Level 1 TRILL switches, and 704 Swap Nickname, to support Level 1 TRILL switches that understood the 705 Swap Nickname method based on variations in the TRILL header, but 706 this would be even more complex. 708 The requirement to change the TRILL header and fast path processing 709 to support the Swap Nickname Field variant makes it impractical for 710 the foreseeable future.
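For comparison, the swap-nickname-field rewriting of Section 2.2.2.2 can be sketched the same way, using the example values from the text. The field names and RB1's nickname (27) are assumptions for this sketch, and, as noted above, this variant would require TRILL header and fast path changes.

```python
# Sketch of the swap-nickname-field variant (Section 2.2.2.2); the
# field names and RB1's nickname (27) are illustrative assumptions.

def rb1_encapsulate():
    # RB1 knows D's area nickname (15918) and RB4's nickname (44).
    return {"ingress": 27, "ingress_swap": None,
            "egress": 15918, "egress_swap": 44}

def rb2_to_level2(hdr):
    # Move the real ingress aside; expose S's area nickname (15961).
    hdr["ingress_swap"], hdr["ingress"] = hdr["ingress"], 15961
    return hdr

def rb3_into_area(hdr):
    # Swap the egress fields, making RB4 (44) the egress nickname.
    hdr["egress"], hdr["egress_swap"] = hdr["egress_swap"], hdr["egress"]
    return hdr

hdr = rb3_into_area(rb2_to_level2(rb1_encapsulate()))
# RB4 then learns S against the {ingress, ingress_swap} pair.
```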
712 2.3 Building Multi-Area Trees 714 It is easy to build a multi-area tree by building a tree in each area 715 separately (including the Level 2 "area") and then having only a 716 single border TRILL switch, say RBx, in each area, attach to the 717 Level 2 area. RBx would forward all multi-destination packets 718 between that area and Level 2. 720 People might find this unacceptable, however, because of the desire 721 to path split (not always sending all multi-destination traffic 722 through the same border TRILL switch). 724 This is the same issue as with multiple ingress TRILL switches 725 injecting traffic from a pseudonode, and can be solved with the 726 mechanism that was adopted for that purpose: the Affinity TLV 728 [RFC7783]. For each tree in the area, at most one border RB 729 announces itself in an Affinity TLV with that tree name. 731 2.4 The RPF Check for Trees 733 For multi-destination data originating locally in RBx's area, 734 computation of the RPF check is done as today. For multi-destination 735 packets originating outside RBx's area, computation of the RPF check 736 must be done based on which one of the border TRILL switches (say 737 RB1, RB2, or RB3) injected the packet into the area. 739 A TRILL switch, say RB4, located inside an area, must be able to know 740 which of RB1, RB2, or RB3 transitioned the packet into the area from 741 Level 2 (or into Level 2 from an area). 743 This could be done based on having the DBRB announce the transitioner 744 assignments to all the TRILL switches in the area, or the Affinity 745 TLV mechanism given in [RFC7783], or a New Tree Encoding mechanism 746 discussed in Section 4.1.1. 748 2.5 Area Nickname Acquisition 750 In the aggregated nickname alternative, each area must acquire a 751 unique area nickname or can be identified by the set of border TRILL 752 switches.
It is probably simpler to allocate a block of nicknames 753 (say, the top 4000) to either (1) represent areas and not specific 754 TRILL switches or (2) be used by border TRILL switches if the set of 755 such border TRILL switches represents the area. 757 The nicknames used for area identification need to be advertised and 758 acquired through Level 2. 760 Within an area, all the border TRILL switches can discover each other 761 through the Level 1 link state database, by using the IS-IS attach 762 bit or by explicitly advertising in their LSP "I am a border 763 RBridge". 765 Of the border TRILL switches, one will have the highest priority (say 766 RB7). RB7 can dynamically participate, in Level 2, to acquire a 767 nickname for identifying the area. Alternatively, RB7 could give the 768 area a pseudonode IS-IS ID, such as RB7.5, within Level 2. So an 769 area would appear, in Level 2, as a pseudonode and the pseudonode 770 could participate, in Level 2, to acquire a nickname for the area. 772 Within Level 2, all the border TRILL switches for an area can 773 advertise reachability to the area, which would mean connectivity to 774 a nickname identifying the area. 776 2.6 Link State Representation of Areas 778 Within an area, say area A1, there is an election for the DBRB 779 (Designated Border RBridge), say RB1. This can be done through LSPs 780 within area A1. The border TRILL switches announce themselves, 781 together with their DBRB priority. (Note that the election of the 782 DBRB cannot be done based on Hello messages, because the border TRILL 783 switches are not necessarily physical neighbors of each other. They 784 can, however, reach each other through connectivity within the area, 785 which is why it will work to find each other through Level 1 LSPs.)
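The DBRB election described above might be sketched as follows. The tuple-based comparison and the use of the System ID as a tie-breaker are assumptions made for this sketch, by analogy with other IS-IS elections, and are not taken from the text.

```python
# Sketch of DBRB election among the border RBridges discovered in the
# Level 1 LSDB (Section 2.6). The (priority, system_id) tie-break is
# an assumed convention, not specified behavior.

def elect_dbrb(borders):
    """borders: iterable of (dbrb_priority, system_id) tuples,
    one per border RBridge advertised in Level 1 LSPs."""
    # Highest priority wins; highest System ID breaks ties (assumed).
    return max(borders, key=lambda b: (b[0], b[1]))

# RB7 has the highest priority, as in the example in the text.
borders = [(90, "1111.1111.0001"),   # RB1
           (100, "1111.1111.0007"),  # RB7
           (100, "1111.1111.0005")]  # RB5, loses the tie-break to RB7
dbrb = elect_dbrb(borders)
```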
787 RB1 can acquire an area nickname (in the aggregated nickname 788 approach) and may give the area a pseudonode IS-IS ID (just like the 789 DRB would give a pseudonode IS-IS ID to a link) depending on how the 790 area nickname is handled. RB1 advertises, in area A1, an area 791 nickname that RB1 has acquired (and what the pseudonode IS-IS ID for 792 the area is if needed). 794 Level 1 LSPs (possibly pseudonode) initiated by RB1 for the area 795 include any information external to area A1 that should be input into 796 area A1 (such as nicknames of external areas, or perhaps (in the 797 unique nickname variant) all the nicknames of external TRILL switches 798 in the TRILL campus and pruning information such as multicast 799 listeners and labels). All the other border TRILL switches for the 800 area announce (in their LSP) attachment to that area. 802 Within Level 2, RB1 generates a Level 2 LSP on behalf of the area. 803 The same pseudonode ID could be used within Level 1 and Level 2 for 804 the area. (There does not seem to be any reason why it would be 805 useful for it to be different, but there is also no reason why it 806 would need to be the same.) Likewise, all the area A1 border TRILL switches would 807 announce, in their Level 2 LSPs, connection to the area. 809 3. Area Partition 811 It is possible for an area to become partitioned, so that there is 812 still a path from one section of the area to the other, but that path 813 is via the Level 2 area. 815 With multilevel TRILL, an area will naturally break into two areas in 816 this case. 818 Area addresses might be configured to ensure two areas are not 819 inadvertently connected. Area addresses appear in Hellos and LSPs 820 within the area. If two chunks, connected only via Level 2, were 821 configured with the same area address, this would not cause any 822 problems. (They would just operate as separate Level 1 areas.)
824 A more serious problem occurs if the Level 2 area is partitioned in 825 such a way that it could be healed by using a path through a Level 1 826 area. TRILL will not attempt to solve this problem. Within the Level 827 1 area, a single border RBridge will be the DBRB, and will be in 828 charge of deciding which (single) RBridge will transition any 829 particular multi-destination packet between that area and Level 2. 830 If the Level 2 area is partitioned, this will result in multi- 831 destination data only reaching the portion of the TRILL campus 832 reachable through the partition attached to the TRILL switch that 833 transitions that packet. It will not cause a loop. 835 4. Multi-Destination Scope 837 There are at least two reasons it would be desirable to be able to 838 mark a multi-destination packet with a scope that indicates the 839 packet should not exit the area, as follows: 841 1. To address an issue in the border learning variant of the 842 aggregated nickname alternative, when a unicast packet turns into 843 a multi-destination packet when transitioning from Level 2 to 844 Level 1, as discussed in Section 4.1. 846 2. To constrain the broadcast domain for certain discovery, 847 directory, or service protocols as discussed in Section 4.2. 849 Multi-destination packet distribution scope restriction could be done 850 in a number of ways. For example, there could be a flag in the packet 851 that means "for this area only". However, the technique that might 852 require the least change to TRILL switch fast path logic would be to 853 indicate this in the egress nickname that designates the distribution 854 tree being used. There could be two general tree nicknames for each 855 tree, one being for distribution restricted to the area and the other 856 being for multi-area trees.
Or there could be a set of N (perhaps 16) 857 special currently reserved nicknames used to specify the N highest 858 priority trees but with the variation that if the special nickname is 859 used for the tree, the packet is not transitioned between areas. Or 860 one or more special trees could be built that were restricted to the 861 local area. 863 4.1 Unicast to Multi-destination Conversions 865 In the border learning variant of the aggregated nickname 866 alternative, the following situation may occur: 867 - a unicast packet might be known at the Level 1 to Level 2 868 transition and be forwarded as a unicast packet to the least cost 869 border TRILL switch advertising connectivity to the destination 870 area, but 871 - upon arriving at the border TRILL switch, it turns out to have an 872 unknown destination { MAC, Data Label } pair. 874 In this case, the packet must be converted into a multi-destination 875 packet and flooded in the destination area. However, if the border 876 TRILL switch doing the conversion is not the border TRILL switch 877 designated to transition the resulting multi-destination packet, 878 there is the danger that the designated transitioner may pick up the 879 packet and flood it back into Level 2, from which it may be flooded 880 into multiple areas. This danger can be avoided by restricting any 881 multi-destination packet that results from such a conversion to the 882 destination area as described above. 884 Alternatively, a multi-destination packet intended only for the area 885 could be tunneled (within the area) to the RBridge RBx that is the 886 appointed transitioner for that form of packet (say, based on VLAN or 887 FGL), with instructions that RBx only transmit the packet within the 888 area, and RBx could initiate the multi-destination packet within the 889 area.
Since RBx introduced the packet, and is the only one allowed 890 to transition that packet to Level 2, this would accomplish scoping 891 of the packet to within the area. Since this only occurs in the 892 unusual case when unicast packets need to be turned into multi- 893 destination packets as described above, the suboptimality of 894 tunneling between the border TRILL switch that receives the unicast 895 packet and the appointed level transitioner for that packet might not 896 be an issue. 898 4.1.1 New Tree Encoding 900 The current encoding of a tree in the TRILL header is the nickname of 901 the tree root. This requires all 16 bits of the egress 902 nickname field. TRILL could instead, for example, use the bottom 6 903 bits to encode the tree number (allowing 64 trees), leaving 10 bits 904 to encode information such as: 906 o scope: a flag indicating whether it should be single area only, or 907 entire campus 908 o border injector: an indicator of which of the k border TRILL 909 switches injected this packet 911 If TRILL were to adopt this new encoding, any of the TRILL switches 912 in an edge group could inject a multi-destination packet. This would 913 require all TRILL switches to be changed to understand the new 914 encoding for a tree, and it would require a TLV in the LSP to 915 indicate which number each of the TRILL switches in an edge group 916 would be. 918 While there are a number of advantages to this technique, it requires 919 fast path logic changes, and thus its deployment is not practical at 920 this time. It is included here for completeness. 922 4.2 Selective Broadcast Domain Reduction 924 There are a number of service, discovery, and directory protocols 925 that, for convenience, are accessed via multicast or broadcast 926 frames. Examples are DHCP (Dynamic Host Configuration Protocol), the 927 NetBIOS Service Location Protocol, and multicast DNS (Domain Name 928 Service).
930 Some such protocols provide means to restrict distribution to an IP 931 subnet or equivalent to reduce the size of the broadcast domain they 932 are using and then provide a proxy that can be placed in that subnet 933 to use unicast to access a service elsewhere. In cases where a proxy 934 mechanism is not currently defined, it may be possible to create one 935 that references a central server or cache. With multilevel TRILL, it 936 is possible to construct very large IP subnets that could become 937 saturated with multi-destination traffic of this type unless packets 938 can be further restricted in their distribution. Such restricted 939 distribution can be accomplished for some protocols, say protocol P, 940 in a variety of ways including the following: 942 - Either (1) at all ingress TRILL switches in an area, place all 943 protocol P multi-destination packets on a distribution tree in 944 such a way that the packets are restricted to the area or (2) at 945 all border TRILL switches between that area and Level 2, detect 946 protocol P multi-destination packets and do not transition them. 948 - Then place one, or a few for redundancy, protocol P proxies inside 949 each area where protocol P may be in use. These proxies unicast 950 protocol P requests or other messages to the actual campus 951 server(s) for P. They also receive unicast responses or other 952 messages from those servers and deliver them within the area via 953 unicast, multicast, or broadcast as appropriate. (Such proxies 954 would not be needed if it were acceptable for all protocol P 955 traffic to be restricted to an area.) 957 While it might seem logical to connect the campus servers to TRILL 958 switches in Level 2, they could be placed within one or more areas so 959 that, in some cases, those areas might not require a local proxy 960 server. 962 5.
Co-Existence with Old TRILL switches 964 TRILL switches that are not multilevel aware may have a problem with 965 calculating RPF Check and filtering information, since they would not 966 be aware of the assignment of border TRILL switch transitioning. 968 A possible solution, as long as any old TRILL switches exist within 969 an area, is to have the border TRILL switches elect a single DBRB 970 (Designated Border RBridge), and have all inter-area traffic go 971 through the DBRB (unicast as well as multi-destination). If that 972 DBRB goes down, a new one will be elected, but at any one time, all 973 inter-area traffic (unicast as well as multi-destination) would go 974 through that one DBRB. However, this eliminates load splitting at 975 level transition. 977 6. Multi-Access Links with End Stations 979 Care must be taken in the case where there are multiple TRILL 980 switches on a link with one or more end stations, keeping in mind 981 that end stations are TRILL ignorant. In particular, it is essential 982 that only one TRILL switch ingress/egress any given data packet 983 from/to an end station so that connectivity is provided to that end 984 station without duplicating end station data and that loops are not 985 formed due to one TRILL switch egressing data in native form (i.e., 986 with no TRILL header) and having that data re-ingressed by another 987 TRILL switch on the link. 989 With existing, single level TRILL, this is done by electing a single 990 Designated RBridge per link, which appoints a single Appointed 991 Forwarder per VLAN [RFC7177] [rfc6439bis]. This mechanism depends on 992 the RBridges establishing adjacency. But suppose there are two (or 993 more) TRILL switches on a link in different areas, say RB1 in area A1 994 and RB2 in area A2, as shown below, and that the link also has one or 995 more end stations attached.
If RB1 and RB2 ignore each other's 996 Hellos because they are in different areas, as they are required to 997 do under normal IS-IS PDU processing rules, then they will not form 998 an adjacency. If they are not adjacent, they will ignore each other 999 for the Appointed Forwarder mechanism and will both ingress/egress 1000 end station traffic on the link, causing loops and duplication. 1002 The problem is not avoiding adjacency or avoiding TRILL Data packet 1003 transfer between RB1 and RB2. The area address mechanism of IS-IS or 1004 possibly the use of topology constraints or the like does that quite 1005 well. The problem stems from end stations being TRILL ignorant, so 1006 care must be taken that multiple RBridges on a link do not ingress 1007 the same frame originated by an end station and that an RBridge 1008 does not ingress a native frame egressed by a different RBridge 1009 because it mistakes it for a frame originated by an end station. 1011 +--------------------------------------------+ 1012 | Level 2 | 1013 +----------+---------------------+-----------+ 1014 | Area A1 | | Area A2 | 1015 | +---+ | | +---+ | 1016 | |RB1| | | |RB2| | 1017 | +-+-+ | | +-+-+ | 1018 | | | | | | 1019 +-----|----+ +-----|-----+ 1020 | | 1021 --+---------+-------------+--------+-- Link 1022 | | 1023 +------+------+ +--+----------+ 1024 | End Station | | End Station | 1025 +-------------+ +-------------+ 1027 A simple rule, which is preferred, is to use the TRILL switch or 1028 switches having the lowest numbered area, comparing area numbers as 1029 unsigned integers, to handle all native traffic to/from end stations 1030 on the link. This would automatically give multilevel-ignorant legacy 1031 TRILL switches, which would be using area number zero, highest 1032 priority for handling end station traffic, which they would try to do 1033 anyway. 1035 Other methods are possible.
For example, the selection of 1036 Appointed Forwarders, and of the TRILL switch in charge of that 1037 selection, could be done across all TRILL switches on the link 1038 regardless of area. However, a special case would then have to be 1039 made for legacy TRILL switches using area number zero. 1041 These techniques require multilevel aware TRILL switches to take 1042 actions based on Hellos from RBridges in other areas even though they 1043 will not form an adjacency with such RBridges. However, the action is 1044 quite simple in the preferred case: if a TRILL switch sees Hellos 1045 from lower numbered areas, then it would not act as an Appointed 1046 Forwarder on the link until the Hello timer for such Hellos had 1047 expired. 1049 7. Summary 1051 This draft describes potential scaling issues in TRILL and discusses 1052 possible approaches to multilevel TRILL as a solution or element of a 1053 solution to most of them. 1055 The alternative using aggregated areas in multilevel TRILL has 1056 significant advantages in terms of scalability over using campus-wide 1057 unique nicknames, not just in avoiding nickname exhaustion, but by 1058 allowing RPF Checks to be aggregated based on an entire area. 1059 However, the alternative of using unique nicknames is simpler and 1060 avoids the changes in border TRILL switches required to support 1061 aggregated nicknames. It is possible to support both. For example, a 1062 TRILL campus could use simpler unique nicknames until scaling begins 1063 to cause problems and then start to introduce areas with aggregated 1064 nicknames. 1066 Some multilevel TRILL issues are not difficult, such as dealing with 1067 partitioned areas. Other issues are more difficult, especially 1068 dealing with old TRILL switches that are multilevel ignorant. 1070 8. Security Considerations 1072 This informational document explores alternatives for the design of 1073 multilevel IS-IS in TRILL and generally does not consider security 1074 issues.
1076 If aggregated nicknames are used in two areas that have the same area 1077 address and those areas merge, there is a possibility of a transient 1078 nickname collision that would not occur with unique nicknames. Such a 1079 collision could cause a data packet to be delivered to the wrong 1080 egress TRILL switch, but it would still not be delivered to any end 1081 station in the wrong Data Label; thus such delivery would still 1082 conform to security policies. 1084 For general TRILL Security Considerations, see [RFC6325]. 1086 9. IANA Considerations 1088 This document requires no IANA actions. RFC Editor: Please remove 1089 this section before publication. 1091 Normative References 1093 [IS-IS] - ISO/IEC 10589:2002, Second Edition, "Intermediate System to 1094 Intermediate System Intra-Domain Routing Exchange Protocol for 1095 use in Conjunction with the Protocol for Providing the 1096 Connectionless-mode Network Service (ISO 8473)", 2002. 1098 [RFC6325] - Perlman, R., Eastlake 3rd, D., Dutt, D., Gai, S., and A. 1099 Ghanwani, "Routing Bridges (RBridges): Base Protocol 1100 Specification", RFC 6325, July 2011. 1102 [RFC7177] - Eastlake 3rd, D., Perlman, R., Ghanwani, A., Yang, H., 1103 and V. Manral, "Transparent Interconnection of Lots of Links 1104 (TRILL): Adjacency", RFC 7177, May 2014. 1107 [RFC7780] - Eastlake 3rd, D., Zhang, M., Perlman, R., Banerjee, A., 1108 Ghanwani, A., and S. Gupta, "Transparent Interconnection of 1109 Lots of Links (TRILL): Clarifications, Corrections, and 1110 Updates", RFC 7780, DOI 10.17487/RFC7780, February 2016. 1113 [rfc6439bis] - Eastlake, D., Li, Y., Umair, M., Banerjee, A., and F. 1114 Hu, "Routing Bridges (RBridges): Appointed Forwarders", draft- 1115 ietf-trill-rfc6439bis, work in progress, February 2017. 1117 Informative References 1119 [RFC3194] - Durand, A. and C.
Huitema, "The H-Density Ratio for 1120 Address Assignment Efficiency An Update on the H ratio", RFC 1121 3194, DOI 10.17487/RFC3194, November 2001. 1124 [RFC6361] - Carlson, J. and D. Eastlake 3rd, "PPP Transparent 1125 Interconnection of Lots of Links (TRILL) Protocol Control 1126 Protocol", RFC 6361, August 2011. 1128 [RFC7172] - Eastlake 3rd, D., Zhang, M., Agarwal, P., Perlman, R., 1129 and D. Dutt, "Transparent Interconnection of Lots of Links 1130 (TRILL): Fine-Grained Labeling", RFC 7172, May 2014. 1132 [RFC7176] - Eastlake 3rd, D., Senevirathne, T., Ghanwani, A., Dutt, 1133 D., and A. Banerjee, "Transparent Interconnection of Lots of 1134 Links (TRILL) Use of IS-IS", RFC 7176, May 2014. 1136 [RFC7357] - Zhai, H., Hu, F., Perlman, R., Eastlake 3rd, D., and O. 1137 Stokes, "Transparent Interconnection of Lots of Links (TRILL): 1139 End Station Address Distribution Information (ESADI) Protocol", 1140 RFC 7357, September 2014. 1143 [RFC7781] - Zhai, H., Senevirathne, T., Perlman, R., Zhang, M., and 1144 Y. Li, "Transparent Interconnection of Lots of Links (TRILL): 1145 Pseudo-Nickname for Active-Active Access", RFC 7781, DOI 1146 10.17487/RFC7781, February 2016. 1149 [RFC7783] - Senevirathne, T., Pathangi, J., and J. Hudson, 1150 "Coordinated Multicast Trees (CMT) for Transparent 1151 Interconnection of Lots of Links (TRILL)", RFC 7783, DOI 1152 10.17487/RFC7783, February 2016. 1155 [DraftAggregated] - Bhargav Bhikkaji, Balaji Venkat Venkataswami, 1156 Narayana Perumal Swamy, "Connecting Disparate Data 1157 Center/PBB/Campus TRILL sites using BGP", draft-balaji-trill- 1158 over-ip-multi-level, Work in Progress. 1160 [DraftUnique] - M. Zhang, D. Eastlake, R. Perlman, M. Cullen, H. 1161 Zhai, D. Liu, "TRILL Multilevel Using Unique Nicknames", draft- 1162 ietf-trill-multilevel-unique-nickname, Work in Progress. 1164 [SingleName] - Mingui Zhang, et al., "Single Area Border RBridge 1165 Nickname for TRILL Multilevel", draft-ietf-trill-multilevel- 1166 single-nickname, Work in Progress. 1168 Acknowledgements 1170 The helpful comments and contributions of the following are hereby 1171 acknowledged: 1173 Alia Atlas, David Michael Bond, Dino Farinacci, Sue Hares, Gayle 1174 Noble, Alexander Vainshtein, and Stig Venaas. 1176 The document was prepared in raw nroff. All macros used were defined 1177 within the source file. 1179 Authors' Addresses 1181 Radia Perlman 1182 EMC 1183 2010 256th Avenue NE, #200 1184 Bellevue, WA 98007 USA 1186 EMail: radia@alum.mit.edu 1188 Donald Eastlake 1189 Huawei Technologies 1190 155 Beaver Street 1191 Milford, MA 01757 USA 1193 Phone: +1-508-333-2270 1194 Email: d3e3e3@gmail.com 1196 Mingui Zhang 1197 Huawei Technologies 1198 No.156 Beiqing Rd. Haidian District, 1199 Beijing 100095 P.R. China 1201 EMail: zhangmingui@huawei.com 1203 Anoop Ghanwani 1204 Dell 1205 5450 Great America Parkway 1206 Santa Clara, CA 95054 USA 1208 EMail: anoop@alumni.duke.edu 1210 Hongjun Zhai 1211 Jinling Institute of Technology 1212 99 Hongjing Avenue, Jiangning District 1213 Nanjing, Jiangsu 211169 China 1215 EMail: honjun.zhai@tom.com 1217 Copyright and IPR Provisions 1219 Copyright (c) 2017 IETF Trust and the persons identified as the 1220 document authors. All rights reserved. 1222 This document is subject to BCP 78 and the IETF Trust's Legal 1223 Provisions Relating to IETF Documents 1224 (http://trustee.ietf.org/license-info) in effect on the date of 1225 publication of this document. Please review these documents 1226 carefully, as they describe your rights and restrictions with respect 1227 to this document. Code Components extracted from this document must 1228 include Simplified BSD License text as described in Section 4.e of 1229 the Trust Legal Provisions and are provided without warranty as 1230 described in the Simplified BSD License.
The definitive version of 1231 an IETF Document is that published by, or under the auspices of, the 1232 IETF. Versions of IETF Documents that are published by third parties, 1233 including those that are translated into other languages, should not 1234 be considered to be definitive versions of IETF Documents. The 1235 definitive version of these Legal Provisions is that published by, or 1236 under the auspices of, the IETF. Versions of these Legal Provisions 1237 that are published by third parties, including those that are 1238 translated into other languages, should not be considered to be 1239 definitive versions of these Legal Provisions. For the avoidance of 1240 doubt, each Contributor to the IETF Standards Process licenses each 1241 Contribution that he or she makes as part of the IETF Standards 1242 Process to the IETF Trust pursuant to the provisions of RFC 5378. No 1243 language to the contrary, or terms, conditions or rights that differ 1244 from or are inconsistent with the rights and licenses granted under 1245 RFC 5378, shall have any effect and shall be null and void, whether 1246 published or posted by such Contributor, or included with or in such 1247 Contribution.