2 Internet Engineering Task Force T. Narten 3 Internet-Draft IBM 4 Intended status: Informational M. Karir 5 Expires: April 25, 2013 Merit Network Inc. 6 I. Foo 7 Huawei Technologies 8 October 22, 2012 10 Address Resolution Problems in Large Data Center Networks 11 draft-ietf-armd-problem-statement-04 13 Abstract 15 This document examines address resolution issues related to the 16 scaling of data centers with a very large number of hosts. The 17 initial scope is relatively narrow. Specifically, it focuses on 18 address resolution (ARP and ND) within the data center. 20 Status of this Memo 22 This Internet-Draft is submitted in full conformance with the 23 provisions of BCP 78 and BCP 79. 25 Internet-Drafts are working documents of the Internet Engineering 26 Task Force (IETF). Note that other groups may also distribute 27 working documents as Internet-Drafts. The list of current Internet- 28 Drafts is at http://datatracker.ietf.org/drafts/current/. 30 Internet-Drafts are draft documents valid for a maximum of six months 31 and may be updated, replaced, or obsoleted by other documents at any 32 time. It is inappropriate to use Internet-Drafts as reference 33 material or to cite them other than as "work in progress." 35 This Internet-Draft will expire on April 25, 2013. 37 Copyright Notice 39 Copyright (c) 2012 IETF Trust and the persons identified as the 40 document authors. All rights reserved. 42 This document is subject to BCP 78 and the IETF Trust's Legal 43 Provisions Relating to IETF Documents 44 (http://trustee.ietf.org/license-info) in effect on the date of 45 publication of this document. Please review these documents 46 carefully, as they describe your rights and restrictions with respect 47 to this document. Code Components extracted from this document must 48 include Simplified BSD License text as described in Section 4.e of 49 the Trust Legal Provisions and are provided without warranty as 50 described in the Simplified BSD License. 52 Table of Contents 54 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 55 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 3 56 3. Background . . . . . . . . . . . . . . . . . . . . . . . . . . 4 57 4. Address Resolution in IPv4 . . . . . . . . . . . . . . . . . . 6
58 5. Address Resolution in IPv6 . . . . . . . . . . . . . . . . . . 7 59 6. Generalized Data Center Design . . . . . . . . . . . . . . . . 7 60 6.1. Access Layer . . . . . . . . . . . . . . . . . . . . . . . 8 61 6.2. Aggregation Layer . . . . . . . . . . . . . . . . . . . . 8 62 6.3. Core . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 63 6.4. L3 / L2 Topological Variations . . . . . . . . . . . . . . 9 64 6.4.1. L3 to Access Switches . . . . . . . . . . . . . . . . 9 65 6.4.2. L3 to Aggregation Switches . . . . . . . . . . . . . . 9 66 6.4.3. L3 in the Core only . . . . . . . . . . . . . . . . . 10 67 6.4.4. Overlays . . . . . . . . . . . . . . . . . . . . . . . 10 68 6.5. Factors that Affect Data Center Design . . . . . . . . . . 10 69 6.5.1. Traffic Patterns . . . . . . . . . . . . . . . . . . . 10 70 6.5.2. Virtualization . . . . . . . . . . . . . . . . . . . . 11 71 6.5.3. Summary . . . . . . . . . . . . . . . . . . . . . . . 11 72 7. Problem Itemization . . . . . . . . . . . . . . . . . . . . . 12 73 7.1. ARP Processing on Routers . . . . . . . . . . . . . . . . 12 74 7.2. IPv6 Neighbor Discovery . . . . . . . . . . . . . . . . . 14 75 7.3. MAC Address Table Size Limitations in Switches . . . . . . 15 76 8. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 77 9. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 15 78 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 16 79 11. Security Considerations . . . . . . . . . . . . . . . . . . . 16 80 12. Change Log . . . . . . . . . . . . . . . . . . . . . . . . . . 16 81 12.1. Changes from -03 to -04 . . . . . . . . . . . . . . . . . 16 82 12.2. Changes from -02 to -03 . . . . . . . . . . . . . . . . . 16 83 12.3. Changes from -01 . . . . . . . . . . . . . . . . . . . . . 16 84 12.4. Changes from -00 . . . . . . . . . . . . . . . . . . . . . 16 85 13. Informative References . . . . . . . . . . . . . . . . . . . . 16 86 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 17
88 1. Introduction 90 This document examines issues related to the scaling of large data 91 centers. Specifically, this document focuses on address resolution 92 (ARP in IPv4 and Neighbor Discovery in IPv6) within the data center. 93 Although strictly speaking the scope of address resolution is 94 confined to a single L2 broadcast domain (i.e., ARP runs at the L2 95 layer below IP), the issue is complicated by routers having many 96 interfaces on which address resolution must be performed or by the 97 presence of IEEE 802.1Q domains, where individual VLANs effectively 98 form their own L2 broadcast domains. Thus, the scope of address 99 resolution spans both the L2 links and the devices attached to those 100 links. 102 This document identifies potential issues associated with address 103 resolution in data centers with a large number of hosts. The scope 104 of this document is intentionally relatively narrow as it mirrors the 105 ARMD WG charter. This document lists "pain points" that are being 106 experienced in current data centers. The goal of this document is to 107 focus on address resolution issues and not on other, broader issues 108 that might arise in data centers.
110 2. Terminology 112 Address Resolution: the process of determining the link-layer 113 address corresponding to a given IP address. In IPv4, address 114 resolution is performed by ARP [RFC0826]; in IPv6, it is provided 115 by Neighbor Discovery (ND) [RFC4861].
117 Application: software that runs on either a physical or virtual 118 machine, providing a service (e.g., web server, database server, 119 etc.) 121 L2 Broadcast Domain: The set of all links, repeaters, and switches 122 that are traversed to reach all nodes that are members of a given 123 L2 broadcast domain. In IEEE 802.1Q networks, a broadcast domain 124 corresponds to a single VLAN. 126 Host (or server): A computer system on the network. 128 Hypervisor: Software running on a host that allows multiple VMs to 129 run on the same host. 131 Virtual machine (VM): A software implementation of a physical 132 machine that runs programs as if they were executing on a 133 physical, non-virtualized machine. Applications (generally) do 134 not know they are running on a VM as opposed to running on a 135 "bare" host or server, though some systems provide a 136 paravirtualization environment that allows an operating system or 137 application to be aware of the presence of virtualization for 138 optimization purposes. 140 ToR: Top of Rack Switch. A switch placed in a single rack to 141 aggregate network connectivity to and from hosts in that rack. 143 EoR: End of Row Switch. A switch used to aggregate network 144 connectivity from multiple racks. EoR switches are the next level 145 of switching above ToR switches.
147 3. Background 149 Large, flat L2 networks have long been known to have scaling 150 problems. As the size of an L2 broadcast domain increases, the level 151 of broadcast traffic from protocols like ARP increases. Large 152 amounts of broadcast traffic pose a particular burden because every 153 device (switch, host and router) must process and possibly act on 154 such traffic. In extreme cases, "broadcast storms" can occur where 155 the quantity of broadcast traffic reaches a level that effectively 156 brings down part or all of a network. For example, poor 157 implementations of loop detection and prevention or misconfiguration 158 errors can create conditions that lead to broadcast storms as network 159 conditions change. The conventional wisdom for addressing such 160 problems has been to say "don't do that". That is, split large L2 161 networks into multiple smaller L2 networks, each operating as its own 162 L3/IP subnet. Numerous data center networks have been designed with 163 this principle, e.g., with each rack placed within its own L3 IP 164 subnet. By doing so, the broadcast domain (and address resolution) 165 is confined to one Top of Rack switch, which works well from a 166 scaling perspective. Unfortunately, this conflicts in some ways with 167 the current trend towards dynamic workload shifting in data centers 168 and increased virtualization, as discussed below. 170 Workload placement has become a challenging task within data centers. 171 Ideally, it is desirable to be able to dynamically reassign workloads 172 within a data center in order to optimize server utilization, add 173 additional servers in response to increased demand, etc. However, 174 servers are often pre-configured to run with a given set of IP 175 addresses. Placement of such servers is then subject to the IP 176 addressing constraints of the data center. For example, 177 servers configured with addresses from a particular subnet could only 178 be placed where they connect to the IP subnet corresponding to their 179 IP addresses. If each top of rack switch is acting as a gateway for 180 its own subnet, a server can only be connected to that one top of rack 181 switch.
This gateway switch represents the L2/L3 boundary. A 182 similar constraint occurs in virtualized environments, as discussed 183 next. 185 Server virtualization is fast becoming the norm in data centers. 186 With server virtualization, each physical server supports multiple 187 virtual machines, each running its own operating system, middleware 188 and applications. Virtualization is a key enabler of workload 189 agility, i.e., allowing any server to host any application (on its 190 own VM) and providing the flexibility of adding, shrinking, or moving 191 VMs within the physical infrastructure. Server virtualization 192 provides numerous benefits, including higher utilization, increased 193 data security, reduced user downtime, and even significant power 194 conservation, along with the promise of a more flexible and dynamic 195 computing environment. 197 The discussion below focuses on VM placement and migration. Keep in 198 mind, however, that even in a non-virtualized environment, many of 199 the same issues apply to individual workloads running on standalone 200 machines. For example, when increasing the number of servers running 201 a particular workload to meet demand, placement of those workloads 202 may be constrained by IP subnet numbering considerations, as 203 discussed earlier. 205 The greatest flexibility in VM and workload management occurs when it 206 is possible to place a VM (or workload) anywhere in the data center 207 regardless of what IP addresses the VM uses and how the physical 208 network is laid out. In practice, movement of VMs within a data 209 center is easiest when VM placement and movement does not conflict 210 with the IP subnet boundaries of the data center's network, so that 211 the VM's IP address need not be changed to reflect its actual point 212 of attachment on the network from an L3/IP perspective. In contrast, 213 if a VM moves to a new IP subnet, its address must change, and 214 clients will need to be made aware of that change. From a VM 215 management perspective, management is simplified if all servers are 216 on a single large L2 network. 218 With virtualization, it is not uncommon to have a single physical 219 server host ten (or more) VMs, each having its own IP (and MAC) 220 addresses. Consequently, the number of addresses per machine (and 221 hence per subnet) is increasing, even when the number of physical 222 machines stays constant. In a few years, the numbers will likely be 223 even higher. 225 In the past, applications were static in the sense that they tended 226 to stay in one physical place. An application installed on a 227 physical machine would stay on that machine because the cost of 228 moving an application elsewhere was generally high. Moreover, 229 physical servers hosting applications would tend to be placed in such 230 a way as to facilitate communication locality. That is, applications 231 running on servers would be physically located near the servers 232 hosting the applications they communicated with most heavily. The 233 network traffic patterns in such environments could thus be 234 optimized, in some cases keeping significant traffic local to one 235 network segment. In these more static and carefully managed 236 environments, it was possible to build networks that approached 237 scaling limitations, but did not actually cross the threshold. 239 Today, with the proliferation of VMs, traffic patterns are becoming 240 more diverse and less predictable. 
In particular, there can easily 241 be less locality of network traffic as VMs hosting applications are 242 moved for such reasons as reducing overall power usage (by 243 consolidating VMs and powering off idle machines) or moving a VM to a 244 physical server with more capacity or a lower load. In today's 245 changing environments, it is becoming more difficult to engineer 246 networks because traffic patterns continually shift as VMs move around. 248 In summary, both the size and density of L2 networks are increasing. 249 In addition, increasingly dynamic workloads and the increased usage 250 of VMs are creating pressure for ever larger L2 networks. Today, 251 there are already data centers with over 100,000 physical machines 252 and many times that number of VMs. This number will only increase 253 going forward. In addition, traffic patterns within a data center 254 are also constantly changing. Ultimately, the issues described in 255 this document might be observed at any scale depending on the 256 particular design of the data center.
258 4. Address Resolution in IPv4 260 In IPv4 over Ethernet, ARP provides the function of address 261 resolution. To determine the link-layer address of a given IP 262 address, a node broadcasts an ARP Request. The request is delivered 263 to all portions of the L2 network, and the node with the requested IP 264 address replies with an ARP Reply. ARP is an old protocol and, by 265 current standards, is sparsely documented. For example, there are no 266 clear requirements for retransmitting ARP Requests in the absence of 267 replies. Consequently, implementations vary in the details of what 268 they actually implement [RFC0826][RFC1122]. 270 From a scaling perspective, there are a number of problems with ARP. 271 First, it uses broadcast, and any network with a large number of 272 attached hosts will see a correspondingly large amount of broadcast 273 ARP traffic. The second problem is that it is not feasible to change 274 host implementations of ARP - current implementations are too widely 275 entrenched, and any changes to host implementations of ARP would take 276 years to become sufficiently deployed to matter. That said, it may 277 be possible to change ARP implementations in hypervisors, L2/L3 278 boundary routers, and/or ToR access switches, to leverage such 279 techniques as Proxy ARP. Finally, ARP implementations need to take 280 steps to flush out stale or otherwise invalid entries. 281 Unfortunately, existing standards do not provide clear implementation 282 guidelines for how to do this. Consequently, implementations vary 283 significantly, and some implementations are "chatty" in that they 284 just periodically flush caches every few minutes and send new ARP 285 queries.
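Purely as an illustration of the scaling behavior described above (not a measurement or a requirement), the following Python sketch estimates the aggregate broadcast load that such "chatty" re-ARPing can generate in one L2 broadcast domain; the host count, peer count, and cache lifetime are assumed values chosen only to show the arithmetic.

   # Illustrative back-of-envelope estimate of ARP broadcast load in one
   # L2 broadcast domain.  All parameter values are assumptions for the
   # example, not measurements or requirements.

   hosts = 20000            # hosts (physical or VM) sharing the broadcast domain
   peers_per_host = 10      # assumed number of active peers each host resolves
   cache_lifetime = 300.0   # seconds before a "chatty" host re-ARPs an entry

   # Each ARP Request is broadcast, so every host (and router) in the
   # domain receives and must at least examine every request.
   requests_per_second = hosts * peers_per_host / cache_lifetime
   print(f"aggregate ARP Requests/second: {requests_per_second:.0f}")
   # With the numbers assumed above, roughly 667 broadcasts per second
   # reach every attached device, whether or not the device is the
   # intended target.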
287 5. Address Resolution in IPv6 289 Broadly speaking, from the perspective of address resolution, IPv6's 290 Neighbor Discovery (ND) behaves much like ARP, with a few notable 291 differences. First, ARP uses broadcast, whereas ND uses multicast. 292 Specifically, when querying for a target IP address, ND maps the 293 target address into an IPv6 Solicited Node multicast address. Using 294 multicast rather than broadcast has the benefit that the multicast 295 frames do not necessarily need to be sent to all parts of the 296 network, i.e., only to segments where listeners for the Solicited 297 Node multicast address reside. In the case where multicast frames 298 are delivered to all parts of the network, sending to a multicast 299 address still has the advantage that most (if not all) nodes will filter out 300 the (unwanted) multicast query via filters installed in the NIC 301 rather than burdening host software with the need to process such 302 packets. Thus, whereas all nodes must process every ARP query, ND 303 queries are processed only by the nodes for which they are intended. 304 In cases where multicast filtering cannot effectively be implemented 305 in the NIC (e.g., on hypervisors supporting virtualization), 306 filtering would need to be done in software (e.g., in the 307 hypervisor's vSwitch).
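Purely as an illustration (this code is not part of any specification or of this document's recommendations), the Python sketch below shows the mechanical mapping from a target IPv6 address to its Solicited Node multicast group (ff02::1:ff plus the low-order 24 bits of the target) and to the corresponding Ethernet multicast MAC address (33:33 plus the low-order 32 bits of the group); the example address is taken from the documentation prefix and has no special meaning.

   import ipaddress

   def solicited_node_address(target: str) -> ipaddress.IPv6Address:
       """Map a target IPv6 address to its Solicited-Node multicast group
       (ff02::1:ffXX:XXXX, keeping the low 24 bits of the target)."""
       low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
       base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
       return ipaddress.IPv6Address(base | low24)

   def ipv6_multicast_mac(group: ipaddress.IPv6Address) -> str:
       """Derive the Ethernet multicast MAC (33:33 + low 32 bits of the group)."""
       low32 = group.packed[-4:]
       return "33:33:" + ":".join(f"{b:02x}" for b in low32)

   target = "2001:db8::aa:1234:5678"   # documentation address, illustrative only
   group = solicited_node_address(target)
   print(group, ipv6_multicast_mac(group))
   # ff02::1:ff34:5678 33:33:ff:34:56:78

Only nodes that have joined that group (normally just the node owning the target address) need to process the query, which is the filtering advantage discussed above.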
309 6. Generalized Data Center Design 311 There are many different ways in which data center networks might be 312 designed. The designs are usually engineered to suit the particular 313 workloads that are being deployed in the data center. For example, a 314 large web server farm might be engineered in a very different way 315 than a general-purpose multi-tenant cloud hosting service. However, 316 in most cases the designs can be abstracted into a typical three- 317 layer model consisting of an access layer, an aggregation layer, and 318 a core. The access layer generally refers to the switches that are 319 closest to the physical or virtual servers; the aggregation layer 320 serves to interconnect multiple access layer devices. The core 321 switches connect the aggregation switches to the larger network core. 322 Figure 1 shows a generalized data center design, which captures the 323 essential elements of various alternatives.
325 +-----+-----+ +-----+-----+ 326 | Core0 | | Core1 | Core 327 +-----+-----+ +-----+-----+ 328 / \ / / 329 / \----------\ / 330 / /---------/ \ / 331 +-------+ +------+ 332 +/------+ | +/-----+ | 333 | Aggr11| + --------|AggrN1| + Aggregation Layer 334 +---+---+/ +------+/ 335 / \ / \ 336 / \ / \ 337 +---+ +---+ +---+ +---+ 338 |T11|... |T1x| |TN1| |TNy| Access Layer 339 +---+ +---+ +---+ +---+ 340 | | | | | | | | 341 +---+ +---+ +---+ +---+ 342 | |... | | | | | | 343 +---+ +---+ +---+ +---+ Server racks 344 | |... | | | | | | 345 +---+ +---+ +---+ +---+ 346 | |... | | | | | | 347 +---+ +---+ +---+ +---+ 349 Typical Layered Architecture in DC 351 Figure 1
353 6.1. Access Layer 355 The access switches provide connectivity directly to/from physical 356 and virtual servers. The access layer may be implemented by wiring 357 the servers within a rack to a top-of-rack (ToR) switch or, less 358 commonly, by wiring the servers directly to an end-of-row (EoR) 359 switch. A server rack may have a single uplink to one access switch, 360 or may have dual uplinks to two different access switches. 362 6.2. Aggregation Layer 364 In a typical data center, aggregation switches interconnect many ToR 365 switches. Usually there are multiple parallel aggregation switches, 366 serving the same group of ToRs to achieve load sharing. It is no 367 longer uncommon to see aggregation switches interconnecting hundreds 368 of ToR switches in large data centers. 370 6.3. Core 372 Core switches connect multiple aggregation switches and interface 373 with data center gateway(s) to external networks or interconnect to 374 different sets of racks within one data center.
376 6.4. L3 / L2 Topological Variations 378 6.4.1. L3 to Access Switches 380 In this scenario the L3 domain is extended all the way to the access 381 switches. Each rack enclosure consists of a single L2 domain, which 382 is confined to the rack. In general, there are no significant ARP/ND 383 scaling issues in this scenario, as the L2 domain cannot grow very 384 large. This topology has benefits in scenarios where servers 385 attached to a particular access switch generally run VMs that are 386 confined to using a single subnet. These VMs and the applications 387 they host are not moved (migrated) to other racks that might be 388 attached to different access switches (and different IP subnets). A 389 small server farm or very static compute cluster might be best served 390 via this design.
392 6.4.2. L3 to Aggregation Switches 394 When the L3 domain only extends to the aggregation switches, hosts in any 395 of the IP subnets configured on the aggregation switches can be 396 reached via L2 through any access switch if the access switches 397 enable all the VLANs. This topology allows a greater level of 398 flexibility, as servers attached to any access switch can be reloaded 399 with VMs that have been provisioned with IP addresses from multiple 400 prefixes as needed. Further, in such an environment, VMs can migrate 401 between racks without IP address changes. The drawback of this 402 design, however, is that multiple VLANs have to be enabled on all 403 access switches and all access-facing ports on aggregation switches. 404 Even though L2 traffic is still partitioned by VLANs, the fact that 405 all VLANs are enabled on all ports can cause broadcast traffic on 406 all VLANs to traverse all links and ports, which has the same effect as 407 one big L2 domain on the access-facing side of the aggregation 408 switch. In addition, internal traffic itself might have to cross 409 different L2 boundaries, resulting in significant ARP/ND load at the 410 aggregation switches. This design provides a good tradeoff between 411 flexibility and L2 domain size. A moderate-sized data center might 412 utilize this approach to provide high-availability services at a 413 single location.
415 6.4.3. L3 in the Core only 417 In some cases where a wider range of VM mobility is desired (i.e., a 418 greater number of racks among which VMs can move without an IP address 419 change), the L3 routed domain might be terminated at the core routers 420 themselves. In this case, VLANs can span multiple groups of 421 aggregation switches, which allows hosts to be moved among a greater 422 number of server racks without an IP address change. This scenario results in 423 the largest ARP/ND performance impact, as explained later. A data 424 center with very rapid workload shifting may consider this kind of 425 design.
427 6.4.4. Overlays 429 There are several approaches where overlay networks can be used to 430 build very large L2 networks to enable VM mobility. Overlay networks 431 using various L2 or L3 mechanisms allow interior switches/routers to 432 mask host addresses. In addition, L3 overlays can help the data 433 center designer control the size of the L2 domain and also enhance 434 the ability to provide multi-tenancy in data center networks. 435 However, the use of overlays does not eliminate the traffic associated 436 with address resolution; it simply carries it as regular data traffic. 437 That is, address resolution is implemented in the overlay, and is not 438 directly visible to the switches of the DC network.
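As a simplified illustration of that last point (not modeled on any particular overlay protocol), the Python sketch below shows an overlay edge consulting a mapping table instead of flooding an ARP/ND query into the underlay; the table contents, addresses, and the notion of a "directory" are assumptions made purely for the example.

   # Toy model of address resolution inside an overlay: the overlay edge
   # resolves a tenant (inner) IP address to the underlay tunnel endpoint
   # that currently hosts it, so no ARP/ND flood crosses the underlay.
   # The directory contents below are invented purely for illustration.

   directory = {
       # tenant IP      -> (tenant MAC,          underlay tunnel endpoint)
       "192.0.2.10":       ("00:00:5e:00:53:10", "203.0.113.1"),
       "192.0.2.11":       ("00:00:5e:00:53:11", "203.0.113.2"),
   }

   def overlay_resolve(tenant_ip):
       """Return (inner MAC, tunnel endpoint) for a tenant IP, or None.

       A real overlay might populate this mapping via learning, a control
       protocol, or a central controller; a miss could trigger a unicast
       query in the overlay rather than a broadcast in the underlay."""
       return directory.get(tenant_ip)

   print(overlay_resolve("192.0.2.10"))
   # ('00:00:5e:00:53:10', '203.0.113.1')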
440 A potential problem arises in a large data center when a 441 large number of hosts communicate with their peers in different 442 subnets: all these hosts send (and receive) data packets through their 443 respective L2/L3 boundary nodes, since the traffic flows are generally 444 bi-directional. This has the potential to further highlight any 445 scaling problems. These L2/L3 boundary nodes have to process ARP/ND 446 requests sent from originating subnets and resolve physical (MAC) 447 addresses in the target subnets for what are generally bi-directional 448 flows. Therefore, for maximum flexibility in managing the data 449 center workload, it is often desirable to use overlays to place 450 related groups of hosts in the same topological subnet to avoid the 451 L2/L3 boundary translation. The use of overlays in the data center 452 network can be a useful design mechanism to help manage a potential 453 bottleneck at the L2 / L3 boundary by redefining where that boundary 454 exists.
456 6.5. Factors that Affect Data Center Design 458 6.5.1. Traffic Patterns 460 Expected traffic patterns play an important role in designing 461 appropriately sized access, aggregation and core networks. Traffic 462 patterns also vary based on the expected use of the data center. 464 Broadly speaking, it is desirable to keep as much traffic as possible 465 on the access layer in order to minimize the bandwidth usage at the 466 aggregation layer. If the expected use of the data center is to 467 serve as a large web server farm, where thousands of nodes are doing 468 similar things and the traffic pattern is largely in and out of the 469 data center, an access layer with EoR switches might be used, as it 470 minimizes complexity, allows for servers and databases to be located 471 in the same L2 domain, and provides for maximum density. 473 A data center that is expected to host a multi-tenant cloud hosting 474 service might have quite different requirements. In order to 475 isolate inter-customer traffic, smaller L2 domains might be preferred, 476 and though the size of the overall data center might be comparable to 477 the previous example, the multi-tenant nature of the cloud hosting 478 application requires a smaller, more compartmentalized access layer. 479 A multi-tenant environment might also require the use of L3 all the 480 way to the access layer ToR switch. 482 Yet another example of a workload with a unique traffic pattern is a 483 high-performance compute cluster where most of the traffic is 484 expected to stay within the cluster, but at the same time there is a 485 high degree of crosstalk between the nodes. This would once again 486 call for a large access layer in order to minimize the requirements 487 at the aggregation layer.
489 6.5.2. Virtualization 491 Using virtualization in the data center further serves to increase 492 the possible densities that can be achieved. Virtualization also 493 further complicates the requirements on the access layer, as the 494 access layer determines the scope of server migration or failover 495 when physical hardware fails. 497 Virtualization also can place additional requirements on the 498 aggregation switches in terms of address resolution table size and 499 the scalability of any address learning protocols that might be used 500 on those switches. The use of virtualization often also requires the 501 use of additional VLANs for High Availability beaconing, which would 502 need to span the entire virtualized infrastructure.
This 503 would require the access layer to span as wide as the virtualized 504 infrastructure. 506 6.5.3. Summary 508 The designs described in this section have a number of tradeoffs. 509 The L3 to access switches design described in Section 6.4.1 is the 510 only design that constrains L2 domain size in a fashion that avoids 511 ARP/ND scaling problems. However, that design has limitations and 512 does not address some of the other requirements that lead to 513 configurations that make use of larger L2 domains. Consequently, 514 ARP/ND scaling issues are a real problem in practice. 516 7. Problem Itemization 518 This section articulates some specific problems or "pain points" that 519 are related to large data centers. It is a future activity to 520 determine which of these areas can or will be addressed by ARMD or 521 some other IETF WG. 523 7.1. ARP Processing on Routers 525 One pain point with large L2 broadcast domains is that the routers 526 connected to the L2 domain may need to process a significant amount 527 of ARP traffic in some cases. In particular, environments where the 528 aggregate level of ARP traffic is very large may lead to a heavy ARP 529 load on routers. Even though the vast majority of ARP traffic may 530 well not be aimed at that router, the router still has to process 531 enough of the ARP Request to determine whether it can safely be 532 ignored. The ARP algorithm specifies that a recipient must update 533 its ARP cache if it receives an ARP query from a source for which it 534 has an entry [RFC0826]. 536 ARP processing in routers is commonly handled in a "slow path" 537 software processor rather than directly by a hardware ASIC as is the 538 case when forwarding packets. Such a design significantly limits the 539 rate at which ARP traffic can be processed compared to the rate at 540 which ASICs can forward traffic. Current implementations at the time 541 of this writing can support ARP processing in the low thousands of 542 ARP packets per second. In some deployments, limitations on the rate 543 of ARP processing have been cited as being a problem. 545 To further reduce the ARP load, some routers have implemented 546 additional optimizations in their forwarding ASIC paths. For 547 example, some routers can be configured to discard ARP Requests for 548 target addresses other than those assigned to the router. That way, 549 the router's software processor only receives ARP Requests for 550 addresses it owns and must respond to. This can significantly reduce 551 the number of ARP Requests that must be processed by the router. 553 Another optimization concerns reducing the number of ARP queries 554 targeted at routers, whether for address resolution or to validate 555 existing cache entries. Some routers can be configured to broadcast 556 periodic gratuitous ARPs [RFC5227]. Upon receipt of a gratuitous 557 ARP, implementations mark the associated entry as "fresh", resetting 558 the aging timer to its maximum setting. Consequently, sending out 559 periodic gratuitous ARPs can effectively prevent nodes from needing 560 to send ARP Requests intended to revalidate stale entries for a 561 router. The net result is an overall reduction in the number of ARP 562 queries routers receive. Gratuitous ARPs, broadcast to all nodes in 563 the L2 broadcast domain, may in some cases also pre-populate ARP 564 caches on neighboring devices, further reducing ARP traffic. 
But it 565 is not believed that pre-population of ARP entries is supported by 566 most implementations, as the ARP specification [RFC0826] recommends 567 only that pre-existing ARP entries be updated upon receipt of ARP 568 messages; it does not call for the creation of new entries when 569 none already exist. 571 Finally, another area concerns the overhead of processing IP packets 572 for which no ARP entry exists. Existing standards specify that one 573 (or more) IP packets for which no ARP entry exists should be queued 574 pending successful completion of the address resolution process 575 [RFC1122] [RFC1812]. Once an ARP query has been resolved, any queued 576 packets can be forwarded on. Again, the processing of such packets 577 is handled in the "slow path", effectively limiting the rate at which 578 a router can process ARP "cache misses"; this is viewed as a problem in 579 some deployments today. Additionally, if no response is received, 580 the router may send the ARP/ND query multiple times. If no response 581 is received after a number of ARP/ND requests, the router needs to 582 drop any queued data packets, and may send an ICMP destination 583 unreachable message as well [RFC0792]. This entire process can be 584 CPU intensive.
586 Although address-resolution traffic remains local to one L2 network, 587 some data center designs terminate L2 domains at individual 588 aggregation switches/routers (e.g., see Section 6.4.2). Such routers 589 can be connected to a large number of interfaces (e.g., 100 or more). 590 While the address resolution traffic on any one interface may be 591 manageable, the aggregate address resolution traffic across all 592 interfaces can become problematic. 594 Another variant of the above issue has individual routers servicing a 595 relatively small number of interfaces, with the individual interfaces 596 themselves serving very large subnets. Once again, it is the 597 aggregate quantity of ARP traffic seen across all of the router's 598 interfaces that can be problematic. This "pain point" is essentially 599 the same as the one discussed above, the only difference being 600 whether a given number of hosts are spread across a few large IP 601 subnets or many smaller ones. 603 When hosts in two different subnets under the same L2/L3 boundary 604 router need to communicate with each other, the L2/L3 router not only 605 has to initiate ARP/ND requests to the target's subnet but also has 606 to process the ARP/ND requests from the originating subnet. This 607 process further adds to the overall ARP processing load.
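To make the aggregate effect concrete, the following illustrative Python sketch compares the ARP Request rate arriving at a hypothetical L2/L3 boundary router against a slow-path budget in the low thousands of packets per second (the figure cited earlier in this section); the interface count, host count, and per-host request rate are assumptions, not measurements.

   # Illustrative estimate of the aggregate ARP load at an L2/L3 boundary
   # router.  The parameter values are assumptions for the example; the
   # slow-path budget reflects the "low thousands of packets per second"
   # figure cited earlier in this section.

   interfaces = 100                 # subnets/VLANs terminated on the router
   hosts_per_interface = 500        # hosts behind each interface
   arp_per_host_per_sec = 0.05      # assumed requests each host aims at the router

   slow_path_budget = 2000          # ARP packets/second the CPU path can handle

   aggregate_rate = interfaces * hosts_per_interface * arp_per_host_per_sec
   print(f"aggregate ARP rate toward router: {aggregate_rate:.0f} pps")
   print(f"fraction of slow-path budget:     {aggregate_rate / slow_path_budget:.0%}")
   # 100 * 500 * 0.05 = 2500 pps, i.e., 125% of the assumed 2000 pps budget,
   # even though the load on any single interface looks modest.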
609 7.2. IPv6 Neighbor Discovery 611 Though IPv6's Neighbor Discovery behaves much like ARP, there are 612 several notable differences that result in a different set of 613 potential issues. From an L2 perspective, an important difference is 614 that ND address resolution requests are sent via multicast, which 615 results in ND queries only being processed by the nodes for which 616 they are intended. This reduces the total number of ND packets that 617 an implementation will receive compared with broadcast ARPs. 619 Another key difference concerns revalidating stale ND entries. ND 620 requires that nodes periodically re-validate any entries they are 621 using, to ensure that bad entries are timed out quickly enough that 622 TCP does not terminate a connection. Consequently, some 623 implementations will send out "probe" ND queries to validate in-use 624 ND entries as frequently as every 35 seconds [RFC4861]. Such probes 625 are sent via unicast (unlike in the case of ARP). However, on larger 626 networks, such probes can result in routers receiving many such 627 queries (i.e., many more than with ARP, which does not specify such 628 behavior). Unfortunately, the IPv4 mitigation technique of sending 629 gratuitous ARPs (as described in Section 7.1) does not work in IPv6. 630 The ND specification specifically states that gratuitous ND "updates" 631 cannot cause an ND entry to be marked "valid". Rather, such entries 632 are marked "probe", which causes the receiving node to (eventually) 633 generate a probe back to the sender, which in this case is precisely 634 the behavior that the router is trying to prevent! 636 Routers implementing NUD (for neighboring destinations) will need to 637 process neighbor cache state changes such as transitioning entries 638 from REACHABLE to STALE. How this capability is implemented may 639 impact the scalability of ND on a router. For example, one possible 640 implementation is to have the forwarding operation detect when an ND 641 entry is referenced that needs to transition from REACHABLE to STALE, 642 by signaling an event that would need to be processed by the software 643 processor. Such an implementation could increase the load on the 644 software processor in much the same way that a high rate of ARP 645 requests has led to problems on some routers. 647 It should be noted that ND does not require the sending of probes in 648 all cases. Section 7.3.1 of [RFC4861] describes a technique whereby 649 hints from TCP can be used to verify that an existing ND entry is 650 still working and does not need to be revalidated. 652 Finally, IPv6 and IPv4 are often run simultaneously and in parallel 653 on the same network, i.e., in dual-stack mode. In such environments, 654 the IPv4 and IPv6 issues enumerated above compound each other.
656 7.3. MAC Address Table Size Limitations in Switches 658 L2 switches maintain L2 MAC address forwarding tables for all sources 659 and destinations traversing the switch. These tables are 660 populated through learning and are used to forward L2 frames to their 661 correct destination. The larger the L2 domain, the larger the tables 662 have to be. While in theory a switch only needs to keep track of 663 addresses it is actively using (sometimes called "conversational 664 learning"), switches flood broadcast frames (e.g., from ARP), 665 multicast frames (e.g., from Neighbor Discovery) and unicast frames 666 to unknown destinations. Switches add entries for the source 667 addresses of such flooded frames to their forwarding tables. 668 Consequently, MAC address table size can become a problem as the size 669 of the L2 domain increases. The table size problem is made worse 670 with VMs, where a single physical machine now hosts many VMs (in the 671 tens today, but growing rapidly as the number of cores per CPU 672 increases), since each VM has its own MAC address that is visible to 673 switches. 675 When L3 extends all the way to access switches (see Section 6.4.1), 676 the size of MAC address tables in switches is not generally a 677 problem. When L3 extends only to aggregation switches (see Section 678 6.4.2), however, MAC table size limitations can be a real issue.
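The pressure on MAC address table sizes can be seen with simple arithmetic; the rack, server, and VM counts in the illustrative Python sketch below are assumptions chosen only to show how quickly the numbers grow, and the table capacity is a hypothetical figure rather than any particular product's limit.

   # Illustrative estimate of how many MAC addresses become visible to an
   # aggregation switch when the L2 domain spans many racks.  All values
   # are assumptions for the example.

   racks = 200                 # racks reachable at L2 below one aggregation pair
   servers_per_rack = 40       # physical servers per rack
   vms_per_server = 10         # VMs per physical server (each with its own MAC)

   mac_table_capacity = 128 * 1024   # hypothetical switch MAC table capacity

   visible_macs = racks * servers_per_rack * vms_per_server
   print(f"potentially visible MAC addresses: {visible_macs}")           # 80000
   print(f"table utilization: {visible_macs / mac_table_capacity:.0%}")  # ~61%
   # Doubling the VM density or the number of racks in the same L2 domain
   # would overflow the assumed table, forcing unknown-unicast flooding.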
680 8. Summary 682 This document has outlined a number of issues related to address 683 resolution in large data centers. In particular, this document has 684 described the scenarios in which such issues might arise, what these 685 potential issues are, and the fundamental 686 factors that cause them. It is hoped that describing specific pain 687 points will facilitate a discussion as to whether and how best to 688 address them.
690 9. Acknowledgments 692 This document has been significantly improved by comments from Manav 693 Bhatia, David Black, Stewart Bryant, Ralph Droms, Linda Dunbar, 694 Donald Eastlake, Wesley Eddy, Anoop Ghanwani, Sue Hares, Joel 695 Halpern, Pete Resnick, Benson Schliesser, T. Sridhar and Lucy Yong. 696 Igor Gashinsky deserves additional credit for highlighting some of 697 the ARP-related pain points and for clarifying the difference between 698 what the standards require and what some router vendors have actually 699 implemented in response to operator requests.
701 10. IANA Considerations 703 This document makes no request of IANA. [Note: this section should 704 be removed upon final RFC publication.] 706 11. Security Considerations 708 This document does not introduce any new security implications. The 709 security vulnerabilities in ARP 710 are well known, and this document does not change or mitigate them in 711 any way. Security considerations for Neighbor Discovery are 712 discussed in [RFC4861] and [RFC6583].
714 12. Change Log 716 12.1. Changes from -03 to -04 718 1. Numerous editorial changes in response to IESG reviews and Gen-ART 719 reviews from Joel Halpern and Manav Bhatia. 721 12.2. Changes from -02 to -03 723 1. Wordsmithing and editorial improvements in response to comments 724 from David Black, Donald Eastlake, Anoop Ghanwani, Benson 725 Schliesser, T. Sridhar and Lucy Yong. 727 12.3. Changes from -01 729 1. Wordsmithing and editorial improvements. 731 12.4. Changes from -00 733 1. Merged draft-karir-armd-datacenter-reference-arch-00.txt into 734 this document. 736 2. Added a section explaining how ND differs from ARP and the 737 implications for address resolution "pain".
739 13. Informative References 741 [RFC0792] Postel, J., "Internet Control Message Protocol", STD 5, 742 RFC 792, September 1981. 744 [RFC0826] Plummer, D., "Ethernet Address Resolution Protocol: Or 745 converting network protocol addresses to 48.bit Ethernet 746 address for transmission on Ethernet hardware", STD 37, 747 RFC 826, November 1982. 749 [RFC1122] Braden, R., "Requirements for Internet Hosts - 750 Communication Layers", STD 3, RFC 1122, October 1989. 752 [RFC1812] Baker, F., "Requirements for IP Version 4 Routers", 753 RFC 1812, June 1995. 755 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 756 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 757 September 2007. 759 [RFC5227] Cheshire, S., "IPv4 Address Conflict Detection", RFC 5227, 760 July 2008. 762 [RFC6583] Gashinsky, I., Jaeggli, J., and W. Kumari, "Operational 763 Neighbor Discovery Problems", RFC 6583, March 2012.
765 Authors' Addresses 767 Thomas Narten 768 IBM 770 Email: narten@us.ibm.com 772 Manish Karir 773 Merit Network Inc. 775 Email: mkarir@merit.edu 777 Ian Foo 778 Huawei Technologies 780 Email: Ian.Foo@huawei.com