Network Working Group                                      M. Mahalingam
Internet Draft                                                   D. Dutt
Intended Status: Experimental                                    K. Duda
Expires: February 2013                                            Arista
                                                              P. Agarwal
                                                                Broadcom
                                                              L. Kreeger
                                                                   Cisco
                                                              T. Sridhar
                                                                  VMware
                                                              M. Bursell
                                                                  Citrix
                                                               C. Wright
                                                                 Red Hat
                                                         August 22, 2012

    VXLAN: A Framework for Overlaying Virtualized Layer 2 Networks over
                            Layer 3 Networks
                  draft-mahalingam-dutt-dcops-vxlan-02.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on February 22, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Abstract

   This document describes Virtual eXtensible Local Area Network
   (VXLAN), which is used to address the need for overlay networks
   within virtualized data centers accommodating multiple tenants.  The
   scheme and the related protocols can be used in cloud service
   provider and enterprise data center networks.

Table of Contents

   1. Introduction...................................................3
      1.1. Acronyms & Definitions....................................3
   2. Conventions used in this document..............................4
   3. VXLAN Problem Statement........................................5
      3.1. Limitations imposed by Spanning Tree & VLAN Ranges........5
      3.2. Multitenant Environments..................................5
      3.3. Inadequate Table Sizes at ToR Switch......................6
   4. Virtual eXtensible Local Area Network (VXLAN)..................6
      4.1. Unicast VM to VM communication............................7
      4.2. Broadcast Communication and Mapping to Multicast..........8
      4.3. Physical Infrastructure Requirements......................9
   5. VXLAN Frame Format.............................................9
   6. VXLAN Deployment Scenarios....................................12
      6.1. Inner VLAN Tag Handling..................................16
   7. IETF Network Virtualization Overlays (nvo3) Working Group.....16
   8. Security Considerations.......................................17
   9. IANA Considerations...........................................18
   10. Conclusion...................................................18
   11. References...................................................18
      11.1. Normative References....................................18
      11.2. Informative References..................................18
   12. Acknowledgments..............................................19

1. Introduction

   Server virtualization has placed increased demands on the physical
   network infrastructure.  At a minimum, there is a need for more MAC
   address table entries throughout the switched Ethernet network, due
   to the potential attachment of hundreds of thousands of Virtual
   Machines (VMs), each with its own MAC address.

   Second, the VMs may be grouped according to their Virtual LAN
   (VLAN).  In a data center, one might need thousands of VLANs to
   partition the traffic according to the specific group to which a VM
   belongs.  The current VLAN limit of 4094 is inadequate in such
   situations.  A related requirement for virtualized environments is
   that the Layer 2 network scale across the entire data center, or
   even between data centers, for efficient allocation of compute,
   network, and storage resources.
   Using traditional approaches like the Spanning Tree Protocol (STP)
   for a loop-free topology can result in a large number of disabled
   links in such environments.

   Another type of demand being placed on data centers is the need to
   host multiple tenants, each with their own isolated network domain.
   This is not economical to realize with dedicated infrastructure, so
   network administrators opt to implement this isolation over a
   shared network.  A concomitant problem is that each tenant may
   independently assign MAC addresses and VLAN IDs, leading to
   potential duplication of these on the physical network.

   The last scenario is the case where the network operator prefers to
   use IP for interconnection of the physical infrastructure (e.g., to
   achieve multipath scalability through Equal Cost Multipath (ECMP))
   while still preserving the Layer 2 model for inter-VM communication.

   The scenarios described above lead to a requirement for an overlay
   network.  This overlay would be used to carry the MAC traffic from
   the individual VMs in an encapsulated format over a logical
   "tunnel".

   This document details a framework termed Virtual eXtensible Local
   Area Network (VXLAN) which provides such an encapsulation scheme to
   address the various requirements specified above.

1.1. Acronyms & Definitions

   ACL    - Access Control List

   ECMP   - Equal Cost Multipath

   IGMP   - Internet Group Management Protocol

   PIM    - Protocol Independent Multicast

   SPB    - Shortest Path Bridging

   STP    - Spanning Tree Protocol

   ToR    - Top of Rack

   TRILL  - Transparent Interconnection of Lots of Links

   VXLAN  - Virtual eXtensible Local Area Network

   VXLAN Segment - VXLAN Layer 2 overlay network over which VMs
          communicate

   VXLAN Overlay Network - another term for VXLAN Segment

   VXLAN Gateway - an entity which forwards traffic between VXLAN and
          non-VXLAN environments

   VTEP   - VXLAN Tunnel End Point - an entity which originates and/or
          terminates VXLAN tunnels

   VLAN   - Virtual Local Area Network

   VM     - Virtual Machine

   VNI    - VXLAN Network Identifier (or VXLAN Segment ID)

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower case uses of these words are not to be
   interpreted as carrying RFC 2119 significance.

3. VXLAN Problem Statement

   This section details the problems that VXLAN is intended to address.
   The focus is on the networking infrastructure within the data center
   and the issues related to it.

3.1. Limitations imposed by Spanning Tree & VLAN Ranges

   Current Layer 2 networks use the Spanning Tree Protocol (STP) to
   avoid loops in the network due to duplicate paths.  STP turns off
   links to avoid the replication and looping of frames.  Some data
   center operators see this as a problem with Layer 2 networks in
   general, since with STP they are effectively paying for more ports
   and links than they can actually use.  In addition, resiliency
   through multipathing is not available with the STP model.  Newer
   initiatives like TRILL and Shortest Path Bridging (SPB) have been
   proposed to help with multipathing and thus surmount some of the
   problems with STP.
   STP limitations may also be avoided by configuring servers within a
   rack to be on the same Layer 3 network, with switching happening at
   Layer 3 both within the rack and between racks.  However, this is
   incompatible with a Layer 2 model for inter-VM communication.

   Another characteristic of Layer 2 data center networks is their use
   of Virtual LANs (VLANs) to provide broadcast isolation.  A 12-bit
   VLAN ID is used in the Ethernet data frames to divide the larger
   Layer 2 network into multiple broadcast domains.  This has served
   well for data centers which require fewer than 4094 VLANs.  With the
   growing adoption of virtualization, this upper limit is seeing
   pressure.  Moreover, due to STP, several data centers limit the
   number of VLANs that can be used.  In addition, requirements for
   multitenant environments accelerate the need for larger VLAN limits,
   as discussed in Section 3.2.

3.2. Multitenant Environments

   Cloud computing involves on-demand elastic provisioning of resources
   for multitenant environments.  The most common example of cloud
   computing is the public cloud, where a cloud service provider offers
   these elastic services to multiple customers/tenants over the same
   physical infrastructure.

   Isolation of network traffic by tenant could be done via Layer 2 or
   Layer 3 networks.  For Layer 2 networks, VLANs are often used to
   segregate traffic - so a tenant could be identified by its own VLAN,
   for example.  Due to the large number of tenants that a cloud
   provider might service, the 4094 VLAN limit is often inadequate.  In
   addition, there is often a need for multiple VLANs per tenant, which
   exacerbates the issue.

   Another use case is cross-pod expansion.  A pod typically consists
   of one or more racks of servers with associated network and storage
   connectivity.  Tenants may start off on a pod and, due to expansion,
   require servers/VMs on other pods, especially when tenants on the
   other pods are not fully utilizing all their resources.  This use
   case requires a "stretched" Layer 2 environment connecting the
   individual servers/VMs.

   Layer 3 networks are not a complete solution for multitenancy
   either.  Two tenants might use the same set of Layer 3 addresses
   within their networks, which requires the cloud provider to provide
   isolation in some other form.  Further, requiring all tenants to use
   IP excludes customers relying on direct Layer 2 or non-IP Layer 3
   protocols for inter-VM communication.

3.3. Inadequate Table Sizes at ToR Switch

   Today's virtualized environments place additional demands on the MAC
   address tables of Top of Rack (ToR) switches which connect to the
   servers.  Instead of just one MAC address per server link, the ToR
   now has to learn the MAC addresses of the individual VMs (which
   could range in the 100s per server).  This is necessary because
   traffic between the VMs and the rest of the physical network
   traverses the link between the server and the switch.  A typical ToR
   switch could connect to 24 or 48 servers, depending upon the number
   of its server-facing ports.  A data center might consist of several
   racks, so each ToR switch would need to maintain an address table
   for the communicating VMs across the various physical servers.  This
   places a much larger demand on the table capacity compared to
   non-virtualized environments.

   If the table overflows, the switch may stop learning new addresses
   until idle entries age out, leading to significant flooding of
   subsequent unknown destination frames.
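
   As a rough, non-normative illustration of this scaling pressure, the
   short Python sketch below multiplies out the figures used in this
   section.  The number of racks and the per-server VM count are
   assumptions chosen only for the example and are not requirements of
   this document.

      # Back-of-the-envelope MAC table demand at a ToR switch.  The
      # rack count and the VMs-per-server figure are illustrative
      # assumptions only.

      SERVERS_PER_TOR = 48    # server-facing ports on a typical ToR
      VMS_PER_SERVER = 100    # "could range in the 100s per server"
      RACKS = 20              # assumed number of racks

      local_macs = SERVERS_PER_TOR * VMS_PER_SERVER  # 4,800 below one ToR
      total_vms = local_macs * RACKS                 # 96,000 in total

      # Because local VMs also converse with VMs in other racks, the
      # ToR's table demand can approach the data-center-wide VM count
      # rather than just the locally attached one.
      print(local_macs, total_vms)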

4. Virtual eXtensible Local Area Network (VXLAN)

   VXLAN (Virtual eXtensible Local Area Network) addresses the above
   requirements of the Layer 2 and Layer 3 data center network
   infrastructure in the presence of VMs in a multitenant environment.
   It runs over the existing networking infrastructure and provides a
   means to "stretch" a Layer 2 network.  In short, VXLAN is a Layer 2
   overlay scheme over a Layer 3 network.  Each overlay is termed a
   VXLAN segment.  Only VMs within the same VXLAN segment can
   communicate with each other.  Each VXLAN segment is scoped through a
   24-bit segment ID, hereafter termed the VXLAN Network Identifier
   (VNI).  This allows up to 16M VXLAN segments to coexist within the
   same administrative domain.

   The VNI scopes the inner MAC frame originated by the individual VM.
   Thus, overlapping MAC addresses may exist across segments, but
   traffic never "crosses over", since it is isolated using the VNI
   qualifier.  This qualifier is in an outer header envelope over the
   inner MAC frame originated by the VM.  In the following sections,
   the term "VXLAN segment" is used interchangeably with the term
   "VXLAN overlay network".

   Due to this encapsulation, VXLAN could also be termed a tunneling
   scheme to overlay Layer 2 networks on top of Layer 3 networks.  The
   tunnels are stateless, so each frame is encapsulated according to a
   set of rules.  The end point of the tunnel (VTEP) discussed in the
   following sections is located within the hypervisor on the server
   which houses the VM.  Thus, the VNI and the VXLAN-related tunnel and
   outer header encapsulation are known only to the VTEP - the VM never
   sees them (see Figure 1).  Note that VTEPs could also be on a
   physical switch or physical server and could be implemented in
   software or hardware.  One use case where the VTEP is a physical
   switch is discussed in Section 6 on VXLAN deployment scenarios.

   The following sections discuss typical traffic flow scenarios in a
   VXLAN environment using one type of control scheme - data plane
   learning.  Here, the association of the VM's MAC address to the
   VTEP's IP address is discovered via source learning.  Multicast is
   used for carrying unknown destination, broadcast, and multicast
   frames.

   In addition to a learning-based control plane, other schemes are
   possible for distributing the VTEP IP to VM MAC mapping information.
   Options could include a central directory-based lookup by the
   individual VTEPs, distribution of this mapping information to the
   VTEPs by the central directory, and so on.  These are sometimes
   characterized as pull and push models, respectively.  This draft
   will focus on the data plane learning scheme as the control plane
   for VXLAN.

4.1. Unicast VM to VM communication

   Consider a VM within a VXLAN overlay network.  This VM is unaware of
   VXLAN.  To communicate with a VM on a different host, it sends a MAC
   frame destined to the target as before.  The VTEP on the physical
   host looks up the VNI with which this VM is associated.  It then
   determines if the destination MAC address is on the same segment.
   If so, an outer header comprising an outer MAC header, an outer IP
   header, an outer UDP header, and the VXLAN header (see Figure 1 in
   Section 5 for the frame format) is inserted in front of the original
   MAC frame.  The encapsulated packet is then transmitted to the
   destination IP address, which is the IP address of the remote VTEP
   connecting the destination VM (as represented by the inner
   destination MAC address).

   Upon reception, the remote VTEP verifies that the VNI is a valid one
   and is used by the destination VM.  If so, the packet is stripped of
   its outer header and passed on to the destination VM.  The
   destination VM never knows about the VNI or that the frame was
   transported with a VXLAN encapsulation.

   In addition to forwarding the packet to the destination VM, the
   remote VTEP learns the mapping from the inner source MAC address to
   the outer source IP address.  It stores this mapping in a table so
   that when the destination VM sends a response packet, there is no
   need for an "unknown destination" flooding of the response packet.

   Determining the MAC address of the destination VM prior to
   transmission by the source VM is performed as in non-VXLAN
   environments, except as described below.  Broadcast frames are used
   but are encapsulated within multicast packets, as detailed in the
   next section.

4.2. Broadcast Communication and Mapping to Multicast

   Consider the VM on the source host attempting to communicate with
   the destination VM using IP.  Assuming that they are both on the
   same subnet, the VM sends out an ARP broadcast frame.  In a non-
   VXLAN environment, this frame would be sent out as a MAC broadcast,
   which would be flooded by all switches carrying that VLAN.

   With VXLAN, a header including the VXLAN VNI is inserted at the
   beginning of the packet, along with the IP header and UDP header.
   This broadcast packet is then sent out to the IP multicast group on
   which that VXLAN overlay network is realized.

   To effect this, we need to have a mapping between the VXLAN VNI and
   the IP multicast group that it will use.  This mapping is done at
   the management layer and provided to the individual VTEPs through a
   management channel.  Using this mapping, the VTEP can provide IGMP
   membership reports to the upstream switch/router to join/leave the
   VXLAN-related IP multicast groups as needed.  This enables pruning
   of the leaf nodes for specific multicast addresses, based on whether
   a member using the specific multicast address is present on this
   host.  In addition, use of multicast routing protocols like Protocol
   Independent Multicast - Sparse Mode (PIM-SM) [RFC4601] will provide
   efficient multicast trees within the Layer 3 network.

   The VTEP will use (*,G) joins.  This is needed because the set of
   VXLAN tunnel sources is unknown and may change often, as VMs come up
   or go down across different hosts.  A side note here is that since
   each VTEP can act as both a source and a destination for multicast
   packets, a protocol like Bidirectional PIM (BIDIR-PIM) [RFC5015]
   would be more efficient.

   The destination VM sends a standard ARP response using IP unicast.
   This frame is encapsulated and sent back to the VTEP connecting the
   originating VM, using unicast VXLAN encapsulation over IP.  This is
   possible since the mapping of the ARP response's destination MAC
   address to the VXLAN tunnel end point IP address was learned earlier
   through the ARP request.

   Another point to note is that multicast frames and "unknown MAC
   destination" frames are also sent using the multicast tree, similar
   to the broadcast frames.
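
   The data plane learning and flooding behavior described in Sections
   4.1 and 4.2 can be summarized by the following non-normative Python
   sketch.  The class and field names are illustrative assumptions; a
   real VTEP would also handle entry aging, the IGMP membership reports
   described above, and the encapsulation itself.

      # Non-normative sketch of VTEP data plane learning.
      class Vtep:
          def __init__(self, local_ip, vni_to_mcast_group):
              self.local_ip = local_ip
              # VNI -> IP multicast group, provisioned by the
              # management layer over a management channel
              self.vni_to_mcast_group = vni_to_mcast_group
              # (VNI, inner MAC) -> remote VTEP IP, learned from
              # received (decapsulated) frames
              self.mac_to_vtep = {}

          def outer_destination(self, vni, inner_dst_mac):
              """Choose the outer IP destination for an inner frame."""
              remote = self.mac_to_vtep.get((vni, inner_dst_mac))
              if remote is not None:
                  return remote                    # known: unicast to that VTEP
              return self.vni_to_mcast_group[vni]  # unknown destination,
                                                   # broadcast, or multicast

          def on_decapsulation(self, vni, inner_src_mac, outer_src_ip):
              """Learn the inner source MAC to outer source IP mapping."""
              self.mac_to_vtep[(vni, inner_src_mac)] = outer_src_ip

      vtep = Vtep("10.0.0.1", {22: "239.1.1.22"})
      vtep.on_decapsulation(22, "00:aa:bb:cc:dd:02", "10.0.0.2")
      assert vtep.outer_destination(22, "00:aa:bb:cc:dd:02") == "10.0.0.2"
      assert vtep.outer_destination(22, "00:aa:bb:cc:dd:99") == "239.1.1.22"

   The ARP exchange above is one instance of this pattern: the request
   is flooded via the multicast group for the VNI, and the mapping
   learned from it allows the response to be returned as unicast.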

4.3. Physical Infrastructure Requirements

   When IP multicast is used within the network infrastructure, a
   multicast routing protocol like PIM-SM [RFC4601] can be used by the
   individual Layer 3 IP routers/switches within the network.  This is
   used to build efficient multicast forwarding trees so that multicast
   frames are only sent to those hosts which have requested to receive
   them.

   Similarly, there is no requirement that the actual network
   connecting the source VM and the destination VM be a Layer 3 network
   - VXLAN can also work over Layer 2 networks.  In either case,
   efficient multicast replication within the Layer 2 network can be
   achieved using IGMP snooping [RFC4541].

5. VXLAN Frame Format

   The VXLAN frame format is shown below.  Parsing this from the
   bottom, there is an inner MAC frame with its own Ethernet header
   carrying the source and destination MAC addresses, the Ethernet
   type, and an optional VLAN tag.  One use case of the inner VLAN tag
   is VM-based VLAN tagging in a virtualized environment.  See Section
   6.1 for further details of inner VLAN tag handling.

   The inner MAC frame is encapsulated with the following four headers
   (starting from the innermost header):

   O VXLAN Header:  This is an 8-byte field which has:

      o Flags (8 bits), where the I flag MUST be set to 1 for a valid
        VXLAN Network Identifier (VNI).  The remaining 7 bits
        (designated "R") are reserved fields and MUST be set to zero.

      o VXLAN Segment ID/VXLAN Network Identifier (VNI) - this is a
        24-bit value used to designate the individual VXLAN overlay
        network on which the communicating VMs are situated.  VMs in
        different VXLAN overlay networks cannot communicate with each
        other.

      o Reserved fields (24 bits and 8 bits) - MUST be set to zero.

   O Outer UDP Header:  This is the outer UDP header with a source port
     provided by the VTEP and the destination port being a well-known
     UDP port to be obtained by IANA assignment.  It is recommended
     that the source port be a hash of the inner Ethernet frame's
     headers.  This is to enable a level of entropy for ECMP/load
     balancing of the VM-to-VM traffic across the VXLAN overlay.

     The UDP checksum field SHOULD be transmitted as zero.  When a
     packet is received with a UDP checksum of zero, it MUST be
     accepted for decapsulation.  Optionally, if the encapsulating end
     point includes a non-zero UDP checksum, it MUST be correctly
     calculated across the entire packet, including the IP header, UDP
     header, VXLAN header, and encapsulated MAC frame.  When a
     decapsulating end point receives a packet with a non-zero
     checksum, it MAY choose to verify the checksum value.  If it
     chooses to perform such verification and the verification fails,
     the packet MUST be dropped.  If the decapsulating destination
     chooses not to perform the verification, or performs it
     successfully, the packet MUST be accepted for decapsulation.

   O Outer IP Header:  This is the outer IP header with the source IP
     address indicating the IP address of the VTEP over which the
     communicating VM (as represented by the inner source MAC address)
     is running.  The destination IP address is the IP address of the
     VTEP connecting the communicating VM as represented by the inner
     destination MAC address.

   O Outer Ethernet Header (example):  Figure 1 is an example of an
     inner Ethernet frame encapsulated within an outer Ethernet + IP +
     UDP + VXLAN header.  The outer destination MAC address in this
     frame may be the address of the target VTEP or of an intermediate
     Layer 3 router.  The outer VLAN tag is optional.  If present, it
     may be used for delineating VXLAN traffic on the LAN.

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1

   Outer Ethernet Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |             Outer Destination MAC Address                     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Outer Destination MAC Address | Outer Source MAC Address      |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                Outer Source MAC Address                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |Optional Ethertype = C-Tag [802.1Q]| Outer.VLAN Tag Information|
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Ethertype 0x0800              |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Outer IP Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |Version|  IHL  |Type of Service|          Total Length         |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |         Identification        |Flags|      Fragment Offset    |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |  Time to Live |    Protocol   |         Header Checksum       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                       Outer Source Address                    |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                    Outer Destination Address                  |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Outer UDP Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |       Source Port = xxxx      |    Dest Port = VXLAN Port     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |           UDP Length          |         UDP Checksum          |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

       0                   1                   2                   3
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1

   VXLAN Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |R|R|R|R|I|R|R|R|                   Reserved                    |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                VXLAN Network Identifier (VNI) |   Reserved    |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Inner Ethernet Header:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |             Inner Destination MAC Address                     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Inner Destination MAC Address | Inner Source MAC Address      |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |                Inner Source MAC Address                       |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |Optional Ethertype = C-Tag [802.1Q]| Inner.VLAN Tag Information|
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Payload:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      | Ethertype of Original Payload |                               |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+                               |
      |                  Original Ethernet Payload                    |
      |                                                               |
      | (Note that the original Ethernet Frame's FCS is not included) |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

   Frame Check Sequence:
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      |   New FCS (Frame Check Sequence) for Outer Ethernet Frame     |
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

                      Figure 1 VXLAN Frame Format

   The frame format above shows tunneling of Ethernet frames using IPv4
   for transport.  Use of VXLAN with IPv6 transport will be addressed
   in a future version of this draft.
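
   The header layout and the source port recommendation above can be
   illustrated with the following non-normative Python sketch.  The
   hash function and the source port range used here are illustrative
   choices only, and the destination UDP port is not shown since it is
   pending IANA assignment.

      import struct
      import zlib

      VXLAN_FLAG_I = 0x08   # the "I" bit within |R|R|R|R|I|R|R|R|

      def pack_vxlan_header(vni):
          """Build the 8-byte VXLAN header: 8 flag bits, 24 reserved
          bits, the 24-bit VNI, and 8 reserved bits (reserved bits are
          sent as zero)."""
          if not 0 <= vni < (1 << 24):
              raise ValueError("VNI must fit in 24 bits")
          return struct.pack("!II", VXLAN_FLAG_I << 24, vni << 8)

      def unpack_vxlan_header(header):
          """Return the VNI if the I flag is set; otherwise reject."""
          word0, word1 = struct.unpack("!II", header[:8])
          if not (word0 >> 24) & VXLAN_FLAG_I:
              raise ValueError("I flag not set - no valid VNI")
          return word1 >> 8

      def udp_source_port(inner_frame, low=49152, high=65535):
          """Hash the inner Ethernet frame's headers into a UDP source
          port to provide entropy for ECMP/load balancing.  The port
          range and the use of CRC32 over the first bytes of the inner
          frame are illustrative choices."""
          return low + zlib.crc32(inner_frame[:34]) % (high - low + 1)

      assert unpack_vxlan_header(pack_vxlan_header(22)) == 22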

6. VXLAN Deployment Scenarios

   VXLAN is typically deployed in data centers on virtualized hosts,
   which may be spread across multiple racks.  The individual racks may
   be part of different Layer 3 networks, or they could all be in a
   single Layer 2 network.  The VXLAN segments/overlay networks are
   overlaid on top of these Layer 2 or Layer 3 networks.

   Consider Figure 2 below, depicting two virtualized servers attached
   to a Layer 3 infrastructure.  The servers could be in the same rack,
   in different racks, or potentially across data centers within the
   same administrative domain.  There are four VXLAN overlay networks,
   identified by the VNIs 22, 34, 74, and 98.  Consider the case of
   VM1-1 on Server 1 and VM2-4 on Server 2, which are on the same VXLAN
   overlay network identified by VNI 22.  The VMs do not know about the
   overlay networks and the transport method, since the encapsulation
   and decapsulation happen transparently at the VTEPs on Servers 1 and
   2.  The other overlay networks and the corresponding VMs are: VM1-2
   on Server 1 and VM2-1 on Server 2, both on VNI 34; VM1-3 on Server 1
   and VM2-2 on Server 2, on VNI 74; and finally VM1-4 on Server 1 and
   VM2-3 on Server 2, on VNI 98.

        +------------+-------------+
        |        Server 1          |
        | +----+----+  +----+----+ |
        | |VM1-1    |  |VM1-2    | |
        | |VNI 22   |  |VNI 34   | |
        | |         |  |         | |
        | +---------+  +---------+ |
        |                          |
        | +----+----+  +----+----+ |
        | |VM1-3    |  |VM1-4    | |
        | |VNI 74   |  |VNI 98   | |
        | |         |  |         | |
        | +---------+  +---------+ |
        | Hypervisor VTEP (IP1)    |
        +--------------------------+
                      |
                      |
                      |
                      |   +-------------+
                      |   |   Layer 3   |
                      |---|   Network   |
                          |             |
                          +-------------+
                                 |
                                 |
                      +----------+
                      |
        +------------+-------------+
        |        Server 2          |
        | +----+----+  +----+----+ |
        | |VM2-1    |  |VM2-2    | |
        | |VNI 34   |  |VNI 74   | |
        | |         |  |         | |
        | +---------+  +---------+ |
        |                          |
        | +----+----+  +----+----+ |
        | |VM2-3    |  |VM2-4    | |
        | |VNI 98   |  |VNI 22   | |
        | |         |  |         | |
        | +---------+  +---------+ |
        | Hypervisor VTEP (IP2)    |
        +--------------------------+

      Figure 2 VXLAN Deployment - VTEPs across a Layer 3 Network

   One deployment scenario is where the tunnel termination point is a
   physical server which understands VXLAN.  Another scenario is where
   nodes on a VXLAN overlay network need to communicate with nodes on
   legacy networks, which could be VLAN based.  These nodes may be
   physical nodes or virtual machines.  To enable this communication, a
   network can include VXLAN gateways (see Figure 3 below, with a
   switch acting as a VXLAN gateway) which forward traffic between
   VXLAN and non-VXLAN environments.

   Consider Figure 3 for the following discussion.  For incoming frames
   on the VXLAN connected interface, the gateway strips out the VXLAN
   header and forwards the frame to a physical port based on the
   destination MAC address of the inner Ethernet frame.  Decapsulated
   frames with an inner VLAN ID SHOULD be discarded unless configured
   explicitly to be passed on to the non-VXLAN interface.
   In the reverse direction, incoming frames on the non-VXLAN
   interfaces are mapped to a specific VXLAN overlay network based on
   the VLAN ID in the frame.  Unless configured explicitly to be passed
   on in the encapsulated VXLAN frame, this VLAN ID is removed before
   the frame is encapsulated for VXLAN.

   These gateways, which provide VXLAN tunnel termination functions,
   could be ToR/access switches or switches higher up in the data
   center network topology - e.g., core or even WAN edge devices.  The
   last case (WAN edge) could involve a Provider Edge (PE) router which
   terminates VXLAN tunnels in a hybrid cloud environment.  Note that
   in all these instances, the gateway functionality could be
   implemented in software or hardware.

    +---+-----+---+                                  +---+-----+---+
    |  Server 1   |                                  |  Non VXLAN  |
   (VXLAN enabled)<-----+                      +---->|   server    |
    +-------------+     |                      |     +-------------+
                        |                      |
    +---+-----+---+     |                      |     +---+-----+---+
    |  Server 2   |     |   +---+-----+---+    |     |  Non VXLAN  |
   (VXLAN enabled)<-----+   |Switch acting|    +---->|   server    |
    +-------------+     |---|  as VXLAN   |----+     +-------------+
                        |   |   Gateway   |
    +---+-----+---+     |   +-------------+
    |  Server 3   |     |
   (VXLAN enabled)<-----+
    +-------------+     |
                        |
    +---+-----+---+     |
    |  Server 4   |     |
   (VXLAN enabled)<-----+
    +-------------+

              Figure 3 VXLAN Deployment - VXLAN Gateway

6.1. Inner VLAN Tag Handling

   Inner VLAN tag handling in the VTEP and the VXLAN gateway should
   conform to the following:

   Decapsulated VXLAN frames with an inner VLAN tag SHOULD be discarded
   unless configured otherwise.  On the encapsulation side, a VTEP
   SHOULD NOT include an inner VLAN tag on tunnel packets unless
   configured otherwise.  When a VLAN-tagged packet is a candidate for
   VXLAN tunneling, the encapsulating VTEP SHOULD strip the VLAN tag
   unless configured otherwise.
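
   The inner VLAN tag rules above, combined with the gateway's mapping
   between VLAN IDs and VNIs described in Section 6, are summarized in
   the following non-normative Python sketch.  The class, the
   VLAN-to-VNI table, and the single configuration flag are
   illustrative assumptions rather than part of this specification.

      # Non-normative sketch of gateway VLAN <-> VNI handling.
      class VxlanGateway:
          def __init__(self, vlan_to_vni, pass_inner_vlan=False):
              # non-VXLAN side VLAN ID -> VNI, and the reverse mapping
              self.vlan_to_vni = vlan_to_vni
              self.vni_to_vlan = {v: k for k, v in vlan_to_vni.items()}
              # models the "unless configured otherwise" knob
              self.pass_inner_vlan = pass_inner_vlan

          def from_vlan_side(self, frame, vlan_id):
              """Map an incoming VLAN frame to a VNI; strip the tag
              unless explicitly configured to carry it."""
              vni = self.vlan_to_vni[vlan_id]
              inner_tag = vlan_id if self.pass_inner_vlan else None
              return vni, inner_tag, frame

          def from_vxlan_side(self, vni, inner_vlan_tag, frame):
              """Decapsulated frames carrying an inner VLAN tag are
              discarded unless explicitly configured otherwise."""
              if inner_vlan_tag is not None and not self.pass_inner_vlan:
                  return None                  # SHOULD be discarded
              return self.vni_to_vlan[vni], frame

      gw = VxlanGateway({100: 22})
      assert gw.from_vlan_side(b"frame", 100) == (22, None, b"frame")
      assert gw.from_vxlan_side(22, 5, b"frame") is None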

7. IETF Network Virtualization Overlays (nvo3) Working Group

   The IETF has recently chartered the Network Virtualization Overlays
   (nvo3) Working Group (WG) under the Routing Area.  The charter
   [nvo3-Charter] (http://datatracker.ietf.org/wg/nvo3/charter/)
   indicates that the WG will consider multitenancy approaches residing
   at the network layer.  The WG will provide a problem statement, an
   architectural framework, and requirements for the control and data
   planes for such network virtualization overlay schemes.  Operations,
   Administration and Management (OA&M) requirements for nvo3 are also
   within the scope of the WG.  The active Internet-Drafts being
   considered by the working group are listed at
   http://datatracker.ietf.org/wg/nvo3/.  This draft on VXLAN addresses
   the requirements outlined in the nvo3 WG charter.  It outlines the
   data plane requirements as well as the method used to establish the
   forwarding entries in each VTEP.

8. Security Considerations

   Traditionally, Layer 2 networks can only be attacked from "within"
   by rogue endpoints - either by having inappropriate access to a LAN
   and snooping on traffic, by injecting spoofed packets to "take over"
   another MAC address, or by flooding and causing denial of service.
   A MAC-over-IP mechanism for delivering Layer 2 traffic significantly
   extends this attack surface.  This can happen through rogue
   endpoints injecting themselves into the network by subscribing to
   one or more of the multicast groups that carry broadcast traffic for
   VXLAN segments, and also by sourcing MAC-over-UDP frames into the
   transport network to inject spurious traffic, possibly to hijack MAC
   addresses.

   This proposal does not, at this time, incorporate specific measures
   against such attacks, relying instead on other traditional
   mechanisms layered on top of IP.  This section sketches out some
   possible approaches to security in the VXLAN environment.

   Traditional Layer 2 attacks by rogue end points can be mitigated by
   limiting the management and administrative scope of who deploys and
   manages VMs/gateways in a VXLAN environment.  In addition, such
   administrative measures may be augmented by schemes like 802.1X for
   admission control of individual end points.  Also, the UDP-based
   encapsulation of VXLAN makes it possible to leverage the 5-tuple-
   based ACL (Access Control List) functionality in physical switches.

   Tunneled traffic over the IP network can be secured with traditional
   security mechanisms like IPsec that authenticate and optionally
   encrypt VXLAN traffic.  This will, of course, need to be coupled
   with an authentication infrastructure for authorized endpoints to
   obtain and distribute credentials.

   VXLAN overlay networks are designated and operated over the existing
   LAN infrastructure.  To ensure that VXLAN end points and their VTEPs
   are authorized on the LAN, it is recommended that a VLAN be
   designated for VXLAN traffic and that the servers/VTEPs send VXLAN
   traffic over this VLAN to provide a measure of security.

   In addition, VXLAN requires proper mapping of VNIs and VM membership
   in these overlay networks.  It is expected that this mapping be done
   and communicated to the management entity on the VTEP and the
   gateways using existing secure methods.

9. IANA Considerations

   A well-known destination UDP port for VXLAN will be requested from
   IANA.

10. Conclusion

   This document has introduced VXLAN, an overlay framework for
   transporting MAC frames generated by VMs in isolated Layer 2
   networks over an IP network.  Through this scheme, it is possible to
   stretch Layer 2 networks across Layer 3 networks.  This finds use in
   virtualized data center environments where Layer 2 networks may need
   to span across the entire data center, or even between data centers.

11. References

11.1. Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

11.2. Informative References

   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
              "Protocol Independent Multicast - Sparse Mode (PIM-SM):
              Protocol Specification (Revised)", RFC 4601, August 2006.

   [RFC5015]  Handley, M., Kouvelas, I., Speakman, T., and L. Vicisano,
              "Bidirectional Protocol Independent Multicast
              (BIDIR-PIM)", RFC 5015, October 2007.

   [RFC4541]  Christensen, M., Kimball, K., and F. Solensky,
              "Considerations for Internet Group Management Protocol
              (IGMP) and Multicast Listener Discovery (MLD) Snooping
              Switches", RFC 4541, May 2006.

   [nvo3-Charter]  Network Virtualization Overlays (nvo3) charter,
              http://datatracker.ietf.org/wg/nvo3/charter/

12. Acknowledgments

   The authors wish to thank Ajit Sanzgiri for contributions to the
   Security Considerations section and for editorial inputs, and
   Joseph Cheng, Margaret Petrus, and Milin Desai for their editorial
   reviews, inputs, and comments.

Authors' Addresses

   Mallik Mahalingam

   Email: mallik_mahalingam@yahoo.com

   Dinesh G. Dutt

   Email: ddutt.ietf@hobbesdutt.com

   Kenneth Duda
   Arista Networks
   5470 Great America Parkway
   Santa Clara, CA 95054

   Email: kduda@aristanetworks.com

   Puneet Agarwal
   Broadcom Corporation
   3151 Zanker Road
   San Jose, CA 95134

   Email: pagarwal@broadcom.com

   Lawrence Kreeger
   Cisco Systems, Inc.
   170 W. Tasman Avenue
   Palo Alto, CA 94304

   Email: kreeger@cisco.com

   T. Sridhar
   VMware Inc.
   3401 Hillview
   Palo Alto, CA 94304

   Email: tsridhar@vmware.com

   Mike Bursell
   Citrix Systems Research & Development Ltd.
   Building 101
   Cambridge Science Park
   Milton Road
   Cambridge CB4 0FY
   United Kingdom

   Email: mike.bursell@citrix.com

   Chris Wright
   Red Hat Inc.
   1801 Varsity Drive
   Raleigh, NC 27606

   Email: chrisw@redhat.com