idnits 2.17.1

draft-kreeger-nvo3-overlay-cp-04.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  ** The document seems to lack an IANA Considerations section.  (See
     Section 2.2 of https://www.ietf.org/id-info/checklist for how to
     handle the case when there are no actions for IANA.)

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line does
     not match the current year

  -- The document date (June 14, 2013) is 3941 days in the past.  Is this
     intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Outdated reference: A later version (-09) exists of
     draft-ietf-nvo3-framework-02

  == Outdated reference: A later version (-04) exists of
     draft-ietf-nvo3-overlay-problem-statement-03

     Summary: 1 error (**), 0 flaws (~~), 3 warnings (==), 1 comment (--).

     Run idnits with the --verbose option for more detailed information
     about the items above.

--------------------------------------------------------------------------------

Internet Engineering Task Force                               L. Kreeger
Internet-Draft                                                     Cisco
Intended status: Informational                                   D. Dutt
Expires: December 16, 2013                              Cumulus Networks
                                                               T. Narten
                                                                     IBM
                                                                D. Black
                                                                     EMC
                                                                      M.
                                                               Sridharan
                                                               Microsoft
                                                           June 14, 2013


     Network Virtualization Overlay Control Protocol Requirements
                    draft-kreeger-nvo3-overlay-cp-04

Abstract

   The document "Problem Statement: Overlays for Network Virtualization"
   discusses the need for network virtualization using overlay networks
   in highly virtualized data centers.  The problem statement outlines a
   need for control protocols to facilitate running these overlay
   networks.  This document outlines the high-level requirements to be
   fulfilled by the control protocols related to building and managing
   the mapping tables and other state information used by the Network
   Virtualization Edge to transmit encapsulated packets across the
   underlying network.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on December 16, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.
   Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Control Plane Protocol Functionality
     3.1.  Inner to Outer Address Mapping
     3.2.  Underlying Network Multi-Destination Delivery Address(es)
     3.3.  VN Connect/Disconnect Notification
     3.4.  VN Name to VN ID Mapping
   4.  Control Plane Characteristics
   5.  Security Considerations
   6.  Acknowledgements
   7.  Informative References
   Authors' Addresses

1.  Introduction

   "Problem Statement: Overlays for Network Virtualization"
   [I-D.ietf-nvo3-overlay-problem-statement] discusses the need for
   network virtualization using overlay networks in highly virtualized
   data centers and provides a general motivation for building such
   networks.  "Framework for DC Network Virtualization"
   [I-D.ietf-nvo3-framework] provides a framework for discussing overlay
   networks generally and the various components that must work together
   in building such systems.  The reader is assumed to be familiar with
   both documents.

   Section 4.5 of [I-D.ietf-nvo3-overlay-problem-statement] describes
   three separate work areas that fall under the general category of a
   control protocol for NVO3.
   This document focuses entirely on those
   aspects of the control protocol related to building and distributing
   the mapping tables an NVE uses to tunnel traffic from one VM to
   another.  Specifically, this document focuses on work areas 1 and 2
   given in Section 4.5 of
   [I-D.ietf-nvo3-overlay-problem-statement].  Work areas 1 and 2 cover
   the interaction between an NVE and the Network Virtualization
   Authority (NVA) (work area 2) and the operation of the NVA itself
   (work area 1).  Requirements related to the interaction between a
   hypervisor and an NVE when the two entities reside on separate
   physical devices (work area 3) are covered in
   [I-D.kreeger-nvo3-hypervisor-nve-cp-req].

2.  Terminology

   This document uses the same terminology as found in
   [I-D.ietf-nvo3-framework].  This section defines additional
   terminology used by this document.

   Network Service Appliance:  A stand-alone physical device or a
      virtual device that provides a network service, such as a
      firewall, load balancer, etc.  Such appliances may embed Network
      Virtualization Edge (NVE) functionality within them in order to
      operate more efficiently as part of a virtualized network.

   VN Alias:  A string name for a VN as used by administrators and
      customers to name a specific VN.  A VN Alias is a human-usable
      string that can be listed in contracts, customer forms, email,
      configuration files, etc. and that can be communicated easily
      vocally (e.g., over the phone).  A VN Alias is independent of the
      underlying technology used to implement a VN and will generally
      not be carried in protocol fields of control protocols used in
      virtual networks.  Rather, a VN Alias will be mapped into a VN
      Name where precision is required.

   VN Name:  A globally unique identifier for a VN suitable for use
      within network protocols.
      A VN Name will usually be paired with a
      VN Alias, with the VN Alias used by humans as a shorthand way to
      name and identify a specific VN.  A VN Name should have a compact
      representation to minimize protocol overhead where a VN Name is
      carried in a protocol field.  A Universally Unique Identifier
      (UUID), as discussed in RFC 4122, may work well because it is
      both compact and fixed in size and can be generated locally with
      a very high likelihood of global uniqueness.

   VN ID:  A unique and compact identifier for a VN within the scope of
      a specific NVO3 administrative domain.  It will generally be more
      efficient to carry VN IDs as fields in control protocols than VN
      Names or VN Aliases.  There is a one-to-one mapping between a VN
      Name and a VN ID within an NVO3 Administrative Domain.  Depending
      on the technology used to implement an overlay network, the VN ID
      could be used as the VN Context in the data plane, or would need
      to be mapped to a locally significant context ID.

3.  Control Plane Protocol Functionality

   The NVO3 problem statement [I-D.ietf-nvo3-overlay-problem-statement]
   discusses the need for a control plane protocol (or protocols) to
   populate each NVE with the state needed to perform its functions.

   In one common scenario, an NVE provides overlay encapsulation/
   decapsulation packet forwarding services to Tenant Systems that are
   co-resident with the NVE on the same End Device (e.g., when the NVE
   is embedded within a hypervisor or a Network Service Appliance).
   Alternatively, a Tenant System may use an externally connected NVE
   (e.g., an NVE residing on a physical network switch connected to the
   hypervisor via an access network).  The latter scenario is not
   discussed in this document, but is covered in
   [I-D.kreeger-nvo3-hypervisor-nve-cp-req].
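   The VN Name definition above suggests an RFC 4122 UUID as one
   workable encoding.  The following is a minimal illustrative sketch,
   not part of any specified protocol; the variable names are
   hypothetical and Python is used only for illustration:

```python
import uuid

# Hypothetical illustration: generate a VN Name for a new VN and pair
# it with its human-readable VN Alias.  An RFC 4122 random (version 4)
# UUID is compact (128 bits), fixed in size, and can be generated
# locally with a very high likelihood of global uniqueness.
vn_alias = "customer-42-web-tier"   # human-usable name (contracts, email)
vn_name = uuid.uuid4()              # globally unique, protocol-friendly

# A fixed-size 16-byte representation suitable for a protocol field:
wire_bytes = vn_name.bytes
assert len(wire_bytes) == 16
```

   The compact fixed-size form is what would be carried in protocol
   fields, while the VN Alias stays in human-facing contexts such as
   configuration files and contracts.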
   The following figures show examples of scenarios in which the NVE is
   co-resident within the same End Device as the Tenant System connected
   to a given VN.

                 Hypervisor
      +-----------------------+
      | +--+   +-------+---+  |
      | |VM|---|       |   |  |
      | +--+   |Virtual|NVE|----- Underlying
      | +--+   |Switch |   |  |   Network
      | |VM|---|       |   |  |
      | +--+   +-------+---+  |
      +-----------------------+

              Hypervisor with an Embedded NVE.

                                 Figure 1

       Network Service Appliance
    +---------------------------+
    | +------------+   +-----+  |
    | |Net Service |---|     |  |
    | |Instance    |   |     |  |
    | +------------+   | NVE |------ Underlying
    | +------------+   |     |  |    Network
    | |Net Service |---|     |  |
    | |Instance    |   |     |  |
    | +------------+   +-----+  |
    +---------------------------+

   Network Service Appliance (physical or virtual) with an Embedded NVE.

                                 Figure 2

   To support an NVE, a control plane protocol is necessary to provide
   an NVE with the information it needs to maintain the internal state
   necessary to carry out its forwarding functions, as explained in
   detail below.

   1.  An NVE maintains a per-VN table of mappings from Tenant System
       (inner) addresses to Underlying Network (outer) addresses of
       remote NVEs.

   2.  An NVE maintains per-VN state for delivering tenant multicast
       and broadcast packets to other Tenant Systems.  Such state could
       include a list of multicast addresses and/or unicast addresses
       on the Underlying Network for the NVEs associated with a
       particular VN.

   3.  End Devices (such as a Hypervisor or Network Service Appliance)
       utilizing an external NVE need to "attach to" and "detach from"
       an NVE.  Specifically, a mechanism is needed to notify an NVE
       when a Tenant System attaches to or detaches from a specific VN.
       Such a mechanism would provide the NVE with the information it
       needs to provide service to a particular Tenant System.
       The details of such a mechanism are out of scope for
       this document and are covered in
       [I-D.kreeger-nvo3-hypervisor-nve-cp-req].

   4.  An NVE needs a mapping from each unique VN Name to the VN
       Context value used within encapsulated data packets within the
       administrative domain in which the VN is instantiated.

3.1.  Inner to Outer Address Mapping

   When presented with a data packet to forward to a Tenant System
   within a VN, the NVE needs to know the mapping of the Tenant System
   destination (inner) address to the (outer) address on the Underlying
   Network of the remote NVE which can deliver the packet to the
   destination Tenant System.  In addition, the NVE needs to know what
   VN Context to use when sending to a destination Tenant System.

   A protocol is needed to provide this inner-to-outer mapping and VN
   Context to each NVE that requires it and to keep the mapping updated
   in a timely manner.  Timely updates are important for maintaining
   connectivity between Tenant Systems when one Tenant System is a VM.

   Note that one technique that could be used to create this mapping
   without the need for a control protocol is data plane learning.
   However, the learning approach requires packets to be flooded to all
   NVEs participating in the VN when no mapping exists.  One goal of
   using a control protocol is to eliminate this flooding.

3.2.  Underlying Network Multi-Destination Delivery Address(es)

   Each NVE needs a way to deliver multi-destination packets (i.e.,
   tenant broadcast/multicast) within a given VN to each remote NVE
   which has a destination Tenant System for these packets.  Three
   possible ways of accomplishing this are:

   o  Use the multicast capabilities of the Underlying Network.

   o  Have each NVE replicate the packets and send a copy across the
      Underlying Network to each remote NVE currently participating in
      the VN.
   o  Use one or more distribution servers that replicate the packets
      on behalf of the NVEs.

   Whichever method is used, a protocol is needed to provide, on a
   per-VN basis, one or more multicast addresses (assuming the
   Underlying Network supports multicast), and/or one or more unicast
   addresses of either the remote NVEs which are not reachable via
   multicast, or of one or more distribution servers for the VN.

   The protocol must also keep the list of addresses up to date in a
   timely manner as the set of NVEs for a given VN changes over time.
   For example, the set of NVEs for a VN could change as VMs power
   on/off or migrate to different hypervisors.

3.3.  VN Connect/Disconnect Notification

   For the purposes of this document, it is assumed that an NVE
   receives appropriate notifications when a Tenant System attaches to
   or detaches from a specific VN.  The details of how that is done are
   orthogonal to the NVE-to-NVA control plane, so long as such
   notification provides the information needed by the control plane.
   As one example, the attach/detach notification would presumably
   include a VN Name that identifies the specific VN to which the
   attach/detach operation applies.

3.4.  VN Name to VN ID Mapping

   Once an NVE (embedded or external) receives a VN connect indication
   with a specified VN Name, the NVE must determine what VN Context
   value and other necessary information to use to forward Tenant
   System traffic to remote NVEs.  In one approach, the NVE-to-NVA
   protocol uses VN Names directly when interacting, with the NVA
   providing such information as the VN Context (or VN ID) along with
   the egress NVE's address.  Alternatively, it may be desirable for
   the NVE-to-NVA protocol to use a more compact representation of the
   VN Name, that is, a VN ID.
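   One way the NVA could maintain the one-to-one VN Name to VN ID
   mapping behind such an operation is sketched below.  This is an
   illustrative assumption only: the draft specifies no encoding, API,
   or ID width, and all names here (including the 24-bit ID space) are
   hypothetical.

```python
import uuid

# Hypothetical sketch of NVA-side state: a one-to-one mapping between
# globally unique VN Names (UUIDs here) and compact, domain-local
# VN IDs.  The 24-bit ID space is an illustrative assumption.
class VnNameToIdMap:
    MAX_VN_ID = 2**24 - 1  # e.g., if the VN Context field is 24 bits

    def __init__(self):
        self._by_name = {}   # VN Name (UUID) -> VN ID
        self._by_id = {}     # VN ID -> VN Name (UUID)
        self._next_id = 1

    def lookup_or_allocate(self, vn_name):
        """Map a VN Name to its VN ID, allocating one on first use."""
        if vn_name in self._by_name:
            return self._by_name[vn_name]
        if self._next_id > self.MAX_VN_ID:
            raise RuntimeError("VN ID space exhausted")
        vn_id = self._next_id
        self._next_id += 1
        self._by_name[vn_name] = vn_id
        self._by_id[vn_id] = vn_name
        return vn_id

# An NVE would resolve the VN Name once and use the compact VN ID in
# subsequent operations:
nva = VnNameToIdMap()
name = uuid.uuid4()
vn_id = nva.lookup_or_allocate(name)
assert nva.lookup_or_allocate(name) == vn_id  # one-to-one and stable
```

   The point of the sketch is only that the mapping is bidirectional,
   stable within the administrative domain, and cheap to look up.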
   In such a case, a specific NVE-to-NVA operation might
   be needed to first map the VN Name into a VN ID, with subsequent
   NVE-to-NVA operations utilizing the VN ID directly.  Thus, it may be
   useful for the NVE-to-NVA protocol to support an operation that maps
   VN Names into VN IDs.

4.  Control Plane Characteristics

   NVEs are expected to be implemented within both hypervisors (or
   Network Service Appliances) and access switches.  Any resources
   used by these protocols (e.g., processing or memory) take away
   resources that could be better used by these devices to perform
   their intended functions (e.g., providing resources for hosted VMs).

   A large-scale data center may contain hundreds of thousands of these
   NVEs (which may be several independent implementations); therefore,
   any savings in per-NVE resources can be multiplied hundreds of
   thousands of times.

   Given this, the control plane protocol(s) implemented by NVEs to
   provide the functionality discussed above should have the following
   characteristics:

   1.  Minimize the amount of state needed to be stored on each NVE.
       The NVE should only be required to cache state that it is
       actively using, and be able to discard any cached state when it
       is no longer required.  For example, an NVE should only need to
       maintain an inner-to-outer address mapping for destinations to
       which it is actively sending traffic, as opposed to maintaining
       mappings for all possible destinations.

   2.  Fast acquisition of needed state.  For example, when a Tenant
       System emits a packet destined to an inner address for which the
       NVE does not have a mapping, the NVE should be able to acquire
       the needed mapping quickly.

   3.  Fast detection/update of stale cached state information.  This
       only applies if the cached state is actually being used.
       For
       example, when a VM moves such that it is connected to a
       different NVE, the inner-to-outer mapping for this VM's address
       that is cached on other NVEs must be updated in a timely manner
       (if it is actively in use).  If the update is not timely, other
       NVEs will forward data to the wrong NVE until the mapping is
       updated.

   4.  Minimize processing overhead.  This means that an NVE should
       only be required to perform protocol processing directly related
       to maintaining state for the Tenant Systems it is actively
       communicating with.  This requirement is for the NVE
       functionality only.  The network node that contains the NVE may
       be involved in other functionality for the underlying network
       that maintains connectivity that the NVE is not actively using
       (e.g., routing and multicast distribution protocols for the
       underlying network).

   5.  Highly scalable.  This means scaling to hundreds of thousands
       of NVEs and several million VNs within a single administrative
       domain.  As the number of NVEs and/or VNs within a data center
       grows, the protocol overhead at any one NVE should not increase
       significantly.

   6.  Minimize the complexity of the implementation.  This argues for
       using the smallest number of protocols to achieve all the
       functionality listed above.  Ideally, a single protocol would
       suffice.  The less complex the protocol is on the NVE, the more
       likely interoperable implementations will be created in a
       timely manner.

   7.  Extensible.  The protocol should easily accommodate extension
       to meet related future requirements.  For example, access
       control or QoS policies, or new address families for either
       inner or outer addresses, should be easy to add while
       maintaining interoperability with NVEs running older versions.

   8.  Simple protocol configuration.  A minimal amount of
       configuration should be required for a new NVE to be
       provisioned.
       Existing NVEs should not require any configuration
       changes when a new NVE is provisioned.  Ideally, NVEs should be
       able to configure themselves automatically.

   9.  Do not rely on IP multicast in the Underlying Network.  Many
       data centers do not have IP multicast routing enabled.  If the
       Underlying Network is an IP network, the protocol should allow
       for, but not require, the presence of IP multicast services
       within the data center.

   10. Flexible mapping sources.  It should be possible for either
       NVEs themselves or other third-party entities (e.g., data
       center management or orchestration systems) to create
       inner-to-outer address mappings in the NVA.  The protocol
       should allow for mappings created by an NVE to be automatically
       removed from all other NVEs if that NVE fails or is brought
       down unexpectedly.

   11. Secure.  See the Security Considerations section below.

5.  Security Considerations

   Editor's Note: This is an initial start on the security
   considerations section; it will need to be expanded, and
   suggestions for material to add are welcome.

   The protocol(s) should protect the integrity of the mapping against
   both off-path and on-path attacks.  It should authenticate the
   systems that are creating mappings, and rely on lightweight security
   mechanisms to minimize the impact on scalability and allow for
   simple configuration.

   Use of an overlay exposes virtual networks to attacks on the
   underlying network beyond attacks on the control protocol that is
   the subject of this draft.  In addition to the directly applicable
   security considerations for the networks involved, the use of an
   overlay enables attacks on encapsulated virtual networks via the
   underlying network.
   Examples of such attacks include traffic
   injection into a virtual network via injection of encapsulated
   traffic into the underlying network, and modification of underlying
   network traffic to forward traffic among virtual networks that
   should have no connectivity.  The control protocol should provide
   functionality to help counter some of these attacks, e.g.,
   distribution of NVE access control lists for each virtual network
   to enable packets from non-participating NVEs to be discarded, but
   the primary security measures for the underlying network need to be
   applied to the underlying network.  For example, if the underlying
   network includes connectivity across the public Internet, use of
   secure gateways (e.g., based on IPsec [RFC4301]) may be
   appropriate.

   The inner-to-outer address mappings used for forwarding data toward
   a remote NVE could also be used to filter incoming traffic, to
   ensure that a packet with a given inner source address arrived from
   the correct NVE (outer) source address, allowing traffic that does
   not originate from the correct NVE to be discarded.  This
   destination filtering functionality should be optional to use.

6.  Acknowledgements

   Thanks to the following people for reviewing and providing
   feedback: Fabio Maino, Victor Moreno, Ajit Sanzgiri, Chris Wright.

7.  Informative References

   [I-D.ietf-nvo3-framework]
              Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for DC Network Virtualization",
              draft-ietf-nvo3-framework-02 (work in progress),
              February 2013.

   [I-D.ietf-nvo3-overlay-problem-statement]
              Narten, T., Gray, E., Black, D., Fang, L., Kreeger, L.,
              and M. Napierala, "Problem Statement: Overlays for
              Network Virtualization",
              draft-ietf-nvo3-overlay-problem-statement-03 (work in
              progress), May 2013.

   [RFC4301]  Kent, S. and K. Seo, "Security Architecture for the
              Internet Protocol", RFC 4301, December 2005.
Authors' Addresses

   Lawrence Kreeger
   Cisco

   Email: kreeger@cisco.com


   Dinesh Dutt
   Cumulus Networks

   Email: ddutt@cumulusnetworks.com


   Thomas Narten
   IBM

   Email: narten@us.ibm.com


   David Black
   EMC

   Email: david.black@emc.com


   Murari Sridharan
   Microsoft

   Email: muraris@microsoft.com