Internet Engineering Task Force                            Florin Balus
Internet Draft                                        Dimitri Stiliadis
Intended status: standards track                         Nuage Networks
Expires: April 2013
                                                             Nabil Bitar
                                                                 Verizon

                                                          Wim Henderickx
                                                           Marc Lasserre
                                                          Alcatel-Lucent

                                                           Kenichi Ogaki
                                                                    KDDI

                                                        October 22, 2012

                Federated SDN-based Controllers for NVO3
                   draft-sb-nvo3-sdn-federation-01.txt

Status of this Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

This Internet-Draft will expire on April 22, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.

Abstract

The familiar toolset of VPN and IP/MPLS protocols defined in the IETF has been discussed as a good starting point for addressing several of the problems described in the NVO3 problem statement and requirements drafts. However, there is a significant gap between these VPN technologies and the scale and complexity involved when the NVEs run in server hypervisors.
This draft proposes a solution that bridges the gap between the concepts familiar to the data center IT community and some of the best concepts developed by the networking industry.

The proposed solution is based on the understanding that, in a cloud environment, NVEs may reside in end devices that should not be burdened with the complexities of control plane protocols. The complex control plane functionality is decoupled from the forwarding plane to minimize the required NVE processing, recognizing that major hypervisor distributions already employ Openflow to address this issue.

The solution also defines the mechanisms for scaling data center network control horizontally and interoperates seamlessly with existing L2 and L3 VPN devices.

Table of Contents

1. Introduction
2. Conventions used in this document
   2.1. General terminology
3. Solution Overview
4. Control plane details
   4.1. Tenant System State Discovery
      4.1.1. Tenant System states and related information
      4.1.2. Tracking local TS events
         4.1.2.1. NVE to Controller signaling of TS events
   4.2. Address advertisement and FIB population
      4.2.1. Push versus Pull
   4.3. Underlay awareness
   4.4. Controller federation
5. Data plane considerations
   5.1. L2 and L3 services
   5.2. NVO3 encapsulations
6. Resiliency considerations
   6.1. Controller resiliency
   6.2. Data plane resiliency
7. Practical deployment considerations
   7.1. Controller distribution
   7.2. Hypervisor NVE processing
   7.3. Interoperating with non-NVO3 domains
   7.4. VM Mobility
   7.5. Openflow and Open vSwitch
8. Security Considerations
9. IANA Considerations
10. References
   10.1. Normative References
   10.2. Informative References
11. Acknowledgments

1. Introduction

Several data center networking challenges are described in the NVO3 problem statement and requirements drafts. A number of documents propose extensions to, or re-use of, existing IETF protocols to address these challenges.

The data center environment, though, is dominated by the presence of software networking components (vSwitches) in server hypervisors, which may outnumber the physical networking nodes by several orders of magnitude.
Limited resources are available for software networking, as hypervisor software is designed to maximize the revenue-generating compute resources rather than expend them on network protocol processing.

More importantly, the cloud environment is driven by the need to innovate and bring new IT services to market quickly, so network automation and network flexibility are of paramount importance.

This document proposes an NVO3 control plane that combines IETF VPN mechanisms with the Software Defined Networking (SDN) concepts developed by the Open Networking Foundation, which are already in use in a number of cloud deployments. Existing routing mechanisms are employed to federate a number of SDN controllers in a multi-vendor environment when a scaled-out data center deployment is required, or to interoperate with non-NVO3 domains.

The proposed solution can be implemented with minimal extensions to existing protocols and, to a certain extent, is already operational in several cloud environments. It also proposes a simple mechanism that enables NVO3 domains to seamlessly interoperate with VPN deployments without requiring new functionality in the existing VPN networks.

The focus of this draft is on the NVO3 controller; it describes how the SDN concepts defined in [ONF] can be used and extended to perform the NVO3 control plane functions and how SDN controller domains can be federated using existing routing protocols. The handling of and support for different NVO3 encapsulations are described briefly in later sections. The term NVO3 controller may be subject to further changes in the framework draft [NVO3-FWK].

2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC-2119 [RFC2119].

In this document, these words will appear with that interpretation only when in ALL CAPS. Lower case uses of these words are not to be interpreted as carrying RFC-2119 significance.

2.1. General terminology

This document uses the terminology defined in the NVO3 framework document [NVO3-FWK].

3. Solution Overview

The concept of a generic NVO3 architecture is discussed in the NVO3 framework draft [NVO3-FWK].

This section describes how NVO3 control plane functions can be implemented starting from the SDN concepts defined in [ONF]. The proposed architecture is depicted in the following diagram.

                        +-----------+
              +---------|   NVO3    |---------+
              |         |Controller |         |
              |         | Function  |         |
              |         +-----------+         |
           +--+-+                          +--+-+
           | NV |    Controller Domain     | NV |
       TS--|Edge|                          |Edge|--TS
           +--+-+                          +--+-+
              |           +----+              |
              |           | NV |              |
              '...........|Edge|..............'
                          +----+
                             |
                             |
                             TS

           Figure 1 Controller-based architecture for NVO3

The NVO3 controller function is implemented using the concept of an SDN controller [ONF]. The Controller is a software component residing in a logically centralized location, responsible for a specific NVE domain [NVO3-FWK]. Each NVE has a control session to the Controller, which in turn may run external control plane sessions to communicate with the outside world or with other controllers. All the NVEs sharing a controller represent a controller domain.
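As an informal illustration (not part of this specification), the following Python sketch models a controller domain as the set of NVE control sessions and external peer sessions managed by a single Controller. The class and attribute names are purely illustrative.

   from dataclasses import dataclass, field
   from typing import Dict, Set

   @dataclass
   class NveSession:
       """Control session between the Controller and one NVE."""
       nve_id: str            # e.g. hypervisor vSwitch identifier
       tunnel_endpoint: str   # underlay IP address of the NVE
       # VNIs instantiated on this NVE (one per local tenant VN)
       local_vnis: Set[int] = field(default_factory=set)

   @dataclass
   class ControllerDomain:
       """All the NVEs sharing one Controller form a controller domain."""
       domain_id: str
       nves: Dict[str, NveSession] = field(default_factory=dict)
       # External control plane sessions, e.g. BGP peers to other
       # Controllers or to non-NVO3 VPN domains
       external_peers: Set[str] = field(default_factory=set)

       def nves_in_vn(self, vni: int) -> Set[str]:
           """NVEs that host at least one VAP in the given tenant VN."""
           return {s.nve_id for s in self.nves.values()
                   if vni in s.local_vnis}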
The Controller provides a generic API for learning about Tenant System profiles and related events. This can be done through a north-bound API of the Controller itself, or through an event tracking mechanism.

As soon as a Tenant System is configured and attached to a Controller domain, the cloud management system informs the Controller about this activation. Existing cloud management systems [Openstack, Cloudstack] assume the Controller provides an API that enables the cloud management system to notify the Controller about the new service activation. The Quantum plugin in Openstack is an example of an API interface between the cloud management system and the Controller.

Alternatively, the NVE can be notified directly of a TS (VM) attachment at the hypervisor layer. The NVE then passes the TS information to the Controller using the procedures described in Section 4.1.2.1. For Tenant Systems located in a remote Controller domain, (MP-)BGP is used to advertise and learn the related information to/from the remote Controller.

Once the Controller is aware of a new Tenant System attached to an NVE, the TS IP and/or MAC addresses are populated in the Controller's routing database. Since the Controller has full visibility of all Tenant Systems belonging to the same tenant within its domain, it generates the related FIBs and populates the required entries on the NVEs that have a corresponding tenant VNI. The Controller can rely on the Openflow protocol [Openflow] for this function, since it is only populating FIB entries, and can use either a pull or a push method, or a combination of the two, based on the scale and dynamics of the FIB entries that need to be populated in the NVEs and/or based on a FIB population policy.

When the TS is deleted, the reverse process takes place. The Controller, either through API calls or based on events received from the NVEs, learns about the TS removal and removes the TS IP and/or MAC addresses from its routing and FIB databases. BGP is used to withdraw these addresses from any remote Controllers to which they were previously advertised and/or which manage NVEs with VNIs belonging to the same tenant VN. Each remote Controller then communicates these changes to its impacted NVEs.

4. Control plane details

The Controller is the central component of the NVO3 control plane architecture. In the proposed solution, the NVO3 controller relies on mechanisms to monitor Tenant System events and to create the required connectivity.

This section discusses how the current SDN model [ONF], with appropriate extensions, can be used to address the requirements outlined in [NVO3-CPREQ].

To automatically instantiate the required data plane connectivity, the NVO3 controller has to perform the following functions:

- Learning of TS profiles and current state (TS State Discovery).
- Auto-instantiation of the NVO3 service:
  o Address advertisement and associated tunnel encapsulation mapping.
  o FIB population.
- Underlay-aware routing.
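The following Python skeleton is an informal sketch of how these functions could fit together in a Controller implementation; the class, method and state names are placeholders and do not correspond to any protocol messages defined in this document.

   from typing import Dict, Set

   class Nvo3Controller:
       """Illustrative outline of the controller functions above."""

       def __init__(self) -> None:
           # Per-tenant-VN routing tables:
           # VNI -> {TS address -> NVE tunnel endpoint}
           self.vn_rib: Dict[int, Dict[str, str]] = {}
           # NVEs with at least one VAP in each tenant VN
           self.nves_per_vn: Dict[int, Set[str]] = {}

       def on_ts_event(self, vni: int, ts_address: str,
                       nve: str, state: str) -> None:
           """TS State Discovery: called from the north-bound API
           or triggered by an NVE-reported event."""
           if state == "running":
               self.vn_rib.setdefault(vni, {})[ts_address] = nve
               self.nves_per_vn.setdefault(vni, set()).add(nve)
           else:  # e.g. suspended, shut off, not-deployed
               self.vn_rib.get(vni, {}).pop(ts_address, None)
           self.update_fibs(vni)

       def update_fibs(self, vni: int) -> None:
           """Auto-instantiation: recompute and download FIB
           entries to the NVEs that participate in the VN."""
           for nve in self.nves_per_vn.get(vni, set()):
               self.push_fib(nve, vni, self.vn_rib.get(vni, {}))

       def push_fib(self, nve: str, vni: int,
                    entries: Dict[str, str]) -> None:
           """Placeholder for the Openflow (or equivalent)
           download to the NVE; underlay awareness not shown."""
           print(f"download to {nve}: VNI {vni} -> {entries}")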
4.1. Tenant System State Discovery

This section sets the stage for a generic set of events and actions that MUST be supported by any NVO3 controller if automatic instantiation of the service is desired. The goal is to converge first on a common information model between the cloud management system and the NVO3 controller. It is also highly desirable to converge on a common protocol and information model that will allow cloud management systems and Controllers from different vendors to interoperate. At present, integrating a Controller with different cloud management systems requires customization work, which leads to interoperability problems and to duplicated development efforts across the matrix of cloud management systems and Controllers.

4.1.1. Tenant System states and related information

There is a large variety of Tenant Systems that may need to receive service from an NVO3 domain. For example, in the cloud networking space a TS may be a VM or an appliance instance (firewall, load balancer) assigned to a particular tenant. There are a number of possible Tenant System states that need to be signaled to and interpreted by the Controller. The following initial set of TS states is derived mainly by mirroring the virtual machine lifecycle management model found in several cloud management tools:

- Not-deployed: TS exists in the management system but is not instantiated. The NVE does not need to know about these instances.
- Running: TS is instantiated, active and ready to send traffic. The appropriate NVEs, with instances of the corresponding tenant VNs, must have all the functionality required to send and receive traffic for the particular TS.
- Suspended: TS is instantiated but currently in a suspended state. Traffic from/to the TS can be ignored. Routes to this TS may be withdrawn from the corresponding tenant VN.
- Shutdown: TS is in the process of shutting down. The time of complete shutdown is not known, as it depends on the capabilities of the TS. Traffic from/to the TS must be forwarded.
- Shut off: TS is in power-off mode and not attached to the NVE. This is similar to the Suspended state. Traffic from/to the TS can be ignored. Routes corresponding to the TS must be withdrawn from the corresponding tenant VN and the forwarding state at the local NVE must be removed.
- Moving: TS is active but a TS move command was originated. The Controller must participate in any state transfer functions. The goal is to forward traffic directly to the TS at its new location and possibly tunnel traffic still in transit to the old location from the old location to the new one.
- Other: Opaque state that refers to additional states defined by a specialized TS.

Even though the states above are often related to virtual machines, the model, or a subset of it, can cover physical appliance states as well. Depending on the TS, some of these states might not be easily identifiable (additional mechanisms, such as liveness checks, are required to detect a crashed or shut-down physical machine).

4.1.2. Tracking local TS events

The Controller must have full information about the TS state. This can be achieved in one of two ways:

1. The cloud management system utilizes an API exposed by the Controller to update the TS state. This is the model deployed by the Openstack Quantum API or the Cloudstack Network Guru API.
2. The NVE tracks the above events using internal mechanisms when it is co-located with the hypervisor, and reports them to the Controller using a signaling mechanism. When the NVE is not implemented in the hypervisor hosting the TS, a tracking protocol between the hypervisor and the NVE can allow the tracking of TS state events. A standard protocol for this tracking function would significantly assist interoperability between different hypervisors and NVEs. One such mechanism is discussed in [Server2NVE].
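As an informal illustration of such a common information model, the following Python sketch encodes the TS states of Section 4.1.1 together with the attributes a Controller could accept over either of the two paths above; the type and field names are illustrative only and are not defined by this document.

   from dataclasses import dataclass
   from enum import Enum
   from typing import Optional

   class TsState(Enum):
       """TS states from Section 4.1.1 (illustrative encoding)."""
       NOT_DEPLOYED = "not-deployed"
       RUNNING = "running"
       SUSPENDED = "suspended"
       SHUTDOWN = "shutdown"
       SHUT_OFF = "shut-off"
       MOVING = "moving"
       OTHER = "other"

   @dataclass
   class TsEvent:
       """Common record the Controller could accept from either the
       cloud management system (way 1) or an NVE (way 2)."""
       tenant_id: str
       vn_id: int                  # tenant VN / VNI the TS attaches to
       ts_mac: str
       ts_ip: Optional[str]
       state: TsState
       reporter: str               # "cloud-mgmt-api" or "nve"
       nve_id: Optional[str] = None  # set when reported by an NVE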
The following section discusses the NVE-to-Controller signaling procedure for the second scenario.

4.1.2.1. NVE to Controller signaling of TS events

In the case where the NVE directly tracks VM events, there is also a need for a standard signaling mechanism between the NVE and the Controller. This proposal utilizes extensions to the Openflow protocol [Openflow] to accommodate this signaling. Openflow is supported by a number of hypervisor distributions and is already active in some large cloud deployments. Moreover, it already has the messaging base required to perform this function.

In the current Openflow specification, the first packet of an unknown flow can be forwarded to the Controller when no match is found in the local Openflow switch. One possible method to implement TS event signaling is to extend this functionality and use the TS event as a trigger for a generic "flow request" from the NVE that carries sufficient information to the Controller to allow service initialization. The flow request is extended here in that it does not carry parts of a TS packet but rather information about a TS event. Alternatively, a new request type can be defined. Details of this procedure will be added in a future revision.

4.2. Address advertisement and FIB population

Once the Controller learns about a TS state event from the north-bound API or from the NVE, it performs the following actions:

- Identify the required NVO3 service attributes. Service attributes could include access lists and policies for certain actions.
- Populate the VN routing and FIB tables with the TS address(es).
- If a push model is used, download the required FIB updates and service attributes to the NVEs that participate in the related VN (pre-population).
- If a pull model is selected, wait for the first packets of the corresponding flows, or potentially other requests triggered by certain events, before establishing flow state (as per the Openflow specification) or FIB state in the NVE.
- A combination of push and pull models may be beneficial. For example, flows that require consistent latency and/or no packet loss may be pre-populated using a push model, while other, less important flows may be populated using a pull model. Similarly, ARP entries may be pre-populated or pulled in when the NVE sees an ARP request with no corresponding ARP entry in its local cache.

4.2.1. Push versus Pull

The Openflow model described in the previous section enables both a push and a pull model. The selection of a model depends on the Controller and NVE capabilities and on deployment requirements. Different implementations might choose either method or a combination of push and pull. This is, however, an implementation detail that is supported by the basic solution framework.
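The following Python sketch illustrates one possible combination of the two models, pre-populating the entries selected by a policy and answering per-destination requests for the rest. It assumes a controller object with the per-VN tables and push_fib() function sketched informally in Section 4; all names are illustrative and carry no normative meaning.

   from typing import Dict, Set

   def prepopulate_fibs(controller, vni: int,
                        entries: Dict[str, str],
                        push_policy: Set[str]) -> None:
       """Push model: download the policy-selected FIB entries to
       every NVE that participates in the tenant VN."""
       pushed = {addr: nh for addr, nh in entries.items()
                 if addr in push_policy}
       for nve in controller.nves_per_vn.get(vni, set()):
           controller.push_fib(nve, vni, pushed)

   def on_miss_request(controller, nve: str, vni: int,
                       dest: str) -> None:
       """Pull model: the NVE asks for an unknown destination
       (e.g. on a table miss or an unresolved ARP request) and
       receives only that entry."""
       next_hop = controller.vn_rib.get(vni, {}).get(dest)
       if next_hop is not None:
           controller.push_fib(nve, vni, {dest: next_hop})
       # else: drop or flood according to the VN policy (not shown)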
4.3. Underlay awareness

The underlay network consists of a core that often runs IP routing protocols to control the topology and to provide reachability among NVEs. A routing module in the Controller may participate in the underlay IP routing to maintain a view of the network underlay. The routing information exchanged may be used to control path selection or to make forward/drop decisions in the ingress NVEs so that network bandwidth resources are not unnecessarily wasted.

4.4. Controller federation

To address a scaled-out NVO3 deployment, multiple Controllers may be required. The Controllers need to exchange their routing information to allow NVO3 services to extend across the NVO3 domains managed by individual Controllers. The following diagram depicts this scenario.

        +-----------+                  +-----------+
        | Controller|    .........     | Controller|
        | Domain 2  |...           ... | Domain n  |
        +-----+-----+                  +-----+-----+
              .     Controller Federation    .
              .        (IP Routing)          .
               ...                        ...
                  ...+-----------+........
                     | Controller|
                     | Domain 1  |
                     +-----------+

                 Figure 2 Controller Federation

The fundamental requirement in any such function is a system that enables state distribution between the different instances. When Controllers from the same vendor are used, this can be achieved through a pub/sub mechanism or through a distributed database, for example an ActiveMQ mechanism such as the one deployed by Openstack [Openstack], or a DHT-based database such as Cassandra.

When Controllers from multiple vendors are in place, or when interoperability with existing WAN services is needed, a standard way of distributing TS information is required.

Existing VPN mechanisms can be utilized for this function, distributing private reachability information pertaining to the tenant VNs. A routing module implementing the related MP-BGP procedures can be used as follows:

- If L3 (IP) services need to be implemented across domains, the procedures described in [BGP-VPN], specifically the IP VPN SAFI and related NLRI, can be used to exchange the tenant IP addresses.
- For L2 services, the procedures described in [BGP-EVPN], specifically the EVPN NLRI(s), can be employed to advertise tenant L2 addresses and related information among multiple Controllers.
- For both service types, mechanisms such as Route Distinguishers and Route Targets provide support for overlapping address space across VPNs (tenant VNs) and, respectively, for controlled tenant service topology and for route distribution only to the NVEs that have at least one TS in that VPN (tenant VN).

Each Controller can then consolidate the external and internal routing tables and generate the required FIB entries, which can be downloaded as needed to the NVEs.

The advantage of utilizing (MP-)BGP to federate Controllers is that it enables interoperability between Controllers of different vendors as well as interoperability with existing WAN services, including Internet and L2/L3 VPNs. BGP is a proven mechanism that can be used to achieve the required scale and secure isolation for multi-tenant, multi-domain and multi-provider environments.
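As an informal illustration of the federation model, the following Python sketch shows a simplified tenant route carrying a Route Distinguisher and Route Targets, together with an RT-based import check; it does not reproduce the actual NLRI encodings of [BGP-VPN] or [BGP-EVPN], and all field names are placeholders.

   from dataclasses import dataclass
   from typing import List, Optional, Set

   @dataclass
   class TenantRoute:
       """Simplified stand-in for an EVPN MAC/IP (or IP VPN) route
       exchanged between federated Controllers."""
       rd: str                   # Route Distinguisher, e.g. "64512:1001"
       route_targets: List[str]  # export RTs controlling distribution
       vni: int                  # tenant VN identifier carried with it
       mac: Optional[str]        # tenant MAC (L2 / EVPN case)
       ip: Optional[str]         # tenant IP (L3 / IP VPN case)
       next_hop: str             # tunnel endpoint in the advertising domain

   def import_route(local_import_rts: Set[str],
                    route: TenantRoute) -> bool:
       """A Controller imports a federated route only if one of its
       tenant VNs has an import RT matching the route's RTs."""
       return bool(local_import_rts.intersection(route.route_targets))

RT-based import filtering is what limits route distribution to the Controllers, and ultimately the NVEs, that actually serve the tenant VN in question.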
5. Data plane considerations

The Controller can be used to control different data planes for different solution options.

5.1. L2 and L3 services

MP-BGP VPNs and Openflow can be used to program the required NVE data plane for both Layer 2 and Layer 3 services: Openflow handles, or can be extended to handle, both L2 and L3 FIB entries and multiple tunnel encapsulations, while MP-BGP support for multiple address families ([BGP-VPN] and [BGP-EVPN]) allows both L2 and L3 services to extend across Controller domains.

After local TS discovery and the MP-BGP exchanges, the L2 and/or L3 forwarding entries are first computed in the Controller and mapped to different types of tunnel encapsulations based on the type of core network, the addressing type and the negotiated service encapsulation type.

The resulting FIB updates can be downloaded using Openflow to the NVEs that have VNIs corresponding to the tenant VNs associated with those FIB entries. The NVEs use these entries to forward packets at L2 or L3, depending on the type of service and the desired processing sequence.

5.2. NVO3 encapsulations

A number of vSwitch distributions already provide support for some of the encapsulations that have been proposed in IETF drafts and allow the mapping of FIB entries to tunnel encapsulations based on these protocols. Open-source code for these encapsulations is also available. FIB entries with associated tunnel encapsulations can be communicated from the Controller to the NVE using Openflow where supported, or via Openflow extensions as required. Openflow also supports the MPLS VPN encapsulations, easing the way for interoperability between NVE and VPN domains. Moreover, the use of BGP for MAC and IP advertisement with different NVO3 encapsulations has been proposed in [EVPN-NVO3].
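The following Python sketch illustrates the kind of FIB entry a Controller could compute and download to an NVE, mapping a tenant L2 or L3 address in a given VN to a tunnel encapsulation. The encapsulation attributes and type strings are illustrative and are not tied to any specific NVO3 encapsulation proposal.

   from dataclasses import dataclass
   from typing import Optional

   @dataclass
   class TunnelEncap:
       """Per-entry encapsulation, e.g. a VNI-based overlay or an
       MPLS service label when interworking with VPN domains."""
       encap_type: str               # e.g. "vni-overlay", "mpls" (illustrative)
       remote_tep: str               # underlay IP of egress NVE or gateway
       vni: Optional[int] = None     # virtual network identifier, if used
       mpls_label: Optional[int] = None  # service label, if used

   @dataclass
   class FibEntry:
       """One entry downloaded to an NVE for a tenant VN."""
       vn_id: int
       match_mac: Optional[str]        # L2 service: destination tenant MAC
       match_ip_prefix: Optional[str]  # L3 service: destination prefix
       encap: TunnelEncap

   # Example: an L2 entry pointing a tenant MAC at a remote NVE
   entry = FibEntry(vn_id=1001,
                    match_mac="00:11:22:33:44:55",
                    match_ip_prefix=None,
                    encap=TunnelEncap("vni-overlay",
                                      remote_tep="192.0.2.10",
                                      vni=1001))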
6. Resiliency considerations

This section discusses resiliency for a Controller domain implementing the control framework in this document.

6.1. Controller resiliency

For a large domain, Controller resiliency may be required. Non-stop control plane schemes may be extended to cover the Controller's Openflow component in addition to its BGP component. Alternatively, the controller resiliency schemes proposed in [Openflow] may be employed in conjunction with Graceful Restart (for example [BGP Graceful-Restart]) for the routing modules.

6.2. Data plane resiliency

From a data plane perspective, there are a number of ways to ensure that an NVE can be multi-homed towards the IP core. Existing mechanisms such as ECMP may be used to ensure load distribution across multiple paths.

Access multi-homing of a Tenant System to multiple NVEs, on the other hand, is applicable only when the NVE and the TS are on different physical devices. Several mechanisms can be utilized for this function, depending on whether the TS-to-NVE communication is over L2 or L3. These use cases will be addressed in a future revision of this document.

7. Practical deployment considerations

7.1. Controller distribution

The Controller provides the control plane functionality for all the NVEs in its domain. The number of NVEs per Controller domain may vary depending on different scaling factors, for example the number of Tenant Systems, VAPs, VNs and tunnel endpoints. The concept of Controller federation allows for modular growth, enabling a highly distributed deployment where a Controller may be deployed even down to the server rack/ToR level in cases where a more scalable control plane is required or where a BGP-based operational environment is the preferred option.

7.2. Hypervisor NVE processing

The proposed solution is designed to minimize the required processing on hypervisor NVEs. Complex control plane policy and routing functions are delegated to the Controller, which can be deployed on dedicated processors. Hypervisors only run the Openflow agents required to download the FIB entries and, in some models, to process events, and they perform the NVO3 forwarding function.

7.3. Interoperating with non-NVO3 domains

The routing module of the Controller enables interoperability with existing non-NVO3 VPN domains, where the whole Controller domain appears as just another PE to those domains. Depending on the encapsulation used, a VPN interworking function may need to be implemented in the data plane of the gateway NVEs towards existing non-NVO3 VPN domains.

7.4. VM Mobility

VM mobility is handled through tight interaction between the cloud management system and the Controller. When a VM move is initiated, the cloud management system notifies the Controller about the move. The VM transitions from a running state in one hypervisor to a running state in another hypervisor. These state transitions instruct the Controller to properly update the routes in each of the affected NVEs.

7.5. Openflow and Open vSwitch

Utilizing Openflow as the basic mechanism for Controller-to-NVE communication offers several advantages:

1. Openflow is a binary protocol that is optimized for fast FIB updates. Since it relies on a binary format, it minimizes the amount of data that needs to be transferred and the processing required, providing increased scalability.
2. Openflow is already implemented in multiple hypervisors and is deployed in some large cloud environments. The current specification supports L2 and L3 service FIBs and flexible flow definitions, providing a good starting point for future extensions.

From a practical deployment perspective, Open vSwitch [OVS] is already part of the latest Linux kernel and of most major Linux distributions for the major hypervisors (KVM and Xen). Minimizing the new protocols that need to be deployed on servers and relying on existing hypervisor capabilities can significantly simplify and accelerate the adoption of NVO3 technologies.

8. Security Considerations

The tenant-to-overlay mapping function can introduce significant security risks if protocols that support mutual authentication are not used. Proper configuration of the Controller and NVEs, and a mutual authentication mechanism, are required. The Openflow specification includes a TLS option for the Controller-to-NVE communication that can address the mutual authentication requirement.

No other new security issues are introduced beyond those already described in the related L2VPN and L3VPN RFCs.

9. IANA Considerations

IANA does not need to take any action for this draft.

10. References

10.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[BGP-VPN] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private Networks (VPNs)", RFC 4364, February 2006.

[BGP Graceful-Restart] Sangli, S., et al., "Graceful Restart Mechanism for BGP", RFC 4724, January 2007.

10.2. Informative References

[NVO3-FWK] Lasserre, M., et al., "Framework for DC Network Virtualization", draft-ietf-nvo3-framework (work in progress).

[NVO3-CPREQ] Kreeger, L., et al., "Network Virtualization Overlay Control Protocol Requirements", draft-kreeger-nvo3-overlay-cp (work in progress).

[Server2NVE] Kompella, K., et al., "Using Signaling to Simplify Network Virtualization Provisioning", draft-kompella-nvo3-server2nve (work in progress).

[EVPN-NVO3] Drake, J., et al., "A Control Plane for Network Virtualized Overlays", draft-drake-nvo3-evpn-control-plane (work in progress).

[Openflow] Open Networking Foundation, "Openflow Switch Specification", http://www.opennetworking.org

[ONF] Open Networking Foundation, https://www.opennetworking.org/

[Openstack] Openstack cloud software, http://www.openstack.org

[Cloudstack] Cloudstack, http://www.cloudstack.org

[OVS] Open vSwitch, http://www.openvswitch.org

[BGP-EVPN] Aggarwal, R., et al., "BGP MPLS Based Ethernet VPN", draft-ietf-l2vpn-evpn (work in progress).

11. Acknowledgments

In addition to the authors, the following people have contributed to this document: Thomas Morin and Rotem Salomonovitch.

Authors' Addresses

Florin Balus
Nuage Networks
805 E. Middlefield Road
Mountain View, CA, USA 94043
Email: florin@nuagenetworks.net

Dimitri Stiliadis
Nuage Networks
805 E. Middlefield Road
Mountain View, CA, USA 94043
Email: dimitri@nuagenetworks.net

Nabil Bitar
Verizon
40 Sylvan Road
Waltham, MA 02145
Email: nabil.bitar@verizon.com

Kenichi Ogaki
KDDI
3-10-10 Iidabashi,
Chiyoda-ku Tokyo, 102-8460 JAPAN
Email: ke-oogaki@kddi.com

Marc Lasserre
Alcatel-Lucent
Email: marc.lasserre@alcatel-lucent.com

Wim Henderickx
Alcatel-Lucent
Email: wim.henderickx@alcatel-lucent.com