Network Working Group                                          F. Maino
Internet-Draft                                                V. Ermagan
Intended status: Experimental                               D. Farinacci
Expires: January 10, 2013                                  Cisco Systems
                                                                M. Smith
                                                        Insieme Networks
                                                            July 9, 2012


        LISP Control Plane for Network Virtualization Overlays
                      draft-maino-nvo3-lisp-cp-00

Abstract

   The purpose of this draft is to analyze the mapping between the
   Network Virtualization over L3 (NVO3) requirements and the
   capabilities of the Locator/ID Separation Protocol (LISP) control
   plane.  This information is provided as input to the NVO3 analysis
   of the suitability of existing IETF protocols to the NVO3
   requirements.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 10, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Definition of Terms  . . . . . . . . . . . . . . . . . . . .  4
   3.  LISP Overview  . . . . . . . . . . . . . . . . . . . . . . .  4
     3.1.  LISP Site Configuration  . . . . . . . . . . . . . . . .  6
     3.2.  End System Provisioning  . . . . . . . . . . . . . . . .  7
     3.3.  End System Registration  . . . . . . . . . . . . . . . .  7
     3.4.  Packet Flow and Control Plane Operations . . . . . . . .  7
       3.4.1.  Supporting ARP Resolution with LISP Mapping System .  8
     3.5.  L3 LISP  . . . . . . . . . . . . . . . . . . . . . . . . 10
   4.  Reference Model  . . . . . . . . . . . . . . . . . . . . . . 10
     4.1.  Generic LISP NVE Reference Model . . . . . . . . . . . . 10
     4.2.  LISP NVE Service Types . . . . . . . . . . . . . . . . . 12
       4.2.1.  LISP L2 NVE Services . . . . . . . . . . . . . . . . 12
       4.2.2.  LISP L3 NVE Services . . . . . . . . . . . . . . . . 12
   5.  Functional Components  . . . . . . . . . . . . . . . . . . . 12
     5.1.  Generic Service Virtualization Components  . . . . . . . 12
       5.1.1.  Virtual Attachment Points (VAPs)  . . . . . . . . . . 13
       5.1.2.  Overlay Modules and Tenant ID . . . . . . . . . . . . 13
       5.1.3.  Tenant Instance . . . . . . . . . . . . . . . . . . . 14
       5.1.4.  Tunnel Overlays and Encapsulation Options . . . . . . 14
       5.1.5.  Control Plane Components  . . . . . . . . . . . . . . 14
   6.  Key Aspects of Overlay . . . . . . . . . . . . . . . . . . . 15
     6.1.  Overlay Issues to Consider . . . . . . . . . . . . . . . 15
       6.1.1.  Data Plane vs. Control Plane Driven . . . . . . . . . 15
       6.1.2.  Data Plane and Control Plane Separation . . . . . . . 15
       6.1.3.  Handling Broadcast, Unknown Unicast and Multicast
               (BUM) Traffic . . . . . . . . . . . . . . . . . . . . 15
   7.  Security Considerations  . . . . . . . . . . . . . . . . . . 16
   8.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . 16
   9.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 16
   10. References . . . . . . . . . . . . . . . . . . . . . . . . . 16
     10.1. Normative References . . . . . . . . . . . . . . . . . . 16
     10.2. Informative References . . . . . . . . . . . . . . . . . 17
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 18

1.  Introduction

   The purpose of this draft is to analyze the mapping between the
   Network Virtualization over L3 (NVO3)
   [I-D.narten-nvo3-overlay-problem-statement] requirements and the
   capabilities of the Locator/ID Separation Protocol (LISP)
   [I-D.ietf-lisp] control plane.  This information is provided as
   input to the NVO3 analysis of the suitability of existing IETF
   protocols to the NVO3 requirements.

   LISP is a flexible map-and-encap framework that can be used for
   overlay network applications, including Data Center Network
   Virtualization.

   The LISP framework provides two main tools for NVO3: (1) a Data
   Plane that specifies how Endpoint Identifiers (EIDs) are
   encapsulated in Routing Locators (RLOCs), and (2) a Control Plane
   that specifies the interfaces to the LISP Mapping System that
   provides the mapping between EIDs and RLOCs.

   This document focuses on the control plane for L2 over L3 LISP
   encapsulation, where EIDs are MAC addresses.  As such, the LISP
   control plane can be used with the data path encapsulations defined
   in VXLAN [I-D.mahalingam-dutt-dcops-vxlan] and in NVGRE
   [I-D.sridharan-virtualization-nvgre].  The LISP control plane can,
   of course, be used with the L2 LISP data path encapsulation defined
   in [I-D.smith-lisp-layer2].

   The LISP control plane provides the Mapping Service for the Network
   Virtualization Edge (NVE), mapping per-tenant end system identity
   information to the corresponding location at the NVE.  As required
   by NVO3, LISP supports network virtualization and tenant separation
   to hide tenant addressing information, tenant-related control plane
   activity, and service contexts from the underlay network.

   The LISP control plane is extensible, and can support non-LISP data
   path encapsulations such as [I-D.sridharan-virtualization-nvgre], or
   other encapsulations that provide support for network
   virtualization.  [I-D.ietf-lisp-interworking] specifies an open
   interworking framework that allows communication between LISP and
   non-LISP sites.

   Broadcast, unknown unicast, and multicast in the overlay network are
   supported by either replicated unicast or core-based multicast, as
   specified in [I-D.ietf-lisp-multicast],
   [I-D.farinacci-lisp-mr-signaling], and [I-D.farinacci-lisp-te].

   Finally, the LISP architecture has a modular design that allows the
   use of different Mapping Databases, provided that the interface to
   the Mapping System remains the same [I-D.ietf-lisp-ms].  This allows
   for different Mapping Databases that may fit different NVO3
   deployments.  As an example of the modularity of the LISP Mapping
   System, a worldwide LISP pilot network is currently using a
   hierarchical Delegated Database Tree [I-D.fuller-lisp-ddt], after
   having been operated for years with an overlay BGP mapping
   infrastructure [I-D.ietf-lisp-alt].

   The LISP mapping system supports network virtualization, and a
   single mapping infrastructure can run multiple instances, either
   public or private, of the mapping database.

   The rest of this document, after giving a quick LISP overview in
   Section 3, follows the functional model defined in
   [I-D.lasserre-nvo3-framework]: Section 4 provides an overview of the
   LISP NVO3 reference model, and Section 5 a description of its
   functional components.  Section 6 contains various considerations on
   key aspects of LISP NVO3, followed by security considerations in
   Section 7.
2.  Definition of Terms

   flood-and-learn:  the use of dynamic (data plane) learning in VXLAN
      to discover the location of a given Ethernet/IEEE 802 MAC address
      in the underlay network.

   ARP-agent reply:  the ARP proxy-reply sent by an agent (e.g. an ITR)
      with the MAC address of some other system, in response to an ARP
      request for a target that is not the agent's IP address.

   For definitions of NVO3-related terms, notably Virtual Network (VN),
   Virtual Network Identifier (VNI), Network Virtualization Edge (NVE),
   Data Center (DC), please consult [I-D.lasserre-nvo3-framework].

   For definitions of LISP-related terms, notably Map-Request, Map-
   Reply, Ingress Tunnel Router (ITR), Egress Tunnel Router (ETR), Map-
   Server (MS) and Map-Resolver (MR), please consult the LISP
   specification [I-D.ietf-lisp].

3.  LISP Overview

   This section provides a quick overview of L2 LISP, illustrating the
   use of an L2 data path encapsulation (such as VXLAN, L2 LISP, or
   NVGRE) in combination with the LISP control plane to provide L2 DC
   network virtualization services.  In L2 LISP, the LISP control plane
   replaces the use of dynamic data plane learning (flood-and-learn),
   as specified in [I-D.mahalingam-dutt-dcops-vxlan], improving
   scalability and mitigating multicast requirements in the underlay
   network.

   For a detailed LISP overview please refer to [I-D.ietf-lisp] and
   related drafts.

   To exemplify LISP operations, let's consider two data centers (LISP
   sites) A and B that provide L2 network virtualization services to a
   number of tenant end systems, as depicted in Figure 1.  The Endpoint
   Identifiers (EIDs) are Ethernet/IEEE 802 MAC addresses.

   The data centers are connected via an L3 underlay network, hence the
   Routing Locators (RLOCs) are IP addresses (either IPv4 or IPv6).

   In LISP the network virtualization edge function is performed by
   Ingress Tunnel Routers (ITRs) that are responsible for encapsulating
   the LISP ingress traffic, and Egress Tunnel Routers (ETRs) that are
   responsible for decapsulating the LISP egress traffic.  ETRs are
   also responsible for registering the EID-to-RLOC mapping for a given
   LISP site in the LISP mapping database.  ITRs and ETRs are
   collectively referred to as xTRs.

   The EID-to-RLOC mapping is stored in the LISP mapping database, a
   distributed mapping infrastructure accessible via Map Servers (MS)
   and Map Resolvers (MR).  [I-D.fuller-lisp-ddt] is an example of a
   mapping database used in many LISP deployments.  Another example of
   a mapping database is [I-D.ietf-lisp-alt].

   For small deployments the mapping infrastructure can be very
   minimal, in some cases even a single system running as MS/MR.

                       ,---------------------.
                      (     Mapping System    )
                       `--+---------------+--'
                          |               |
                      +---+---+       +---+---+
                      | MS/MR |       | MS/MR |
                      +---+---+       +---+---+
                          |               |
                       ,--+---------------+--.
                      (      L3 Underlay      )
                       `--+---------------+--'
             RLOC=IP_A    |               |    RLOC=IP_B
                      +---+---+       +---+---+
                      | xTR A |       | xTR B |
                      +---+---+       +---+---+
                          |               |
                  ( LISP Site A )   ( LISP Site B )
                   /           \     /           \
           '--------' '--------'   '--------' '--------'
           :  End   : :  End   :   :  End   : :  End   :
           : Device : : Device :   : Device : : Device :
           '--------' '--------'   '--------' '--------'
            IID=1      IID=2        IID=1      IID=1
            EID=MAC_W  EID=MAC_X    EID=MAC_Y  EID=MAC_Z

               Figure 1: Example of L2 NVO3 Services
3.1.  LISP Site Configuration

   In each LISP site the xTRs are configured with an IP address (the
   site RLOCs) for each interface facing the underlay network.

   Similarly, the MS/MR are assigned an IP address in the RLOC space.

   The configuration of the xTRs includes the RLOCs of the MS/MR and a
   shared secret that is optionally used to secure the communication
   between xTRs and MS/MR.

   To provide support for multi-tenancy, multiple instances of the
   mapping database are identified by a LISP Instance ID (IID), which
   is equivalent to the 24-bit VXLAN Network Identifier (VNI) or Tenant
   Network Identifier (TNI) that identifies tenants in
   [I-D.mahalingam-dutt-dcops-vxlan].

3.2.  End System Provisioning

   We assume that a provisioning framework will be responsible for
   provisioning end systems (e.g. VMs) in each data center.  The
   provisioning configures each end system with an Ethernet/IEEE 802
   MAC address (EID) and provisions the network with other end-system-
   specific attributes such as IP addresses and VLAN information.  LISP
   does not introduce new addressing requirements for end systems.

   The provisioning infrastructure is also responsible for providing a
   network attach function that notifies the network virtualization
   edge (the LISP site ETR) that the end system is attached to a given
   Virtual Network (identified by its VNI/IID) and is identified by a
   given EID.

3.3.  End System Registration

   Upon notification of an end system network attach, which includes
   the <IID, EID> tuple that identifies that end system, the ETR sends
   a LISP Map-Register to the Mapping System.  The Map-Register
   includes the IID, EID and RLOCs of the LISP site.  The EID-to-RLOC
   mapping is now available, via the Mapping System Infrastructure, to
   other LISP sites that are hosting end systems that belong to the
   same tenant.

   For more details on end system registration see [I-D.ietf-lisp-ms].
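   The registration flow above can be summarized with a short
   illustrative sketch.  This is not the Map-Register wire format or
   authentication procedure defined in [I-D.ietf-lisp-ms]; the names
   used below (MapRegister, MapServerClient, on_network_attach) are
   hypothetical and serve only to make the sequence of events concrete.

      # Illustrative sketch only: the real Map-Register encoding and
      # authentication are specified in [I-D.ietf-lisp-ms].  All names
      # below are hypothetical.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class MapRegister:
          iid: int          # LISP Instance ID of the tenant (24 bits)
          eid: str          # end system identifier, e.g. an IEEE 802 MAC
          rlocs: List[str]  # locators (IP addresses) of the LISP site

      class MapServerClient:
          """Minimal stand-in for the ETR-to-Map-Server channel."""
          def __init__(self, map_server_rloc: str, shared_secret: bytes):
              self.map_server_rloc = map_server_rloc
              self.shared_secret = shared_secret  # optional authentication

          def send(self, reg: MapRegister) -> None:
              # A real ETR would encode and authenticate the message here.
              print(f"Map-Register to {self.map_server_rloc}: "
                    f"IID={reg.iid} EID={reg.eid} RLOCs={reg.rlocs}")

      def on_network_attach(ms: MapServerClient, iid: int, eid_mac: str,
                            site_rlocs: List[str]) -> None:
          """Called when the provisioning framework signals an attach."""
          ms.send(MapRegister(iid=iid, eid=eid_mac, rlocs=site_rlocs))

      # Example: end system W (MAC_W) attaches to tenant IID=1 in site A.
      ms = MapServerClient("192.0.2.10", shared_secret=b"example")
      on_network_attach(ms, iid=1, eid_mac="00:00:5e:00:53:57",
                        site_rlocs=["192.0.2.1"])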
3.4.  Packet Flow and Control Plane Operations

   This section provides an example of the unicast packet flow and the
   control plane operations when, in the topology shown in Figure 1,
   end system W in LISP site A wants to communicate with end system Y
   in LISP site B.  We'll assume that W knows Y's EID MAC address
   (e.g. learned via ARP).

   o  W sends an Ethernet/IEEE 802 MAC frame with destination EID MAC_Y
      and source EID MAC_W.

   o  ITR A does a lookup in its local map-cache for the destination
      EID=MAC_Y (for tenant IID=1).  Since this is the first packet
      sent to MAC_Y, the map-cache is a miss, and the ITR sends a
      Map-Request to the mapping database system asking for
      <IID=1, EID=MAC_Y>.

   o  The mapping system forwards the Map-Request to ETR B, which is
      aware of the EID-to-RLOC mapping for MAC_Y.  Alternatively,
      depending on the mapping system configuration, a Map-Server in
      the mapping system may send a Map-Reply directly to ITR A.

   o  ETR B sends a Map-Reply to ITR A that includes the EID-to-RLOC
      mapping <IID=1, EID=MAC_Y> -> RLOC=IP_B, where IP_B is the
      locator of ETR B, hence the locator of LISP site B.  In order to
      facilitate interoperability, the Map-Reply may also include
      attributes such as the data plane encapsulations supported by the
      ETR.

   o  ITR A populates the local map-cache with the EID-to-RLOC mapping,
      and either L2 LISP, VXLAN, or NVGRE encapsulates all subsequent
      packets with destination EID=MAC_Y using destination RLOC=IP_B.

   It should be noted how the LISP mapping system replaces the use of
   flood-and-learn based on multicast distribution trees instantiated
   in the underlay network (required by VXLAN's dynamic data plane
   learning), with a unicast control plane and a cache mechanism that
   "pulls" on-demand the EID-to-RLOC mapping from the LISP mapping
   database.  This improves scalability and simplifies the
   configuration of the underlay network.

3.4.1.  Supporting ARP Resolution with LISP Mapping System

   A large majority of data center applications are IP based, and in
   those use cases end systems are provisioned with IP addresses as
   well as MAC addresses.

   In this case, to eliminate the flooding of ARP traffic and further
   reduce the need for multicast in the underlay network, the LISP
   mapping system is used to support ARP resolution at the ITR.  We
   assume, as shown in Figure 2, that: (1) end system W has an IP
   address IP_W, and end system Y has an IP address IP_Y; (2) end
   system W knows Y's IP address (e.g. via DNS lookup).  We also assume
   that during registration Y has registered both its MAC address and
   its IP address as EIDs.  End system Y is then identified by the
   tuple <IID=1, EID=(IP_Y, MAC_Y)>.

                       ,---------------------.
                      (     Mapping System    )
                       `--+---------------+--'
                          |               |
                      +---+---+       +---+---+
                      | MS/MR |       | MS/MR |
                      +---+---+       +---+---+
                          |               |
                       ,--+---------------+--.
                      (      L3 Underlay      )
                       `--+---------------+--'
             RLOC=IP_A    |               |    RLOC=IP_B
                      +---+---+       +---+---+
                      | xTR A |       | xTR B |
                      +---+---+       +---+---+
                          |               |
                  ( LISP Site A )   ( LISP Site B )
                   /           \     /           \
           '--------' '--------'   '--------' '--------'
           :  End   : :  End   :   :  End   : :  End   :
           : Device : : Device :   : Device : : Device :
           '--------' '--------'   '--------' '--------'
            IID=1      IID=2        IID=1      IID=1
            EID=IP_W,  EID=IP_X,    EID=IP_Y,  EID=IP_Z,
            MAC_W      MAC_X        MAC_Y      MAC_Z

               Figure 2: Example of L3 NVO3 Services

   The packet flow and control plane operation are as follows (an
   illustrative sketch of the ITR-side ARP handling is given after the
   list):

   o  End system W sends a broadcast ARP message to discover the MAC
      address of end system Y.  The message contains IP_Y in the ARP
      message payload.

   o  ITR A, acting as a switch, will receive the ARP message, but
      rather than flooding it on the overlay network it sends a
      Map-Request to the mapping database system for
      <IID=1, EID=IP_Y>.

   o  The Map-Request is routed by the mapping system infrastructure to
      ETR B, which will send a Map-Reply back to ITR A containing the
      mapping <IID=1, EID=(IP_Y, MAC_Y)> -> RLOC=IP_B (the locator of
      ETR B).  Alternatively, depending on the mapping system
      configuration, a Map-Server in the mapping system may send a
      Map-Reply directly to ITR A.

   o  ITR A populates the map-cache with the received entry, and sends
      an ARP-agent reply to W that includes MAC_Y and IP_Y.

   o  End system W learns MAC_Y from the ARP message and can now send a
      packet to end system Y by including MAC_Y and IP_Y as destination
      addresses.

   o  ITR A will then process the packet as specified in Section 3.4.
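   The ITR-side logic of the list above can be sketched as follows.
   This is an illustrative sketch only: the map-cache structure and the
   helper names (MapCache, send_map_request, send_arp_reply) are
   hypothetical and not part of any LISP specification, and a real ITR
   would issue the Map-Request asynchronously rather than as a blocking
   call.

      # Illustrative sketch of ARP resolution at the ITR via the LISP
      # mapping system.  All helper names are hypothetical.
      class MapCache:
          """Per-tenant map-cache: (iid, eid) -> rloc."""
          def __init__(self):
              self._entries = {}

          def lookup(self, iid, eid):
              return self._entries.get((iid, eid))

          def add(self, iid, eid, rloc):
              self._entries[(iid, eid)] = rloc

      def send_map_request(iid, eid_ip):
          """Stand-in for a Map-Request/Map-Reply exchange with the
          mapping system.  Returns the (mac, rloc) registered for the
          queried IP EID, e.g. (MAC_Y, IP_B) in the example of
          Figure 2."""
          raise NotImplementedError("replace with a mapping system client")

      def send_arp_reply(requester_mac, target_ip, target_mac):
          """Stand-in for sending the ARP-agent reply back to W."""
          raise NotImplementedError("replace with a data plane hook")

      def handle_arp_request(cache, iid, requester_mac, target_ip):
          """Resolve an ARP request without flooding it on the overlay."""
          mac, rloc = send_map_request(iid, target_ip)   # pull on demand
          cache.add(iid, target_ip, rloc)                # <IID, IP EID> -> RLOC
          cache.add(iid, mac, rloc)                      # <IID, MAC EID> -> RLOC
          send_arp_reply(requester_mac, target_ip, mac)  # ARP-agent reply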
   This example shows how LISP, by replacing dynamic data plane
   learning (flood-and-learn), largely reduces the need for multicast
   in the underlay network, which is needed only when broadcast,
   unknown unicast or multicast are required by the applications in the
   overlay.  In practice, the LISP mapping system constrains ARP within
   the boundaries of a link-local protocol.  This simplifies the
   configuration of the underlay network and removes the significant
   scalability limitation imposed by VXLAN flood-and-learn.

   It's important to note that the use of the LISP mapping system, by
   pulling the EID-to-RLOC mapping on demand, also improves end system
   mobility across data centers.

3.5.  L3 LISP

   The two examples above show how the LISP control plane can be used
   in combination with L2 LISP, VXLAN, and NVGRE encapsulation to
   provide L2 network virtualization services across data centers.

   There is a trend, led by Massively Scalable Data Centers, that is
   accelerating the adoption of L3 network services in the data center,
   to preserve the many benefits introduced by L3 (scalability, multi-
   homing, ...).

   LISP, as defined in [I-D.ietf-lisp], provides L3 network
   virtualization services over an L3 underlay network, matching the
   requirements of DC Network Virtualization.

4.  Reference Model

4.1.  Generic LISP NVE Reference Model

   In the generic NVO3 reference model described in
   [I-D.lasserre-nvo3-framework], a Tenant End System attaches to a
   Network Virtualization Edge (NVE) either directly or via a switched
   network.

   In a LISP NVO3 network the Tenant End Systems are part of a LISP
   site, and the NVE function is provided by LISP xTRs.  xTRs provide
   for tenant separation, perform the encap/decap function, and
   interface with the LISP Mapping System that maps tenant addressing
   information (in the EID name space) onto the underlay L3
   infrastructure (in the RLOC name space).

   Tenant segregation across LISP sites is provided by the LISP
   Instance ID (IID), a 24-bit value that is used by the LISP routers
   as the Virtual Network Identifier (VNI).  Virtualization and
   segmentation with LISP are addressed in Section 5.5 of
   [I-D.ietf-lisp].

   ...............         ,---------.          ...............
   . +--------+  .       ,'           `.        .  +--------+ .
   . | Tenant |  .      ( Mapping System )      .  | Tenant | .
   . | End    +----+     `.           ,'     +----+ End    | .
   . | System |  . |       `----+----'       |  .  | System | .
   . +--------+  . |            |            |  .  +--------+ .
   .             . |  +------+  |  +------+  |  .             .
   .  LISP Site  . +--+  NV  |  |  |  NV  +--+  .  LISP Site  .
   .             . |  | Edge |  |  | Edge |  |  .             .
   . +--------+  . |  +--+---+  |  +---+--+  |  .  +--------+ .
   . | Tenant |  . |   (xTR)|   |   |(xTR)   |  .  | Tenant | .
   . | End    +----+        |   |   |        +----+ End    | .
   . | System |  .       ,--+---+---+--.        .  | System | .
   . +--------+  .      (   L3 Overlay  )       .  +--------+ .
   ...............      (    Network    )       ...............
                         `------+------'
                                |
                             +--+---+
                             |  NV  |
                             | Edge |
                             +--+---+
                              (xTR)
                    ............|.............
                    .           |  LISP Site .
                    .      +----+---+        .
                    .      | Tenant |        .
                    .      | End    |        .
                    .      | System |        .
                    .      +--------+        .
                    ..........................

      Generic reference model for DC NVO3 LISP infrastructure

4.2.  LISP NVE Service Types

   LISP can be used to support both L2 NVE and L3 NVE service types,
   thanks to the flexibility provided by the LISP Canonical Address
   Format [I-D.farinacci-lisp-lcaf], which allows EIDs to be encoded
   either as MAC addresses or as IP addresses.
4.2.1.  LISP L2 NVE Services

   The frame format defined in [I-D.mahalingam-dutt-dcops-vxlan] has a
   header compatible with the LISP data path encapsulation header when
   MAC addresses are used as EIDs, as described in Section 4.12.2 of
   [I-D.farinacci-lisp-lcaf].

   The LISP control plane is extensible, and can support non-LISP data
   path encapsulations such as NVGRE
   [I-D.sridharan-virtualization-nvgre], or other encapsulations that
   provide support for network virtualization.

4.2.2.  LISP L3 NVE Services

   LISP is defined as a virtualized IP routing and forwarding service
   in [I-D.ietf-lisp], and as such can be used to provide L3 NVE
   services.

5.  Functional Components

   This section describes the functional components of a LISP NVE as
   defined in Section 3 of [I-D.lasserre-nvo3-framework].

5.1.  Generic Service Virtualization Components

   The generic reference model for NVE is depicted in Section 3.1 of
   [I-D.lasserre-nvo3-framework].

                     +-------- L3 Network --------+
                     |                            |
                     |       Tunnel Overlay       |
        +------------+---------+        +---------+------------+
        | +----------+-------+ |        | +-------+----------+ |
        | |  Overlay Module  | |        | |  Overlay Module  | |
        | +---------+--------+ |        | +-------+----------+ |
        |           |VN context|        |VN context|           |
        |           |          |        |          |           |
        |  +--------+-------+  |        |  +-------+--------+  |
        |  |       VNI      |  |        |  |      VNI       |  |
   NVE1 |  +-+------------+-+  |        |  +-+-----------+--+  | NVE2
        |    |    VAPs    |    |        |    |    VAPs   |     |
        +----+------------+----+        +----+-----------+-----+
             |            |                  |           |
      -------+------------+------------------+-----------+-------
             |            |      Tenant      |           |
             |            |    Service IF    |           |
            Tenant End Systems              Tenant End Systems

                  Generic reference model for NV Edge

5.1.1.  Virtual Attachment Points (VAPs)

   In a LISP NVE, Tunnel Routers (xTRs) implement the NVE functionality
   on ToRs or Virtual Switches.  Tenant End Systems attach to the
   Virtual Access Points (VAPs) provided by the xTRs (either a physical
   port or a virtual interface).

5.1.2.  Overlay Modules and Tenant ID

   The xTR also implements the function of the NVE Overlay Module,
   mapping the addressing information (EIDs) of the tenant packet onto
   the appropriate locations (RLOCs) in the underlay network.  The
   Tenant Network Identifier (TNI) is encoded in the encapsulated
   packet (either in the 24-bit IID field of the LISP header for L2/L3
   LISP encapsulation, or in the 24-bit VXLAN Network Identifier field
   for VXLAN encapsulation, or in the 24-bit Tenant Network Identifier
   field of NVGRE).  In a LISP NVE, globally unique (per administrative
   domain) TNIs are used to identify the tenant instances.

   The mapping of the tenant packet address onto the underlay network
   location is "pulled" on-demand from the mapping system, and cached
   at the NVE in a per-TNI map-cache.

5.1.3.  Tenant Instance

   Tenants are mapped onto LISP Instance IDs (IIDs), and the xTR keeps
   an instance of the LISP control protocol for each IID.  The ETR is
   responsible for registering the Tenant End System with the LISP
   mapping system, via the Map-Register service provided by LISP
   Map-Servers (MS).  The Map-Register includes the IID that is used to
   identify the tenant.

5.1.4.  Tunnel Overlays and Encapsulation Options

   The LISP control protocol, as defined today, provides support for L2
   LISP and VXLAN L2 over L3 encapsulation, and LISP L3 over L3
   encapsulation.
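   As a concrete illustration of where the tenant identifier described
   in Section 5.1.2 is carried, the sketch below builds the 8-byte
   VXLAN header of [I-D.mahalingam-dutt-dcops-vxlan] and the 8-byte
   LISP data header of [I-D.ietf-lisp] with the I-bit set, each
   carrying a 24-bit VNI/Instance ID.  The field layouts follow those
   specifications; the helper names are ours.

      # Illustrative construction of the two 8-byte headers that carry
      # the 24-bit tenant identifier.
      import struct

      def vxlan_header(vni: int) -> bytes:
          """8 bytes: flags (I-bit set), 24 reserved bits, VNI (24
          bits), 8 reserved bits."""
          assert 0 <= vni < 2**24
          return struct.pack("!II", 0x08 << 24, vni << 8)

      def lisp_data_header(iid: int, nonce: int = 0, lsbs: int = 0) -> bytes:
          """8 bytes: N/L/E/V/I flags + nonce (24 bits), then Instance
          ID (24 bits) + locator status bits (8 bits).  The I-bit
          indicates that the second word carries a 24-bit Instance
          ID."""
          assert 0 <= iid < 2**24
          flags = 0x08  # only the I-bit set, for simplicity
          return struct.pack("!II", (flags << 24) | (nonce & 0xFFFFFF),
                             (iid << 8) | (lsbs & 0xFF))

      # Tenant IID/VNI 0x123456 encoded in both encapsulations.
      print(vxlan_header(0x123456).hex())      # 0800000012345600
      print(lisp_data_header(0x123456).hex())  # 0800000012345600

   In both cases the tenant identifier occupies 24 bits, which is why a
   LISP IID can be carried unchanged as a VXLAN VNI or NVGRE TNI.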
   We believe that the LISP control protocol can be easily extended to
   support different IP tunneling options (such as NVGRE).

5.1.5.  Control Plane Components

5.1.5.1.  Auto-provisioning/Service Discovery

   The LISP framework does not include mechanisms to provision the
   local NVE with the appropriate Tenant Instance for each Tenant End
   System.  Other protocols, such as VDP (in IEEE P802.1Qbg), should be
   used to implement a network attach/detach function.

   The LISP control plane can take advantage of such a network attach/
   detach function to trigger the registration of a Tenant End System
   to the Mapping System.  This is particularly helpful for handling
   Tenant End System mobility across DCs.

   It is possible to extend the LISP control protocol to advertise the
   tenant service instance (tenant and service type provided) to other
   NVEs, and facilitate interoperability between NVEs that are using
   different service types.

5.1.5.2.  Address Advertisement and Tunnel Mapping

   As traffic reaches an ingress NVE, the corresponding ITR uses the
   LISP Map-Request/Reply service to determine the location of the
   destination End System.

   The LISP mapping system combines the distribution of address
   advertisements with (stateless) tunnel provisioning.

   When EIDs are mapped onto both IP addresses and MAC addresses, the
   need to flood ARP messages at the NVE is eliminated, resolving the
   issues with explosive ARP handling.

5.1.5.3.  Tunnel Management

   LISP defines several mechanisms for determining RLOC reachability,
   including Locator Status Bits, "nonce echoing", and RLOC probing.
   Please see Sections 5.3 and 6.3 of [I-D.ietf-lisp].

6.  Key Aspects of Overlay

6.1.  Overlay Issues to Consider

6.1.1.  Data Plane vs. Control Plane Driven

   The use of the LISP control plane minimizes the need for multicast
   in the underlay network, overcoming the scalability limitations of
   VXLAN dynamic data plane learning (flood-and-learn).

   Multicast or ingress replication in the underlay network is still
   required, as specified in [I-D.ietf-lisp-multicast],
   [I-D.farinacci-lisp-mr-signaling], and [I-D.farinacci-lisp-te], to
   support broadcast, unknown unicast, and multicast traffic in the
   overlay, but multicast in the underlay is no longer required (at
   least for IP traffic) for unicast overlay services.

6.1.2.  Data Plane and Control Plane Separation

   LISP introduces a clear separation between data plane and control
   plane functions.  LISP's modular design allows for different mapping
   databases, to achieve different scalability goals and to meet the
   requirements of different deployments.

6.1.3.  Handling Broadcast, Unknown Unicast and Multicast (BUM) Traffic

   Packet replication in the underlay network to support broadcast,
   unknown unicast and multicast overlay services can be done by:

   o  Ingress replication (a short sketch is given at the end of this
      section)

   o  Use of underlay multicast trees

   [I-D.ietf-lisp-multicast] specifies how to map a multicast flow in
   the EID space during distribution tree setup and packet delivery in
   the underlay network.  LISP-multicast doesn't require packet format
   changes in multicast routing protocols, and doesn't impose changes
   in the internal operation of multicast in a LISP site.  The only
   operational changes required are in PIM-ASM [RFC4601], MSDP
   [RFC3618], and PIM-SSM [RFC4607].
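   A minimal sketch of the ingress replication option listed above: for
   each remote RLOC that hosts members of the tenant, the ingress xTR
   unicast-encapsulates one copy of the BUM frame.  The helper names
   (encapsulate, unicast_send, replication_list) are hypothetical
   stand-ins for the data plane and for state learned from the mapping
   system.

      # Illustrative sketch of ingress (head-end) replication of BUM
      # traffic: one unicast-encapsulated copy per remote RLOC of the
      # tenant (IID).  encapsulate() and unicast_send() are
      # hypothetical hooks.
      from typing import Dict, List

      def replicate_bum_frame(frame: bytes, iid: int,
                              replication_list: Dict[int, List[str]],
                              encapsulate, unicast_send) -> None:
          """Send one encapsulated copy of frame to each remote RLOC
          registered for the given IID."""
          for rloc in replication_list.get(iid, []):
              unicast_send(rloc, encapsulate(frame, iid, rloc))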
7.  Security Considerations

   [I-D.ietf-lisp-sec] defines a set of security mechanisms that
   provide origin authentication, integrity and anti-replay protection
   to LISP's EID-to-RLOC mapping data conveyed via the mapping lookup
   process.  LISP-SEC also enables verification of authorization on
   EID-prefix claims in Map-Reply messages.

   Additional security mechanisms to protect the LISP Map-Register
   messages are defined in [I-D.ietf-lisp-ms].

   The security of the Mapping System Infrastructure depends on the
   particular mapping database used.  The [I-D.fuller-lisp-ddt]
   specification, as an example, defines a public-key-based mechanism
   that provides origin authentication and integrity protection to the
   LISP DDT protocol.

8.  IANA Considerations

   This document has no IANA implications.

9.  Acknowledgements

   The authors want to thank Victor Moreno and Paul Quinn for their
   early review, insightful comments, and suggestions.

10.  References

10.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC3618]  Fenner, B. and D. Meyer, "Multicast Source Discovery
              Protocol (MSDP)", RFC 3618, October 2003.

   [RFC4601]  Fenner, B., Handley, M., Holbrook, H., and I. Kouvelas,
              "Protocol Independent Multicast - Sparse Mode (PIM-SM):
              Protocol Specification (Revised)", RFC 4601, August 2006.

   [RFC4607]  Holbrook, H. and B. Cain, "Source-Specific Multicast for
              IP", RFC 4607, August 2006.

10.2.  Informative References

   [I-D.farinacci-lisp-lcaf]
              Farinacci, D., Meyer, D., and J. Snijders, "LISP
              Canonical Address Format (LCAF)",
              draft-farinacci-lisp-lcaf-07 (work in progress),
              March 2012.

   [I-D.farinacci-lisp-mr-signaling]
              Farinacci, D. and M. Napierala, "LISP Control-Plane
              Multicast Signaling",
              draft-farinacci-lisp-mr-signaling-00 (work in progress),
              July 2012.

   [I-D.farinacci-lisp-te]
              Farinacci, D., Lahiri, P., and M. Kowal, "LISP Traffic
              Engineering Use-Cases", draft-farinacci-lisp-te-00 (work
              in progress), March 2012.

   [I-D.fuller-lisp-ddt]
              Fuller, V. and D. Lewis, "LISP Delegated Database Tree",
              draft-fuller-lisp-ddt-01 (work in progress), March 2012.

   [I-D.ietf-lisp]
              Farinacci, D., Fuller, V., Meyer, D., and D. Lewis,
              "Locator/ID Separation Protocol (LISP)",
              draft-ietf-lisp-23 (work in progress), May 2012.

   [I-D.ietf-lisp-alt]
              Fuller, V., Farinacci, D., Meyer, D., and D. Lewis, "LISP
              Alternative Topology (LISP+ALT)", draft-ietf-lisp-alt-10
              (work in progress), December 2011.

   [I-D.ietf-lisp-interworking]
              Lewis, D., Meyer, D., Farinacci, D., and V. Fuller,
              "Interworking LISP with IPv4 and IPv6",
              draft-ietf-lisp-interworking-06 (work in progress),
              March 2012.

   [I-D.ietf-lisp-ms]
              Fuller, V. and D. Farinacci, "LISP Map Server Interface",
              draft-ietf-lisp-ms-14 (work in progress), December 2011.

   [I-D.ietf-lisp-multicast]
              Farinacci, D., Meyer, D., Zwiebel, J., and S. Venaas,
              "LISP for Multicast Environments",
              draft-ietf-lisp-multicast-14 (work in progress),
              February 2012.

   [I-D.ietf-lisp-sec]
              Maino, F., Ermagan, V., Cabellos-Aparicio, A., Saucez,
              D., and O. Bonaventure, "LISP-Security (LISP-SEC)",
              draft-ietf-lisp-sec-02 (work in progress), March 2012.

   [I-D.lasserre-nvo3-framework]
              Lasserre, M., Balus, F., Morin, T., Bitar, N., and Y.
              Rekhter, "Framework for DC Network Virtualization",
              draft-lasserre-nvo3-framework-02 (work in progress),
              June 2012.
   [I-D.mahalingam-dutt-dcops-vxlan]
              Sridhar, T., Bursell, M., Kreeger, L., Dutt, D., Wright,
              C., Mahalingam, M., Duda, K., and P. Agarwal, "VXLAN: A
              Framework for Overlaying Virtualized Layer 2 Networks
              over Layer 3 Networks",
              draft-mahalingam-dutt-dcops-vxlan-01 (work in progress),
              February 2012.

   [I-D.narten-nvo3-overlay-problem-statement]
              Narten, T., Sridharan, M., Dutt, D., Black, D., and L.
              Kreeger, "Problem Statement: Overlays for Network
              Virtualization",
              draft-narten-nvo3-overlay-problem-statement-02 (work in
              progress), June 2012.

   [I-D.smith-lisp-layer2]
              Smith, M. and D. Dutt, "Layer 2 (L2) LISP Encapsulation
              Format", draft-smith-lisp-layer2-00 (work in progress),
              March 2011.

   [I-D.sridharan-virtualization-nvgre]
              Sridharan, M., Duda, K., Ganga, I., Greenberg, A., Lin,
              G., Pearson, M., Thaler, P., Tumuluri, C., and Y. Wang,
              "NVGRE: Network Virtualization using Generic Routing
              Encapsulation", draft-sridharan-virtualization-nvgre-00
              (work in progress), September 2011.

Authors' Addresses

   Fabio Maino
   Cisco Systems
   170 Tasman Drive
   San Jose, California  95134
   USA

   Email: fmaino@cisco.com


   Vina Ermagan
   Cisco Systems
   170 Tasman Drive
   San Jose, California  95134
   USA

   Email: vermagan@cisco.com


   Dino Farinacci
   Cisco Systems
   170 Tasman Drive
   San Jose, California  95134
   USA

   Email: dino@cisco.com


   Michael Smith
   Insieme Networks
   California
   USA

   Email: michsmit@insiemenetworks.com