Network Working Group                                           L. Jakab
Internet-Draft                                             Cisco Systems
Intended status: Experimental                       A. Cabellos-Aparicio
Expires: July 21, 2014                                          F. Coras
                                                      J. Domingo-Pascual
                                                 Technical University of
                                                               Catalonia
                                                                D. Lewis
                                                           Cisco Systems
                                                        January 17, 2014


             LISP Network Element Deployment Considerations
                    draft-ietf-lisp-deployment-12.txt

Abstract

   This document is a snapshot of different Locator/Identifier
   Separation Protocol (LISP) deployment scenarios.  It discusses the
   placement of new network elements introduced by the protocol,
   representing the thinking of the LISP working group as of Summer
   2013.  LISP deployment scenarios may have evolved since.  This memo
   represents one stable point in that evolution of understanding.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on July 21, 2014.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Tunnel Routers
     2.1.  Deployment Scenarios
       2.1.1.  Customer Edge
       2.1.2.  Provider Edge
       2.1.3.  Tunnel Routers Behind NAT
         2.1.3.1.  ITR
         2.1.3.2.  ETR
         2.1.3.3.  Additional Notes
     2.2.  Functional Models with Tunnel Routers
       2.2.1.  Split ITR/ETR
       2.2.2.  Inter-Service Provider Traffic Engineering
     2.3.  Summary and Feature Matrix
   3.  Map Resolvers and Map Servers
     3.1.  Map Servers
     3.2.  Map Resolvers
   4.  Proxy Tunnel Routers
     4.1.  P-ITR
     4.2.  P-ETR
   5.  Migration to LISP
     5.1.  LISP+BGP
     5.2.  Mapping Service Provider (MSP) P-ITR Service
     5.3.  Proxy-ITR Route Distribution (PITR-RD)
     5.4.  Migration Summary
   6.  Security Considerations
   7.  IANA Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Appendix A.  Step-by-Step Example BGP to LISP Migration Procedure
     A.1.  Customer Pre-Install and Pre-Turn-up Checklist
     A.2.  Customer Activating LISP Service
     A.3.  Cut-Over Provider Preparation and Changes
   Authors' Addresses

1.  Introduction

   The Locator/Identifier Separation Protocol (LISP) is designed to
   address the scaling issues of the global Internet routing system
   identified in [RFC4984] by separating the current addressing scheme
   into Endpoint IDentifiers (EIDs) and Routing LOCators (RLOCs).  The
   main protocol specification [RFC6830] describes how the separation
   is achieved, which new network elements are introduced, and details
   the packet formats for the data and control planes.

   LISP assumes that such separation is between the edge and core and
   uses mapping and encapsulation for forwarding.  While the boundary
   between both is not strictly defined, one widely accepted definition
   places it at the border routers of stub autonomous systems, which
   may carry a partial or complete default-free zone (DFZ) routing
   table.  The initial design of LISP took this location as a baseline
   for protocol development.  However, the applications of LISP go
   beyond just decreasing the size of the DFZ routing table, and
   include improved multihoming and ingress traffic engineering (TE)
   support for edge networks, and even individual hosts.  Throughout
   the document we will use the term "LISP site" to refer to these
   networks/hosts behind a LISP Tunnel Router.  We formally define the
   following two terms:

   Network element:  Facility or equipment used in the provision of a
      communications service over the Internet [TELCO96].
   LISP site:  A single host or a set of network elements in an edge
      network under the administrative control of a single
      organization, delimited from other networks by LISP Tunnel
      Router(s).

   Since LISP is a protocol that can be used for different purposes, it
   is important to identify possible deployment scenarios and the
   additional requirements they may impose on the protocol
   specification and other protocols.  Additionally, this document is
   intended as a guide for the operational community for LISP
   deployments in their networks.  It is expected to evolve as LISP
   deployment progresses, and the described scenarios are better
   understood or new scenarios are discovered.

   Each subsection considers an element type, discussing the impact of
   deployment scenarios on the protocol specification.  For definitions
   of terms, please refer to the appropriate documents (as cited in the
   respective sections).

   This experimental document, like the LISP specifications it
   discusses, has areas that require additional experience and
   measurement.  LISP is not recommended for deployment beyond
   experimental situations.  Results of experimentation may lead to
   modifications and enhancements of the LISP protocol mechanisms.
   Additionally, at the time of this writing there is no standardized
   security to implement.  Beware that there are no countermeasures
   for any of the threats identified in [I-D.ietf-lisp-threats].  See
   Section 15 of [RFC6830] for specific, known issues that are in need
   of further work during development, implementation, and
   experimentation, and [I-D.ietf-lisp-threats] for recommendations to
   ameliorate the above-mentioned security threats.

2.  Tunnel Routers

   The device that is the gateway between the edge and the core is
   called a Tunnel Router (xTR); it performs one or both of two
   separate functions:

   1.  Encapsulating packets originating from an end host to be
       transported over intermediary (transit) networks towards the
       other endpoint of the communication

   2.  Decapsulating packets entering from intermediary (transit)
       networks, originated at a remote end host.

   The first function is performed by an Ingress Tunnel Router (ITR),
   the second by an Egress Tunnel Router (ETR).

   Section 8 of the main LISP specification [RFC6830] has a short
   discussion of where Tunnel Routers can be deployed and some of the
   associated advantages and disadvantages.  This section adds more
   detail to the scenarios presented there, and provides additional
   scenarios as well.  Furthermore, this section discusses functional
   models, that is, network functions that can be achieved by deploying
   Tunnel Routers in specific ways.

2.1.  Deployment Scenarios

2.1.1.  Customer Edge

   The first scenario we discuss is the customer edge, where xTR
   functionality is placed on the router(s) that connect the LISP site
   to its upstream(s), but are under the site's control.  As such, this
   is the most commonly expected scenario for xTRs, and this document
   considers it the reference location, comparing the other scenarios
   to this one.

             ISP1    ISP2
               |       |
               |       |
            +----+  +----+
         +--|xTR1|--|xTR2|--+
         |  +----+  +----+  |
         |                  |
         |    LISP site     |
         +------------------+

           Figure 1: xTRs at the customer edge

   From the LISP site perspective, the main advantage of this type of
   deployment (compared to the one described in the next section) is
   having direct control over its ingress traffic engineering.  This
   makes it easy to set up and maintain active/active, active/backup,
   or more complex TE policies, adding ISPs and additional xTRs at
   will, without involving third parties.
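Such TE policies are expressed through the Priority and Weight fields of the locator records a site registers for its EID prefix ([RFC6830]): the lowest Priority wins, and Weight balances load among equal-priority locators. A minimal sketch of that selection logic follows; the RLOC addresses are illustrative (documentation prefixes), not taken from this document.

```python
import random

def select_rloc(locators):
    """Pick an ingress RLOC following RFC 6830 semantics: the lowest
    Priority wins; among equal-priority locators, load-balance in
    proportion to Weight.  Priority 255 means "do not use"."""
    usable = [l for l in locators if l["priority"] != 255]
    best = min(l["priority"] for l in usable)
    candidates = [l for l in usable if l["priority"] == best]
    total = sum(l["weight"] for l in candidates)
    pick = random.uniform(0, total)
    for loc in candidates:
        pick -= loc["weight"]
        if pick <= 0:
            return loc["rloc"]
    return candidates[-1]["rloc"]

# Active/active with a 75/25 split across the two upstreams of Figure 1
active_active = [
    {"rloc": "192.0.2.1", "priority": 1, "weight": 75},     # via ISP1
    {"rloc": "198.51.100.1", "priority": 1, "weight": 25},  # via ISP2
]

# Active/backup: the ISP2 locator is only used if ISP1's is withdrawn
active_backup = [
    {"rloc": "192.0.2.1", "priority": 1, "weight": 100},
    {"rloc": "198.51.100.1", "priority": 2, "weight": 100},
]

assert select_rloc(active_backup) == "192.0.2.1"
```

With equal priorities, roughly 75% of ingress traffic would arrive via ISP1's locator; raising a locator's Priority value demotes it to backup.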
   Being under the same administrative control, reachability
   information of all ETRs is easier to synchronize, because the
   necessary control traffic can be allowed between the locators of
   the ETRs.  A correct synchronous global view of the reachability
   status is thus available, and the Locator Status Bits
   (Loc-Status-Bits, defined in [RFC6830]) can be set correctly in the
   LISP data header of outgoing packets.

   By placing the Tunnel Router at the edge of the site, existing
   internal network configuration does not need to be modified.
   Firewall rules, router configurations, and address assignments
   inside the LISP site remain unchanged.  This helps with incremental
   deployment and allows a quick upgrade path to LISP.  For larger
   sites with many external connections, distributed in geographically
   diverse points of presence (PoPs), and complex internal topology,
   it may however make more sense to both encapsulate and decapsulate
   as soon as possible, to benefit from the information in the IGP to
   choose the best path (see Section 2.2.1 for a discussion of this
   scenario).

   Another consideration when placing Tunnel Routers is MTU issues.
   Encapsulation increases the amount of overhead associated with each
   packet: for IPv4, the outer IP, UDP, and LISP headers add 36 bytes.
   This added overhead decreases the effective end-to-end path MTU
   (unless fragmentation and reassembly are used).  Some transit
   networks are known to provide a larger MTU than the typical value
   of 1500 bytes of popular access technologies used at end hosts
   (e.g., IEEE 802.3 and 802.11).  However, placing the LISP router
   connecting to such a network at the customer edge could possibly
   bring up MTU issues, depending on the link type to the provider, as
   opposed to the following scenario.  See [RFC4459] for MTU
   considerations of tunneling protocols and how to mitigate potential
   issues.  Even with these mitigations, however, path MTU issues are
   still possible.

2.1.2.  Provider Edge

   The other location at the core-edge boundary for deploying LISP
   routers is the Internet service provider edge.  The main incentive
   for this case is that the customer does not have to upgrade the CE
   router(s) or change the configuration of any equipment.
   Encapsulation/decapsulation happens in the provider's network,
   which may be able to serve several customers with a single device.
   For large ISPs with many residential/business customers asking for
   LISP, this can lead to important savings, since there is no need to
   upgrade the software (or hardware, if that is the case) at each
   client's location.  Instead, they can upgrade the software (or
   hardware) on a few PE routers serving the customers.  This scenario
   is depicted in Figure 2.

        +----------+       +------------------+
        |   ISP1   |       |       ISP2       |
        |          |       |                  |
        |  +----+  |       |  +----+  +----+  |
        +--|xTR1|--+       +--|xTR2|--|xTR3|--+
           +----+             +----+  +----+
             |                  |       |
             |                  |       |
             +--<[LISP site]>---+-------+

                  Figure 2: xTR at the PE

   While this approach can make the transition easy for customers and
   may be cheaper for providers, the LISP site loses one of the main
   benefits of LISP: ingress traffic engineering.  Since the provider
   controls the ETRs, additional complexity would be needed to allow
   customers to modify their mapping entries.

   The problem is aggravated when the LISP site is multihomed.
   Consider the scenario in Figure 2: whenever a change to TE policies
   is required, the customer contacts both ISP1 and ISP2 to make the
   necessary changes on the routers (if they provide this
   possibility).  It is, however, unlikely that both ISPs will apply
   changes simultaneously, which may lead to inconsistent state for
   the mappings of the LISP site.  Since the different upstream ISPs
   are usually competing business entities, the ETRs may even be
   configured to compete, either to attract all the traffic or to get
   no traffic.
   The former will happen if the customer pays per volume, the latter
   if the connectivity has a fixed price.  A solution could be to
   configure the Map Server(s) to do proxy replying and have the
   Mapping Service Provider (MSP) apply policies.

   Additionally, since xTR1, xTR2, and xTR3 are in different
   administrative domains, locator reachability information is
   unlikely to be exchanged among them, making it difficult to set
   Loc-Status-Bits (LSB) correctly on encapsulated packets.  Because
   of this, and due to the security concerns about LSB described in
   [I-D.ietf-lisp-threats], their use is discouraged (set the L-bit to
   0).  Mapping versioning is another alternative [RFC6834].

   Compared to the customer edge scenario, deploying LISP at the
   provider edge might have the advantage of diminishing potential MTU
   issues, because the Tunnel Router is closer to the core, where
   links typically have higher MTUs than edge network links.

2.1.3.  Tunnel Routers Behind NAT

   NAT in this section refers to IPv4 network address and port
   translation.

2.1.3.1.  ITR

        _.--.                                       _.--.
    ,-''     `--.            +-------+          ,-''     `--.
   '     EID     `(Private)  |  NAT  | (Public) '     RLOC     `
  (               )--[ITR]---|       |---------(                )
   .    space    ,'(Address) |  Box  |(Address) .     space    ,'
    `--.     _.-'            +-------+          `--.      _.-'
        `--''                                       `--''

                    Figure 3: ITR behind NAT

   Packets encapsulated by an ITR are just UDP packets from a NAT
   device's point of view, and they are handled like any UDP packet;
   there are no additional requirements for LISP data packets.

   Map-Requests sent by an ITR, which create the state in the NAT
   table, have a different 5-tuple in the IP header than the Map-Reply
   generated by the authoritative ETR.  Since the source address of
   this packet is different from the destination address of the
   request packet, no state will be matched in the NAT table and the
   packet will be dropped.
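A toy connection-tracking model illustrates why the Map-Reply is dropped; the addresses are hypothetical (documentation prefixes), not part of the specification.

```python
# Toy stateful NAT: an outbound UDP packet creates 5-tuple state, and
# an inbound packet is accepted only if it reverses a tracked tuple.
nat_state = set()

def outbound(src, sport, dst, dport):
    nat_state.add(("udp", src, sport, dst, dport))

def inbound_allowed(src, sport, dst, dport):
    # Reverse the inbound tuple and look for matching outbound state.
    return ("udp", dst, dport, src, sport) in nat_state

# The ITR (10.0.0.2) sends a Map-Request toward the Map Resolver at
# 203.0.113.1, source and destination port 4342.
outbound("10.0.0.2", 4342, "203.0.113.1", 4342)

# The Map-Reply arrives from the authoritative ETR (198.51.100.7),
# not from the Map Resolver, so no tracked state matches and the NAT
# drops the packet.
assert not inbound_allowed("198.51.100.7", 4342, "10.0.0.2", 4342)
```

Since it is the reply's source address that differs, ordinary destination-based port forwarding does not help either, which is why the NAT device needs the special handling described below.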
   To avoid this, the NAT device has to do the following:

   o  Send all UDP packets with source port 4342, regardless of the
      destination port, to the RLOC of the ITR.  The simplest way to
      achieve this is by configuring 1:1 NAT mode from the external
      RLOC of the NAT device to the ITR's RLOC (called "DMZ" mode in
      consumer broadband routers).

   o  Rewrite the ITR-AFI and "Originating ITR RLOC Address" fields in
      the payload.

   This setup supports only a single ITR behind the NAT device.

2.1.3.2.  ETR

   An ETR placed behind NAT is reachable from the outside by the
   Internet-facing locator of the NAT device.  It needs to know this
   locator (and configure a loopback interface with it), so that it
   can use it in Map-Reply and Map-Register messages.  Thus, support
   for dynamic locators for the mapping database is needed in LISP
   equipment.

   Again, only one ETR behind the NAT device is supported.

        _.--.                                       _.--.
    ,-''     `--.            +-------+          ,-''     `--.
   '     EID     `(Private)  |  NAT  | (Public) '     RLOC     `
  (               )--[ETR]---|       |---------(                )
   .    space    ,'(Address) |  Box  |(Address) .     space    ,'
    `--.     _.-'            +-------+          `--.      _.-'
        `--''                                       `--''

                    Figure 4: ETR behind NAT

2.1.3.3.  Additional Notes

   An implication of the issues described above is that LISP sites
   with xTRs cannot be behind carrier-based NATs, since two different
   sites would collide on the port forwarding.  An alternative to
   static hole-punching to explore is the use of the Port Control
   Protocol (PCP) [RFC6887].

   We only include this scenario for completeness, to show that a LISP
   site can be deployed behind NAT, should it become necessary.
   However, LISP deployments behind NAT should be avoided, if
   possible.

2.2.  Functional Models with Tunnel Routers

   This section describes how certain LISP deployments can provide
   network functions.

2.2.1.  Split ITR/ETR

   In a simple LISP deployment, xTRs are located at the border of the
   LISP site (see Section 2.1.1).  In this scenario, packets are
   routed inside the domain according to the EID.  However, more
   complex networks may want to route packets according to the
   destination RLOC.  This would enable them to choose the best egress
   point.

   The LISP specification separates the ITR and ETR functionality and
   allows both entities to be deployed in separate network equipment.
   ITRs can be deployed closer to the host (i.e., on access routers).
   This way, packets are encapsulated as soon as possible, and egress
   point selection is driven by operational policy.  In turn, ETRs can
   be deployed at the border routers of the network, so that packets
   are decapsulated as soon as possible.  Once decapsulated, packets
   are routed based on the destination EID, according to internal
   routing policy.

   The following figure shows an example.  The source (S) transmits
   packets using its EID, and in this particular case packets are
   encapsulated at ITR_1.  The encapsulated packets are routed inside
   the domain according to the destination RLOC and can egress the
   network through the best point (i.e., closer to the RLOC's AS).  On
   the other hand, inbound packets are received by ETR_1, which
   decapsulates them.  Then, packets are routed towards S according to
   the EID, again following the best path.
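The split of responsibilities can be sketched as follows; the names follow Figure 5, while the map-cache content and IGP egress table are purely illustrative assumptions.

```python
# Sketch of split ITR/ETR forwarding with a pre-populated map-cache.
MAP_CACHE = {"EID-prefix-D": "RLOC_A"}            # destination EID -> RLOC
IGP_BEST_EXIT = {"RLOC_A": "ETR_1", "RLOC_B": "ETR_2"}  # chosen by the IGP

def itr_encapsulate(dst_eid_prefix):
    """Access-layer ITR: encapsulate as soon as possible; the outer
    header carries the RLOC, so the IGP routes toward the best exit."""
    rloc = MAP_CACHE[dst_eid_prefix]
    return {"outer_dst": rloc, "exit": IGP_BEST_EXIT[rloc]}

def etr_decapsulate(packet):
    """Border ETR: decapsulate as soon as possible; the inner EID is
    then routed according to internal routing policy."""
    return {"route_on": packet["inner_dst_eid"]}

pkt = itr_encapsulate("EID-prefix-D")
assert pkt == {"outer_dst": "RLOC_A", "exit": "ETR_1"}
```

The point of the sketch is that the egress decision is made by ordinary RLOC routing in the IGP, not by the ITR itself.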
    +---------------------------------------+
    |                                       |
    |  +-------+              +-------+     |     +-------+
    |  | ITR_1 |---------+    | ETR_1 |--RLOC_A---| ISP_A |
    |  +-------+         |    +-------+     |     +-------+
    |   +-+              |        |         |
    |   |S|              |  IGP   |         |
    |   +-+              |        |         |
    |  +-------+         |    +-------+     |     +-------+
    |  | ITR_2 |---------+    | ETR_2 |--RLOC_B---| ISP_B |
    |  +-------+              +-------+     |     +-------+
    |                                       |
    +---------------------------------------+

              Figure 5: Split ITR/ETR Scenario

   This scenario has a set of implications:

   o  The site must carry more-specific routes in order to choose the
      best egress point, and typically BGP is used for this,
      increasing the complexity of the network.  However, this is
      usually already the case for LISP sites that would benefit from
      this scenario.

   o  If the site is multihomed to different ISPs and any of the
      upstream ISPs are doing uRPF filtering, this scenario may become
      impractical.  ITRs need to determine the exit ETR, in order to
      set the correct source RLOC in the encapsulation header.  This
      adds complexity and reliability concerns.

   o  In LISP, ITRs set the reachability bits when encapsulating data
      packets.  Hence, ITRs need a mechanism to be aware of the
      liveness of all ETRs serving their site.

   o  The MTU within the site network must be large enough to
      accommodate encapsulated packets.

   o  In this scenario, each ITR is serving fewer hosts than in the
      case when it is deployed at the border of the network.  It has
      been shown that cache hit ratio grows logarithmically with the
      number of users [CACHE].  Taking this into account, when ITRs
      are deployed closer to the host, the effectiveness of the
      mapping cache may be lower (i.e., the miss ratio is higher).
      Another consequence of this is that the site may transmit a
      higher number of Map-Requests, increasing the load on the
      distributed mapping database.
   o  By placing the ITRs inside the site, they will still need global
      RLOCs, and this may add complexity to intra-site routing
      configuration, and further intra-site issues when there is a
      change of providers.

2.2.2.  Inter-Service Provider Traffic Engineering

   At the time of this writing, if two ISPs want to control their
   ingress TE policies for transit traffic between them, they need to
   rely on existing BGP mechanisms.  This typically means
   deaggregating prefixes to choose on which upstream link packets
   should enter.  This is either not feasible (if fine-grained
   per-customer control is required, the very specific prefixes may
   not be propagated) or increases DFZ table size.

   Typically, LISP is seen as applicable only to stub networks;
   however, the LISP protocol can also be applied in a recursive
   manner, providing service provider ingress/egress TE capabilities
   without impacting the DFZ table size.

   In order to implement this functionality with LISP, consider the
   scenario depicted in Figure 6.  The two ISPs willing to achieve
   ingress/egress TE are labeled ISP_A and ISP_B.  They service Stub1
   and Stub2, respectively, which are both required to be LISP sites
   with their own xTRs.  In this scenario, we assume that Stub1 and
   Stub2 are communicating with each other, and thus ISP_A and ISP_B
   offer transit for such communications.  ISP_A has RLOC_A1 and
   RLOC_A2 as upstream IP addresses, while ISP_B has RLOC_B1 and
   RLOC_B2.  The shared goal among ISP_A and ISP_B is to control the
   transit traffic flow between RLOC_A1/A2 and RLOC_B1/B2.

                                   _.--.
   Stub1 ... +-------+         ,-''     `--.         +-------+ ... Stub2
          \  |   R_A1|------,'              `.-------|R_B1   |  /
           --|       |     (     Transit     )       |       |--
   ... .../  |   R_A2|------.               ,'-------|R_B2   |  \... ...
             +-------+       `--.       _.-'         +-------+
   ...  ...    ISP_A             `--''                 ISP_B    ...  ...

            Figure 6: Inter-Service Provider TE scenario

   Both ISPs deploy xTRs on RLOC_A1/A2 and RLOC_B1/B2, respectively,
   and reach a bilateral agreement to deploy their own private mapping
   system.  This mapping system contains bindings between the RLOCs of
   Stub1 and Stub2 (owned by ISP_A and ISP_B, respectively) and
   RLOC_A1/A2 and RLOC_B1/B2.  Such bindings are in fact the TE
   policies between both ISPs, and the convergence time is expected to
   be fast, since the ISPs only have to update/query a mapping to/from
   the database.

   The packet flow is as follows.  First, a packet originated at Stub1
   towards Stub2 is LISP encapsulated by Stub1's xTR.  The xTR of
   ISP_A recursively encapsulates it and, according to the TE policies
   stored in the private mapping system, chooses RLOC_B1 or RLOC_B2 as
   the outer encapsulation destination.  Note that the packet transits
   between ISP_A and ISP_B double-encapsulated.  Upon reception at the
   xTR of ISP_B, the packet is decapsulated and sent towards Stub2,
   which performs the last decapsulation.

   This deployment scenario, which uses recursive LISP, includes three
   important caveats.  First, it is intended to be deployed between
   only two ISPs.  If more than two ISPs use this approach, then the
   xTRs deployed at the participating ISPs must either query multiple
   mapping systems, or the ISPs must agree on a common shared mapping
   system.  Furthermore, keeping this deployment scenario restricted
   to only two ISPs keeps the solution scalable, given that only two
   entities need to agree on using recursive LISP, and only one
   private mapping system is involved.

   Second, the scenario is only recommended for ISPs providing
   connectivity to LISP sites, such that the source RLOCs of packets
   to be recursively encapsulated belong to said ISP.
   Otherwise, the participating ISPs must register prefixes they do
   not own in the above-mentioned private mapping system.  This
   results in either requiring complex authentication mechanisms or
   enabling simple traffic redirection attacks.  Failure to follow
   these recommendations may lead to operational security issues when
   deploying this scenario.

   And third, recursive encapsulation models are typically complex to
   troubleshoot and debug.

   Besides these recommendations, the main disadvantages of this
   deployment case are:

   o  An extra LISP header is needed.  This increases the packet size
      and requires that the MTU between both ISPs can accommodate
      double-encapsulated packets.

   o  The ISP ITR must encapsulate packets and therefore must know the
      RLOC-to-RLOC bindings.  These bindings are stored in a mapping
      database and may be cached in the ITR's mapping cache.  Cache
      misses lead to additional lookup latency, unless a push-based
      mapping system is used for the private mapping system.

   o  The operational overhead of maintaining the shared mapping
      database.

2.3.  Summary and Feature Matrix

   When looking at the deployment scenarios and functional models
   above, there are several things to consider when choosing the
   appropriate one, depending on the type of organization doing the
   deployment.

   For home users and small sites who wish to multihome and have
   control over their ISP options, the "CE" scenario offers the most
   advantages: it is simple to deploy, as in some cases it only
   requires a software upgrade of the CPE, obtaining mapping service,
   and configuring the router, and it retains user control over TE and
   the choice of upstreams.  It doesn't provide many advantages to
   ISPs, due to the lessened dependence on their services in the case
   of multihomed clients.
   It is also unlikely that ISPs wishing to offer LISP to their
   customers will choose the "CE" placement: they would need to send a
   technician to each customer, and potentially install a new CPE.
   Even if they have remote control over the router, and a software
   upgrade could add LISP support, the operation is too risky.

   For a network operator, a good option to deploy is the "PE"
   scenario, unless a hardware upgrade is required for its edge
   routers to support LISP (in which case upgrading CPEs may be
   simpler).  It retains control of TE, the choice of P-ETR, and the
   MS/MR.  It also lowers potential MTU issues, as discussed above.
   Network operators should also explore the "Inter-SP TE" (recursive)
   functional model for their TE needs.

   Large organizations can benefit the most from the "Split ITR/ETR"
   functional model, to optimize their traffic flow.

   The following table gives a quick overview of the features
   supported by each of the deployment scenarios discussed above
   (marked with an "x") in the appropriate column: "CE" for customer
   edge, "PE" for provider edge, "Split" for split ITR/ETR,
   "Recursive" for inter-service provider traffic engineering, and
   "NAT" for Tunnel Routers behind NAT.  The discussed features
   include:

   Control of ingress TE:  The scenario allows the LISP site to easily
      control LISP ingress traffic engineering policies.

   No modifications to existing int. network infrastructure:  The
      scenario doesn't require the LISP site to modify internal
      network configurations.

   Loc-Status-Bits sync:  The scenario allows easy synchronization of
      the Locator Status Bits.

   MTU/PMTUD issues minimized:  The scenario minimizes potential MTU
      and Path MTU Discovery issues.

   Feature                          CE   PE   Split   Recursive   NAT
   -------------------------------------------------------------------
   Control of ingress TE            x    -      x         x        x
   No modifications to existing
     int. network infrastructure    x    x      -         -        x
   Loc-Status-Bits sync             x    -      x         x        -
   MTU/PMTUD issues minimized       -    x      -         -        -

3.  Map Resolvers and Map Servers

   Map Resolvers and Map Servers make up the LISP mapping system and
   provide a means to find authoritative EID-to-RLOC mapping
   information, conforming to [RFC6833].  They are meant to be
   deployed in RLOC space, and their operation behind NAT is not
   supported.

3.1.  Map Servers

   The Map Server learns EID-to-RLOC mapping entries from an
   authoritative source and publishes them in the distributed mapping
   database.  These entries are learned through authenticated
   Map-Register messages sent by authoritative ETRs.  Also, upon
   reception of a Map-Request, the Map Server verifies that the
   destination EID matches an EID-prefix for which it is
   authoritative, and then re-encapsulates and forwards it to a
   matching ETR.  Map Server functionality is described in detail in
   [RFC6833].

   The Map Server is provided by a Mapping Service Provider (MSP).
   The MSP participates in the global distributed mapping database
   infrastructure by setting up connections to other participants,
   according to the specific mapping system that is employed (e.g.,
   ALT [RFC6836] or DDT [I-D.ietf-lisp-ddt]).  Participation in the
   mapping database and the storing of EID-to-RLOC mapping data are
   subject to the policies of the "root" operators, who should check
   ownership rights for the EID prefixes stored in the database by
   participants.  These policies are out of the scope of this
   document.

   The LISP DDT protocol is used by LISP Mapping Service Providers to
   provide reachability between those providers' Map Resolvers and
   Map Servers.  The DDT Root is currently operated by a collection of
   organizations on an open basis.  See [DDT-ROOT] for more details.
632 Similarly to the DNS root, it has several different server instances 633 named after letters of the Greek alphabet (alpha, delta, 634 etc.), operated by independent organizations. When this document was 635 published, there were five such instances, one of them anycast. 636 The Root provides the list of server instances on its web site, along with 637 configuration files for several map server implementations. The DDT 638 Root and LISP Mapping Providers both rely on and abide by the existing 639 allocation policies of the Regional Internet Registries to determine 640 prefix ownership for use as EIDs. 642 It is expected that the DDT root organizations will continue to 643 evolve in response to experimentation with LISP deployments for 644 Internet edge multi-homing and VPN use cases. 646 In all cases, the MSP configures its Map Server(s) to publish the 647 prefixes of its clients in the distributed mapping database and to start 648 encapsulating and forwarding Map-Requests to the ETRs of the AS. 649 These ETRs register their prefix(es) with the Map Server(s) through 650 periodic authenticated Map-Register messages. In this context, for 651 some LISP sites, there is a need for mechanisms to: 653 o Automatically distribute the shared keys for EID prefix(es) between the 654 ETRs and the EID-registrar Map Server. 656 o Dynamically obtain, at the ETR of the 657 AS, the address of the Map Server. 659 The Map Server plays a key role in the reachability of the EID- 660 prefixes it is serving. On the one hand, it is publishing these 661 prefixes into the distributed mapping database, and on the other hand, 662 it is encapsulating and forwarding Map-Requests to the authoritative 663 ETRs of these prefixes. ITRs encapsulating towards EIDs under the 664 responsibility of a failed Map Server will be unable to look up any 665 of their covering prefixes. The only exceptions are the ITRs that 666 already contain the mappings in their local cache.
In this case, ITRs 667 can reach ETRs until the entry expires (typically 24 hours). For 668 this reason, redundant Map Server deployments are desirable. A set 669 of Map Servers providing high-availability service to the same set of 670 prefixes is called a redundancy group. ETRs are configured to send 671 Map-Register messages to all Map Servers in the redundancy group. 672 The configuration for fail-over (or load-balancing, if desired) among 673 the members of the group depends on the technology behind the mapping 674 system being deployed. Since ALT is based on BGP and DDT was 675 inspired by the Domain Name System (DNS), deployments can leverage 676 current industry best practices for redundancy in BGP and DNS. These 677 best practices are out of the scope of this document. 679 Additionally, if a Map Server has no reachability for any ETR serving 680 a given EID block, it should not originate that block into the 681 mapping system. 683 3.2. Map Resolvers 685 A Map Resolver is a network infrastructure component that accepts 686 LISP encapsulated Map-Requests, typically from an ITR, and finds the 687 appropriate EID-to-RLOC mapping by consulting the distributed mapping 688 database. Map Resolver functionality is described in detail in 689 [RFC6833]. 691 Anyone with access to the distributed mapping database can set up a 692 Map Resolver and provide EID-to-RLOC mapping lookup service. 693 Database access setup is mapping system specific. 695 For performance reasons, it is recommended that LISP sites use Map 696 Resolvers that are topologically close to their ITRs. ISPs 697 supporting LISP will provide this service to their customers, 698 possibly restricting access to their user base. LISP sites not in 699 this position can use open access Map Resolvers, if available.
700 However, regardless of the availability of open access resolvers, the 701 MSP providing the Map Server(s) for a LISP site should also make 702 available Map Resolver(s) for the use of that site. 704 In medium to large-size ASes, ITRs must be configured with the RLOC 705 of a Map Resolver, an operation which can be done manually. However, in 706 Small Office Home Office (SOHO) scenarios a mechanism for 707 autoconfiguration should be provided. 709 One solution to avoid manual configuration in LISP sites of any size 710 is the use of anycast RLOCs [RFC4786] for Map Resolvers, similar to 711 the DNS root server infrastructure. Since LISP uses UDP 712 encapsulation, the use of anycast would not affect reliability. LISP 713 routers are then shipped with a preconfigured list of well-known Map 714 Resolver RLOCs, which can be edited by the network administrator, if 715 needed. 717 The use of anycast also helps improve mapping lookup performance. 718 Large MSPs can increase the number and geographical diversity of 719 their Map Resolver infrastructure, using a single anycasted RLOC. 720 Once LISP deployment is advanced enough, very large content providers 721 may also be interested in running this kind of setup, to ensure minimal 722 connection setup latency for those connecting to their network from 723 LISP sites. 725 While Map Servers and Map Resolvers implement different 726 functionalities within the LISP mapping system, they can coexist on 727 the same device. For example, MSPs offering both services can 728 deploy a single Map Resolver/Map Server in each PoP where they have a 729 presence. 731 4. Proxy Tunnel Routers 733 4.1. P-ITR 735 Proxy Ingress Tunnel Routers (P-ITRs) are part of the non-LISP/LISP 736 transition mechanism, allowing non-LISP sites to reach LISP sites. 737 They announce via BGP certain EID prefixes (aggregated, whenever 738 possible) to attract traffic from non-LISP sites towards EIDs in the 739 covered range.
They do the mapping system lookup, and encapsulate 740 received packets towards the appropriate ETR. Note that for the 741 reverse path LISP sites can reach non-LISP sites simply by not 742 encapsulating traffic. See [RFC6832] for a detailed description of 743 P-ITR functionality. 745 The success of new protocols depends greatly on their ability to 746 maintain backwards compatibility and interoperate with the 747 protocol(s) they intend to enhance or replace, and on the incentives 748 to deploy the necessary new software or equipment. A LISP site needs 749 an interworking mechanism to be reachable from non-LISP sites. A 750 P-ITR can fulfill this role, enabling early adopters to see the 751 benefits of LISP, similar to tunnel brokers helping the transition 752 from IPv4 to IPv6. A site benefits from new LISP functionality 753 (proportionally to the existing global LISP deployment) when going 754 LISP, so it has the incentive to deploy the necessary tunnel 755 routers. In order to be reachable from non-LISP sites, it has two 756 options: keep announcing its prefix(es) with BGP, or have a P-ITR 757 announce prefix(es) covering them. 759 If the goal of reducing the DFZ routing table size is to be reached, 760 the second option is preferred. Moreover, the second option allows 761 LISP-based ingress traffic engineering from all sites. However, the 762 placement of P-ITRs significantly influences performance and 763 deployment incentives. Section 5 is dedicated to the migration to a 764 LISP-enabled Internet, and includes deployment scenarios for P-ITRs. 766 4.2. P-ETR 768 In contrast to P-ITRs, P-ETRs are not required for the correct 769 functioning of all LISP sites. There are two cases where they can 770 be of great help: 772 o LISP sites with unicast reverse path forwarding (uRPF) 773 restrictions, and 775 o Communication between sites using different address family RLOCs. 777 In the first case, uRPF filtering is applied at their upstream PE 778 router.
When forwarding traffic to non-LISP sites, an ITR does not 779 encapsulate packets, leaving the original IP headers intact. As a 780 result, packets will have EIDs in their source address. Since we are 781 discussing the transition period, we can assume that a prefix 782 covering the EIDs belonging to the LISP site is advertised to the 783 global routing tables by a P-ITR, and the PE router has a route 784 towards it. However, the next hop will not be on the interface 785 towards the CE router, so non-encapsulated packets will fail uRPF 786 checks. 788 To avoid this filtering, the affected ITR encapsulates packets 789 towards the locator of the P-ETR for non-LISP destinations. Now the 790 source address of the packets, as seen by the PE router, is the ITR's 791 locator, which will not fail the uRPF check. The P-ETR then 792 decapsulates and forwards the packets. 794 The second use case is IPv4-to-IPv6 transition. Service providers 795 using older access network hardware that only supports IPv4 can 796 still offer IPv6 to their clients, by providing a CPE device running 797 LISP, and P-ETR(s) for accessing IPv6-only non-LISP sites and LISP 798 sites with IPv6-only locators. Packets originating from the client 799 LISP site for these destinations would be encapsulated towards the 800 P-ETR's IPv4 locator. The P-ETR is in a native IPv6 network, 801 decapsulating and forwarding packets. For non-LISP destinations, the 802 packets travel natively from the P-ETR. For LISP destinations with 803 IPv6-only locators, the packet will go through a P-ITR, in order to 804 reach its destination. 806 For more details on P-ETRs see [RFC6832]. 808 P-ETRs can be deployed by ISPs wishing to offer value-added services 809 to their customers. As is the case with P-ITRs, P-ETRs too may 810 introduce path stretch (the ratio between the cost of the selected 811 path and that of the optimal path).
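As a rough, non-normative illustration of this metric, the short sketch below computes path stretch for two hypothetical P-ETR placements; all cost figures and names are made up for the example:

```python
def path_stretch(selected_path_cost, optimal_path_cost):
    """Path stretch: ratio between the cost of the selected
    (P-ETR-traversing) path and that of the optimal path."""
    return selected_path_cost / optimal_path_cost

# Hypothetical path costs (e.g., one-way delay in milliseconds):
direct = 20.0           # optimal path, no P-ETR detour
via_close_petr = 24.0   # P-ETR deployed close to the customer
via_far_petr = 55.0     # distant, consolidated P-ETR (cheaper to operate)

print(path_stretch(via_close_petr, direct))  # 1.2
print(path_stretch(via_far_petr, direct))    # 2.75
```

A stretch close to 1.0 means the detour through the P-ETR adds little cost; the ISP trades this against the number of devices it has to deploy and operate.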
Because of this, the ISP needs to 812 consider the tradeoff between using several devices close to the 813 customers, to minimize path stretch, or a few devices farther away from the 814 customers, to minimize cost instead. 816 Since the deployment incentives for P-ITRs and P-ETRs are different, 817 it is likely they will be deployed in separate devices, except for 818 the CDN case, which may deploy both in a single device. 820 In all cases, the existence of a P-ETR involves another step in the 821 configuration of a LISP router. CPE routers, which are typically 822 configured by DHCP, stand to benefit most from P-ETRs. 823 Autoconfiguration of the P-ETR locator could be achieved by a DHCP 824 option, or by adding a P-ETR field to either Map-Notify or Map-Reply messages. 826 5. Migration to LISP 828 This section discusses a deployment architecture to support the 829 migration to a LISP-enabled Internet. The loosely defined terms of 830 "early transition phase", "late transition phase", and "LISP Internet 831 phase" refer to time periods when LISP sites are a minority, a 832 majority, or represent all edge networks, respectively. 834 5.1. LISP+BGP 836 For sites wishing to go LISP with their PI prefix, the least 837 disruptive way is to upgrade their border routers to support LISP, 838 register the prefix into the LISP mapping system, but keep announcing 839 it with BGP as well. This way LISP sites will reach them over LISP, 840 while legacy sites will be unaffected by the change. The main 841 disadvantage of this approach is that no decrease in the DFZ routing 842 table size is achieved. Still, just increasing the number of LISP 843 sites is an important gain, as an increasing LISP/non-LISP site ratio 844 may decrease the need for BGP-based traffic engineering that leads to 845 prefix deaggregation. That, in turn, may lead to a decrease in the 846 DFZ size and churn in the late transition phase. 848 This scenario is not limited to sites that already have their 849 prefixes announced with BGP.
Newly allocated EID blocks could follow 850 this strategy as well during the early LISP deployment phase, 851 depending on the cost/benefit analysis of the individual networks. 852 Since this leads to an increase in the DFZ size, the following 853 architecture should be preferred for new allocations. 855 5.2. Mapping Service Provider (MSP) P-ITR Service 857 In addition to publishing their clients' registered prefixes in the 858 mapping system, MSPs with enough transit capacity can offer them 859 P-ITR service as a separate service. This service is especially 860 useful for new PI allocations and for sites without existing BGP 861 infrastructure that wish to avoid BGP altogether. The MSP announces 862 the prefix into the DFZ, and the client benefits from ingress traffic 863 engineering without prefix deaggregation. The downside of this 864 scenario is the added path stretch. 866 Routing all non-LISP ingress traffic through a third party which is 867 not one of its ISPs is only feasible for sites with modest amounts of 868 traffic (like those using the IPv6 tunnel broker services today), 869 especially in the first stage of the transition to LISP, with a 870 significant number of legacy sites. This is because the handling of 871 said traffic is likely to result in additional costs, which would be 872 passed down to the client. When the LISP/non-LISP site ratio becomes 873 high enough, this approach can prove increasingly attractive. 875 Compared to LISP+BGP, this approach avoids DFZ bloat caused by prefix 876 deaggregation for traffic engineering purposes, resulting in a slower 877 routing table increase in the case of new allocations and a potential 878 decrease for existing ones. Moreover, MSPs serving different clients 879 with adjacent aggregatable prefixes may lead to an additional decrease, 880 but quantifying this decrease is subject to future research. 882 5.3.
Proxy-ITR Route Distribution (PITR-RD) 884 Instead of a LISP site, or the MSP, announcing their EIDs with BGP to 885 the DFZ, this function can be outsourced to a third party, a P-ITR 886 Service Provider (PSP). This will result in a decrease of the 887 operational complexity both at the site and at the MSP. 889 The PSP manages a set of distributed P-ITR(s) that will advertise the 890 corresponding EID prefixes through BGP to the DFZ. These P-ITR(s) 891 will then encapsulate the traffic they receive for those EIDs towards 892 the RLOCs of the LISP site, ensuring their reachability from non-LISP 893 sites. 895 While it is possible for a PSP to manually configure each client's 896 EID routes to be announced, this approach offers little flexibility 897 and is not scalable. This section presents a scalable architecture 898 that offers automatic distribution of EID routes to LISP sites and 899 service providers. 901 The architecture requires no modification to existing LISP network 902 elements, but it introduces a new (conceptual) network element, the 903 EID Route Server, defined as a router that either propagates routes 904 learned from other EID Route Servers, or originates EID-Routes. 905 The EID-Routes that it originates are those that it is authoritative 906 for. It propagates these routes to Proxy-ITRs within the AS of the 907 EID Route Server. It is worth noting that a BGP-capable router can 908 also be considered an EID Route Server. 910 Further, an EID-Route is defined as a prefix originated via the Route 911 Server of the mapping service provider, which should be aggregated if 912 the MSP has multiple customers inside a single large contiguous 913 prefix. This prefix is propagated to other P-ITRs both within the 914 MSP and to other P-ITR operators it peers with. EID Route Servers 915 are operated either by the LISP site, MSPs, or PSPs, and they may be 916 collocated with a Map Server or P-ITR, but are a functionally 917 discrete entity.
They distribute EID-Routes, using BGP, to other 918 domains, according to policies set by participants.

920                        MSP (AS64500)
921                      RS ---> P-ITR
922                       |       /
923                       |  _.--./
924                    ,-''  /    `--.
925     LISP site ---,'     |  v      `.
926                 (       | DFZ       )----- Mapping system
927  non-LISP site ----.    |  ^       ,'
928                    `--.  /     _.-'
929                       | `--''
930                       v /
931                     P-ITR
932                     PSP (AS64501)

934 Figure 7: The P-ITR Route Distribution architecture 936 The architecture described above decouples EID origination from route 937 propagation, with the following benefits: 939 o Can accurately represent business relationships between P-ITR 940 operators 942 o More mapping system agnostic 944 o Minor changes to P-ITR implementation, no changes to other 945 components 947 In the example in the figure, we have an MSP providing services to the 948 LISP site. The LISP site does not run BGP, and gets an EID 949 allocation directly from an RIR, or from the MSP, which may be a LIR. 950 Existing PI allocations can be migrated as well. The MSP ensures the 951 presence of the prefix in the mapping system, and runs an EID Route 952 Server to distribute it to P-ITR service providers. Since the LISP 953 site does not run BGP, the prefix will be originated with the AS 954 number of the MSP. 956 In the simple case depicted in Figure 7, the EID-Route of the LISP site 957 will be originated by the Route Server, and announced to the DFZ by 958 the PSP's P-ITRs with AS path 64501 64500. From that point on, the 959 usual BGP dynamics apply. This way, routes announced by P-ITRs are 960 still originated by the authoritative Route Server. Note that the 961 peering relationships between MSP/PSPs and those in the underlying 962 forwarding plane may not be congruent, making the AS path to a P-ITR 963 shorter than it is in reality. 965 The non-LISP site will select the best path towards the EID-prefix, 966 according to its local BGP policies.
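As a non-normative sketch of this selection (the P-ITR names and AS paths below are hypothetical), a BGP speaker comparing otherwise-equal announcements for the same EID-prefix prefers the shortest AS path, which is why P-ITR placement matters:

```python
# Hypothetical announcements for the same EID-prefix, as seen by a
# non-LISP site's border router: (advertising P-ITR, AS path).
announcements = [
    ("pitr-A", [64496, 64501, 64500]),         # reached via a transit AS
    ("pitr-B", [64501, 64500]),                # direct peering with the PSP
    ("pitr-C", [64497, 64498, 64501, 64500]),  # longer path
]

# With all other BGP attributes equal, the shortest AS path wins.
best_pitr, best_path = min(announcements, key=lambda a: len(a[1]))
print(best_pitr)  # pitr-B
```

In all three candidates the route is still originated by AS64500, the MSP's AS, matching the origination described above.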
Since AS-path length is usually 967 an important metric for selecting paths, careful placement of P-ITRs 968 could significantly reduce path stretch between LISP and non-LISP 969 sites. 971 The architecture allows for flexible policies between MSP/PSPs. 972 Consider the EID Route Server networks as control plane overlays, 973 facilitating the implementation of policies necessary to reflect the 974 business relationships between participants. The results are then 975 injected into the common underlying forwarding plane. For example, 976 some MSP/PSPs may agree to exchange EID-Prefixes and only announce 977 them to each of their forwarding plane customers. Global 978 reachability of an EID-prefix depends on the MSP the LISP site buys 979 service from, and is also subject to agreement between the mentioned 980 parties. 982 In terms of impact on the DFZ, this architecture results in a slower 983 routing table increase for new allocations, since traffic engineering 984 will be done at the LISP level. For existing allocations migrating 985 to LISP, the DFZ may decrease since MSPs may be able to aggregate the 986 prefixes announced. 995 The flexibility and scalability of this architecture do not come 996 without a cost, however: a PSP operator has to establish either 997 transit or peering relationships to improve its connectivity. 999 5.4. Migration Summary 1001 Registering a domain name typically entails an annual fee that should 1002 cover the operating expenses for publishing the domain in the global 1003 DNS.
The situation is similar with several other registration 1004 services. A LISP mapping service provider (MSP) client publishing an 1005 EID prefix in the LISP mapping system has the option of signing up 1006 for PITR services as well, for an extra fee. These services may be 1007 offered by the MSP itself, but it is expected that specialized P-ITR 1008 service providers (PSPs) will offer them. Clients not signing up become 1009 responsible for getting non-LISP traffic to their EIDs (using the 1010 LISP+BGP scenario). 1012 Additionally, Tier 1 ISPs have incentives to offer P-ITR services to 1013 non-subscribers in strategic places just to attract more traffic from 1014 competitors, and thus more revenue. 1016 The following table presents the expected effects of the different 1017 transition scenarios during a certain phase on the DFZ routing table 1018 size: 1020 Phase | LISP+BGP | MSP P-ITR | PITR-RD 1021 -----------------+--------------+-----------------+---------------- 1022 Early transition | no change | slower increase | slower increase 1023 Late transition | may decrease | slower increase | slower increase 1024 LISP Internet | considerable decrease 1026 It is expected that PITR-RD will co-exist with LISP+BGP during the 1027 migration, with the latter being more popular in the early transition 1028 phase. As the transition progresses and the MSP P-ITR and PITR-RD 1029 ecosystem gets more ubiquitous, LISP+BGP should become less 1030 attractive, slowing down the increase of the number of routes in the 1031 DFZ. 1033 Note that throughout Section 5 we focused on the effects of LISP 1034 deployment on the DFZ route table size. Other metrics may be 1035 impacted as well, but to the best of our knowledge they have not been 1036 measured yet. 1038 6. Security Considerations 1040 All security implications of LISP deployments are to be discussed in 1041 separate documents.
[I-D.ietf-lisp-threats] gives an overview of 1042 LISP threat models, including ETR operators attracting traffic by 1043 overclaiming an EID-prefix (Section 4.4.3). Securing mapping lookups 1044 is discussed in [I-D.ietf-lisp-sec]. 1046 7. IANA Considerations 1048 This memo includes no request to IANA. 1050 8. Acknowledgements 1052 Many thanks to Margaret Wasserman for her contribution to the IETF76 1053 presentation that kickstarted this work. The authors would also like 1054 to thank Damien Saucez, Luigi Iannone, Joel Halpern, Vince Fuller, 1055 Dino Farinacci, Terry Manderson, Noel Chiappa, Hannu Flinck, Paul 1056 Vinciguerra, Fred Templin, Brian Haberman, and everyone else who 1057 provided input. 1059 9. References 1061 9.1. Normative References 1063 [RFC6830] Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, "The 1064 Locator/ID Separation Protocol (LISP)", RFC 6830, 1065 January 2013. 1067 [RFC6832] Lewis, D., Meyer, D., Farinacci, D., and V. Fuller, 1068 "Interworking between Locator/ID Separation Protocol 1069 (LISP) and Non-LISP Sites", RFC 6832, January 2013. 1071 [RFC6833] Fuller, V. and D. Farinacci, "Locator/ID Separation 1072 Protocol (LISP) Map-Server Interface", RFC 6833, 1073 January 2013. 1075 9.2. Informative References 1077 [CACHE] Jung, J., Sit, E., Balakrishnan, H., and R. Morris, "DNS 1078 performance and the effectiveness of caching", 2002. 1080 [DDT-ROOT] 1081 "DDT Root", . 1083 [I-D.ietf-lisp-ddt] 1084 Fuller, V., Lewis, D., Ermagan, V., and A. Jain, "LISP 1085 Delegated Database Tree", draft-ietf-lisp-ddt-01 (work in 1086 progress), March 2013. 1088 [I-D.ietf-lisp-sec] 1089 Maino, F., Ermagan, V., Cabellos-Aparicio, A., Saucez, D., 1090 and O. Bonaventure, "LISP-Security (LISP-SEC)", 1091 draft-ietf-lisp-sec-05 (work in progress), October 2013. 1093 [I-D.ietf-lisp-threats] 1094 Saucez, D., Iannone, L., and O. Bonaventure, "LISP Threats 1095 Analysis", draft-ietf-lisp-threats-08 (work in progress), 1096 October 2013. 
1098 [RFC4459] Savola, P., "MTU and Fragmentation Issues with In-the- 1099 Network Tunneling", RFC 4459, April 2006. 1101 [RFC4786] Abley, J. and K. Lindqvist, "Operation of Anycast 1102 Services", BCP 126, RFC 4786, December 2006. 1104 [RFC4984] Meyer, D., Zhang, L., and K. Fall, "Report from the IAB 1105 Workshop on Routing and Addressing", RFC 4984, 1106 September 2007. 1108 [RFC6834] Iannone, L., Saucez, D., and O. Bonaventure, "Locator/ID 1109 Separation Protocol (LISP) Map-Versioning", RFC 6834, 1110 January 2013. 1112 [RFC6836] Fuller, V., Farinacci, D., Meyer, D., and D. Lewis, 1113 "Locator/ID Separation Protocol Alternative Logical 1114 Topology (LISP+ALT)", RFC 6836, January 2013. 1116 [RFC6887] Wing, D., Cheshire, S., Boucadair, M., Penno, R., and P. 1117 Selkirk, "Port Control Protocol (PCP)", RFC 6887, 1118 April 2013. 1120 [TELCO96] "Telecommunications Act of 1996", 1996. 1122 Appendix A. Step-by-Step Example BGP to LISP Migration Procedure 1124 To help the operational community deploy LISP, this informative 1125 section offers a step-by-step guide for migrating a BGP-based 1126 Internet presence to a LISP site. It includes a pre-install/ 1127 pre-turn-up checklist, and customer and provider activation 1128 procedures. 1130 A.1. Customer Pre-Install and Pre-Turn-up Checklist 1132 1. Determine how many current physical service provider connections 1133 the customer has, and their existing bandwidth and traffic 1134 engineering requirements. 1136 This information will determine the number of routing locators, 1137 and the priorities and weights that should be configured on the 1138 xTRs. 1140 2. Make sure the customer router has LISP capabilities. 1142 * Check the OS version of the CE router. If LISP is an add-on, 1143 check if it is installed. 1145 This information can be used to determine if the platform is 1146 appropriate to support LISP, and whether a 1147 software and/or hardware upgrade is required.
1149 * Have the customer upgrade (software and/or hardware, if necessary) 1150 to be LISP capable. 1152 3. Obtain the current running configuration of the CE router. A suggested 1153 LISP router configuration example can be customized to the 1154 customer's existing environment. 1156 4. Verify MTU Handling 1158 * Request an increase of the MTU to 1556 or more on service provider 1159 connections. Prior to the MTU change, verify that a 1500-byte packet 1160 with the Do Not Fragment bit (DF-bit) set can reach the RLOC from the P-xTR. 1162 * Ensure the customer is not filtering ICMP Unreachable or Time 1163 Exceeded messages on their firewall or router. 1165 LISP, like any tunneling protocol, will increase the size of 1166 packets when the LISP header is appended. If increasing the MTU 1167 of the access links is not possible, care must be taken that ICMP 1168 is not being filtered, in order to allow Path MTU Discovery to 1169 take place. 1171 5. Validate the member prefix allocation. 1173 This step is to check whether the prefix used by the customer is a 1174 direct (Provider Independent) allocation, or a prefix assigned by a 1175 physical service provider (Provider Aggregatable). If the 1176 prefixes are assigned by other service providers, then a Letter of 1177 Agreement is required to announce the prefixes through the Proxy 1178 Service Provider. 1180 6. Verify the member RLOCs and their reachability. 1182 This step ensures that the RLOCs configured on the CE router are 1183 in fact reachable and working. 1185 7. Prepare for cut-over. 1187 * If possible, have a host outside of all security and filtering 1188 policies connected to the console port of the edge router or 1189 switch. 1191 * Make sure the customer has access to the router in order to 1192 configure it. 1194 A.2. Customer Activating LISP Service 1196 1. The customer configures LISP on the CE router(s) from the service provider 1197 recommended configuration.
1199 The LISP configuration consists of the EID prefix, the locators, 1200 and the weights and priorities of the mapping between the two 1201 values. In addition, the xTR must be configured with Map 1202 Resolver(s), Map Server(s), and the shared key for registering to 1203 the Map Server(s). If required, Proxy-ETR(s) may be configured as 1204 well. 1206 In addition to the LISP configuration, the following steps are necessary: 1208 * Ensure default route(s) to next-hop external neighbors are 1209 included and RLOCs are present in the configuration. 1211 * If two or more routers are used, ensure all RLOCs are included 1212 in the LISP configuration on all routers. 1214 * It will be necessary to redistribute the default route via the IGP 1215 between the external routers. 1217 2. When the transition is ready, perform a soft shutdown on the existing eBGP 1218 peer session(s). 1220 * From the CE router, use LIG (the LISP Internet Groper) to ensure registration is successful. 1222 * To verify LISP connectivity, find and ping LISP connected 1223 sites. If possible, find ping destinations that are not 1224 covered by a prefix in the global BGP routing system, because 1225 PITRs may deliver the packets even if LISP connectivity is not 1226 working. Traceroutes may help discover if this is the case. 1228 * To verify connectivity to non-LISP sites, try accessing a 1229 landmark (e.g., a major Internet site) via a web browser. 1231 A.3. Cut-Over Provider Preparation and Changes 1233 1. Verify the site configuration, and then verify active registration on the Map 1234 Server(s): 1236 * Authentication key 1238 * EID prefix 1240 2. Add the EID space to the map-cache on the proxies. 1242 3. Add the networks to the BGP advertisement on the proxies. 1244 * Modify route-maps/policies on the P-xTRs. 1246 * Modify route policies on the core routers (if a non-connected 1247 member). 1249 * Modify ingress policers on the core routers. 1251 * Ensure route announcement in looking glass servers and RouteViews. 1253 4.
Perform traffic verification test: 1255 * Ensure MTU handling is as expected (PMTUD working). 1257 * Ensure proxy-ITR map-cache population. 1259 * Ensure access from traceroute/ping servers around the Internet. 1261 * Use a looking glass to check for external visibility of 1262 registration via several Map Resolvers. 1264 Authors' Addresses 1266 Lorand Jakab 1267 Cisco Systems 1268 170 Tasman Drive 1269 San Jose, CA 95134 1270 USA 1272 Email: lojakab@cisco.com 1273 Albert Cabellos-Aparicio 1274 Technical University of Catalonia 1275 C/Jordi Girona, s/n 1276 BARCELONA 08034 1277 Spain 1279 Email: acabello@ac.upc.edu 1281 Florin Coras 1282 Technical University of Catalonia 1283 C/Jordi Girona, s/n 1284 BARCELONA 08034 1285 Spain 1287 Email: fcoras@ac.upc.edu 1289 Jordi Domingo-Pascual 1290 Technical University of Catalonia 1291 C/Jordi Girona, s/n 1292 BARCELONA 08034 1293 Spain 1295 Email: jordi.domingo@ac.upc.edu 1297 Darrel Lewis 1298 Cisco Systems 1299 170 Tasman Drive 1300 San Jose, CA 95134 1301 USA 1303 Email: darlewis@cisco.com