Network Working Group                                          L. Jakab
Internet-Draft                                            Cisco Systems
Intended status: Informational                     A. Cabellos-Aparicio
Expires: September 21, 2013                                    F. Coras
                                                      J. Domingo-Pascual
                                      Technical University of Catalonia
                                                                D. Lewis
                                                           Cisco Systems
                                                          March 20, 2013

             LISP Network Element Deployment Considerations
                   draft-ietf-lisp-deployment-07.txt

Abstract

   This document discusses the different scenarios for the deployment
   of the new network elements introduced by the Locator/Identifier
   Separation Protocol (LISP).

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 21, 2013.

Copyright Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . . .  3
   2.  Tunnel Routers . . . . . . . . . . . . . . . . . . . . . . . .  4
     2.1.  Customer Edge  . . . . . . . . . . . . . . . . . . . . . .  4
     2.2.  Provider Edge  . . . . . . . . . . . . . . . . . . . . . .  5
     2.3.  Split ITR/ETR  . . . . . . . . . . . . . . . . . . . . . .  7
     2.4.  Inter-Service Provider Traffic Engineering . . . . . . . .  8
     2.5.  Tunnel Routers Behind NAT  . . . . . . . . . . . . . . . . 10
       2.5.1.  ITR  . . . . . . . . . . . . . . . . . . . . . . . . . 10
       2.5.2.  ETR  . . . . . . . . . . . . . . . . . . . . . . . . . 11
     2.6.  Summary and Feature Matrix . . . . . . . . . . . . . . . . 11
   3.  Map Resolvers and Map Servers  . . . . . . . . . . . . . . . . 11
     3.1.  Map Servers  . . . . . . . . . . . . . . . . . . . . . . . 11
     3.2.  Map Resolvers  . . . . . . . . . . . . . . . . . . . . . . 12
   4.  Proxy Tunnel Routers . . . . . . . . . . . . . . . . . . . . . 13
     4.1.  P-ITR  . . . . . . . . . . . . . . . . . . . . . . . . . . 13
     4.2.  P-ETR  . . . . . . . . . . . . . . . . . . . . . . . . . . 14
   5.  Migration to LISP  . . . . . . . . . . . . . . . . . . . . . . 15
     5.1.  LISP+BGP . . . . . . . . . . . . . . . . . . . . . . . . . 15
     5.2.  Mapping Service Provider (MSP) P-ITR Service . . . . . . . 16
     5.3.  Proxy-ITR Route Distribution (PITR-RD) . . . . . . . . . . 16
     5.4.  Migration Summary  . . . . . . . . . . . . . . . . . . . . 19
   6.  Step-by-Step Example BGP to LISP Migration Procedure . . . . . 19
     6.1.  Customer Pre-Install and Pre-Turn-up Checklist . . . . . . 19
     6.2.  Customer Activating LISP Service . . . . . . . . . . . . . 21
     6.3.  Cut-Over Provider Preparation and Changes  . . . . . . . . 21
   7.  Security Considerations  . . . . . . . . . . . . . . . . . . . 22
   8.  IANA Considerations  . . . . . . . . . . . . . . . . . . . . . 22
   9.  Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 22
   10. References . . . . . . . . . . . . . . . . . . . . . . . . . . 23
     10.1. Normative References . . . . . . . . . . . . . . . . . . . 23
     10.2. Informative References . . . . . . . . . . . . . . . . . . 23
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 24

1.  Introduction

   The Locator/Identifier Separation Protocol (LISP) addresses the
   scaling issues of the global Internet routing system by separating
   the current addressing scheme into Endpoint IDentifiers (EIDs) and
   Routing LOCators (RLOCs).  The main protocol specification
   [RFC6830] describes how the separation is achieved, which new
   network elements are introduced, and details the packet formats for
   the data and control planes.

   LISP assumes that such separation is between the edge and core and
   uses a map-and-encap scheme for forwarding.  While the boundary
   between the two is not strictly defined, one widely accepted
   definition places it at the border routers of stub autonomous
   systems, which may carry a partial or complete default-free zone
   (DFZ) routing table.  The initial design of LISP took this location
   as a baseline for protocol development.  However, the applications
   of LISP go beyond just decreasing the size of the DFZ routing
   table, and include improved multihoming and ingress traffic
   engineering (TE) support for edge networks, and even individual
   hosts.  Throughout this document we will use the term "LISP site"
   to refer to these networks/hosts behind a LISP Tunnel Router.  We
   formally define it as:

   LISP site:  A single host or a set of network elements in an edge
      network under the administrative control of a single
      organization, delimited from other networks by LISP Tunnel
      Router(s).
   Network element:  An active or passive device that is connected to
      other active or passive devices for transporting packet-switched
      data.

   Since LISP is a protocol which can be used for different purposes,
   it is important to identify possible deployment scenarios and the
   additional requirements they may impose on the protocol
   specification and other protocols.  Additionally, this document is
   intended as a guide for the operational community for LISP
   deployments in their networks.  It is expected to evolve as LISP
   deployment progresses, and the described scenarios are better
   understood or new scenarios are discovered.

   Each subsection considers an element type, discussing the impact of
   deployment scenarios on the protocol specification.  For
   definitions of terms, please refer to the appropriate documents (as
   cited in the respective sections).

2.  Tunnel Routers

   The device that is the gateway between the edge and the core is
   called a Tunnel Router (xTR), and it performs one or both of two
   separate functions:

   1.  Encapsulating packets originating from an end host to be
       transported over intermediary (transit) networks towards the
       other end-point of the communication

   2.  Decapsulating packets entering from intermediary (transit)
       networks, originated at a remote end host.

   The first function is performed by an Ingress Tunnel Router (ITR),
   the second by an Egress Tunnel Router (ETR).

   Section 8 of the main LISP specification [RFC6830] has a short
   discussion of where Tunnel Routers can be deployed and some of the
   associated advantages and disadvantages.  This section adds more
   detail to the scenarios presented there, and provides additional
   scenarios as well.

2.1.  Customer Edge

   The first scenario we discuss is the customer edge, where xTR
   functionality is placed on the router(s) that connect the LISP site
   to its upstream(s), but are under the site's control.
   As such, this is the most commonly expected scenario for xTRs, and
   this document considers it the reference location, comparing the
   other scenarios to this one.

              ISP1    ISP2
               |       |
               |       |
             +----+  +----+
          +--|xTR1|--|xTR2|--+
          |  +----+  +----+  |
          |                  |
          |    LISP site     |
          +------------------+

            Figure 1: xTRs at the customer edge

   From the LISP site's perspective, the main advantage of this type
   of deployment (compared to the one described in the next section)
   is having direct control over its ingress traffic engineering.
   This makes it easy to set up and maintain active/active,
   active/backup, or more complex TE policies, without involving third
   parties.

   Being under the same administrative control, reachability
   information of all ETRs is easier to synchronize, because the
   necessary control traffic can be allowed between the locators of
   the ETRs.  A correct synchronous global view of the reachability
   status is thus available, and the Loc-Status-Bits can be set
   correctly in the LISP data header of outgoing packets.

   By placing the tunnel router at the edge of the site, the existing
   internal network configuration does not need to be modified.
   Firewall rules, router configurations and address assignments
   inside the LISP site remain unchanged.  This helps with incremental
   deployment and allows a quick upgrade path to LISP.  For larger
   sites with many external connections, distributed in geographically
   diverse PoPs, and complex internal topology, it may however make
   more sense to both encapsulate and decapsulate as soon as possible,
   to benefit from the information in the IGP to choose the best path
   (see Section 2.3 for a discussion of this scenario).

   Another thing to consider when placing tunnel routers is MTU
   issues.
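The magnitude of the overhead behind these concerns can be estimated with simple arithmetic.  The following is a non-normative sketch, assuming the usual header sizes (8-byte LISP data header, 8-byte outer UDP header, and a 20-byte IPv4 or 40-byte IPv6 outer IP header, per [RFC6830]):

```python
# Sketch: inner MTU left over after LISP encapsulation, assuming
# header sizes from RFC 6830 (8-byte LISP data header + 8-byte outer
# UDP header + 20-byte IPv4 or 40-byte IPv6 outer IP header).

LISP_HEADER = 8                       # LISP data header, bytes
UDP_HEADER = 8                        # outer UDP header, bytes
OUTER_IP = {"ipv4": 20, "ipv6": 40}   # outer IP header, bytes

def effective_mtu(link_mtu: int, outer_af: str = "ipv4") -> int:
    """MTU available to the inner (pre-encapsulation) packet."""
    return link_mtu - OUTER_IP[outer_af] - UDP_HEADER - LISP_HEADER

# A 1500-byte path leaves 1464 bytes for the inner packet with an
# IPv4 outer header, and 1444 bytes with an IPv6 outer header.
print(effective_mtu(1500, "ipv4"))  # 1464
print(effective_mtu(1500, "ipv6"))  # 1444
```

A transit network offering jumbo frames makes the overhead irrelevant, which is why the location of the encapsulating router relative to such networks matters.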
   Since encapsulating packets increases overhead, the MTU of the
   end-to-end path may decrease when encapsulated packets need to
   travel over segments having close to minimum MTU.  Some transit
   networks are known to provide a larger MTU than the typical value
   of 1500 bytes of popular access technologies used at end hosts
   (e.g., IEEE 802.3 and 802.11).  However, placing the LISP router
   connecting to such a network at the customer edge could bring up
   MTU issues, depending on the link type to the provider, in contrast
   to the following scenario.  See [RFC4459] for MTU considerations of
   tunneling protocols and how to mitigate potential issues.  Even
   with these mitigations, however, path MTU issues are still
   possible.

2.2.  Provider Edge

   The other location at the core-edge boundary for deploying LISP
   routers is the Internet service provider edge.  The main incentive
   for this case is that the customer does not have to upgrade the CE
   router(s) or change the configuration of any equipment.
   Encapsulation/decapsulation happens in the provider's network,
   which may be able to serve several customers with a single device.
   For large ISPs with many residential/business customers asking for
   LISP, this can lead to important savings, since there is no need to
   upgrade the software (or hardware, if that is the case) at each
   client's location.  Instead, they can upgrade the software (or
   hardware) on a few PE routers serving the customers.  This scenario
   is depicted in Figure 2.
    +----------+      +------------------+
    |   ISP1   |      |       ISP2       |
    |          |      |                  |
    |  +----+  |      |  +----+  +----+  |
    +--|xTR1|--+      +--|xTR2|--|xTR3|--+
       +----+            +----+  +----+
         |                 |       |
         |                 |       |
         +--<[LISP site]>--+-------+

               Figure 2: xTR at the PE

   While this approach can make the transition easy for customers and
   may be cheaper for providers, the LISP site loses one of the main
   benefits of LISP: ingress traffic engineering.  Since the provider
   controls the ETRs, additional complexity would be needed to allow
   customers to modify their mapping entries.

   The problem is aggravated when the LISP site is multihomed.
   Consider the scenario in Figure 2: whenever a change to TE policies
   is required, the customer contacts both ISP1 and ISP2 to make the
   necessary changes on the routers (if they provide this
   possibility).  It is, however, unlikely that both ISPs will apply
   the changes simultaneously, which may lead to inconsistent state
   for the mappings of the LISP site.  Since the different upstream
   ISPs are usually competing business entities, the ETRs may even be
   configured to compete, either to attract all the traffic or to get
   no traffic.  The former will happen if the customer pays per
   volume, the latter if the connectivity has a fixed price.  A
   solution could be to have the mappings in the Map Server(s), and
   have their operator give control over the entries to the customer,
   much like in the Domain Name System at the time of this writing.

   Additionally, since xTR1, xTR2, and xTR3 are in different
   administrative domains, locator reachability information is
   unlikely to be exchanged among them, making it difficult to set the
   Loc-Status-Bits (LSB) correctly on encapsulated packets.  Because
   of this, and due to the security concerns about LSB described in
   [I-D.ietf-lisp-threats], their use is discouraged without verifying
   ETR reachability through the mapping system or other means.
   Mapping versioning is another alternative [RFC6834].

   Compared to the customer edge scenario, deploying LISP at the
   provider edge might have the advantage of diminishing potential MTU
   issues, because the tunnel router is closer to the core, where
   links typically have higher MTUs than edge network links.

2.3.  Split ITR/ETR

   In a simple LISP deployment, xTRs are located at the border of the
   LISP site (see Section 2.1).  In this scenario packets are routed
   inside the domain according to the EID.  However, more complex
   networks may want to route packets according to the destination
   RLOC.  This would enable them to choose the best egress point.

   The LISP specification separates the ITR and ETR functionality and
   allows both entities to be deployed in separate network equipment.
   ITRs can be deployed closer to the host (i.e., on access routers).
   This way packets are encapsulated as soon as possible, and they
   exit the network through the best egress point in terms of BGP
   policy.  In turn, ETRs can be deployed at the border routers of the
   network, so that packets are decapsulated as soon as possible.
   Once decapsulated, packets are routed based on the destination EID,
   according to internal routing policy.

   Figure 3 shows an example.  The source (S) transmits packets using
   its EID, and in this particular case packets are encapsulated at
   ITR_1.  The encapsulated packets are routed inside the domain
   according to the destination RLOC, and can egress the network
   through the best point (i.e., closest to the RLOC's AS).  On the
   other hand, inbound packets are received by ETR_1, which
   decapsulates them.  Then packets are routed towards S according to
   the EID, again following the best path.
   +---------------------------------------+
   |                                       |
   |  +-------+           +-------+        |  +-------+
   |  | ITR_1 |---------+ | ETR_1 |-RLOC_A---| ISP_A |
   |  +-------+         | +-------+        |  +-------+
   | +-+                |     |            |
   | |S|               IGP    |            |
   | +-+                |     |            |
   |  +-------+         | +-------+        |  +-------+
   |  | ITR_2 |---------+ | ETR_2 |-RLOC_B---| ISP_B |
   |  +-------+           +-------+        |  +-------+
   |                                       |
   +---------------------------------------+

            Figure 3: Split ITR/ETR scenario

   This scenario has a set of implications:

   o  The site must carry at least partial BGP routes in order to
      choose the best egress point, increasing the complexity of the
      network.  However, this is usually already the case for LISP
      sites that would benefit from this scenario.

   o  If the site is multihomed to different ISPs and any of the
      upstream ISPs is doing uRPF filtering, this scenario may become
      impractical.  ITRs need to determine the exit ETR in order to
      set the correct source RLOC in the encapsulation header.  This
      adds complexity and reliability concerns.

   o  In LISP, ITRs set the reachability bits when encapsulating data
      packets.  Hence, ITRs need a mechanism to be aware of the
      liveness of all ETRs serving their site.

   o  The MTU within the site network must be large enough to
      accommodate encapsulated packets.

   o  In this scenario, each ITR is serving fewer hosts than in the
      case when it is deployed at the border of the network.  It has
      been shown that the cache hit ratio grows logarithmically with
      the number of users [cache].  Taking this into account, when
      ITRs are deployed closer to the host, the effectiveness of the
      mapping cache may be lower (i.e., the miss ratio is higher).
      Another consequence of this is that the site may transmit a
      higher number of Map-Requests, increasing the load on the
      distributed mapping database.  To lower the impact, the site
      could use a local caching Map Resolver.
   o  By placing the ITRs inside the site, they will still need global
      RLOCs, and this may add complexity to the intra-site routing
      configuration, and further intra-site issues when there is a
      change of providers.

2.4.  Inter-Service Provider Traffic Engineering

   With LISP, two LISP sites can route packets between themselves and
   control their ingress TE policies.  LISP is typically seen as
   applicable to stub networks; however, the protocol can also be
   applied to transit networks recursively.

   Consider the scenario depicted in Figure 4.  Packets originating
   from the LISP site Stub1, client of ISP_A, with destination Stub4,
   client of ISP_B, are LISP encapsulated at their entry point into
   ISP_A's network.  The external IP header now has as the source RLOC
   an IP address from ISP_A's address space and as the destination
   RLOC an address from ISP_B's address space.  One or more ASes
   separate ISP_A from ISP_B.  With a single level of LISP
   encapsulation, Stub4 has control over its ingress traffic.
   However, at the time of this writing, ISP_B has only BGP tools
   (such as prefix deaggregation) to control on which of its own
   upstream or peering links packets should enter.  This is either not
   feasible (if fine-grained per-customer control is required, the
   very specific prefixes may not be propagated) or increases DFZ
   table size.

                                 _.--.
   Stub1 ... +-------+       ,-''    `--.       +-------+ ... Stub3
          \  | R_A1  |-----,'            `.-----| R_B1  |  /
          ---| R_A2  |----(    Transit    )     |       |---
   Stub2 .../| R_A3  |-----.             ,'-----| R_B2  |  \... Stub4
             +-------+      `--.     _.-'       +-------+
   ...  ISP_A                   `--''              ISP_B  ...

           Figure 4: Inter-Service Provider TE scenario

   A solution for this is to apply LISP recursively.  ISP_A and ISP_B
   may reach a bilateral agreement to deploy their own private mapping
   system.  ISP_A then encapsulates packets destined for the prefixes
   of ISP_B, which are listed in the shared mapping system.
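The recursive scheme can be sketched as follows.  This is a toy illustration only: all names and bindings are hypothetical, and headers are abstracted as dictionaries rather than the [RFC6830] wire format.

```python
# Sketch of recursive LISP encapsulation between two cooperating
# ISPs.  All names and mapping contents are hypothetical.

def encapsulate(packet: dict, src_rloc: str, dst_rloc: str) -> dict:
    """Wrap a packet in one LISP layer (outer header as a dict)."""
    return {"outer_src": src_rloc, "outer_dst": dst_rloc, "inner": packet}

# First level: Stub1's ITR maps the destination EID to Stub4's RLOC.
eid_to_rloc = {"stub4-eid": "stub4-rloc"}
# Second level: the ISP_A/ISP_B private database of RLOC-to-RLOC
# bindings, pointing at ISP_B's chosen ingress router.
private_rloc_db = {"stub4-rloc": "R_B1"}

pkt = {"src": "stub1-eid", "dst": "stub4-eid"}
level1 = encapsulate(pkt, "stub1-rloc", eid_to_rloc[pkt["dst"]])
# ISP_A's border router re-encapsulates towards ISP_B's ingress RLOC.
level2 = encapsulate(level1, "R_A1", private_rloc_db[level1["outer_dst"]])

# R_B1 strips the outer layer; the original single encapsulation
# remains and is routed to Stub4's ETR for final decapsulation.
assert level2["inner"] == level1
```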
   Note that in this case the packet is double-encapsulated (using
   R_A1, R_A2 or R_A3 as the source and R_B1 or R_B2 as the
   destination in the example above).  ISP_B's ETR removes the outer,
   second layer of LISP encapsulation from the incoming packet, and
   routes it towards the original RLOC, the ETR of Stub4, which does
   the final decapsulation.

   If ISP_A and ISP_B agree to share a private distributed mapping
   database, both can control their ingress TE without the need to
   deaggregate prefixes.  In this scenario the private database
   contains RLOC-to-RLOC bindings.  The convergence time on TE policy
   updates is expected to be fast, since the ISPs only have to
   update/query a mapping to/from the database.

   This deployment scenario includes two important caveats.  First, it
   is intended to be deployed between only two ISPs (ISP_A and ISP_B
   in Figure 4).  If more than two ISPs use this approach, then the
   xTRs deployed at the participating ISPs must either query multiple
   mapping systems, or the ISPs must agree on a common shared mapping
   system.  Second, the scenario is only recommended for ISPs
   providing connectivity to LISP sites, such that the source RLOCs of
   packets to be reencapsulated belong to said ISP.  Otherwise the
   participating ISPs must register prefixes they do not own in the
   above-mentioned private mapping system.  Failure to follow these
   recommendations may lead to operational and security issues when
   deploying this scenario.

   Besides these recommendations, the main disadvantages of this
   deployment case are:

   o  An extra LISP header is needed.  This increases the packet size
      and requires that the MTU between both ISPs can accommodate
      double-encapsulated packets.

   o  The ISP ITR must encapsulate packets and therefore must know the
      RLOC-to-RLOC bindings.  These bindings are stored in a mapping
      database and may be cached in the ITR's mapping cache.
      Cache misses lead to additional lookup latency, unless a
      push-based mapping system is used for the private mapping
      system.

   o  There is operational overhead in maintaining the shared mapping
      database.

   o  If an IPv6 address block is reserved for EID use, as specified
      in [I-D.ietf-lisp-eid-block], the EID-to-RLOC encapsulation
      (first level) can avoid LISP processing altogether for non-LISP
      destinations.  The ISP tunnel routers, however, will not be able
      to take advantage of this optimization: all RLOC-to-RLOC
      mappings need a lookup in the private database (or map-cache,
      once results are cached).

2.5.  Tunnel Routers Behind NAT

   NAT in this section refers to IPv4 network address and port
   translation.

2.5.1.  ITR

   Packets encapsulated by an ITR are just UDP packets from a NAT
   device's point of view, and they are handled like any other UDP
   packet; there are no additional requirements for LISP data packets.

   Map-Requests sent by an ITR, which create the state in the NAT
   table, have a different 5-tuple in the IP header than the Map-Reply
   generated by the authoritative ETR.  Since the source address of
   this packet is different from the destination address of the
   request packet, no state will be matched in the NAT table and the
   packet will be dropped.  To avoid this, the NAT device has to do
   the following:

   o  Send all UDP packets with source port 4342, regardless of the
      destination port, to the RLOC of the ITR.  The simplest way to
      achieve this is to configure 1:1 NAT mode from the external RLOC
      of the NAT device to the ITR's RLOC (called "DMZ" mode in
      consumer broadband routers).

   o  Rewrite the ITR-AFI and "Originating ITR RLOC Address" fields in
      the payload.

   This setup supports only a single ITR behind the NAT device.

2.5.2.  ETR

   An ETR placed behind NAT is reachable from the outside by the
   Internet-facing locator of the NAT device.
   It needs to know this locator (and configure a loopback interface
   with it), so that it can use it in Map-Reply and Map-Register
   messages.  Thus, support for dynamic locators for the mapping
   database is needed in LISP equipment.

   Again, only one ETR behind the NAT device is supported.

   An implication of the issues described above is that LISP sites
   with xTRs cannot be behind carrier-based NATs, since two different
   sites would collide on the port forwarding.

2.6.  Summary and Feature Matrix

   Feature                         CE    PE    Split   Recursive
   -------------------------------------------------------------
   Control of ingress TE           x     -     x       x
   No modifications to existing
     int. network infrastructure   x     x     -       -
   Loc-Status-Bits sync            x     -     x       x
   MTU/PMTUD issues minimized      -     x     -       -

3.  Map Resolvers and Map Servers

3.1.  Map Servers

   The Map Server learns EID-to-RLOC mapping entries from an
   authoritative source and publishes them in the distributed mapping
   database.  These entries are learned through authenticated
   Map-Register messages sent by authoritative ETRs.  Also, upon
   reception of a Map-Request, the Map Server verifies that the
   destination EID matches an EID-prefix for which it is
   authoritative, and then re-encapsulates and forwards it to a
   matching ETR.  Map Server functionality is described in detail in
   [RFC6833].

   The Map Server is provided by a Mapping Service Provider (MSP).
   The MSP participates in the global distributed mapping database
   infrastructure by setting up connections to other participants,
   according to the specific mapping system that is employed (e.g.,
   ALT, DDT).  Participation in the mapping database, and the storing
   of EID-to-RLOC mapping data, is subject to the policies of the
   "root" operators, who SHOULD check ownership rights for the EID
   prefixes stored in the database by participants.  These policies
   are out of the scope of this document.
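The acceptance of authenticated Map-Register messages described above can be sketched as follows.  This is a simplified illustration of the shared-key HMAC idea, with placeholder field names, keys, and message layout rather than the [RFC6833] wire format:

```python
import hashlib
import hmac

# Per-prefix shared keys configured between the Map Server and the
# authoritative ETRs (hypothetical values for illustration).
site_keys = {"2001:db8::/32": b"site-shared-key"}
registered = {}  # EID-prefix -> list of registered RLOCs

def handle_map_register(eid_prefix, rlocs, auth_data, payload):
    """Accept a registration only if its HMAC verifies with the
    site's shared key; otherwise drop it silently."""
    key = site_keys.get(eid_prefix)
    if key is None:
        return False  # Map Server is not configured for this prefix
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, auth_data):
        return False  # authentication failure
    registered[eid_prefix] = rlocs
    return True

payload = b"2001:db8::/32|rloc1,rloc2"
good = hmac.new(b"site-shared-key", payload, hashlib.sha256).digest()
assert handle_map_register("2001:db8::/32", ["rloc1", "rloc2"], good, payload)
assert not handle_map_register("2001:db8::/32", [], b"bogus", payload)
```

Only registrations that verify against the configured key reach the mapping database, which is what makes the key-distribution mechanisms discussed below necessary.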
   In all cases, the MSP configures its Map Server(s) to publish the
   prefixes of its clients in the distributed mapping database and to
   start encapsulating and forwarding Map-Requests to the ETRs of the
   AS.  These ETRs register their prefix(es) with the Map Server(s)
   through periodic authenticated Map-Register messages.  In this
   context, for some LISP end sites, there is a need for mechanisms
   to:

   o  Automatically distribute EID prefix(es) shared keys between the
      ETRs and the EID-registrar Map Server.

   o  Dynamically obtain the address of the Map Server in the ETR of
      the AS.

   The Map Server plays a key role in the reachability of the
   EID-prefixes it is serving.  On the one hand it publishes these
   prefixes into the distributed mapping database, and on the other
   hand it encapsulates and forwards Map-Requests to the authoritative
   ETRs of these prefixes.  ITRs encapsulating towards EIDs under the
   responsibility of a failed Map Server will be unable to look up any
   of their covering prefixes.  The only exception is the ITRs that
   already have the mappings in their local cache.  In this case ITRs
   can reach ETRs until the entry expires (typically 24 hours).  For
   this reason, redundant Map Server deployments are desirable.  A set
   of Map Servers providing high-availability service to the same set
   of prefixes is called a redundancy group.  ETRs are configured to
   send Map-Register messages to all Map Servers in the redundancy
   group.  To achieve fail-over (or load-balancing, if desired), known
   mapping system specific best practices should be used.

   Additionally, if a Map Server has no reachability for any ETR
   serving a given EID block, it should not originate that block into
   the mapping system.

3.2.  Map Resolvers

   A Map Resolver is a network infrastructure component that accepts
   LISP Encapsulated Map-Requests, typically from an ITR, and finds
   the appropriate EID-to-RLOC mapping by either consulting its local
   cache or by consulting the distributed mapping database.  Map
   Resolver functionality is described in detail in [RFC6833].

   Anyone with access to the distributed mapping database can set up a
   Map Resolver and provide EID-to-RLOC mapping lookup service.
   Database access setup is mapping system specific.

   For performance reasons, it is recommended that LISP sites use Map
   Resolvers that are topologically close to their ITRs.  ISPs
   supporting LISP will provide this service to their customers,
   possibly restricting access to their user base.  LISP sites not in
   this position can use open access Map Resolvers, if available.
   However, regardless of the availability of open access resolvers,
   the MSP providing the Map Server(s) for a LISP site should also
   make Map Resolver(s) available for the use of that site.

   In medium- to large-size ASes, ITRs must be configured with the
   RLOC of a Map Resolver, an operation that can be done manually.
   However, in Small Office Home Office (SOHO) scenarios a mechanism
   for autoconfiguration should be provided.

   One solution to avoid manual configuration in LISP sites of any
   size is the use of anycast RLOCs for Map Resolvers, similar to the
   DNS root server infrastructure.  Since LISP uses UDP encapsulation,
   the use of anycast would not affect reliability.  LISP routers are
   then shipped with a preconfigured list of well-known Map Resolver
   RLOCs, which can be edited by the network administrator, if needed.

   The use of anycast also helps improve mapping lookup performance.
   Large MSPs can increase the number and geographical diversity of
   their Map Resolver infrastructure, using a single anycasted RLOC.
   Once LISP deployment is advanced enough, very large content
   providers may also be interested in running this kind of setup, to
   ensure minimal connection setup latency for those connecting to
   their network from LISP sites.

   While Map Servers and Map Resolvers implement different
   functionalities within the LISP mapping system, they can coexist on
   the same device.  For example, MSPs offering both services can
   deploy a single Map Resolver/Map Server in each PoP where they have
   a presence.

4.  Proxy Tunnel Routers

4.1.  P-ITR

   Proxy Ingress Tunnel Routers (P-ITRs) are part of the non-LISP/LISP
   transition mechanism, allowing non-LISP sites to reach LISP sites.
   They announce via BGP certain EID prefixes (aggregated, whenever
   possible) to attract traffic from non-LISP sites towards EIDs in
   the covered range.  They do the mapping system lookup, and
   encapsulate received packets towards the appropriate ETR.  Note
   that for the reverse path LISP sites can reach non-LISP sites
   simply by not encapsulating traffic.  See [RFC6832] for a detailed
   description of P-ITR functionality.

   The success of new protocols depends greatly on their ability to
   maintain backwards compatibility and interoperate with the
   protocol(s) they intend to enhance or replace, and on the
   incentives to deploy the necessary new software or equipment.  A
   LISP site needs an interworking mechanism to be reachable from
   non-LISP sites.  A P-ITR can fulfill this role, enabling early
   adopters to see the benefits of LISP, similar to tunnel brokers
   helping the transition from IPv4 to IPv6.  A site benefits from new
   LISP functionality (proportionally with the existing global LISP
   deployment) when going LISP, so it has the incentive to deploy the
   necessary tunnel routers.
In order to be reachable from non-LISP sites, it has two 619 options: keep announcing its prefix(es) with BGP, or have a P-ITR 620 announce prefix(es) covering them. 622 If the goal of reducing the DFZ routing table size is to be reached, 623 the second option is preferred. Moreover, the second option allows 624 LISP-based ingress traffic engineering from all sites. However, the 625 placement of P-ITRs significantly influences performance and 626 deployment incentives. Section 5 is dedicated to the migration to a 627 LISP-enabled Internet, and includes deployment scenarios for P-ITRs. 629 4.2. P-ETR 631 In contrast to P-ITRs, P-ETRs are not required for the correct 632 functioning of all LISP sites. There are two cases where they can 633 be of great help: 635 o LISP sites with unicast reverse path forwarding (uRPF) 636 restrictions, and 638 o Communication between sites using different address family RLOCs. 640 In the first case, uRPF filtering is applied at the site's upstream PE 641 router. When forwarding traffic to non-LISP sites, an ITR does not 642 encapsulate packets, leaving the original IP headers intact. As a 643 result, packets will have EIDs in their source addresses. Since we are 644 discussing the transition period, we can assume that a prefix 645 covering the EIDs belonging to the LISP site is advertised to the 646 global routing tables by a P-ITR, and the PE router has a route 647 towards it. However, the next hop will not be on the interface 648 towards the CE router, so non-encapsulated packets will fail uRPF 649 checks. 651 To avoid this filtering, the affected ITR encapsulates packets 652 towards the locator of the P-ETR for non-LISP destinations. Now the 653 source address of the packets, as seen by the PE router, is the ITR's 654 locator, which will not fail the uRPF check. The P-ETR then 655 decapsulates and forwards the packets. 657 The second use case is IPv4-to-IPv6 transition.
Service providers 658 using older access network hardware that only supports IPv4 can 659 still offer IPv6 to their clients by providing a CPE device running 660 LISP, and P-ETR(s) for accessing IPv6-only non-LISP sites and LISP 661 sites with IPv6-only locators. Packets originating from the client 662 LISP site for these destinations would be encapsulated towards the 663 P-ETR's IPv4 locator. The P-ETR, located in a native IPv6 network, 664 decapsulates and forwards the packets. For non-LISP destinations, the 665 packet travels natively from the P-ETR. For LISP destinations with 666 IPv6-only locators, the packet will go through a P-ITR, in order to 667 reach its destination. 669 For more details on P-ETRs, see [RFC6832]. 671 P-ETRs can be deployed by ISPs wishing to offer value-added services 672 to their customers. As is the case with P-ITRs, P-ETRs too may 673 introduce path stretch. Because of this, the ISP needs to consider 674 the tradeoff between using several devices close to the customers to 675 minimize path stretch, and using fewer devices farther away from the 676 customers to minimize cost. 678 Since the deployment incentives for P-ITRs and P-ETRs are different, 679 it is likely they will be deployed in separate devices, except for 680 the CDN case, which may deploy both in a single device. 682 In all cases, the existence of a P-ETR involves another step in the 683 configuration of a LISP router. CPE routers, which are typically 684 configured by DHCP, stand to benefit most from P-ETRs. 685 Autoconfiguration of the P-ETR locator could be achieved by a DHCP 686 option, or by adding a P-ETR field to Map-Notify or Map-Reply messages. 688 5. Migration to LISP 690 This section discusses a deployment architecture to support the 691 migration to a LISP-enabled Internet.
The loosely defined terms of 692 "early transition phase", "late transition phase", and "LISP Internet 693 phase" refer to time periods when LISP sites are a minority, a 694 majority, or represent all edge networks, respectively. 696 5.1. LISP+BGP 698 For sites wishing to go LISP with their PI prefix, the least 699 disruptive way is to upgrade their border routers to support LISP, 700 register the prefix into the LISP mapping system, but keep announcing 701 it with BGP as well. This way LISP sites will reach them over LISP, 702 while legacy sites will be unaffected by the change. The main 703 disadvantage of this approach is that no decrease in the DFZ routing 704 table size is achieved. Still, just increasing the number of LISP 705 sites is an important gain, as an increasing LISP/non-LISP site ratio 706 will slowly decrease the need for BGP-based traffic engineering that 707 leads to prefix deaggregation. That, in turn, may lead to a decrease 708 in the DFZ size and churn in the late transition phase. 710 This scenario is not limited to sites that already have their 711 prefixes announced with BGP. Newly allocated EID blocks could follow 712 this strategy as well during the early LISP deployment phase, 713 depending on the cost/benefit analysis of the individual networks. 714 Since this leads to an increase in the DFZ size, the following 715 architecture should be preferred for new allocations. 717 5.2. Mapping Service Provider (MSP) P-ITR Service 719 In addition to publishing their clients' registered prefixes in the 720 mapping system, MSPs with enough transit capacity can offer them 721 P-ITR service as a separate service. This service is especially 722 useful for new PI allocations to sites without existing BGP 723 infrastructure that wish to avoid BGP altogether. The MSP announces 724 the prefix into the DFZ, and the client benefits from ingress traffic 725 engineering without prefix deaggregation. The downside of this 726 scenario is added path stretch.
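The ingress traffic engineering mentioned above is done at the LISP level, using the priority and weight fields of the EID-to-RLOC mapping defined in [RFC6830]: the lowest-priority locators are preferred, weight balances load among locators of equal priority, and priority 255 excludes a locator from use. The sketch below (an editor's illustration with documentation-prefix addresses, not normative code) shows these semantics as an ITR or P-ITR would apply them:

```python
# Illustrative sketch of LISP mapping priority/weight semantics
# (RFC 6830): lower priority wins; weight splits load among the
# selected locators; priority 255 means "do not use".

def usable_rlocs(mapping):
    """Return the locator records of the lowest (most preferred)
    priority, excluding records with priority 255."""
    candidates = [r for r in mapping if r["priority"] < 255]
    if not candidates:
        return []
    best = min(r["priority"] for r in candidates)
    return [r for r in candidates if r["priority"] == best]

def share_of_traffic(rloc, mapping):
    """Fraction of flows steered to `rloc` among the selected set."""
    selected = usable_rlocs(mapping)
    total = sum(r["weight"] for r in selected)
    for r in selected:
        if r["rloc"] == rloc["rloc"]:
            return r["weight"] / total
    return 0.0

# Example mapping: two active locators split 75/25, plus a backup
# locator that only takes over if both priority-1 locators fail.
mapping = [
    {"rloc": "192.0.2.1",    "priority": 1, "weight": 75},
    {"rloc": "198.51.100.1", "priority": 1, "weight": 25},
    {"rloc": "203.0.113.1",  "priority": 2, "weight": 100},  # backup
]
print([r["rloc"] for r in usable_rlocs(mapping)])
print(share_of_traffic(mapping[0], mapping))
```

This is precisely the control a BGP-announcing site would otherwise obtain only by deaggregating its prefix.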
728 Routing all non-LISP ingress traffic through a third party which is 729 not one of its ISPs is only feasible for sites with modest amounts of 730 traffic (like those using the IPv6 tunnel broker services today), 731 especially in the first stage of the transition to LISP, with a 732 significant number of legacy sites. This is because the handling of 733 said traffic is likely to result in additional costs, which would be 734 passed down to the client. When the LISP/non-LISP site ratio becomes 735 high enough, this approach may prove increasingly attractive. 737 Compared to LISP+BGP, this approach avoids DFZ bloat caused by prefix 738 deaggregation for traffic engineering purposes, resulting in a slower 739 routing table increase in the case of new allocations and a potential 740 decrease for existing ones. Moreover, MSPs serving different clients 741 with adjacent aggregatable prefixes may lead to an additional decrease, 742 but quantifying this decrease is a subject for future research. 744 5.3. Proxy-ITR Route Distribution (PITR-RD) 746 Instead of a LISP site, or the MSP, announcing their EIDs with BGP to 747 the DFZ, this function can be outsourced to a third party, a P-ITR 748 Service Provider (PSP). This results in a decrease of the 749 operational complexity both at the site and at the MSP. 751 The PSP manages a set of distributed P-ITR(s) that advertise the 752 corresponding EID prefixes through BGP to the DFZ. These P-ITR(s) 753 then encapsulate the traffic they receive for those EIDs towards 754 the RLOCs of the LISP site, ensuring their reachability from non-LISP 755 sites. Note that handling non-LISP-originated traffic may incur 756 additional costs for the PSP, which may be passed down to the client. 758 While it is possible for a PSP to manually configure each client's 759 EID routes to be announced, this approach offers little flexibility 760 and is not scalable.
This section presents a scalable architecture 761 that offers automatic distribution of EID routes to LISP sites and 762 service providers. 764 The architecture requires no modification to existing LISP network 765 elements, but it introduces a new (conceptual) network element, the 766 EID Route Server, defined as a router that either propagates routes 767 learned from other EID Route Servers, or originates EID-Routes. 768 The EID-Routes that it originates are those that it is authoritative 769 for. It propagates these routes to Proxy-ITRs within the AS of the 770 EID Route Server. It is worth noting that a BGP-capable router can 771 also be considered an EID Route Server. 773 Further, an EID-Route is defined as a prefix originated via the Route 774 Server of the mapping service provider, which should be aggregated if 775 the MSP has multiple customers inside a single large contiguous 776 prefix. This prefix is propagated to other P-ITRs both within the 777 MSP and to other P-ITR operators it peers with. EID Route Servers 778 are operated either by the LISP site, MSPs, or PSPs, and they may be 779 collocated with a Map Server or P-ITR, but are a functionally 780 discrete entity. They distribute EID-Routes, using BGP, to other 781 domains, according to policies set by participants. 783 MSP (AS64500) 784 RS ---> P-ITR 785 | / 786 | _.--./ 787 ,-'' /`--. 788 LISP site ---,' | v `. 789 ( | DFZ )----- Mapping system 790 non-LISP site ----. | ^ ,' 791 `--. / _.-' 792 | `--'' 793 v / 794 P-ITR 795 PSP (AS64501) 797 Figure 5: The P-ITR Route Distribution architecture 799 The architecture described above decouples EID origination from route 800 propagation, with the following benefits: 802 o Can accurately represent business relationships between P-ITR 803 operators 805 o More mapping system agnostic 807 o Minor changes to P-ITR implementation, no changes to other 808 components 810 In the example in the figure we have an MSP providing services to the 811 LISP site.
The LISP site does not run BGP, and gets an EID 812 allocation directly from an RIR, or from the MSP, which may be a LIR. 813 Existing PI allocations can be migrated as well. The MSP ensures the 814 presence of the prefix in the mapping system, and runs an EID Route 815 Server to distribute it to P-ITR service providers. Since the LISP 816 site does not run BGP, the prefix will be originated with the AS 817 number of the MSP. 819 In the simple case depicted in Figure 5, the EID-Route of the LISP site 820 will be originated by the Route Server, and announced to the DFZ by 821 the PSP's P-ITRs with AS path 64501 64500. From that point on, the 822 usual BGP dynamics apply. This way, routes announced by a P-ITR are 823 still originated by the authoritative Route Server. Note that the 824 peering relationships between MSP/PSPs and those in the underlying 825 forwarding plane may not be congruent, making the AS path to a P-ITR 826 appear shorter than it is in reality. 828 The non-LISP site will select the best path towards the EID-prefix 829 according to its local BGP policies. Since AS-path length is usually 830 an important metric for selecting paths, careful placement of P-ITRs 831 could significantly reduce path stretch between LISP and non-LISP 832 sites. 834 The architecture allows for flexible policies between MSP/PSPs. 835 Consider the EID Route Server networks as control plane overlays, 836 facilitating the implementation of policies necessary to reflect the 837 business relationships between participants. The results are then 838 injected into the common underlying forwarding plane. For example, 839 some MSP/PSPs may agree to exchange EID-Prefixes and only announce 840 them to each of their forwarding plane customers. Global 841 reachability of an EID-prefix depends on the MSP the LISP site buys 842 service from, and is also subject to agreement between the mentioned 843 parties.
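The AS-path-based selection described above can be illustrated with a short sketch. This is an editor's illustration, not part of the architecture: the first announcement follows the Figure 5 example (PSP AS 64501 re-announcing the MSP-originated route from AS 64500), while the second PSP (AS 64502) and the transit AS (AS 64510) are hypothetical additions.

```python
# Illustrative sketch: a non-LISP site's router picking among several
# P-ITR announcements of the same EID-prefix by shortest AS path, as
# in regular BGP best-path selection.

def best_announcement(announcements):
    """Pick the announcement with the shortest AS path.  Ties are
    broken by the order received -- a simplification of the full BGP
    decision process."""
    return min(announcements, key=lambda a: len(a["as_path"]))

# The Figure 5 route (origin AS 64500, announced by PSP AS 64501),
# plus a hypothetical second PSP heard through a transit AS.
announcements = [
    {"p_itr": "psp-1", "as_path": [64501, 64500]},
    {"p_itr": "psp-2", "as_path": [64510, 64502, 64500]},
]
print(best_announcement(announcements)["p_itr"])
```

The shorter path through the nearby PSP wins, which is why careful P-ITR placement reduces path stretch between LISP and non-LISP sites.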
845 In terms of impact on the DFZ, this architecture results in a slower 846 routing table increase for new allocations, since traffic engineering 847 will be done at the LISP level. For existing allocations migrating 848 to LISP, the DFZ may decrease since MSPs may be able to aggregate the 849 prefixes announced. 858 The flexibility and scalability of this architecture does not come 859 without a cost, however: a PSP operator has to establish either 860 transit or peering relationships to improve their connectivity. 862 5.4. Migration Summary 864 The following table presents the expected effects of the different 865 transition scenarios during a certain phase on the DFZ routing table 866 size: 868 Phase | LISP+BGP | MSP P-ITR | PITR-RD 869 -----------------+--------------+-----------------+---------------- 870 Early transition | no change | slower increase | slower increase 871 Late transition | may decrease | slower increase | slower increase 872 LISP Internet | considerable decrease 874 It is expected that PITR-RD will co-exist with LISP+BGP during the 875 migration, with the latter being more popular in the early transition 876 phase. As the transition progresses and the MSP P-ITR and PITR-RD 877 ecosystem becomes more ubiquitous, LISP+BGP should become less 878 attractive, slowing down the increase of the number of routes in the 879 DFZ. 881 6. Step-by-Step Example BGP to LISP Migration Procedure 883 6.1. Customer Pre-Install and Pre-Turn-up Checklist 885 1.
Determine how many current physical service provider connections 886 the customer has and their existing bandwidth and traffic 887 engineering requirements. 889 This information will determine the number of routing locators, 890 and the priorities and weights that should be configured on the 891 xTRs. 893 2. Make sure the customer router has LISP capabilities. 895 * Check the OS version of the CE router. If LISP is an add-on, 896 check if it is installed. 898 This information can be used to determine whether the platform 899 supports LISP, and whether a 900 software and/or hardware upgrade is required. 902 * Have the customer upgrade (if necessary, software and/or hardware) 903 to be LISP capable. 905 3. Obtain the current running configuration of the CE router. A suggested 906 LISP router configuration example can be customized to the 907 customer's existing environment. 909 4. Verify MTU handling. 911 * Request an increase in MTU to 1556 or more on service provider 912 connections. Prior to the MTU change, verify that a 1500-byte packet 913 can be sent from the P-xTR to the RLOC with the Don't Fragment (DF) bit set. 915 * Ensure the customer is not filtering ICMP Unreachable or Time 916 Exceeded messages on their firewall or router. 918 LISP, like any tunneling protocol, will increase the size of 919 packets when the LISP header is appended. If increasing the MTU 920 of the access links is not possible, care must be taken that ICMP 921 is not being filtered, in order to allow Path MTU Discovery to 922 take place. 924 5. Validate the member prefix allocation. 926 This step checks whether the prefix used by the customer is a 927 direct (Provider-Independent) allocation, or a prefix assigned by a 928 physical service provider (Provider-Aggregatable). If the 929 prefixes are assigned by other service providers, then a Letter of 930 Agreement is required to announce the prefixes through the Proxy 931 Service Provider. 933 6. Verify the member RLOCs and their reachability.
935 This step ensures that the RLOCs configured on the CE router are 936 in fact reachable and working. 938 7. Prepare for cut-over. 940 * If possible, have a host outside of all security and filtering 941 policies connected to the console port of the edge router or 942 switch. 944 * Make sure the customer has access to the router in order to 945 configure it. 947 6.2. Customer Activating LISP Service 949 1. The customer configures LISP on the CE router(s) from the service 950 provider's recommended configuration. 952 The LISP configuration consists of the EID prefix, the locators, 953 and the priorities and weights of the mapping between the two. 954 In addition, the xTR must be configured with Map 955 Resolver(s), Map Server(s), and the shared key for registering to 956 the Map Server(s). If required, Proxy-ETR(s) may be configured as 957 well. 959 In addition to the LISP configuration, the following steps are needed: 961 * Ensure default route(s) to next-hop external neighbors are 962 included and RLOCs are present in the configuration. 964 * If two or more routers are used, ensure all RLOCs are included 965 in the LISP configuration on all routers. 967 * It will be necessary to redistribute the default route via the IGP 968 between the external routers. 970 2. When the transition is ready, perform a soft shutdown on the existing eBGP 971 peer session(s). 973 * From the CE router, use the LISP Internet Groper (LIG) to ensure that 974 registration is successful. 975 * To verify LISP connectivity, ping LISP-connected sites. See 976 http://www.lisp4.net/ and/or http://www.lisp6.net/ for 977 potential candidates. If possible, find ping destinations 978 that are not covered by a prefix in the global BGP routing 979 system, because P-ITRs may deliver the packets even if LISP 980 connectivity is not working. Traceroutes may help discover if 981 this is the case. 983 * To verify connectivity to non-LISP sites, try accessing a 984 landmark (e.g., a major Internet site) via a web browser. 986 6.3. Cut-Over Provider Preparation and Changes 988 1.
Verify the site configuration and then verify active registration on the Map 989 Server(s): 990 * Authentication key 992 * EID prefix 994 2. Add the EID space to the map-cache on the proxies 996 3. Add the networks to the BGP advertisement on the proxies 998 * Modify route-maps/policies on the P-xTRs 1000 * Modify route policies on core routers (if a non-connected 1001 member) 1003 * Modify ingress policers on core routers 1005 * Ensure route announcement in looking glass servers and RouteViews 1007 4. Perform a traffic verification test 1009 * Ensure MTU handling is as expected (PMTUD working) 1011 * Ensure Proxy-ITR map-cache population 1013 * Ensure access from traceroute/ping servers around the Internet 1015 * Use a looking glass to check for external visibility of 1016 the registration via several Map Resolvers (e.g., 1017 http://lispmon.net/). 1019 7. Security Considerations 1021 Security implications of LISP deployments are to be discussed in 1022 separate documents. [I-D.ietf-lisp-threats] gives an overview of 1023 LISP threat models, while securing mapping lookups is discussed in 1024 [I-D.ietf-lisp-sec]. 1026 8. IANA Considerations 1028 This memo includes no request to IANA. 1030 9. Acknowledgements 1032 Many thanks to Margaret Wasserman for her contribution to the IETF76 1033 presentation that kickstarted this work. The authors would also like 1034 to thank Damien Saucez, Luigi Iannone, Joel Halpern, Vince Fuller, 1035 Dino Farinacci, Terry Manderson, Noel Chiappa, Hannu Flinck, Paul 1036 Vinciguerra, Fred Templin, and everyone else who provided input. 1038 10. References 1040 10.1. Normative References 1042 [RFC6830] Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, "The 1043 Locator/ID Separation Protocol (LISP)", RFC 6830, 1044 January 2013. 1046 [RFC6832] Lewis, D., Meyer, D., Farinacci, D., and V. Fuller, 1047 "Interworking between Locator/ID Separation Protocol 1048 (LISP) and Non-LISP Sites", RFC 6832, January 2013. 1050 [RFC6833] Fuller, V. and D.
Farinacci, "Locator/ID Separation 1051 Protocol (LISP) Map-Server Interface", RFC 6833, 1052 January 2013. 1054 10.2. Informative References 1056 [I-D.ietf-lisp-eid-block] 1057 Iannone, L., Lewis, D., Meyer, D., and V. Fuller, "LISP 1058 EID Block", draft-ietf-lisp-eid-block-04 (work in 1059 progress), February 2013. 1061 [I-D.ietf-lisp-sec] 1062 Maino, F., Ermagan, V., Cabellos-Aparicio, A., Saucez, D., 1063 and O. Bonaventure, "LISP-Security (LISP-SEC)", 1064 draft-ietf-lisp-sec-04 (work in progress), October 2012. 1066 [I-D.ietf-lisp-threats] 1067 Saucez, D., Iannone, L., and O. Bonaventure, "LISP Threats 1068 Analysis", draft-ietf-lisp-threats-04 (work in progress), 1069 February 2013. 1071 [RFC4459] Savola, P., "MTU and Fragmentation Issues with In-the- 1072 Network Tunneling", RFC 4459, April 2006. 1074 [RFC6834] Iannone, L., Saucez, D., and O. Bonaventure, "Locator/ID 1075 Separation Protocol (LISP) Map-Versioning", RFC 6834, 1076 January 2013. 1078 [cache] Jung, J., Sit, E., Balakrishnan, H., and R. Morris, "DNS 1079 performance and the effectiveness of caching", 2002. 1081 Authors' Addresses 1083 Lorand Jakab 1084 Cisco Systems 1085 170 Tasman Drive 1086 San Jose, CA 95134 1087 USA 1089 Email: lojakab@cisco.com 1091 Albert Cabellos-Aparicio 1092 Technical University of Catalonia 1093 C/Jordi Girona, s/n 1094 BARCELONA 08034 1095 Spain 1097 Email: acabello@ac.upc.edu 1099 Florin Coras 1100 Technical University of Catalonia 1101 C/Jordi Girona, s/n 1102 BARCELONA 08034 1103 Spain 1105 Email: fcoras@ac.upc.edu 1107 Jordi Domingo-Pascual 1108 Technical University of Catalonia 1109 C/Jordi Girona, s/n 1110 BARCELONA 08034 1111 Spain 1113 Email: jordi.domingo@ac.upc.edu 1115 Darrel Lewis 1116 Cisco Systems 1117 170 Tasman Drive 1118 San Jose, CA 95134 1119 USA 1121 Email: darlewis@cisco.com