Internet Research Task Force                                  T. Li, Ed.
Internet-Draft                                             Cisco Systems
Intended status: Informational                         November 29, 2010
Expires: June 2, 2011

             Recommendation for a Routing Architecture
                  draft-irtf-rrg-recommendation-16

Abstract

   It is commonly recognized that the Internet routing and addressing
   architecture is facing challenges in scalability, multihoming, and
   inter-domain traffic engineering.  This document presents, as a
   recommendation of future directions for the IETF, solutions which
   could aid the future scalability of the Internet.  To this end, this
   document surveys many of the proposals that were brought forward for
   discussion in this activity, as well as some of the subsequent
   analysis and the architectural recommendation of the chairs.  This
   document is a product of the Routing Research Group.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).
Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at http://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on June 2, 2011. 39 Copyright Notice 41 Copyright (c) 2010 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (http://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 Table of Contents 56 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 6 57 1.1. Background to This Document . . . . . . . . . . . . . . . 6 58 1.2. Areas of Group Consensus . . . . . . . . . . . . . . . . . 7 59 1.3. Abbreviations . . . . . . . . . . . . . . . . . . . . . . 8 60 2. Locator Identifier Separation Protocol (LISP) . . . . . . . . 9 61 2.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 9 62 2.1.1. Key Idea . . . . . . . . . . . . . . . . . . . . . . . 9 63 2.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 9 64 2.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 10 65 2.1.4. References . . . . . . . . . . . . . . . . . . . . . . 10 66 2.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 11 67 2.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 12 68 3. Routing Architecture for the Next Generation Internet 69 (RANGI) . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 70 3.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 13 71 3.1.1. Key Idea . . . . . . . . . . . . . . . . . . . . . . . 13 72 3.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 13 73 3.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 14 74 3.1.4. References . . . . . . . . . . . . . . . . . . . . . . 14 75 3.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 14 76 3.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 15 77 4. Internet Vastly Improved Plumbing (Ivip) . . . . . . . . . . . 16 78 4.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 16 79 4.1.1. Key Ideas . . . . . . . . . . . . . . . . . . . . . . 16 80 4.1.2. Extensions . . . . . . . . . . . . . . . . . . . . . . 18 81 4.1.2.1. TTR Mobility . . . . . . . . . . . . . . . . . . . 18 82 4.1.2.2. Modified Header Forwarding . . . . . . . . . . . . 18 83 4.1.3. Gains . . . . . . . . . . . . . . . . . . . . . . . . 18 84 4.1.4. Costs . . . . . . . . . . . . . . . . . . . . . . . . 19 85 4.1.5. References . . . . . . . . . . . . . . . . . . . . . . 19 86 4.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 19 87 4.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 20 88 5. Hierarchical IPv4 Framework (hIPv4) . . . . . . . . . . . . . 22 89 5.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 22 90 5.1.1. Key Idea . . . 
. . . . . . . . . . . . . . . . . . . . 22 91 5.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 23 92 5.1.3. Costs And Issues . . . . . . . . . . . . . . . . . . . 24 93 5.1.4. References . . . . . . . . . . . . . . . . . . . . . . 24 94 5.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 24 95 5.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 25 97 6. Name overlay (NOL) service for scalable Internet routing . . . 25 98 6.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 25 99 6.1.1. Key Idea . . . . . . . . . . . . . . . . . . . . . . . 26 100 6.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 26 101 6.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 27 102 6.1.4. References . . . . . . . . . . . . . . . . . . . . . . 28 103 6.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 28 104 6.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 29 105 7. Compact routing in locator identifier mapping system (CRM) . . 30 106 7.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 30 107 7.1.1. Key Idea . . . . . . . . . . . . . . . . . . . . . . . 30 108 7.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 30 109 7.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 30 110 7.1.4. References . . . . . . . . . . . . . . . . . . . . . . 30 111 7.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 30 112 7.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 32 113 8. Layered mapping system (LMS) . . . . . . . . . . . . . . . . . 32 114 8.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 32 115 8.1.1. Key Ideas . . . . . . . . . . . . . . . . . . . . . . 32 116 8.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 33 117 8.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 33 118 8.1.4. References . . . . . . . . . . . . . . . . . . . . . . 34 119 8.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 34 120 8.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 35 121 9. 2-phased mapping . . . . . . . . . . . . . . . . . . . . . . . 35 122 9.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 35 123 9.1.1. Considerations . . . . . . . . . . . . . . . . . . . . 35 124 9.1.2. Basics of a 2-phased mapping . . . . . . . . . . . . . 35 125 9.1.3. Gains . . . . . . . . . . . . . . . . . . . . . . . . 36 126 9.1.4. Summary . . . . . . . . . . . . . . . . . . . . . . . 36 127 9.1.5. References . . . . . . . . . . . . . . . . . . . . . . 36 128 9.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 36 129 9.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 37 130 10. Global Locator, Local Locator, and Identifier Split 131 (GLI-Split) . . . . . . . . . . . . . . . . . . . . . . . . . 37 132 10.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 37 133 10.1.1. Key Idea . . . . . . . . . . . . . . . . . . . . . . . 37 134 10.1.2. Gains . . . . . . . . . . . . . . . . . . . . . . . . 37 135 10.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 38 136 10.1.4. References . . . . . . . . . . . . . . . . . . . . . . 38 137 10.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 38 138 10.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 39 139 11. Tunneled Inter-domain Routing (TIDR) . . . . . . . . . . . . . 40 140 11.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 40 141 11.1.1. Key Idea . . . . . . . . . . . . . . . . . . . . . . . 40 142 11.1.2. Gains . . . . . . 
. . . . . . . . . . . . . . . . . . 40 143 11.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 41 144 11.1.4. References . . . . . . . . . . . . . . . . . . . . . . 41 146 11.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 41 147 11.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 43 148 12. Identifier-Locator Network Protocol (ILNP) . . . . . . . . . . 43 149 12.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 43 150 12.1.1. Key Ideas . . . . . . . . . . . . . . . . . . . . . . 43 151 12.1.2. Benefits . . . . . . . . . . . . . . . . . . . . . . . 43 152 12.1.3. Costs . . . . . . . . . . . . . . . . . . . . . . . . 45 153 12.1.4. References . . . . . . . . . . . . . . . . . . . . . . 45 154 12.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 45 155 12.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 46 156 13. Enhanced Efficiency of Mapping Distribution Protocols in 157 Map-and-Encap Schemes (EEMDP) . . . . . . . . . . . . . . . . 48 158 13.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 48 159 13.1.1. Introduction . . . . . . . . . . . . . . . . . . . . . 48 160 13.1.2. Management of Mapping Distribution of Subprefixes 161 Spread Across Multiple ETRs . . . . . . . . . . . . . 48 162 13.1.3. Management of Mapping Distribution for Scenarios 163 with Hierarchy of ETRs and Multihoming . . . . . . . . 50 164 13.1.4. References . . . . . . . . . . . . . . . . . . . . . . 50 165 13.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 50 166 13.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 51 167 14. Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . 52 168 14.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 52 169 14.1.1. Need for Evolution . . . . . . . . . . . . . . . . . . 53 170 14.1.2. Relation to Other RRG Proposals . . . . . . . . . . . 53 171 14.1.3. Aggregation with Increasing Scopes . . . . . . . . . . 53 172 14.1.4. References . . . . . . . . . . . . . . . . . . . . . . 55 173 14.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 55 174 14.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 56 175 15. Name-Based Sockets . . . . . . . . . . . . . . . . . . . . . . 56 176 15.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 56 177 15.1.1. References . . . . . . . . . . . . . . . . . . . . . . 58 178 15.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 58 179 15.2.1. Deployment . . . . . . . . . . . . . . . . . . . . . . 59 180 15.2.2. Edge-networks . . . . . . . . . . . . . . . . . . . . 59 181 15.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 59 182 16. Routing and Addressing in Networks with Global Enterprise 183 Recursion (IRON-RANGER) . . . . . . . . . . . . . . . . . . . 59 184 16.1. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 59 185 16.1.1. Gains . . . . . . . . . . . . . . . . . . . . . . . . 60 186 16.1.2. Costs . . . . . . . . . . . . . . . . . . . . . . . . 61 187 16.1.3. References . . . . . . . . . . . . . . . . . . . . . . 61 188 16.2. Critique . . . . . . . . . . . . . . . . . . . . . . . . . 61 189 16.3. Rebuttal . . . . . . . . . . . . . . . . . . . . . . . . . 62 190 17. Recommendation . . . . . . . . . . . . . . . . . . . . . . . . 63 191 17.1. Motivation . . . . . . . . . . . . . . . . . . . . . . . . 63 192 17.2. Recommendation to the IETF . . . . . . . . . . . . . . . . 65 193 17.3. Rationale . . . . . . . . . . . . . . . . . . . . . . . . 65 195 18. 
Acknowledgments  . . . . . . . . . . . . . . . . . . . . . . .  65
   19.  IANA Considerations  . . . . . . . . . . . . . . . . . . . .  66
   20.  Security Considerations  . . . . . . . . . . . . . . . . . .  66
   21.  Informative References . . . . . . . . . . . . . . . . . . .  66
   Author's Address . . . . . . . . . . . . . . . . . . . . . . . . .  72

1.  Introduction

   It is commonly recognized that the Internet routing and addressing
   architecture is facing challenges in scalability, multihoming, and
   inter-domain traffic engineering.  The problem being addressed has
   been documented in [I-D.narten-radir-problem-statement], and the
   design goals that we have discussed can be found in
   [I-D.irtf-rrg-design-goals].

   This document surveys many of the proposals that were brought
   forward for discussion in this activity.  For some of the proposals,
   this document also includes additional analysis showing some of the
   concerns with specific proposals, and how some of those concerns may
   be addressed.  Readers are cautioned not to draw any conclusions
   about the degree of interest or endorsement by the Routing Research
   Group (RRG) from the presence of any proposals in this document, or
   from the amount of analysis devoted to specific proposals.

1.1.  Background to This Document

   The RRG was chartered to research and recommend a new routing
   architecture for the Internet.  The goal was to explore many
   alternatives and build consensus around a single proposal.  The only
   constraint on the group's process was that the process be open; the
   group set forth with the usual approach of discussing proposals and
   trying to build consensus around them.  There were no explicit
   contingencies in the group's process for the eventuality that the
   group did not reach consensus.

   The group met at every IETF meeting from March 2007 to March 2010
   and discussed many proposals, both in person and via its mailing
   list.  Unfortunately, the group did not reach consensus.  Rather
   than lose the contributions and progress that had been made, the
   chairs (Lixia Zhang and Tony Li) elected to collect the proposals of
   the group and some of the debate concerning the proposals, and to
   make a recommendation from those proposals.  Thus, the
   recommendation reflects the opinions of the chairs and not
   necessarily the consensus of the group.

   The group was able to reach consensus on a number of items that are
   included below.  The proposals included here were collected in an
   open call amongst the group.  Once the proposals were collected, the
   group was solicited to submit critiques of each proposal.  The group
   was asked to self-organize to produce a single critique for each
   proposal.  In cases where several critiques were submitted, the
   editor selected one.  The proponents of each proposal were then
   given the opportunity to write a rebuttal of the critique.  Finally,
   the group again had the opportunity to write a counterpoint to the
   rebuttal.  No counterpoints were submitted.  For pragmatic reasons,
   each submission was severely constrained in length.

   All of the proposals were given the opportunity to progress their
   documents to RFC status; however, not all of them have chosen to
   pursue this path.  As a result, some of the references in this
   document may become inaccessible.  This is unfortunately
   unavoidable.

   The group did reach consensus that the overall document should be
   published.
The document has been reviewed by many of the active
   members of the Research Group.

1.2.  Areas of Group Consensus

   The group was also able to reach broad and clear consensus on some
   terminology and several important technical points.  For the sake of
   posterity, these are recorded here:

   1.   A "node" is either a host or a router.

   2.   A "router" is any device that forwards packets at the Network
        Layer (e.g., IPv4, IPv6) of the Internet Architecture.

   3.   A "host" is a device that can send/receive packets to/from the
        network, but does not forward packets.

   4.   A "bridge" is a device that forwards packets at the Link Layer
        (e.g., Ethernet) of the Internet Architecture.  An Ethernet
        switch or an Ethernet hub is an example of a bridge.

   5.   An "address" is an object that combines aspects of identity
        with topological location.  IPv4 and IPv6 addresses are current
        examples.

   6.   A "locator" is a structured topology-dependent name that is not
        used for node identification, and is not a path.  Two related
        meanings are current, depending on the class of things being
        named:

        1.  The topology-dependent name of a node's interface.

        2.  The topology-dependent name of a single subnetwork OR the
            topology-dependent name of a group of related subnetworks
            that share a single aggregate.  An IP routing prefix is a
            current example of the latter.

   7.   An "identifier" is a topology-independent name for a logical
        node.  Depending upon instantiation, a "logical node" might be
        a single physical device, a cluster of devices acting as a
        single node, or a single virtual partition of a single physical
        device.  An OSI End System Identifier (ESID) is an example of
        an identifier.  A Fully-Qualified Domain Name that precisely
        names one logical node is another example.  (Note well that not
        all FQDNs meet this definition.)

   8.   Various other names (i.e., other than addresses, locators, or
        identifiers), each of which has the sole purpose of identifying
        a component of a logical system or physical device, might exist
        at various protocol layers in the Internet Architecture.

   9.   The Research Group has rough consensus that separating identity
        from location is desirable and technically feasible.  However,
        the Research Group does NOT have consensus on the best
        engineering approach to such an identity/location split.

   10.  The Research Group has consensus that the Internet needs to
        support multihoming in a manner that scales well and does not
        have prohibitive costs.

   11.  Any IETF solution to Internet scaling has to not only support
        multihoming but also address the real-world constraints of the
        end customers (large and small).

1.3.  Abbreviations

   This section lists some of the most common abbreviations used in the
   remainder of this document.

   DFZ    Default-Free Zone

   EID    Endpoint IDentifier: The precise definition varies depending
          on the proposal.

   ETR    Egress Tunnel Router: In a system that tunnels traffic across
          the existing infrastructure by encapsulating it, the device
          close to the actual ultimate destination that decapsulates
          the traffic before forwarding it to the ultimate destination.

   FIB    Forwarding Information Base: The forwarding table, used in
          the data plane of routers to select the next hop for each
          packet.
   ITR    Ingress Tunnel Router: In a system that tunnels traffic
          across the existing infrastructure by encapsulating it, the
          device close to the actual original source that encapsulates
          the traffic before using the tunnel to send it to the
          appropriate ETR.

   PA     Provider Aggregatable: Address space that can be aggregated
          as part of a service provider's routing advertisements.

   PI     Provider Independent: Address space assigned by an Internet
          registry independent of any service provider.

   PMTUD  Path Maximum Transmission Unit Discovery: The process or
          mechanism that determines the largest packet that can be sent
          between a given source and destination without being either
          i) fragmented (IPv4 only), or ii) discarded (if not
          fragmentable) because it is too large to be sent down one
          link in the path from the source to the destination.

   RIB    Routing Information Base: The routing table, used in the
          control plane of routers to exchange routing information and
          construct the FIB.

   RLOC   Routing LOCator: The precise definition varies depending on
          the proposal.

   xTR    Tunnel Router: In some systems, the term used to describe a
          device that can function as both an ITR and an ETR.

2.  Locator Identifier Separation Protocol (LISP)

2.1.  Summary

2.1.1.  Key Idea

   LISP implements a locator-identifier separation mechanism using
   encapsulation between routers at the "edge" of the Internet.  Such a
   separation allows topological aggregation of the routable addresses
   (locators) while providing stable and portable numbering of end
   systems (identifiers).  (A simplified sketch of this general
   map-and-encap pattern appears after the reference list below.)

2.1.2.  Gains

   o  topological aggregation of locator space (RLOCs) used for
      routing, which greatly reduces both the overall size and the
      "churn rate" of the information needed to operate the Internet
      global routing system

   o  separate identifier space (EIDs) for end-systems, effectively
      allowing "PI for all" (no renumbering cost for connectivity
      changes) without adding state to the global routing system

   o  improved traffic engineering capabilities that explicitly do not
      add state to the global routing system and whose deployment will
      allow active removal of the more-specific state that is currently
      used

   o  no changes required to end systems

   o  no changes to Internet "core" routers

   o  minimal and straightforward changes to "edge" routers

   o  day-one advantages for early adopters

   o  defined router-to-router protocol

   o  defined database mapping system

   o  defined deployment plan

   o  defined interoperability/interworking mechanisms

   o  defined scalable end-host mobility mechanisms

   o  prototype implementation already exists and is undergoing testing

   o  production implementations in progress

2.1.3.  Costs

   o  mapping system infrastructure (map servers, map resolvers,
      Alternative Logical Topology (ALT) routers) (new potential
      business opportunity)

   o  interworking infrastructure (proxy ITRs) (new potential business
      opportunity)

   o  overhead for determining/maintaining locator/path liveness
      (common issue for all id/loc separation proposals)

2.1.4.  References

   [I-D.ietf-lisp] [I-D.ietf-lisp-alt] [I-D.ietf-lisp-ms]
   [I-D.ietf-lisp-interworking] [I-D.meyer-lisp-mn]
   [I-D.farinacci-lisp-lig] [I-D.meyer-loc-id-implications]
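   The following toy sketch (in Python) illustrates the general map-
   and-encap pattern that LISP and several later proposals in this
   document share: an ITR looks up the destination EID in a mapping
   table keyed by EID prefix, wraps the original packet in an outer
   header addressed to the chosen RLOC, and the ETR strips that outer
   header before delivery.  The prefixes, addresses, and data
   structures are invented for illustration only; they are not LISP's
   actual wire formats or mapping protocol.

      import ipaddress

      # Illustrative EID-prefix -> RLOC mapping (a stand-in for the
      # mapping system; real systems distribute or query this on demand).
      MAPPINGS = {
          ipaddress.ip_network("203.0.113.0/24"): "198.51.100.1",   # ETR A
          ipaddress.ip_network("192.0.2.0/24"):   "198.51.100.77",  # ETR B
      }

      def lookup_rloc(dst_eid):
          """Longest-prefix match of the destination EID against the map."""
          matches = [n for n in MAPPINGS
                     if ipaddress.ip_address(dst_eid) in n]
          if not matches:
              return None
          return MAPPINGS[max(matches, key=lambda n: n.prefixlen)]

      def itr_encapsulate(inner_packet, dst_eid):
          """ITR: add an outer header addressed to the RLOC of the ETR."""
          rloc = lookup_rloc(dst_eid)
          if rloc is None:
              raise LookupError("no mapping for " + dst_eid)
          return {"outer_dst": rloc, "payload": inner_packet}

      def etr_decapsulate(encapsulated):
          """ETR: strip the outer header and forward the original packet."""
          return encapsulated["payload"]

      pkt = {"src": "192.0.2.10", "dst": "203.0.113.5", "data": b"hello"}
      tunneled = itr_encapsulate(pkt, pkt["dst"])
      assert etr_decapsulate(tunneled) == pkt
      print("tunneled via RLOC", tunneled["outer_dst"])

   The point of the pattern is that only the aggregatable RLOCs appear
   in the global routing system, while the EID space can be assigned
   independently of topology.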
2.2.  Critique

   LISP-ALT distributes mapping information to ITRs via (optional,
   local, potentially caching) Map Resolvers and globally distributed
   query servers: the ETRs themselves and optional Map Servers (MS).

   A fundamental problem with any global query server network is that
   the frequently long paths and greater risk of packet loss may cause
   ITRs to drop or significantly delay the initial packets of many new
   sessions.  ITRs drop the packet(s) they have no mapping for.  After
   the mapping arrives, the ITR waits for a resent packet and will
   tunnel that packet correctly.  (This drop-and-retry behavior is
   illustrated in the sketch at the end of this critique.)  These
   "initial packet delays" reduce performance and so create a major
   barrier to voluntary adoption on a wide enough basis to solve the
   routing scaling problem.

   ALT's delays are compounded by its structure being "aggressively
   aggregated", without regard to the geographic location of the
   routers.  Tunnels between ALT routers will often span
   intercontinental distances and traverse many Internet routers.

   The many levels to which a query typically ascends in the ALT
   hierarchy before descending towards its destination will often
   involve excessively long geographic paths and so worsen initial
   packet delays.

   No solution has been proposed for these problems or for the
   contradiction between the need for high aggregation while making the
   ALT structure robust against single points of failure.

   Multihoming service restoration by LISP ITRs depends on their
   determining the reachability of end-user networks via two or more
   ETRs.  Having large numbers of ITRs do this is inefficient and may
   overburden ETRs.

   Testing reachability of the ETRs is complex and costly - and
   insufficient.  ITRs cannot test network reachability via each ETR,
   since the ITRs have no address of a device in that network.  So ETRs
   must report network unreachability to ITRs.

   LISP involves complex communication between ITRs and ETRs, with UDP
   and 64-bit LISP headers in all traffic packets.

   The advantage of LISP+ALT is that its ability to handle billions of
   EIDs is not constrained by the need to transmit or store the mapping
   to any one location.  Such numbers, beyond a few tens of millions of
   EIDs, will only result if the system is used for mobility.  Yet the
   concerns just mentioned about ALT's structure arise from the
   millions of ETRs that would be needed just for non-mobile networks.

   In LISP's mobility approach, each Mobile Node (MN) needs an RLOC
   address to be its own ETR, meaning the MN cannot be behind NAT.
   Mapping changes must be sent instantly to all relevant ITRs every
   time the MN gets a new address - which LISP cannot achieve.

   In order to enforce ISP filtering of incoming packets by source
   address, LISP ITRs would have to implement the same filtering on
   each decapsulated packet.  This may be prohibitively expensive.

   LISP monolithically integrates multihoming failure detection and
   restoration decision-making processes into the Core-Edge Separation
   (CES) scheme itself.  End-user networks must rely on the necessarily
   limited capabilities that are built into every ITR.

   LISP-ALT may be able to solve the routing scaling problem, but
   alternative approaches would be superior because they eliminate the
   initial packet delay problem and give end-user networks real-time
   control over ITR tunneling.
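   The drop-and-retry behavior described above can be made concrete
   with a small sketch.  The cache, resolver, and prefixes below are
   invented for illustration; the point is simply that the first packet
   of a flow is lost when the ITR has no cached mapping, and delivery
   waits for both the mapping lookup and a retransmission.

      MAP_CACHE = {}                     # per-ITR cache: EID prefix -> RLOC

      def resolver_lookup(eid_prefix):
          """Stand-in for a mapping-system query (ALT, map server, ...)."""
          return {"203.0.113.0/24": "198.51.100.1"}.get(eid_prefix)

      def itr_forward(eid_prefix, packet):
          """Return a tunneled packet, or None if the packet was dropped."""
          if eid_prefix in MAP_CACHE:
              return {"outer_dst": MAP_CACHE[eid_prefix], "payload": packet}
          # Cache miss: the ITR issues a map request and drops the packet.
          # Delivery now depends on the sender retransmitting after the
          # mapping has arrived -- the "initial packet delay".
          MAP_CACHE[eid_prefix] = resolver_lookup(eid_prefix)
          return None

      assert itr_forward("203.0.113.0/24", b"SYN") is None        # dropped
      retry = itr_forward("203.0.113.0/24", b"SYN (resent)")      # tunneled
      print("resent packet tunneled to", retry["outer_dst"])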
2.3.  Rebuttal

   Initial-packet loss/delays turn out not to be a deep issue.
   Mechanisms for interoperation with the legacy part of the network
   are needed in any viably deployable design, and LISP has such
   mechanisms.  If needed, initial packets can be sent via those legacy
   mechanisms until the ITR has a mapping.  (Field experience has shown
   that the caches on those interoperation devices are guaranteed to be
   populated, as 'crackers' doing address-space sweeps periodically
   send packets to every available mapping.)

   On ALT issues, it is not at all mandatory that ALT be the mapping
   system used in the long term.  LISP has a standardized mapping
   system interface, in part to allow reasonably smooth deployment of
   whatever new mapping system(s) experience might show are required.
   At least one other mapping system (LISP-TREE) [LISP-TREE], which
   avoids ALT's problems (such as query load concentration at
   high-level nodes), has already been laid out and extensively
   simulated.  Exactly what mixture of mapping system(s) is optimal is
   not really answerable without more extensive experience, but LISP is
   designed to allow evolutionary changes to other mapping system(s).

   As far as ETR reachability goes, a potential problem for which there
   is a solution with an adequate level of efficiency, complexity, and
   robustness is not really a problem.  LISP has a number of
   overlapping mechanisms that, it is believed, will provide adequate
   reachability detection (along the three axes above), and in field
   testing to date, they have behaved as expected.

   Operation of LISP devices behind a NAT has already been
   demonstrated.  A number of mechanisms to update correspondent nodes
   when a mapping is updated have been designed (some are already in
   use).

3.  Routing Architecture for the Next Generation Internet (RANGI)

3.1.  Summary

3.1.1.  Key Idea

   Similar to the Host Identity Protocol (HIP) [RFC4423], RANGI
   introduces a host identifier layer between the network layer and the
   transport layer, and the transport-layer associations (i.e., TCP
   connections) are no longer bound to IP addresses, but to host
   identifiers.  The major difference from HIP is that the host
   identifier in RANGI is a 128-bit hierarchical and cryptographic
   identifier that has organizational structure.  As a result, the
   corresponding ID->locator mapping system for such identifiers has a
   reasonable business model and clear trust boundaries.  In addition,
   RANGI uses IPv4-embedded IPv6 addresses as locators.  The Locator
   Domain Identifier (LD ID) (i.e., the leftmost 96 bits) of such a
   locator is a provider-assigned /96 IPv6 prefix, while the last four
   octets are a local IPv4 address (either public or private).  This
   special locator could be used to realize 6over4 automatic tunneling
   (borrowing ideas from ISATAP [RFC5214]), which will reduce the
   deployment cost of this new routing architecture.  Within RANGI, the
   mappings from FQDNs to host identifiers are stored in the DNS
   system, while the mappings from host identifiers to locators are
   stored in a distributed id/locator mapping system (e.g., a
   hierarchical Distributed Hash Table (DHT) system, or a reverse DNS
   system).
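   As a rough illustration of the locator format described above, the
   sketch below composes and decomposes an IPv4-embedded IPv6 locator
   from a provider-assigned /96 prefix and a local IPv4 address.  The
   prefix and address are documentation examples, and the bit layout is
   only the straightforward "low 32 bits carry the IPv4 address"
   reading of the summary, not a normative RANGI encoding.

      import ipaddress

      LD_PREFIX = ipaddress.IPv6Network("2001:db8:0:1::/96")  # example LD ID
      LOCAL_V4 = ipaddress.IPv4Address("192.0.2.33")          # example local addr

      def make_locator(ld_prefix, v4addr):
          """Embed a 32-bit IPv4 address in the low bits of a /96 prefix."""
          assert ld_prefix.prefixlen == 96
          return ipaddress.IPv6Address(int(ld_prefix.network_address)
                                       | int(v4addr))

      def split_locator(locator):
          """Recover the /96 LD ID and the embedded IPv4 address."""
          ld = ipaddress.IPv6Network((int(locator) >> 32 << 32, 96))
          v4 = ipaddress.IPv4Address(int(locator) & 0xFFFFFFFF)
          return ld, v4

      loc = make_locator(LD_PREFIX, LOCAL_V4)
      print(loc)                 # 2001:db8:0:1::c000:221
      print(split_locator(loc))  # (IPv6Network('2001:db8:0:1::/96'),
                                 #  IPv4Address('192.0.2.33'))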
3.1.2.  Gains

   RANGI achieves almost all of the goals set forth by RRG as follows:

   1.  Routing Scalability: Scalability is achieved by decoupling
       identifiers from locators.

   2.  Traffic Engineering: Hosts located in a multihomed site can
       suggest the upstream ISP for outbound and inbound traffic, while
       the first-hop Locator Domain Border Router (LDBR) (i.e., site
       border router) has the final decision on the upstream ISP
       selection.

   3.  Mobility and Multihoming: Sessions will not be interrupted due
       to locator change in cases of mobility or multihoming.

   4.  Simplified Renumbering: When changing providers, the local IPv4
       addresses of the site do not need to change.  Hence, the
       internal routers within the site do not need renumbering.

   5.  Decoupling Location and Identifier: Obvious.

   6.  Routing Stability: Since the locators are topologically
       aggregatable and the internal topology within the LD will not be
       disclosed outside, routing stability could be improved greatly.

   7.  Routing Security: RANGI reuses the current routing system and
       does not introduce any new security risks into it.

   8.  Incremental Deployability: RANGI allows an easy transition from
       IPv4 networks to IPv6 networks.  In addition, RANGI proxy allows
       RANGI-aware hosts to communicate with legacy IPv4 or IPv6 hosts,
       and vice-versa.

3.1.3.  Costs

   1.  A host change is required.

   2.  The first-hop LDBR change is required to support site-controlled
       traffic-engineering capability.

   3.  The ID->Locator mapping system is a new infrastructure to be
       deployed.

   4.  RANGI proxy needs to be deployed for communication between
       RANGI-aware hosts and legacy hosts.

3.1.4.  References

   [RFC3007] [RFC4423] [I-D.xu-rangi] [I-D.xu-rangi-proxy] [RANGI]

3.2.  Critique

   RANGI is an ID/locator split protocol that, like HIP, places a
   cryptographically signed ID between the network layer (IPv6) and
   transport.  Unlike the HIP ID, the RANGI ID has a hierarchical
   structure that allows it to support ID->locator lookups.  This
   hierarchical structure addresses two weaknesses of the flat HIP ID:
   the difficulty of doing the ID->locator lookup, and the
   administrative scalability of doing firewall filtering on flat IDs.
   The usage of this hierarchy is overloaded: it serves to make the ID
   unique, to drive the lookup process, and possibly other things like
   firewall filtering.  More thought is needed as to what constitutes
   these levels with respect to these various roles.

   The RANGI draft suggests FQDN->ID lookup through DNS, and separately
   an ID->locator lookup that may be DNS or may be something else (a
   hierarchy of DHTs).  It would be more efficient if the FQDN lookup
   produced both the ID and the locators (as does ILNP).  Probably DNS
   alone is sufficient for the ID->locator lookup, since individual DNS
   servers can hold very large numbers of mappings.

   RANGI provides strong sender identification, but at the cost of
   computing crypto.  Many hosts (public web servers) may prefer to
   forgo the crypto at the expense of losing some functionality
   (receiver mobility or dynamic multihoming load balancing).  While
   RANGI doesn't require that the receiver validate the sender, it may
   be good to have a mechanism whereby the receiver can signal to the
   sender that it is not validating, so that the sender can avoid
   locator changes.

   Architecturally, there are many advantages to putting the mapping
   function at the end host (versus at the edge).  This simplifies the
   neighbor aliveness and delayed first packet problems, and avoids
   stateful middleboxes.  Unfortunately, the early-adopter incentive for
Unfortunately, the early-adopter incentive for 649 host upgrade may not be adequate (HIP's lack of uptake being an 650 example). 652 RANGI does not have an explicit solution for the mobility race 653 condition (there is no mention of a home-agent like device). 654 However, host-to-host notification combined with fallback on the 655 ID->locators lookup (assuming adequate dynamic update of the lookup 656 system) may be good enough for the vast majority of mobility 657 situations. 659 RANGI uses proxies to deal with both legacy IPv6 and IPv4 sites. 660 RANGI proxies have no mechanisms to deal with the edge-to-edge 661 aliveness problem. The edge-to-edge proxy approach dirties-up an 662 otherwise clean end-to-end model. 664 RANGI exploits existing IPv6 transition technologies (ISATAP and 665 softwire). These transition technologies are in any event being 666 pursued outside of RRG and do not need to be specified in RANGI 667 drafts per se. RANGI only needs to address how it interoperates with 668 IPv4 and legacy IPv6, which through proxies it appears to do 669 adequately well. 671 3.3. Rebuttal 673 The reason why the ID->Locator lookup is separated from the FQDN->ID 674 lookup is: 1) not all applications are tied to FQDNs, and 2) it seems 675 unnecessary to require all devices to possess a FQDN of their own. 676 Basically RANGI uses DNS to realize the ID->Locator mapping system. 677 If there are too many entries to be maintained by the authoritative 678 servers of a given Administrative Domain (AD), Distributed Hash Table 679 (DHT) technology can be used to make these authoritative servers 680 scale better, e.g., the mappings maintained by a given AD will be 681 distributed among a group of authoritative servers in a DHT fashion. 682 As a result, the robustness feature of DHT is inherited naturally 683 into the ID->Locator mapping system. Meanwhile, there is no trust 684 issue since each AD authority runs its own DHT ring which maintains 685 only the mappings for those identifiers that are administrated by 686 that AD authority. 688 For host mobility, if communicating entities are RANGI nodes, the 689 mobile node will notify the correspondent node of its new locator 690 once its locator changes due to a mobility or re-homing event. 691 Meanwhile, it should also update its locator information in the 692 ID->Locator mapping system in a timely fashion by using the Secure 693 DNS Dynamic Update mechanism defined in [RFC3007]. In case of 694 simultaneous mobility, at least one of the nodes has to resort to the 695 ID->Locator mapping system for resolving the correspondent node's new 696 locator so as to continue their communication. If the correspondent 697 node is a legacy host, Transit Proxies, which play a similar function 698 to the home-agents in Mobile IP, will relay the packets between the 699 communicating parties. 701 RANGI uses proxies (e.g., Site Proxy and Transit Proxy) to deal with 702 both legacy IPv6 and IPv4 sites. Since proxies function as RANGI 703 hosts, they can handle Locator Update Notification messages sent from 704 remote RANGI hosts (or even from remote RANGI proxies) correctly. 705 Hence there is no edge-to-edge aliveness problem. Details will be 706 specified in a later version of RANGI-PROXY. 708 The intention behind RANGI using IPv4-embedded IPv6 addresses as 709 locators is to reduce the total deployment cost of this new Internet 710 architecture and to avoid renumbering the site internal routers when 711 such a site changes ISPs. 713 4. 
Internet Vastly Improved Plumbing (Ivip) 715 4.1. Summary 717 4.1.1. Key Ideas 719 Ivip (pronounced eye-vip, est. 2007-06-15) is a core-edge separation 720 scheme for IPv4 and IPv6. It provides multihoming, portability of 721 address space and inbound traffic engineering for end-user networks 722 of all sizes and types, including those of corporations, SOHO and 723 mobile devices. 725 Ivip meets all the constraints imposed by the need for widespread 726 voluntary adoption [Ivip Constraints]. 728 Ivip's global fast-push mapping distribution network is structured 729 like a cross-linked multicast tree. This pushes all mapping changes 730 to full database query servers (QSDs) within ISPs and end-user 731 networks which have ITRs. Each mapping change is sent to all QSDs 732 within a few seconds. 734 ITRs gain mapping information from these local QSDs within a few tens 735 of milliseconds. QSDs notify ITRs of changed mappings with similarly 736 low latency. ITRs tunnel all traffic packets to the correct ETR 737 without significant delay. 739 Ivip's mapping consists of a single ETR address for each range of 740 mapped address space. Ivip ITRs do not need to test reachability to 741 ETRs because the mapping is changed in real-time to that of the 742 desired ETR. 744 End-user networks control the mapping, typically by contracting a 745 specialized company to monitor the reachability of their ETRs and 746 change the mapping to achieve multihoming and/or Traffic Engineering 747 (TE). So the mechanisms which control ITR tunneling are controlled 748 by the end-user networks in real-time and are completely separate 749 from the core-edge separation scheme itself. 751 ITRs can be implemented in dedicated servers or hardware-based 752 routers. The ITR function can also be integrated into sending hosts. 753 ETRs are relatively simple and only communicate with ITRs rarely - 754 for Path MTU management with longer packets. 756 Ivip-mapped ranges of end-user address space need not be subnets. 757 They can be of any length, in units of IPv4 addresses or IPv6 /64s. 759 Compared to conventional unscalable BGP techniques, and to the use of 760 core-edge separation architectures with non-real-time mapping 761 systems, end-user networks will be able to achieve more flexible and 762 responsive inbound TE. If inbound traffic is split into several 763 streams, each to addresses in different mapped ranges, then real-time 764 mapping changes can be used to steer the streams between multiple 765 ETRs at multiple ISPs. 767 Default ITRs in the DFZ (DITRs, similar to LISP's Proxy Tunnel 768 Routers) tunnel packets sent by hosts in networks which lack ITRs. 769 So multihoming, portability and TE benefits apply to all traffic. 771 ITRs request mappings either directly from a local QSD or via one or 772 more layers of caching query servers (QSCs) which in turn request it 773 from a local QSD. QSCs are optional but generally desirable since 774 they reduce the query load on QSDs. 776 ETRs may be in ISP or end-user networks. IP-in-IP encapsulation is 777 used, so there is no UDP or any other header. PMTUD (Path MTU 778 Discovery) management with minimal complexity and overhead will 779 handle the problems caused by encapsulation, and adapt smoothly to 780 jumbo frame paths becoming available in the DFZ. 
The outer header's
   source address is that of the sending host - which enables existing
   ISP Border Router (BR) filtering of source addresses to be extended
   to encapsulated traffic packets by the simple mechanism of the ETR
   dropping packets whose inner and outer source addresses do not
   match.

4.1.2.  Extensions

4.1.2.1.  TTR Mobility

   The Translating Tunnel Router (TTR) approach to mobility [Ivip
   Mobility] is applicable to all core-edge separation techniques and
   provides scalable IPv4 and IPv6 mobility in which the MN keeps its
   own mapped IP address(es) no matter how or where it is physically
   connected, including behind one or more layers of NAT.

   Path-lengths are typically optimal or close to optimal, and the MN
   communicates normally with all other non-mobile hosts (no stack or
   app changes), and of course other MNs.  Mapping changes are only
   needed when the MN uses a new TTR, which would typically be if the
   MN moved more than 1000km.  Mapping changes are not required when
   the MN changes its physical address(es).

4.1.2.2.  Modified Header Forwarding

   Separate schemes for IPv4 and IPv6 enable tunneling from ITR to ETR
   without encapsulation.  This will remove the encapsulation overhead
   and PMTUD problems.  Both approaches involve modifying all routers
   between the ITR and ETR to accept a modified form of the IP header.
   These schemes require new FIB/RIB functionality in DFZ and some
   other routers but do not alter the BGP functions of DFZ routers.

4.1.3.  Gains

   Amenable to widespread voluntary adoption due to no need for host
   changes, complete support for packets sent from non-upgraded
   networks, and no significant degradation in performance.

   Modular separation of the control of ITR tunneling behavior from the
   ITRs and the core-edge separation scheme itself: end-user networks
   control mapping in any way they like, in real-time.

   A small fee per mapping change deters frivolous changes and helps
   pay for pushing the mapping data to all QSDs.  End-user networks
   that make frequent mapping changes for inbound TE should find these
   fees attractive, considering how such changes improve their ability
   to utilize the bandwidth of multiple ISP links.

   End-user networks will typically pay the cost of Open ITR in the DFZ
   (OITRD) forwarding to their networks.  This provides a business
   model for OITRD deployment and avoids unfair distribution of costs.

   Existing source address filtering arrangements at BRs of ISPs and
   end-user networks are prohibitively expensive to implement directly
   in ETRs, but with the outer header's source address being the same
   as the sending host's address, Ivip ETRs inexpensively enforce BR
   filtering on decapsulated packets.

4.1.4.  Costs

   QSDs receive all mapping changes and store a complete copy of the
   mapping database.  However, a worst-case scenario is 10 billion IPv6
   mappings, each of 32 bytes, which fits on a consumer hard drive
   today and should fit in server DRAM by the time such adoption is
   reached.

   The maximum number of non-mobile networks requiring multihoming etc.
   is likely to be ~10M, so most of the 10B mappings would be for
   mobile devices.  However, TTR mobility does not involve frequent
   mapping changes, since most MNs only rarely move more than 1000km.
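   The role that the full-database query servers (QSDs) mentioned above
   play can be sketched in a few lines: every mapping change is pushed
   to every replica, and an ITR's lookup at its local replica returns
   the single ETR address currently mapped to the micronet covering the
   destination.  The classes, address ranges, and update calls below
   are invented for illustration; this is a toy model of the idea, not
   Ivip's actual protocols or data formats.

      import ipaddress

      class QSD:
          """Toy full-database query server: holds the complete mapping."""
          def __init__(self):
              self.mapping = {}                 # micronet -> ETR address

          def apply_change(self, micronet, etr):
              self.mapping[ipaddress.ip_network(micronet)] = etr

          def lookup(self, dst):
              dst = ipaddress.ip_address(dst)
              hits = [n for n in self.mapping if dst in n]
              if not hits:
                  return None
              return self.mapping[max(hits, key=lambda n: n.prefixlen)]

      # Replicas in two ISPs; the push system sends every change to both.
      replicas = [QSD(), QSD()]

      def push(micronet, etr):
          for qsd in replicas:                  # "fast push" to all QSDs
              qsd.apply_change(micronet, etr)

      push("203.0.113.16/28", "198.51.100.1")   # micronet mapped to ETR at ISP A
      print(replicas[0].lookup("203.0.113.20")) # 198.51.100.1

      # Multihoming restoration: the end-user network (or its agent)
      # simply changes the mapping to the ETR at ISP B; ITRs do no probing.
      push("203.0.113.16/28", "192.0.2.77")
      print(replicas[1].lookup("203.0.113.20")) # 192.0.2.77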
4.1.5.  References

   [I-D.whittle-ivip4-etr-addr-forw] [Ivip PMTUD] [Ivip6] [Ivip
   Constraints] [Ivip Mobility] [I-D.whittle-ivip-drtm]
   [I-D.whittle-ivip-glossary]

4.2.  Critique

   Looked at from the thousand-foot level, Ivip shares its basic design
   approach with LISP and a number of other Map-and-Encap designs based
   on core-edge separation.  However, the details differ substantially.
   Ivip's design makes a bold assumption that, with technology
   advances, one could afford to maintain a real-time, distributed
   global mapping database for all networks and hosts.  Ivip proposes
   that multiple parties collaborate to build a mapping distribution
   system that pushes all mapping information and updates to local,
   full-database query servers located in all ISPs within a few
   seconds.  The system has no single point of failure, and uses
   end-to-end authentication.

   A "real time, globally synchronized mapping database" is a critical
   assumption in Ivip.  Using that as a foundation, Ivip's design
   avoids several challenging design issues that others have studied
   extensively, including:

   1.  special considerations of mobility support that add additional
       complexity to the overall system;

   2.  prompt detection of ETR failures and notification to all
       relevant ITRs, which turns out to be a rather difficult problem;
       and

   3.  development of a partial-mapping lookup sub-system.  Ivip
       assumes the existence of local query servers with a full
       database containing the latest mapping information changes.

   To be considered a viable solution to the Internet routing
   scalability problem, Ivip faces two fundamental questions.  First,
   whether a global-scale system can achieve real-time synchronized
   operation as assumed by Ivip is an entirely open question.  Past
   experience suggests otherwise.

   The second question concerns incremental rollout.  Ivip represents
   an ambitious approach, with real-time mapping and local full-
   database query servers - which many people regard as impossible.
   Developing and implementing Ivip may take a fair amount of
   resources, yet there is an open question regarding how to quantify
   the gains by first movers - both those who will provide the Ivip
   infrastructure and those that will use it.  Significant global
   routing table reduction only happens when a large enough number of
   parties have adopted Ivip.  The same question arises for most other
   proposals as well.

   One belief is that Ivip's more ambitious mapping system makes a good
   design tradeoff for the greater benefits for end-user networks and
   for those that develop the infrastructure.  Another belief is that
   this ambitious design is not viable.

4.3.  Rebuttal

   Since the Summary and Critique were written, Ivip's mapping system
   has been significantly redesigned: DRTM - Distributed Real Time
   Mapping [I-D.whittle-ivip-drtm].

   DRTM makes it easier for ISPs to install their own ITRs.  It also
   facilitates Mapped Address Block (MAB) operating companies - which
   need not be ISPs - leasing Scalable Provider Independent (SPI)
   address space to end-user networks with almost no ISP involvement.
   ISPs need not install ITRs or ETRs.  For an ISP to support its
   customers using SPI space, it need only allow the forwarding of
   outgoing packets whose source addresses are from SPI space.
End-user 919 networks can implement their own ETRs on their existing PA 920 address(es) - and MAB operating companies make all the initial 921 investments. 923 Once SPI adoption becomes widespread, ISPs will be motivated to 924 install their own ITRs to locally tunnel packets sent from customer 925 networks which must be tunneled to SPI-using customers of the same 926 ISP - rather than letting these packets exit the ISP's network and 927 return in tunnels to ETRs in the network. 929 There is no need for full-database query servers in ISPs or for any 930 device which stores the full mapping information for all Mapped 931 Address Blocks (MABs). ISPs that want ITRs will install two or more 932 Map Resolver (MR) servers. These are caching query servers which 933 query multiple typically nearby query servers which are full-database 934 for the subset of MABs they serve. These "nearby" query servers will 935 be at DITR sites, which will be run by, or for, MAB operating 936 companies who lease MAB space to large numbers of end-user networks. 937 These DITR-site servers will usually be close enough to the MRs to 938 generate replies with sufficiently low delay and risk of packet loss 939 for ITRs to buffer initial packets for a few tens of milliseconds 940 while the mapping arrives. 942 DRTM will scale to billions of micronets, tens of thousands of MABs 943 and potentially hundreds of MAB operating companies, without single 944 points of failure or central coordination. 946 The critique implies a threshold of adoption is required before 947 significant routing scaling benefits occur. This is untrue of any 948 Core-Edge Separation proposal, including LISP and Ivip. Both can 949 achieve scalable routing benefits in direct proportion to their level 950 of adoption by providing portability, multihoming and inbound TE to 951 large numbers of end-user networks. 953 Core-Edge Elimination (CEE) architectures require all Internet 954 communications to change to IPv6 with a new Locator/Identifier 955 Separation naming model. This would impose burdens of extra 956 management effort, packets and session establishment delays on all 957 hosts - which is a particularly unacceptable burden on battery- 958 operated mobile hosts which rely on wireless links. 960 Core-Edge Separation architectures retain the current, efficient, 961 naming model, require no changes to hosts and support both IPv4 and 962 IPv6. Ivip is the most promising architecture for future development 963 because its scalable, distributed, real-time mapping system best 964 supports TTR Mobility, enables ITRs to be simpler and gives real-time 965 control of ITR tunneling to the end-user network or to organizations 966 they appoint to control the mapping of their micronets. 968 5. Hierarchical IPv4 Framework (hIPv4) 970 5.1. Summary 972 5.1.1. Key Idea 974 The Hierarchical IPv4 Framework (hIPv4) adds scalability to the 975 routing architecture by introducing additional hierarchy in the IPv4 976 address space. The IPv4 addressing scheme is divided into two parts, 977 the Area Locator (ALOC) address space which is globally unique and 978 the Endpoint Locator (ELOC) address space which is only regionally 979 unique. The ALOC and ELOC prefixes are added as a shim header 980 between the IP header and transport protocol header, the shim header 981 is identified with a new protocol number in the IP header. Instead 982 of creating a tunneling (i.e. 
overlay) solution, a new routing element
   is needed in the service provider's routing domain (called an ALOC
   realm) - a Locator Swap Router.  The current IPv4 forwarding plane
   remains intact, and no new routing protocols, mapping systems, or
   caching solutions are required.  The control plane of the ALOC realm
   routers needs some modification in order for ICMP to be compatible
   with the hIPv4 framework.  When an area (one or several ASes) of an
   ISP has transformed into an ALOC realm, only ALOC prefixes are
   exchanged with other ALOC realms.  Directly attached ELOC prefixes
   are only inserted into the RIB of the local ALOC realm; ELOC
   prefixes are not distributed to the DFZ.  Multihoming can be
   achieved in two ways: either the enterprise requests an ALOC prefix
   from the RIR (this is not recommended) or the enterprise receives
   ALOC prefixes from its upstream ISPs.  ELOC prefixes are PI
   addresses and remain intact when an upstream ISP is changed; only
   the ALOC prefix is replaced.  When the RIB of the DFZ is compressed
   (containing only ALOC prefixes), ingress routers will no longer know
   the availability of the destination prefix; thus, the endpoints must
   take more responsibility for their sessions.  This can be achieved
   by using multipath-enabled transport protocols, such as SCTP
   [RFC4960] and Multipath TCP (MPTCP) [I-D.ford-mptcp-architecture],
   at the endpoints.  The multipath transport protocols also provide a
   session identifier (i.e., a verification tag or token); thus, the
   location and identifier split is carried out - site mobility,
   endpoint mobility, and mobile site mobility are achieved.  DNS needs
   to be upgraded: in order to resolve the location of an endpoint, the
   endpoint must have one ELOC value (current A-record) and at least
   one ALOC value in DNS (in multihoming solutions there will be
   several ALOC values for an endpoint).
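   A toy model of the locator swap described above: in the DFZ the
   packet is forwarded on the globally unique ALOC carried in the IP
   destination field, while the regionally unique ELOC rides in the
   shim header; at the destination ALOC realm, the Locator Swap Router
   moves the ELOC into the IP destination so that ordinary forwarding
   delivers the packet.  The field names and addresses below are
   invented for illustration and do not reflect the actual hIPv4 shim-
   header layout.

      def build_hipv4_packet(src_eloc, src_aloc, dst_eloc, dst_aloc, data):
          """Sender: destination ALOC in the IP header, locators in a shim."""
          return {
              "ip": {"src": src_aloc, "dst": dst_aloc},   # routed on ALOC in DFZ
              "shim": {"src_eloc": src_eloc, "src_aloc": src_aloc,
                       "dst_eloc": dst_eloc, "dst_aloc": dst_aloc},
              "payload": data,
          }

      def locator_swap(packet):
          """Locator Swap Router at the destination ALOC realm: no tunnel,
          no mapping lookup -- just swap the ELOC into the IP destination."""
          packet["ip"]["dst"] = packet["shim"]["dst_eloc"]
          return packet

      pkt = build_hipv4_packet("10.1.0.5", "192.0.2.1",
                               "10.9.0.7", "198.51.100.1", b"data")
      print(pkt["ip"]["dst"])                  # 198.51.100.1 (ALOC) in the DFZ
      print(locator_swap(pkt)["ip"]["dst"])    # 10.9.0.7 (ELOC) inside the realm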
5.1.2.  Gains

   1.  Improved routing scalability: Adding additional hierarchy to the
       address space enables more hierarchy in the routing
       architecture.  Early adopters of an ALOC realm will no longer
       carry the current RIB of the DFZ - only ELOC prefixes of their
       directly attached networks and ALOC prefixes from other service
       providers that have migrated are installed in the ALOC realm's
       RIB.

   2.  Scalable support for traffic engineering: Multipath-enabled
       transport protocols are recommended to achieve dynamic load-
       balancing of a session.  Support for Valiant Load-balancing
       [Valiant] schemes has been added to the framework; more research
       work is required around VLB switching.

   3.  Scalable support for multihoming: Only attachment points of a
       multihomed site are advertised (using the ALOC prefix) in the
       DFZ.  DNS will inform the requester of how many attachment
       points the destination endpoint has.  It is the initiating
       endpoint's choice/responsibility to choose which attachment
       point is used for the session; endpoints using multipath-enabled
       transport protocols can make use of several attachment points
       for a session.

   4.  Simplified Renumbering: When changing providers, the local ELOC
       prefixes remain intact; only the ALOC prefix is changed at the
       endpoints.  The ALOC prefix is not used for routing or
       forwarding decisions in the local network.

   5.  Decoupling Location and Identifier: The verification tag (SCTP)
       and token (MPTCP) can be considered to have the characteristics
       of a session identifier, and thus a session layer is created
       between the transport and application layers in the TCP/IP
       model.

   6.  Routing quality: The hIPv4 framework introduces no tunneling or
       caching mechanisms; only a swap of the content of the IPv4
       header and the locator header at the destination ALOC realm is
       required, so current routing and forwarding algorithms are
       preserved as such.  Valiant Load-balancing might be used as a
       new forwarding mechanism.

   7.  Routing Security: Similar to today's DFZ, except that ELOC
       prefixes cannot be hijacked (by injecting a longest-match
       prefix) outside an ALOC realm.

   8.  Deployability: The hIPv4 framework is an evolution of the
       current IPv4 framework and is backwards compatible with it.
       Sessions in a local network and inside an ALOC realm might in
       the future still use the current IPv4 framework.

5.1.3.  Costs And Issues

   1.  Upgrade of the stack at an endpoint that is establishing
       sessions outside the local ALOC realm.

   2.  In a multihoming solution, the border routers should be able to
       apply policy-based routing on the ALOC value in the locator
       header.

   3.  New IP allocation policies must be set by the RIRs.

   4.  Short timeframe before the expected depletion of the IPv4
       address space occurs.

   5.  Will enterprises give up the globally unique IPv4 address block
       allocations they have gained?

   6.  Coordination with MPTCP is highly desirable.

5.1.4.  References

   [I-D.frejborg-hipv4]

5.2.  Critique

   hIPv4 is an innovative approach to expanding the IPv4 addressing
   system in order to resolve the scalable routing problem.  This
   critique does not attempt a full assessment of hIPv4's architecture
   and mechanisms.  The only question addressed here is whether hIPv4
   should be chosen for IETF development in preference to, or together
   with, the only two proposals that appear to be practical solutions
   for IPv4: Ivip and LISP.

   Ivip and LISP appear to have a major advantage over hIPv4 in terms
   of support for packets sent from non-upgraded hosts/networks.
   Ivip's DITRs (Default ITRs in the DFZ) and LISP's PTRs (Proxy Tunnel
   Routers) both accept packets sent by any non-upgraded host/network
   and tunnel them to the correct ETR - so providing full benefits of
   portability, multihoming, and inbound TE for these packets as well
   as those sent by hosts in networks with ITRs.  hIPv4 appears to have
   no such mechanism - so these benefits are only available for
   communications between two upgraded hosts in upgraded networks.

   This means that significant benefits for adopters - the ability to
   rely on the new system to provide the portability, multihoming, and
   inbound TE benefits for all, or almost all, their communications -
   will only arise after all, or almost all, networks upgrade their
   networks, hosts, and addressing arrangements.  hIPv4's relationship
   between adoption levels and benefits to any adopter is therefore far
   less favorable to widespread adoption than those of Core-Edge
   Separation (CES) architectures such as Ivip and LISP.
1114 This results in hIPv4 also being at a disadvantage regarding the 1115 achievement of significant routing scaling benefits - which likewise 1116 will only result once adoption is close to ubiquitous. Ivip and LISP 1117 can provide routing scaling benefits in direct proportion to their 1118 level of adoption, since all adopters gain full benefits for all 1119 their communications, in a highly scalable manner. 1121 hIPv4 requires stack upgrades, which are not required by any CES 1122 architecture. Furthermore, a large number of existing IPv4 1123 application protocols convey IP addresses between hosts in a manner 1124 which will not work with hIPv4: "There are several applications that 1125 are inserting IPv4 address information in the payload of a packet. 1126 Some applications use the IPv4 address information to create new 1127 sessions or for identification purposes. This section is trying to 1128 list the applications that need to be enhanced; however, this is by 1129 no means a comprehensive list." [I-D.frejborg-hipv4] 1131 If even a few widely used applications needed to be rewritten to 1132 operate successfully with hIPv4, this would be such a 1133 disincentive to adoption as to rule out hIPv4 ever being adopted widely 1134 enough to solve the routing scaling problem, especially since CES 1135 architectures fully support all existing protocols, without the need 1136 for altering host stacks. 1138 It appears that hIPv4 involves major practical difficulties which 1139 mean that in its current form it is not suitable for IETF 1140 development. 1142 5.3. Rebuttal 1144 No rebuttal was submitted for this proposal. 1146 6. Name overlay (NOL) service for scalable Internet routing 1148 6.1. Summary 1149 6.1.1. Key Idea 1151 The basic idea is to add a name overlay (NOL) onto the existing 1152 TCP/IP stack. 1154 Its functions include: 1156 1. Managing host name configuration, registration and 1157 authentication; 1159 2. Initiating and managing transport connection channels (i.e., 1160 TCP/IP connections) by name; 1162 3. Keeping application data transport continuity for mobility. 1164 At the edge network, we introduce a new type of gateway, a Name 1165 Transfer Relay (NTR), which blocks the PI addresses of edge networks 1166 from leaking into upstream transit networks. NTRs perform address and/or port 1167 translation between blocked PI addresses and globally routable 1168 addresses, much like today's widely used NAT/NAPT devices. 1169 Both legacy and NOL applications behind a NTR can access the outside 1170 as usual. To access the hosts behind a NTR from outside, we need to 1171 use NOL to traverse the NTR by name and initiate connections to the 1172 hosts behind it. 1174 Unlike proposed host-based ID/Locator split solutions, such 1175 as HIP, Shim6, and name-oriented stacks, NOL doesn't need to change 1176 the existing TCP/IP stack, sockets and their packet formats. NOL can 1177 co-exist with the legacy infrastructure and the core-edge separation 1178 solutions (e.g., APT, LISP, Six/one, Ivip, etc.). 1180 6.1.2. Gains 1182 1. Reduce routing table size: Prevent edge network PI addresses from 1183 leaking into transit networks by deploying gateway NTRs. 1185 2. Traffic Engineering: For legacy and NOL application sessions, 1186 the incoming traffic can be directed to a specific NTR by DNS. 1187 In addition, for NOL applications, initial sessions can be 1188 redirected from one NTR to other appropriate NTRs. These 1189 mechanisms provide some support for traffic engineering. 1191 3.
Multihoming: When a PI-addressed network connects to the 1192 Internet by multihoming with several providers, it can deploy 1193 NTRs to block the PI addresses from leaking into provider 1194 networks. 1196 4. Transparency: NTRs can be allocated PA addresses from the 1197 upstream providers and store them in NTRs' address pool. By DNS 1198 query or NOL session, any session that wants to access the hosts 1199 behind the NTR can be delegated to a specific PA address in the 1200 NTR address pool. 1202 5. Mobility: The NOL layer manages the traditional TCP/IP transport 1203 connections, and provides application data transport continuity 1204 by checkpointing the transport connection at sequence number 1205 boundaries. 1207 6. No need to change TCP/IP stack, sockets and DNS system. 1209 7. No need for extra mapping system. 1211 8. NTRs can be deployed unilaterally, just like NATs. 1213 9. NOL applications can communicate with legacy applications. 1215 10. NOL can be compatible with existing solutions, such as APT, 1216 LISP, Ivip, etc. 1218 11. End user controlled multipath indirect routing based on 1219 distributed NTRs. This will give benefits to performance- 1220 aware applications, such as MSN, video streaming, etc. 1222 6.1.3. Costs 1224 1. Legacy applications have trouble initiating access to the 1225 servers behind a NTR. Such trouble can be resolved by deploying 1226 a NOL proxy for legacy hosts, or delegating globally routable PA 1227 addresses in the NTR address pool for these servers, or deploying 1228 a proxy server outside the NTR. 1230 2. NOL may increase the number of entries in DNS, but it is not 1231 drastic, because it only increases the number of DNS records at 1232 domain granularity, not the number of hosts. The name used in 1233 NOL, for example, is similar to an email address 1234 hostname@domain.net. The needed DNS entries and queries are just 1235 for "domain.net", and the NTR knows the "hostnames". Not only 1236 will the number of DNS records be increased, but the update dynamics of 1237 DNS might increase as well. However, the scalability and 1238 performance of DNS is guaranteed by its naming hierarchy and 1239 caching mechanisms. 1241 3. Address translating/rewriting costs on NTRs. 1243 6.1.4. References 1245 No references were submitted. 1247 6.2. Critique 1249 1. Applications on hosts need to be rebuilt based on a name overlay 1250 library to be NOL-enabled. Legacy software that is not 1251 maintained will not be able to benefit from NOL in the core-edge 1252 elimination situation. In the core-edge separation scheme, a new 1253 gateway NTR is deployed to prevent edge-specific PI prefixes from 1254 leaking into the transit core. NOL doesn't impede the legacy 1255 endpoints behind the NTR from accessing the outside Internet, but 1256 the legacy endpoints cannot access, or will have difficulty accessing, 1257 the endpoints behind a NTR without the help of NOL. 1259 2. In the case of core-edge elimination, the end site will be 1260 assigned multiple PA address spaces, which leads to renumbering 1261 troubles when switching to other upstream providers. Upgrading 1262 endpoints to support NOL doesn't give any benefits to edge 1263 networks. Endpoints have little incentive to use NOL in a core- 1264 edge elimination scenario, and the same is true with other host- 1265 based ID/locator split proposals. Edge networks prefer PI 1266 address space to PA address space whether they are IPv4 or IPv6 1267 networks. 1269 3.
In the core-edge separation scenario, the additional gateway NTR 1270 is deployed to prevent the specific prefixes of the edge networks from leaking into the transit core, just 1271 like a NAT or the ITR/ETR of LISP. A NTR gateway can be seen as 1272 an extension of NAT (Network Address Translation). Although NATs 1273 are deployed widely, upgrading them to support the NOL extension or 1274 deploying additional new gateway NTRs at the edge networks is done on 1275 a voluntary basis and has few economic incentives. 1277 4. The stateful or stateless translation for each packet traversing 1278 a NTR will consume CPU and memory on NTRs, and 1279 increase forwarding delay. Thus, it is not appropriate to deploy 1280 NTRs at the high-level transit networks where aggregated traffic 1281 may cause congestion at the NTRs. 1283 5. In the core-edge separation scenario, the requirement for 1284 multihoming and inter-domain traffic engineering will make end 1285 sites accessible via multiple different NTRs. For reliability, 1286 all of the associations between multiple NTRs and the end site 1287 name will be kept in DNS, which may increase the load of DNS. 1289 6. To support mobility, it is necessary for DNS to update the 1290 corresponding name-NTR mapping records when an end system moves 1291 from behind one NTR to another NTR. The NOL-enabled end system relies 1292 on the NOL layer to preserve the continuity of the transport 1293 layer, since the underlying TCP/UDP transport session would be 1294 broken when the IP address changed. 1296 6.3. Rebuttal 1298 NOL resembles neither CEE nor CES as a solution. By supporting 1299 application-level sessions through the name overlay layer, NOL can 1300 support some solutions in the CEE style. However, NOL is in general 1301 closer to CES solutions, i.e., preventing PI prefixes of edge 1302 networks from entering into the upstream transit networks. This is 1303 done by the NTR, like the ITR/ETRs in CES solutions, but NOL has no 1304 need to define a clear boundary between core and edge networks. 1305 NOL is designed to try to provide end users or networks with a service 1306 that facilitates the adoption of multihoming, multipath routing and 1307 traffic engineering by the indirect routing through NTRs, and, in the 1308 meantime, neither accelerates nor decreases the growth of the global 1309 routing table size. 1311 Some problems are described in the NOL critique. In the original NOL 1312 proposal document, the DNS query for a host that is behind a NTR will 1313 induce the return of the actual IP addresses of the host and the 1314 address of the NTR. This arrangement might cause some difficulties 1315 for legacy applications due to the non-standard response from DNS. 1316 To resolve this problem, we instead have the NOL service use a new 1317 namespace, and have DNS not return NTR IP addresses for the legacy 1318 hosts. The names used for NOL are formatted like email addresses, 1319 such as "des@domain.net". The mapping between "domain.net" and the IP 1320 address of the corresponding NTR will be registered in DNS. The NOL 1321 layer will understand the meaning of the name "des@domain.net", and 1322 it will send a query to DNS only for "domain.net". DNS will then 1323 return IP addresses of the corresponding NTRs. Legacy applications 1324 will still use the traditional FQDN name, and DNS will return the 1325 actual IP address of the host. However, if the host is behind a NTR, 1326 the legacy applications may be unable to access the host.
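   The name handling described in the preceding paragraph can be
   illustrated with a short Python sketch.  The resolver table and the
   helper name below are hypothetical placeholders; a real deployment
   would use ordinary DNS queries.

      # Hypothetical DNS contents: the NOL domain maps to its NTR
      # locators, while a legacy FQDN maps to the host's own address.
      DNS = {
          "domain.net":      ["192.0.2.10", "192.0.2.11"],  # NTR addresses
          "host.legacy.net": ["203.0.113.5"],               # legacy A record
      }

      def resolve(name):
          """Return the addresses a NOL-aware stack would contact."""
          if "@" in name:                      # NOL name, e.g. des@domain.net
              hostname, domain = name.split("@", 1)
              # Query DNS only for the domain part; the NTR knows "hostname".
              return {"via_ntr": DNS[domain], "hostname": hostname}
          return {"direct": DNS[name]}         # legacy FQDN resolution

      print(resolve("des@domain.net"))
      print(resolve("host.legacy.net"))

   The sketch simply mirrors the rebuttal's point that NOL-aware
   stacks resolve only the domain part of the new namespace, while
   legacy resolution is left untouched.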
1328 The stateless address translation or stateful address and port 1329 translation may cause a scaling problem with the number of table 1330 entries a NTR must maintain, and legacy applications cannot initiate 1331 sessions with hosts inside the NOL-adopting End User Network (EUN). 1332 However, these problems may not be a big barrier for the deployment 1333 of NOL or other similar approaches. Many NAT-like boxes, proxies, and 1334 firewall devices are widely used at the ingress/egress points of 1335 enterprise networks, campus networks, or other stub EUNs. The hosts 1336 running as servers can be deployed outside NTRs or be assigned PA 1337 addresses in a NTR-adopting EUN. 1339 7. Compact routing in locator identifier mapping system (CRM) 1341 7.1. Summary 1343 7.1.1. Key Idea 1345 This proposal is to build a highly scalable locator-identifier mapping 1346 system using compact routing principles. This provides the means for 1347 dynamic topology adaptation to facilitate efficient aggregation [CRM]. 1348 Map servers are assigned as cluster heads or landmarks based on their 1349 capability to aggregate EID announcements. 1351 7.1.2. Gains 1353 Minimizes the routing table sizes at the system level (i.e., map 1354 servers). Provides clear upper bounds for routing stretch that 1355 define the packet delivery delay of the map request/first packet. 1357 Organizes the mapping system based on the EID numbering space and 1358 minimizes the administrative overhead of managing the EID space. No 1359 need for administratively planned hierarchical address allocation, as 1360 the system will converge on a set of EID allocations. 1362 Availability and robustness of the overall routing system (including 1363 xTRs and map servers) is improved because of the potential to use 1364 multiple map servers and direct routes without the involvement of map 1365 servers. 1367 7.1.3. Costs 1369 The scalability gains will materialize only in large deployments. If 1370 the stretch is bounded to that of compact routing (worst-case 1371 stretch less than or equal to 3, on average 1+epsilon), then xTRs need to 1372 have memory/cache for the mappings of their cluster. 1374 7.1.4. References 1376 [CRM] 1378 7.2. Critique 1380 The CRM proposal is not a complete proposal, and therefore cannot be 1381 considered for further development by the IETF as a scalable routing 1382 solution. 1384 While Compact Routing principles may be able to improve a mapping 1385 overlay structure such as LISP-ALT, there are several objections to 1386 this approach. 1388 Firstly, a CRM-modified ALT structure would still be a global query 1389 server system. No matter how ALT's path lengths and delays are 1390 optimized, there is a problem with a querier - which could be 1391 anywhere in the world - relying on mapping information from one or 1392 ideally two or more authoritative query servers, which could also be 1393 anywhere in the world. The delays and risks of packet loss that are 1394 inherent in such a system constitute a fundamental problem. This is 1395 especially true when multiple, potentially long, traffic streams are 1396 received by ITRs and forwarded over the CRM networks for delivery to 1397 the destination network. ITRs must use the CRM infrastructure while 1398 they are awaiting a map reply. The traffic forwarded on the CRM 1399 infrastructure functions as map requests and can present a 1400 scalability and performance issue to the infrastructure.
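   The data-driven behavior described in this first objection can be
   sketched as follows.  The cache, overlay, and tunnel operations are
   illustrative placeholders and are not part of the CRM or LISP-ALT
   specifications.

      MAP_CACHE = {}   # EID prefix -> ETR locator, filled by map replies

      def handle_packet(dst_eid, packet):
          """Hedged sketch of an ITR in front of an ALT/CRM-style overlay."""
          etr = MAP_CACHE.get(dst_eid)
          if etr is not None:
              return ("tunnel_to_etr", etr, packet)        # normal forwarding
          # No mapping yet: the packet itself travels over the overlay,
          # acting as an implicit map request until a reply arrives.
          return ("forward_over_overlay", dst_eid, packet)

      print(handle_packet("eid-prefix-A", b"first packet"))
      MAP_CACHE["eid-prefix-A"] = "192.0.2.33"             # map reply received
      print(handle_packet("eid-prefix-A", b"later packet"))

   The concern raised above is that, until the cache is populated, all
   such traffic is carried by the overlay itself.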
1402 Secondly, the alterations contemplated in this proposal involve the 1403 roles of particular nodes in the network being dynamically assigned 1404 as part of its self-organizing nature. 1406 The discussion of Clustering in the middle of page 4 also indicates 1407 that particular nodes are responsible for registering EIDs from 1408 typically far-distant ETRs, all of which are handling closely related 1409 EIDs which this node can aggregate. Since MSes are apparently nodes 1410 within the compact routing system, and the process of an MS deciding 1411 whether to accept EID registrations is determined as part of the 1412 self-organizing properties of the system, there are concerns about 1413 how EID registration can be performed securely, when no particular 1414 physical node is responsible for it. 1416 Thirdly, there are concerns about individually owned nodes performing 1417 work for other organizations. Such problems of trust and of 1418 responsibilities and costs being placed on those who do not directly 1419 benefit already exist in the interdomain routing system, and are a 1420 challenge for any scalable routing solution. 1422 There are simpler solutions to the mapping problem than having an 1423 elaborate network of routers. If a global-scale query system is 1424 still preferred, then it would be better to have ITRs use local MRs, 1425 each of which is dynamically configured to know the IP addresses of the 1426 million or so authoritative Map Server (MS) query servers - or two 1427 million or so assuming they exist in pairs for redundancy. 1429 It appears that the inherently greater delays and risks of packet 1430 loss of any global query server system make such systems unsuitable mapping 1431 solutions for Core-Edge Elimination or Core-Edge Separation 1432 architectures. The solution to these problems appears to involve a 1433 greater number of widely distributed authoritative query servers, one 1434 or more of which will therefore be close enough to each querier that 1435 delays and risk of packet loss are reduced to acceptable levels. 1437 Such a structure would be suitable for map requests, but perhaps not 1438 for handling traffic packets to be delivered to the destination 1439 networks. 1441 7.3. Rebuttal 1443 CRM is most easily understood as an alteration to the routing 1444 structure of the LISP-ALT mapping overlay system, by altering or 1445 adding to the network's BGP control plane. 1447 CRM's aims include the delivery of initial traffic packets to their 1448 destination networks where they also function as map requests. These 1449 packet streams may be long and numerous in the fractions of a second 1450 to perhaps several seconds that may elapse before the ITR receives 1451 the map reply. 1453 Compact Routing principles are used to optimize the path length taken 1454 by these query or traffic packets through a significantly modified 1455 version of the ALT (or similar) network while also generally reducing 1456 typical or maximum paths taken by the query packets. 1458 An overlay network is a diversion from the shortest path. However, 1459 CRM limits this diversion and provides an upper bound. Landmark 1460 routers/servers could deliver more than just the first traffic 1461 packet, subject to their CPU capabilities and their network 1462 connectivity bandwidths. 1464 The trust between the landmarks (mapping servers) can be built based 1465 on the current BGP relationships.
Registration to the landmark nodes 1466 needs to be authenticated mutually between the MS and the system that 1467 is registering. This part is not documented in the proposal text. 1469 8. Layered mapping system (LMS) 1471 8.1. Summary 1473 8.1.1. Key Ideas 1475 The layered mapping system proposal builds a hierarchical mapping 1476 system to support scalability, analyzes the design constraints and 1477 presents an explicit system structure; designs a two-cache mechanism 1478 on ingress tunneling router (ITR) to gain low request delay and 1479 facilitates data validation. Tunneling and mapping are done at the 1480 core and no change is needed on edge networks. The mapping system is 1481 run by interest groups independent of any ISP, which conforms to an 1482 economical model and can be voluntarily adopted by various networks. 1483 Mapping systems can also be constructed stepwise, especially in the 1484 IPv6 scenario. 1486 8.1.2. Gains 1488 1. Scalability 1490 1. Distributed storage of mapping data avoids central storage of 1491 massive amounts of data and restricts updates within local 1492 areas. 1494 2. The cache mechanism in an ITR reduces request loads on 1495 mapping system reasonably. 1497 2. Deployability 1499 1. No change on edge systems, only tunneling in core routers, 1500 and new devices in core networks. 1502 2. The mapping system can be constructed stepwise: a mapping 1503 node needn't be constructed if none of its responsible ELOCs 1504 is allocated. This makes sense especially for IPv6. 1506 3. Conforms to a viable economic model: the mapping system 1507 operators can profit from their services; core routers and 1508 edge networks are willing to join the circle either to avoid 1509 router upgrades or realize traffic engineering. Benefits 1510 from joining are independent of the scheme's implementation 1511 scale. 1513 3. Low request delay: The low number of layers in the mapping 1514 structure and the two-stage cache help achieve low request delay. 1516 4. Data consistency: The two-stage cache enables an ITR to update 1517 data in the map cache conveniently. 1519 5. Traffic engineering support: Edge networks inform the mapping 1520 system of their prioritized mappings with all upstream routers, 1521 thus giving the edge networks control over their ingress flows. 1523 8.1.3. Costs 1525 1. Deployment of LMS needs to be further discussed. 1527 2. The structure of mapping system needs to be refined according to 1528 practical circumstances. 1530 8.1.4. References 1532 [LMS Summary] [LMS] 1534 8.2. Critique 1536 LMS is a mapping mechanism based on core-edge separation. In fact, 1537 any proposal that needs a global mapping system with keys with 1538 similar properties to that of an "edge address" in a core-edge 1539 separation scenario can use such a mechanism. This means that those 1540 keys are globally unique (by authorization or just statistically), at 1541 the disposal of edge users, and may have several satisfied mappings 1542 (with possibly different weights). A proposal to address routing 1543 scalability that needs mapping but doesn't specify the mapping 1544 mechanism can use LMS to strengthen its infrastructure. 1546 The key idea of LMS is similar to that of LISP+ALT: that the mapping 1547 system should be hierarchically organized to gain scalability for 1548 storage and updates, and to achieve quick indexing for lookups. 1549 However, LMS advocates an ISP-independent mapping system and ETRs are 1550 not the authorities of mapping data. 
ETRs or edge-sites report their 1551 mapping data to related mapping servers. 1553 LMS assumes that mapping servers can be incrementally deployed in 1554 that a server may not be constructed if none of its administered edge 1555 addresses are allocated, and that mapping servers can charge for 1556 their services, which provides the economic incentive for their 1557 existence. How this brand-new system can be constructed is still not 1558 clear. Explicit layering is only an ideal state, and the proposal 1559 analyzes the layering limits and feasibility, rather than providing a 1560 practical way for deployment. 1562 The drawbacks of LMS's feasibility analysis also include that it 1) 1563 is based on current PC power and may not represent future 1564 circumstances (especially for IPv6), and 2) does not consider the 1565 variability of address utilization. Some IP address spaces may be 1566 effectively allocated and used while some may not, causing some 1567 mapping servers to be overloaded while others are poorly utilized. More 1568 thought is needed as to the flexibility of the layer design. 1570 LMS is not well suited to mobility. It does not solve the problem 1571 when hosts move faster than the mapping updates and propagation 1572 between the relevant mapping servers. On the other hand, mobile hosts 1573 moving across ASes and changing their attachment points (core 1574 addresses) is less frequent than hosts moving within an AS. 1576 Separation needs two planes: core-edge separation, which is to gain 1577 routing table scalability, and identity-location separation, which is 1578 to achieve mobility. GLI-Split clarifies this well, and in 1579 that case LMS can be used to provide identity-to-core address 1580 mapping. Of course, other schemes may be competent, and LMS can be 1581 incorporated with them if the scheme has global keys and needs to map 1582 them to other namespaces. 1584 8.3. Rebuttal 1586 No rebuttal was submitted for this proposal. 1588 9. 2-phased mapping 1590 9.1. Summary 1592 9.1.1. Considerations 1594 1. A mapping from prefixes to ETRs is an M:M mapping. Any change of 1595 a (prefix, ETR) pair should be updated in a timely manner, which 1596 can be a heavy burden to any mapping system if the relation 1597 changes frequently. 1599 2. A prefix<->ETR mapping system cannot be deployed efficiently if 1600 it is overwhelmed by the worldwide dynamics. Therefore the 1601 mapping itself is not scalable with this direct mapping scheme. 1603 9.1.2. Basics of a 2-phased mapping 1605 1. Introduce an AS number in the middle of the mapping: the phase I 1606 mapping is prefix<->AS#, and the phase II mapping is AS#<->ETRs. This 1607 creates an M:1:M mapping model. 1609 2. It is fair to assume that all ASes know their local prefixes (in 1610 the IGP) better than others and that it is most likely that local 1611 prefixes can be aggregated when they can be mapped to the AS 1612 number, which will reduce the number of mapping entries. ASes 1613 also clearly know their ETRs on the border between core and edge. 1614 So all mapping information can be collected locally. 1616 3. A registry system will take care of the phase I mapping 1617 information. Each AS should have a registration agent to notify 1618 the registry of the local range of IP address space. This system 1619 can be organized as a hierarchical infrastructure like DNS, or 1620 alternatively as a centralized registry like "whois" in each RIR. 1621 Phase II mapping information can be distributed between xTRs as a 1622 BGP extension. 1624 4.
The basic forwarding procedure is that the ITR first gets the 1625 destination AS number from the phase I mapper (or from cache) 1626 when the packet is entering the "core". Then it will extract the 1627 closest ETR for the destination AS number. This is local, since 1628 phase II mapping information has been "pushed" to it through BGP 1629 updates. Finally, the ITR tunnels the packet to the 1630 corresponding ETR. 1632 9.1.3. Gains 1634 1. Any prefix reconfiguration (aggregation/deaggregation) within an 1635 AS will not be reflected in the mapping system. 1637 2. Local prefixes can be aggregated with a high degree of 1638 efficiency. 1640 3. Both phase I and phase II mappings can be stable. 1642 4. A stable mapping system will reduce the update overhead 1643 introduced by topology changes and/or routing policy dynamics. 1645 9.1.4. Summary 1647 1. The 2-phased mapping scheme introduces an AS number between the 1648 mapping prefixes and ETRs. 1650 2. The decoupling of direct mapping makes highly dynamic updates 1651 stable; therefore, it can be more scalable than any direct mapping 1652 design. 1654 3. The 2-phased mapping scheme is adaptable to any core/edge-split- 1655 based proposal. 1657 9.1.5. References 1659 No references were submitted. 1661 9.2. Critique 1663 This is a simple idea on how to scale mapping. However, this design 1664 is too incomplete to be considered a serious input to RRG. Take the 1665 following two issues as examples: 1667 First, in this 2-phase scheme, an AS is essentially the unit of 1668 destinations (i.e., sending ITRs find out the destination AS D, then send 1669 data to one of D's ETRs). This does not offer much choice for 1670 traffic engineering. 1672 Second, there is no consideration whatsoever of failure detection and 1673 handling. 1675 9.3. Rebuttal 1677 No rebuttal was submitted for this proposal. 1679 10. Global Locator, Local Locator, and Identifier Split (GLI-Split) 1681 10.1. Summary 1683 10.1.1. Key Idea 1685 GLI-Split implements a separation between global routing (in the 1686 global Internet outside edge networks) and local routing (inside edge 1687 networks) using global and local locators (GLs, LLs). In addition, a 1688 separate static identifier (ID) is used to identify communication 1689 endpoints (e.g. nodes or services) independently of any routing 1690 information. Locators and IDs are encoded in IPv6 addresses to 1691 enable backwards-compatibility with the IPv6 Internet. The higher- 1692 order bits store either a GL or an LL, while the lower-order bits 1693 contain the ID. A local mapping system maps IDs to LLs and a global 1694 mapping system maps IDs to GLs. The full GLI-mode requires nodes 1695 with upgraded networking stacks and special GLI-gateways. The GLI- 1696 gateways perform stateless locator rewriting in IPv6 addresses with 1697 the help of the local and global mapping system. Non-upgraded IPv6 1698 nodes can also be accommodated in GLI-domains since an enhanced DHCP 1699 service and GLI-gateways compensate for their missing GLI-functionality. 1700 This is an important feature for incremental deployability. 1702 10.1.2.
Gains 1704 The benefits of GLI-Split are 1706 o Hierarchical aggregation of routing information in the global 1707 Internet through separation of edge and core routing 1709 o Provider changes not visible to nodes inside GLI-domains 1710 (renumbering not needed) 1712 o Rearrangement of subnetworks within edge networks not visible to 1713 the outside world (better support of large edge networks) 1715 o Transport connections survive both types of changes 1717 o Multihoming 1718 o Improved traffic engineering for incoming and outgoing traffic 1720 o Multipath routing and load balancing for hosts 1722 o Improved resilience 1724 o Improved mobility support without home agents and triangle routing 1726 o Interworking with the classic Internet 1728 * without triangle routing over proxy routers 1730 * without stateful NAT 1732 These benefits are available for upgraded GLI-nodes, but non-upgraded 1733 nodes in GLI-domains partially benefit from these advanced features, 1734 too. This offers multiple incentives for early adopters, who 1735 have the option to migrate their nodes gradually from non-GLI stacks 1736 to GLI-stacks. 1738 10.1.3. Costs 1740 o Local and global mapping system 1742 o Modified DHCP or similar mechanism 1744 o GLI-gateways with stateless locator rewriting in IPv6 addresses 1746 o Upgraded stacks (only for full GLI-mode) 1748 10.1.4. References 1750 [GLI] [Valiant] 1752 10.2. Critique 1754 GLI-Split makes a clear distinction between two separation planes: 1755 the separation between identifier and locator, which is to meet end 1756 users' needs, including mobility; and the separation between local and 1757 global locator, to make the global routing table scalable. The 1758 distinction is needed since ISPs and hosts have different 1759 requirements; it also makes the changes inside and outside GLI-domains 1760 invisible to each other. 1762 A main drawback of GLI-Split is that it puts a burden on hosts. 1763 Before routing a packet received from upper layers, network stacks in 1764 hosts first need to resolve the DNS name to an IP address; if the IP 1765 address is GLI-formed, it may look up the mapping from the identifier 1766 extracted from the IP address to the local locator. If the 1767 communication is between different GLI-domains, hosts may further 1768 look up the mapping from the identifier to the global locator. 1769 Having the local mapping system forward requests to the global 1770 mapping system for hosts is just an option. Though host lookup may 1771 ease the burden of intermediate nodes which would otherwise have to 1772 perform the mapping lookup, the three lookups by hosts in the worst 1773 case may lead to large delays unless a very efficient mapping 1774 mechanism is devised. The work may also become impractical for low- 1775 powered hosts. On the one hand, GLI-Split can provide backward 1776 compatibility where classic and upgraded IPv6 hosts can communicate, 1777 which is a big virtue; on the other hand, the required upgrades may work against hosts' 1778 enthusiasm to change, compared to the benefits they would gain. 1780 GLI-Split provides additional features to improve TE and to improve 1781 resilience, e.g., exercising multipath routing. However, the cost is 1782 that more burden is placed on hosts, e.g., they may need more lookup 1783 actions and route selections. Tradeoffs of this kind 1784 between costs and gains exist in most proposals. 1786 One improvement of GLI-Split is its support for mobility by updating 1787 DNS data as GLI-hosts move across GLI-domains.
Through this the GLI- 1788 corresponding-node can query DNS to get a valid global locator of the 1789 GLI-mobile-node and need not query the global mapping system (unless 1790 it wants to do multipath routing), giving more incentives for nodes 1791 to become GLI-enabled. The merits of GLI-Split, simplified-mobility- 1792 handover provision, compensate for the costs of this improvement. 1794 GLI-Split claims to use rewriting instead of tunneling for 1795 conversions between local and global locators when packets span GLI- 1796 domains. The major advantage is that this kind of rewriting needs no 1797 extra state, since local and global locators need not map to each 1798 other. Many other rewriting mechanisms instead need to maintain 1799 extra state. It also avoids the MTU problem faced by the tunneling 1800 methods. However, GLI-Split achieves this only by compressing the 1801 namespace size of each attribute (identifier, local and global 1802 locator). GLI-Split encodes two namespaces (identifier and local/ 1803 global locator) into an IPv6 address, each has a size of 2^64 or 1804 less, while map-and-encap proposals assume that identifier and 1805 locator each occupy a 128 bit space. 1807 10.3. Rebuttal 1809 The arguments in the GLI-Split critique are correct. There are only 1810 two points that should be clarified here. (1) First, it is not a 1811 drawback that hosts perform the mapping lookups. (2) Second, the 1812 critique proposed an improvement to the mobility mechanism, which is 1813 of general nature and not specific to GLI-Split. 1815 1. The additional burden on the hosts is actually a benefit, 1816 compared to having the same burden on the gateways. If the 1817 gateway would perform the lookups and packets addressed to 1818 uncached EIDs arrive, a lookup in the mapping system must be 1819 initiated. Until the mapping reply returns, packets must be 1820 either dropped, cached, or the packets must be sent over the 1821 mapping system to the destination. All these options are not 1822 optimal and have their drawbacks. To avoid these problems in 1823 GLI-Split, the hosts perform the lookup. The short additional 1824 delay is not a big issue in the hosts because it happens before 1825 the first packets are sent. So no packets are lost or have to be 1826 cached. GLI-Split could also easily be adapted to special GLI- 1827 hosts (e.g., low power sensor nodes) that do not have to do any 1828 lookup and simply let the gateway do all the work. This 1829 functionality is included anyway for backward compatibility with 1830 regular IPv6-hosts inside the GLI-domain. 1832 2. The critique proposes a DNS-based mobility mechanism as an 1833 improvement to GLI-Split. However, this improvement is an 1834 alternative mobility approach which can be applied to any routing 1835 architecture including GLI-Split and raises also some concerns, 1836 e.g., the update speed of DNS. Therefore, we prefer to keep this 1837 issue out of the discussion. 1839 11. Tunneled Inter-domain Routing (TIDR) 1841 11.1. Summary 1843 11.1.1. Key Idea 1845 Provides a method for locator-identifier separation using tunnels 1846 between routers on the edge of the Internet transit infrastructure. 1847 It enriches the BGP protocol for distributing the identifier-to- 1848 locator mapping. Using new BGP attributes, "identifier prefixes" are 1849 assigned inter-domain routing locators so that they will not be 1850 installed in the RIB and will be moved to a new table called Tunnel 1851 Information Base (TIB). 
Afterwards, when routing a packet to an 1852 "identifier prefix", the TIB will be searched first to perform 1853 tunneling, and secondly the RIB for actual routing. After the edge 1854 router performs tunneling, all routers in the middle will route this 1855 packet until the router at the tail-end of the tunnel. 1857 11.1.2. Gains 1859 o Smooth deployment 1860 o Size reduction of the global RIB 1862 o Deterministic customer traffic engineering for incoming traffic 1864 o Numerous forwarding decisions for a particular address prefix 1866 o Stops AS number space depletion 1868 o Improved BGP convergence 1870 o Protection of the inter-domain routing infrastructure 1872 o Easy separation of control traffic and transit traffic 1874 o Different layer-2 protocol-IDs for transit and non-transit traffic 1876 o Multihoming resilience 1878 o New address families and tunneling techniques 1880 o Support for IPv4 or IPv6, and migration to IPv6 1882 o Scalability, stability and reliability 1884 o Faster inter-domain routing 1886 11.1.3. Costs 1888 o Routers on the edge of the inter-domain infrastructure will need 1889 to be upgraded to hold the mapping database (i.e. the TIB) 1891 o "Mapping updates" will need to be treated differently from usual 1892 BGP "routing updates" 1894 11.1.4. References 1896 [I-D.adan-idr-tidr] [TIDR identifiers] [TIDR and LISP] [TIDR AS 1897 forwarding] 1899 11.2. Critique 1901 TIDR is a Core-Edge Separation architecture from late 2006 which 1902 distributes its mapping information via BGP messages which are passed 1903 between DFZ routers. 1905 This means that TIDR cannot solve the most important goal of scalable 1906 routing - to accommodate much larger numbers of end-user network 1907 prefixes (millions or billions) without each such prefix directly 1908 burdening every DFZ router. Messages advertising routes for TIDR- 1909 managed prefixes may be handled with lower priority, but this would 1910 only marginally reduce the workload for each DFZ router compared to 1911 handling an advertisement of a conventional PI prefix. 1913 Therefore, TIDR cannot be considered for RRG recommendation as a 1914 solution to the routing scaling problem. 1916 For a TIDR-using network to receive packets sent from any host, every 1917 BR of all ISPs must be upgraded to have the new ITR-like 1918 functionality. Furthermore, all DFZ routers would need to be altered 1919 so they accepted and correctly propagated the routes for end-user 1920 network address space, with the new LOCATOR attribute which contains 1921 the ETR address and a REMOTE-PREFERENCE value. Firstly, if they 1922 received two such advertisements with different LOCATORs, they would 1923 advertise a single route to this prefix containing both. Secondly, 1924 for end-user address space (for IPv4) to be more finely divided, the 1925 DFZ routers must propagate LOCATOR-containing advertisements for 1926 prefixes longer than /24. 1928 TIDR's ITR-like routers store the full mapping database - so there 1929 would be no delay in obtaining mapping, and therefore no significant 1930 delay in tunneling traffic packets. 1932 The TIDR ID is written as if traffic packets are classified by 1933 reference to the RIB - but routers use the FIB for this purpose, and 1934 "FIB" does not appear in the ID. 1936 TIDR does not specify a tunneling technique, leaving this to be 1937 chosen by the ETR-like function of BRs and specified as part of a 1938 second-kind of new BGP route advertised by that ETR-like BR. 
There 1939 is no provision for solving the PMTUD problems inherent in 1940 encapsulation-based tunneling. 1942 ITR functions must be performed by already busy routers of ISPs, 1943 rather than being distributed to other routers or to sending hosts. 1944 There is no practical support for mobility. The mapping in each end- 1945 user route advertisement includes a REMOTE-PREFERENCE for each ETR- 1946 like BR, but this is used by the ITR-like functions of BRs to always 1947 select the LOCATOR with the highest value. As currently described, 1948 TIDR does not provide inbound load splitting TE. 1950 Multihoming service restoration is achieved initially by the ETR-like 1951 function of BR at the ISP whose link to the end-user network has just 1952 failed, looking up the mapping to find the next preferred ETR-like 1953 BR's address. The first ETR-like router tunnels the packets to the 1954 second ETR-like router in the other ISP. However, if the failure was 1955 caused by the first ISP itself being unreachable, then connectivity 1956 would not be restored until a revised mapping (with higher REMOTE- 1957 PREFERENCE) from the reachable ETR-like BR of the second ISP 1958 propagated across the DFZ to all ITR-like routers, or the withdrawn 1959 advertisement for the first one reaches the ITR-like router. 1961 11.3. Rebuttal 1963 No rebuttal was submitted for this proposal. 1965 12. Identifier-Locator Network Protocol (ILNP) 1967 12.1. Summary 1969 12.1.1. Key Ideas 1971 o Provides crisp separation of Identifiers from Locators. 1973 o Identifiers name nodes, not interfaces. 1975 o Locators name subnetworks, rather than interfaces, so they are 1976 equivalent to an IP routing prefix. 1978 o Identifiers are never used for network-layer routing, whilst 1979 Locators are never used for Node Identity. 1981 o Transport-layer sessions (e.g. TCP session state) use only 1982 Identifiers, never Locators, meaning that changes in location have 1983 no adverse impact on an IP session. 1985 12.1.2. Benefits 1987 o The underlying protocol mechanisms support fully scalable site 1988 multihoming, node multihoming, site mobility, and node mobility. 1990 o ILNP enables topological aggregation of location information while 1991 providing stable and topology-independent identities for nodes. 1993 o In turn, this topological aggregation reduces both the routing 1994 prefix "churn" rate and the overall size of the Internet's global 1995 routing table, by eliminating the value and need for more-specific 1996 routing state currently carried throughout the global (default- 1997 free) zone of the routing system. 1999 o ILNP enables improved Traffic Engineering capabilities without 2000 adding any state to the global routing system. TE capabilities 2001 include both provider-driven TE and also end-site-controlled TE. 2003 o ILNP's mobility approach: 2005 * eliminates the need for special-purpose routers (e.g. Home 2006 Agent and/or Foreign Agent now required by Mobile IP & NEMO). 2008 * eliminates "triangle routing" in all cases. 2010 * supports both "make before break" and "break before make" 2011 layer-3 handoffs. 2013 o ILNP improves resilience and network availability while reducing 2014 the global routing state (as compared with the currently deployed 2015 Internet). 2017 o ILNP is Incrementally Deployable: 2019 * No changes are required to existing IPv6 (or IPv4) routers. 
2021 * Upgraded nodes gain benefits immediately ("day one"); those 2022 benefits gain in value as more nodes are upgraded (this follows 2023 Metcalfe's Law). 2025 * Incremental Deployment approach is documented. 2027 o ILNP is Backwards Compatible: 2029 * ILNPv6 is fully backwards compatible with IPv6 (ILNPv4 is fully 2030 backwards compatible with IPv4). 2032 * Reuses existing known-to-scale DNS mechanisms to provide 2033 identifier/locator mapping. 2035 * Existing DNS Security mechanisms are reused without change. 2037 * Existing IP Security mechanisms are reused with one minor 2038 change (IPsec Security Associations replace the current use of 2039 IP Addresses with the use of Identifier values). NB: IPsec is 2040 also backwards compatible. 2042 * Backwards Compatibility approach is documented. 2044 o No new or additional overhead is required to determine or to 2045 maintain locator/path liveness. 2047 o ILNP does not require locator rewriting (NAT); ILNP permits and 2048 tolerates NAT should that be desirable in some deployment(s). 2050 o Changes to upstream network providers do not require node or 2051 subnetwork renumbering within end-sites. 2053 o Compatible with and can facilitate the transition from current 2054 single-path TCP to multipath TCP. 2056 o ILNP can be implemented such that existing applications (e.g. 2057 applications using the BSD Sockets API) do NOT need any changes or 2058 modifications to use ILNP. 2060 12.1.3. Costs 2062 o End systems need to be enhanced incrementally to support ILNP in 2063 addition to IPv6 (or IPv4 or both). 2065 o DNS servers supporting upgraded end systems also should be 2066 upgraded to support new DNS resource records for ILNP. (DNS 2067 protocol & DNS security do not need any changes.) 2069 12.1.4. References 2071 [ILNP Site] [MobiArch1] [MobiArch2] [MILCOM1] [MILCOM2] [DNSnBIND] 2072 [I-D.carpenter-behave-referral-object] [I-D.rja-ilnp-nonce] [RFC4033] 2073 [RFC4034] [RFC4035] [RFC5534] [RFC5902] 2075 12.2. Critique 2077 The primary issue for ILNP is how the deployment incentives and 2078 benefits line up with the RRG goal of reducing the rate of growth of 2079 entries and churn in the core routing table. If a site is currently 2080 using PI space, it can only stop advertising that space when the 2081 entire site is ILNP capable. This requires, at a minimum, clear elucidation 2082 of the incentives for ILNP that are not related to routing scaling, 2083 in order for there to be a path for ILNP to address the RRG needs. 2084 Similarly, the incentives for upgrading hosts need to align with the 2085 value for those hosts. 2087 A closely related question is whether this mechanism actually 2088 addresses the site's need for PI addresses. Assuming ILNP is 2089 deployed, the site does achieve flexible, resilient communication 2090 using all of its Internet connections. While the proposal addresses 2091 the host updates when the host learns of provider changes, there are 2092 other aspects of provider change that are not addressed. This 2093 includes renumbering routers, subnets, and certain servers. (It is 2094 presumed that most servers, once the entire site has moved to ILNP, 2095 will not be concerned if their locator changes. However, some 2096 servers must have known locators, such as the DNS server.) The 2097 issues described in [RFC5887] will be ameliorated, but not resolved. 2099 To be able to adopt this proposal, and have sites use it, we need to 2100 address these issues.
When a site changes points of attachment, only 2101 a small amount of DNS provisioning should be required. The LP record 2102 is apparently intended to help with this. It is also likely that the 2103 use of dynamic DNS will help this. 2105 The ILNP mechanism is described as being suitable for use in 2106 conjunction with mobility. This raises the question of race 2107 conditions. To the degree that mobility concerns are valid at this 2108 time, it is worth asking how communication can be established if a 2109 node is sufficiently mobile that it is moving faster than the DNS 2110 update and DNS fetch cycle can effectively propagate changes. 2112 This proposal does presume that all communication using this 2113 mechanism is tied to DNS names. While it is true that most 2114 communication does start from a DNS name, it is not the case that all 2115 exchanges have this property. Some communication initiation and 2116 referral can be done with an explicit I/L pair. This does appear to 2117 require some extensions to the existing mechanism (for both sides to 2118 add locators). In general, some additional clarity on the 2119 assumptions regarding DNS, particularly for low-end devices, would 2120 seem appropriate. 2122 One issue that this proposal shares with many others is the question 2123 of how to determine which locator pairs (local and remote) are 2124 actually functional. This is an issue both for initial 2125 communications establishment, and for robustly maintaining 2126 communication. It is likely that a combination of monitoring 2127 of traffic (in the host, where this is tractable), coupled with other 2128 active measures, can address this; ICMP is clearly insufficient. 2130 12.3. Rebuttal 2132 ILNP eliminates the perceived need for PI addressing, and encourages 2133 increased DFZ aggregation. Many enterprise users view DFZ scaling 2134 issues as too abstruse. So ILNP creates more user-visible incentives 2135 to upgrade deployed systems. 2137 ILNP mobility eliminates Duplicate Address Detection (DAD), reducing 2138 the layer-3 handoff time significantly, compared to IETF standard 2139 Mobile IP. [MobiArch1] [MobiArch2] ICMP Location updates separately 2140 reduce the layer-3 handoff latency. 2142 Also, ILNP enables both host multihoming and site multihoming. 2143 Current BGP approaches cannot support host multihoming. Host 2144 multihoming is valuable in reducing the site's set of externally 2145 visible nodes. 2147 Improved mobility support is very important. This is shown by the 2148 research literature and also appears in discussions with vendors of 2149 mobile devices (smartphones, MP3-players). Several operating system 2150 vendors push "updates" with major networking software changes in 2151 maintenance releases today. Security concerns mean most hosts 2152 receive vendor updates more quickly these days. 2154 ILNP enables a site to hide exterior connectivity changes from 2155 interior nodes, using various approaches. One approach deploys 2156 unique local address (ULA) prefixes within the site and has the site 2157 border router(s) rewrite the Locator values. The usual NAT issues 2158 don't arise because the Locator value is not used above the network- 2159 layer. [MILCOM1] [MILCOM2] 2161 [RFC5902] makes clear that many users desire IPv6 NAT, with site 2162 interior obfuscation as a major driver. This makes global-scope PI 2163 addressing much less desirable for end sites than formerly.
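   A minimal sketch of the Locator rewriting approach just described,
   assuming the ILNPv6 convention that the upper 64 bits of the IPv6
   address carry the Locator and the lower 64 bits carry the node
   Identifier.  The ULA and global prefixes used below are
   illustrative only.

      import ipaddress

      ID_MASK = (1 << 64) - 1

      def rewrite_locator(addr, new_locator_prefix):
          """Replace the upper 64 bits (Locator); keep the Identifier."""
          identifier = int(ipaddress.IPv6Address(addr)) & ID_MASK
          locator = int(ipaddress.IPv6Network(new_locator_prefix).network_address)
          return str(ipaddress.IPv6Address(locator | identifier))

      # An interior node uses a ULA Locator; the site border router swaps
      # it for the provider-assigned global Locator on the way out.
      interior = "fd00:1234:5678:1:1a2b:3c4d:5e6f:7a8b"
      print(rewrite_locator(interior, "2001:db8:aaaa:1::/64"))
      # -> 2001:db8:aaaa:1:1a2b:3c4d:5e6f:7a8b

   Because ILNP binds transport-layer state only to the Identifier
   (the lower 64 bits), this rewrite does not break the session in the
   way conventional NAT would.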
2165 ILNP-capable nodes can talk existing IP with legacy IP-only nodes, 2166 with no loss of current IP capability. So ILNP-capable nodes will 2167 never be worse off. 2169 Secure Dynamic DNS Update is standard, and widely supported in 2170 deployed hosts and DNS servers. [DNSnBIND] says many sites have 2171 deployed this technology without realizing it (e.g. by enabling both 2172 the DHCP server and Active Directory of MS-Windows Server). 2174 If a node is as mobile as the critique says, then existing IETF 2175 Mobile IP standards also will fail. They also use location updates 2176 (e.g. MN->HA, MN->FA). 2178 ILNP also enables new approaches to security that eliminate 2179 dependence upon location-dependent ACLs without packet 2180 authentication. Instead, security appliances track flows using 2181 Identifier values, and validate the I/L relationship 2182 cryptographically [RFC4033] [RFC4034] [RFC4035] or non- 2183 cryptographically by reading the [I-D.rja-ilnp-nonce]. 2185 The DNS LP record has a more detailed explanation now. LP records 2186 enable a site to change its upstream connectivity by changing the L 2187 records of a single FQDN covering the whole site, providing 2188 scalability. 2190 DNS-based server load balancing works well with ILNP by using DNS SRV 2191 records. DNS SRV records are not new, are widely available in DNS 2192 clients & servers, and are widely used today in the IPv4 Internet for 2193 Server Load Balancing. 2195 Recent ILNP I-Ds discuss referrals in more detail. A node with a 2196 binary-referral can find the FQDN using DNS PTR records, which can be 2197 authenticated [RFC4033] [RFC4034] [RFC4035]. Approaches such as 2198 [I-D.carpenter-behave-referral-object] improve user experience and 2199 user capability, so are likely to self-deploy. 2201 Selection from multiple Locators is identical to an IPv4 system 2202 selecting from multiple A records for its correspondent. Deployed IP 2203 nodes can track reachability via existing host mechanisms, or by 2204 using the SHIM6 method. [RFC5534] 2206 13. Enhanced Efficiency of Mapping Distribution Protocols in Map-and- 2207 Encap Schemes (EEMDP) 2209 13.1. Summary 2211 13.1.1. Introduction 2213 We present some architectural principles pertaining to the mapping 2214 distribution protocols, especially applicable to map-and-encap (e.g., 2215 LISP) type of protocols. These principles enhance the efficiency of 2216 the map-and-encap protocols in terms of (1) better utilization of 2217 resources (e.g., processing and memory) at Ingress Tunnel Routers 2218 (ITRs) and mapping servers, and consequently, (2) reduction of 2219 response time (e.g., first packet delay). We consider how Egress 2220 Tunnel Routers (ETRs) can perform aggregation of end-point ID (EID) 2221 address space belonging to their downstream delivery networks, in 2222 spite of migration/re-homing of some subprefixes to other ETRs. This 2223 aggregation may be useful for reducing the processing load and memory 2224 consumption associated with map messages, especially at some 2225 resource-constrained ITRs and subsystems of the mapping distribution 2226 system. We also consider another architectural concept where the 2227 ETRs are organized in a hierarchical manner for the potential benefit 2228 of aggregation of their EID address spaces. The two key 2229 architectural ideas are discussed in some more detail below. A more 2230 complete description can be found in [EEMDP Considerations] and 2231 [EEMDP Presentation]. 
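   The aggregation-with-exceptions idea summarized above can be
   sketched as follows.  The concrete prefixes and ETR names are
   illustrative stand-ins for the a/20 example developed in the next
   subsection; they are not part of the proposal itself.

      import ipaddress

      # One aggregate announced by ETR1, with two re-homed subprefixes.
      AGGREGATE  = (ipaddress.ip_network("10.16.0.0/20"), "ETR1")
      EXCEPTIONS = {
          ipaddress.ip_network("10.16.8.0/24"):  "ETR3",
          ipaddress.ip_network("10.16.15.0/24"): "ETR2",
      }

      def locate(eid):
          """Longest-match lookup: exceptions first, then the aggregate."""
          addr = ipaddress.ip_address(eid)
          for subprefix, etr in EXCEPTIONS.items():
              if addr in subprefix:
                  return etr
          if addr in AGGREGATE[0]:
              return AGGREGATE[1]
          return None

      print(locate("10.16.7.1"))    # ETR1, covered by the aggregate
      print(locate("10.16.8.9"))    # ETR3, an exception subprefix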
2233 It will be helpful to refer to Figures 1, 2, and 3 in the document 2234 noted above for some of the discussions that follow below. 2236 13.1.2. Management of Mapping Distribution of Subprefixes Spread Across 2237 Multiple ETRs 2239 To assist in this discussion, we start with the high-level 2240 architecture of a map-and-encap approach (it would be helpful to see 2241 Fig. 1 in the document mentioned above). In this architecture we 2242 have the usual ITRs, ETRs, delivery networks, etc. In addition, we 2243 have the ID-Locator Mapping (ILM) servers, which are repositories for 2244 complete mapping information, while the ILM-Regional (ILM-R) servers 2245 can contain partial and/or regionally relevant mapping information. 2247 While a large endpoint address space contained in a prefix may be 2248 mostly associated with the delivery networks served by one ETR, some 2249 fragments (subprefixes) of that address space may be located 2250 elsewhere at other ETRs. Let a/20 denote a prefix that is 2251 conceptually viewed as composed of 16 subnets of /24 size that are 2252 denoted as a1/24, a2/24, ..., a16/24. For example, a/20 is mostly at 2253 ETR1, while only two of its subprefixes a8/24 and a15/24 are 2254 elsewhere at ETR3 and ETR2, respectively (see Fig. 2 in the 2255 document). From the point of view of efficiency of the mapping 2256 distribution protocol, it may be beneficial for ETR1 to announce a 2257 map for the entire space a/20 (rather than fragment it into a 2258 multitude of more-specific prefixes), and provide the necessary 2259 exceptions in the map information. Thus the map message could be in 2260 the form of Map:(a/20, ETR1; Exceptions: a8/24, a15/24). In 2261 addition, ETR2 and ETR3 announce the maps for a15/24 and a8/24, 2262 respectively, and so the ILMs know where the exception EID addresses 2263 are located. Now consider a host associated with ITR1 initiating a 2264 packet destined for an address a7(1), which is in a7/24 and thus not 2265 in the exception portion of a/20. Now a question arises as to which 2266 of the following approaches would be the best choice: 2268 1. ILM-R provides the complete mapping information for a/20 to ITR1 2269 including all maps for relevant exception subprefixes. 2271 2. ILM-R provides only the directly relevant map to ITR1 which in 2272 this case is (a/20, ETR1). 2274 In the first approach, the advantage is that ITR1 would have the 2275 complete mapping for a/20 (including exception subnets), and it would 2276 not have to generate queries for subsequent first packets that are 2277 destined to any address in a/20, including a8/24 and a15/24. 2278 However, the disadvantage is that if there is a significant number of 2279 exception subprefixes, then the very first packet destined for a/20 2280 will experience a long delay, and also the processors at ITR1 and 2281 ILM-R can experience overload. In addition, the memory usage at ITR1 2282 can be very inefficient as well. The advantage of the second 2283 approach above is that the ILM-R does not overload resources at ITR1 2284 in terms of either processing or memory usage, but it needs an enhanced 2285 map response of the form Map:(a/20, ETR1, MS=1), where the MS (more 2286 specific) indicator is set to 1 to indicate to ITR1 that not all 2287 subnets in a/20 map to ETR1. The key idea is that aggregation is 2288 beneficial and subnet exceptions must be handled with additional 2289 messages or indicators in the maps. 2291 13.1.3.
Management of Mapping Distribution for Scenarios with Hierarchy 2292 of ETRs and Multihoming 2294 Now we highlight another architectural concept related to mapping 2295 management (please refer to Fig. 3 in the document). Here we 2296 consider the possibility that ETRs may be organized in a hierarchical 2297 manner. For instance, ETR7 is higher in the hierarchy relative to ETR1, 2298 ETR2, and ETR3, and likewise ETR8 is higher relative to ETR4, ETR5, 2299 and ETR6. For instance, ETRs 1 through 3 can delegate the locator 2300 role to ETR7 for their EID address space. In essence, they can allow 2301 ETR7 to act as the locator for the delivery networks in their 2302 purview. ETR7 keeps a local mapping table for mapping the 2303 appropriate EID address space to specific ETRs that are 2304 hierarchically associated with it in the level below. In this 2305 situation, ETR7 can perform EID address space aggregation across ETRs 2306 1 through 3 and can also include its own immediate EID address space 2307 for the purpose of that aggregation. The many details related to 2308 this approach and special circumstances involving multihoming of 2309 subnets are discussed in the detailed document noted 2310 earlier. The hierarchical organization of ETRs and delivery networks 2311 should help in the future growth and scalability of ETRs and mapping 2312 distribution networks. This is essentially recursive map-and-encap, 2313 and some of the mapping distribution and management functionality 2314 will remain local to topologically neighboring delivery networks 2315 which are hierarchically underneath ETRs. 2317 13.1.4. References 2319 [EEMDP Considerations] [EEMDP Presentation] [FIBAggregatability] 2321 13.2. Critique 2323 This scheme [EEMDP Considerations] represents one approach to mapping 2324 overhead reduction, and it is a general idea that is applicable to 2325 any proposal that includes prefix or EID aggregation. A somewhat 2326 similar idea is also used in Level-3 aggregation in the FIB 2327 aggregation proposal. [FIBAggregatability] There can be cases where 2328 deaggregation of EID prefixes occurs in such a way that the bulk of an EID 2329 prefix P would be attached to one locator (say, ETR1) while a few 2330 subprefixes under P would be attached to other locators elsewhere 2331 (say, ETR2, ETR3, etc.). Ideally such cases should not happen; 2332 however, in reality they can happen, as RIRs' address allocations are 2333 imperfect. In addition, as new IP address allocations become harder 2334 to get, an IPv4 prefix owner might split previously unused 2335 subprefixes of that prefix and allocate them to remote sites (homed 2336 to other ETRs). Assuming these situations could arise in practice, 2337 the nature of the solution would be that the response from the 2338 mapping server for the coarser site would include information about 2339 the more specifics. The solution as presented seems correct. 2341 The proposal mentions that in Approach 1, the ID-Locator Mapping 2342 (ILM) system provides the complete mapping information for an 2343 aggregate EID prefix to a querying ITR including all the maps for the 2344 relevant exception subprefixes. The sheer number of such more- 2345 specifics can be worrisome, for example, in LISP. What if a 2346 company's mobile-node EIDs came out of its corporate EID-prefix? 2347 Approach 2 is far better, but still there may be too many entries for 2348 a regional ILM to store.
In Approach 2, the ILM communicates that there are more-specifics but does not communicate their mask-lengths.  A suggested improvement is to indicate what those mask-lengths are, rather than merely signaling that more-specifics exist.  There can be multiple mask-lengths; their number should be quite small for IPv4 but can be large for IPv6.

Later in the proposal, a different problem is addressed, involving a hierarchy of ETRs and how aggregation of EID prefixes from lower-level ETRs can be performed at a higher-level ETR.  The various scenarios here are well illustrated and described.  This seems like a good idea, and a solution like LISP can support this as specified.  As with any optimization scheme, the proposed scheme for enhancing mapping efficiency comes with some complexity and overhead of its own.  The gain depends on the details of specific EID blocks, i.e., how frequently situations arise in which an ETR has a large EID block with a few holes.

13.3.  Rebuttal

Two main points in the critique are addressed here: (1) the gain depends on the details of specific EID blocks, i.e., how frequently situations arise in which an ETR has a large EID block with a few holes, and (2) Approach 2 lacks a feature for conveying the mask-lengths of the more-specifics as part of the current map-response.

Regarding comment (1) above, there are multiple ways in which allocations can end up with holes in them.  One example is as follows.  Org-A has historically received multiple adjacent /20s, /22s, and /24s over the course of time.  At the present time, these prefixes would all aggregate to a /16 but for the fact that just a few of the underlying /24s have historically been allocated elsewhere to other organizations by an RIR or ISPs.  A second possibility is that Org-A has an allocation of a /16.  It has suballocated a /22 to one of its subsidiaries, and subsequently sold the subsidiary to another organization, Org-B.  For ease of keeping the /22 subnet up and running without service disruption, the /22 subprefix is allowed to be transferred in the acquisition process.  Now the /22 subprefix originates from a different AS and is serviced by a different ETR (as compared to the parent /16 prefix).  We are in the process of performing an analysis of RIR allocation data and are aware of other studies (notably at UCLA) that are performing similar analyses to quantify the frequency of occurrence of such holes.  We feel that the problem that has been addressed is a realistic one, and the proposed scheme would help reduce the overheads associated with the mapping distribution system.

Regarding comment (2) above, the suggested modification to Approach 2 would definitely be beneficial.  In fact, we feel that it would be fairly straightforward to dynamically use Approach 1 or Approach 2 (with the suggested modification), depending on whether there are only a few (e.g., <=5) or many (e.g., >5) more-specifics, respectively.
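The following Python sketch shows what such a dynamic choice might look like.  The threshold value, field names, and data structures are assumptions made purely for illustration; they are not part of the proposal or of any specified map-response format.

   import ipaddress

   THRESHOLD = 5  # "a few" versus "many" more-specifics, as suggested above

   def build_map_response(aggregate, aggregate_etr, exceptions):
       """aggregate: ip_network; exceptions: dict of ip_network -> ETR name."""
       if len(exceptions) <= THRESHOLD:
           # Approach 1: ship the aggregate plus every exception map.
           return {"maps": [(aggregate, aggregate_etr)] +
                           [(p, etr) for p, etr in exceptions.items()]}
       # Approach 2 with the suggested improvement: ship only the aggregate,
       # flag that more-specifics exist, and list their distinct mask-lengths
       # so the ITR knows which destinations require a fresh map-query.
       return {"maps": [(aggregate, aggregate_etr)],
               "more_specifics": True,
               "mask_lengths": sorted({p.prefixlen for p in exceptions})}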
The suggested modification of notifying the mask-lengths of the more-specifics in the map-response is indeed very helpful, because the ITR would then not have to resend a map-query for EID addresses that match the EID address in the previous query in at least the first mask-length bit positions.  A two-bit field in the map-response could indicate: (a) value 00, there are no more-specifics; (b) value 01, there are more-specifics, and their exact information follows in additional map-responses; and (c) value 10, there are more-specifics, and the mask-length of the next more-specific is indicated in the current map-response.  An additional field would be included to carry the mask-length of the next more-specific in the case of the "10" indication (case (c) above).

14.  Evolution

14.1.  Summary

As the Internet continues its rapid growth, router memory size and CPU cycle requirements are outpacing feasible hardware upgrade schedules.  We propose to solve this problem by applying aggregation with increasing scopes to gradually evolve the routing system towards a scalable structure.  At each evolutionary step, our solution is able to interoperate with the existing system and provide immediate benefits to adopters to enable deployment.  This document summarizes the need for an evolutionary design, the relationship between our proposal and the more revolutionary proposals, and the steps of aggregation with increasing scopes.  Our detailed proposal can be found in [I-D.zhang-evolution].

14.1.1.  Need for Evolution

Multiple views exist regarding the routing scalability problem.  Networks differ vastly in goals, behavior, and resources, giving each a different view of the severity and imminence of the scalability problem.  Therefore we believe that adoption of any solution will start with one or a few early adopters and may never reach the entire Internet.  The evolutionary approach recognizes that changes to the Internet can only be a gradual process with multiple stages.  At each stage, adopters are driven by and rewarded with solving an immediate problem.  Each solution must be deployable by individual networks that deem it necessary, at a time they deem it necessary, without requiring coordination from other networks, and the solution has to bring immediate relief to a single first-mover.

14.1.2.  Relation to Other RRG Proposals

Most proposals take a revolutionary approach that expects the entire Internet to eventually move to some new design whose main benefits would not materialize until the vast majority of the system has been upgraded; their incremental deployment plans simply ensure interoperation between upgraded and legacy parts of the system.  In contrast, the evolutionary approach envisions changes happening here and there as needed, with no dependency on the system as a whole making a change.  Whoever takes a step forward gains the benefit by solving their own problem, without depending on others to take action.  Thus, deployability includes not only interoperability, but also the alignment of costs and gains.
The main differences between our approach and the more revolutionary map-and-encap proposals are: (a) we do not start with a pre-defined boundary between edge and core; and (b) each step brings immediate benefits to individual first-movers.  Note that our proposal neither interferes with nor prevents revolutionary host-based solutions such as ILNP from being rolled out.  However, host-based solutions do not have a useful impact until a large portion of hosts has been upgraded.  Thus, even if a host-based solution is rolled out in the long run, an evolutionary solution is still needed for the near term.

14.1.3.  Aggregation with Increasing Scopes

Aggregating many routing entries into a smaller number is a basic approach to improving routing scalability.  Aggregation can take different forms and be done within different scopes.  In our design, the aggregation scope starts at a single router, then expands to a single network, and then to neighboring networks.  The order of the following steps is not fixed but is merely a suggestion; it is at each individual network's discretion which steps it chooses to take, based on its evaluation of the severity of the problems and the affordability of the solutions.

1.  FIB Aggregation (FA) in a single router.  A router algorithmically aggregates its FIB entries without changing its RIB or its routing announcements.  No coordination among routers is needed, nor any change to existing protocols.  This brings scalability relief to individual routers with only a software upgrade (see the sketch following this list).

2.  Enabling 'best external' on PEs, ASBRs, and RRs, and turning on next-hop-self on RRs.  For hierarchical networks, the RRs in each PoP can serve as a default gateway for nodes in the PoP, thus allowing the non-RR nodes in each PoP to maintain smaller routing tables that only include paths that egress out of that PoP.  This is known as 'topology-based mode' Virtual Aggregation and can be done with existing hardware and configuration changes only.  Please see [Evolution Grow Presentation] for details.

3.  Virtual Aggregation (VA) in a single network.  Within an AS, some fraction of the existing routers are designated as Aggregation Point Routers (APRs).  These routers, either individually or collectively, maintain the full FIB table.  Other routers may suppress entries from their FIBs, instead forwarding packets to APRs, which then tunnel the packets to the correct egress routers.  VA can be viewed as an intra-domain map-and-encap system that provides operators with a control mechanism for the FIB size in their routers.

4.  VA across neighbor networks.  When adjacent networks have VA deployed, they can go one step further by piggybacking egress router information on existing BGP announcements, so that packets can be tunneled directly to a neighbor network's egress router.  This improves packet delivery performance by performing the encapsulation/decapsulation only once across these neighbor networks, as well as reducing the stretch of the path.

5.  Reducing RIB size by separating the control plane from the data plane.  Although a router's FIB can be reduced by FA or VA, it usually still needs to maintain the full RIB to produce complete routing announcements to its neighbors.  To reduce the RIB size, a network can set up special boxes, which we call controllers, to take over the eBGP sessions from border routers.  The controllers receive eBGP announcements, make routing decisions, and then inform other routers in the same network of how to forward packets, while the regular routers simply focus on forwarding packets.  The controllers, not being part of the data path, can be scaled using commodity hardware.

6.  Insulating forwarding routers from routing churn.  For routers with a smaller RIB, the rate of routing churn is naturally reduced.  Further reduction can be achieved by not announcing failures of customer prefixes into the core, but handling these failures in a data-driven fashion, e.g., a link failure to an edge network is not reported unless and until there are data packets heading toward the failed link.
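As a concrete illustration of step 1, the Python sketch below shows one simple form of FIB aggregation: a more-specific entry is suppressed when a covering less-specific entry already points at the same next hop, which leaves forwarding behavior unchanged.  This is a minimal sketch under that assumption and is not the specific algorithm defined in [I-D.zhang-evolution].

   import ipaddress

   def aggregate_fib(fib):
       """fib: list of (prefix_string, next_hop) tuples."""
       entries = [(ipaddress.ip_network(p), nh) for p, nh in fib]
       # Sort shortest prefixes first so covering routes are examined first.
       entries.sort(key=lambda e: e[0].prefixlen)
       kept = []
       for net, nh in entries:
           # Drop the entry if a kept covering route has the same next hop.
           if any(net.subnet_of(knet) and nh == knh for knet, knh in kept):
               continue
           kept.append((net, nh))
       return [(str(n), nh) for n, nh in kept]

   if __name__ == "__main__":
       fib = [("203.0.113.0/25", "nh-A"), ("203.0.113.128/25", "nh-B"),
              ("203.0.113.0/24", "nh-A")]
       # The first /25 is redundant (covered by the /24 toward nh-A);
       # the second /25 is kept because its next hop differs.
       print(aggregate_fib(fib))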
14.1.4.  References

[I-D.zhang-evolution] [Evolution Grow Presentation]

14.2.  Critique

All of the RRG proposals that scale the routing architecture share one fundamental approach, route aggregation, applied in different forms; e.g., LISP removes "edge prefixes" using encapsulation at ITRs, and ILNP achieves the goal by locator rewrite.  In this evolutionary path proposal, each stage of the evolution applies aggregation with increasing scopes to solve a specific scalability problem, and eventually the path leads towards global routing scalability.  For example, it uses FIB aggregation at the single-router level, virtual aggregation at the network level, and then aggregation between neighboring networks at the inter-domain level.

Compared to other proposals, this proposal has the lowest hurdle to deployment, because it does not require that all networks move to use a global mapping system or upgrade all hosts, and it is designed for each individual network to get immediate benefits after its own deployment.

Criticisms of this proposal fall into two types.  The first type concerns several potential issues in the technical design, as listed below:

1.  FIB aggregation, at level-3 and level-4, may introduce extra routable space.  Concerns have been raised about the potential routing loops resulting from forwarding otherwise non-routable packets, and about the potential impact on RPF checking.  These concerns can be addressed by choosing a lower level of aggregation and by adding null routes to minimize the extra space, at the cost of reduced aggregation gain.

2.  Virtual Aggregation changes the traffic paths in an ISP network, thereby introducing stretch.  Changing the traffic path may also impact the reverse path checking practice used to filter out packets from spoofed sources.  More analysis is needed to identify the potential side effects of VA and to address these issues.

3.  The current Virtual Aggregation description is difficult to understand, due to its multiple options for encapsulation and popular-prefix configurations, which makes the mechanism look overly complicated.  More thought is needed to simplify the design and description.

4.  FIB Aggregation and Virtual Aggregation may incur additional operational cost.  There may be new design trade-offs that operators need to understand in order to select the best option for their networks.
More analysis is needed to identify and quantify all potential operational costs.

5.  In contrast to a number of other proposals, this solution does not provide mobility support.  It remains an open question whether the routing system should handle mobility.

The second type of criticism questions whether deploying quick fixes like FIB aggregation would alleviate scalability problems in the short term and thereby reduce the incentives for deploying a new architecture, and whether an evolutionary approach would end up adding more and more patches to the old architecture rather than leading to the fundamentally new architecture the proposal expects.  Though this solution may be rolled out more easily and quickly, a new architecture, if and when deployed, could solve more problems with cleaner solutions.

14.3.  Rebuttal

No rebuttal was submitted for this proposal.

15.  Name-Based Sockets

15.1.  Summary

Name-based sockets are an evolution of the existing address-based sockets, enabling applications to initiate and receive communication sessions based on the use of domain names in lieu of IP addresses.  Name-based sockets move the existing indirection from domain names to IP addresses from its current position in applications down to the IP layer.  As a result, applications communicate exclusively based on domain names, while the discovery, selection, and potentially in-session re-selection of IP addresses are centrally performed by the IP stack itself.

Name-based sockets help mitigate the Internet routing scalability problem by separating naming and addressing more consistently than is possible with the existing address-based sockets.  This supports IP address aggregation because it simplifies the use of IP addresses with high topological significance, as well as the dynamic replacement of IP addresses during network-topological and host-attachment changes.

A particularly positive effect of name-based sockets on Internet routing scalability is the new incentives for edge network operators to use provider-assigned IP addresses, which are more aggregatable than the typically preferred provider-independent IP addresses.  Even though provider-independent IP addresses are harder to get and more expensive than provider-assigned IP addresses, many operators desire provider-independent addresses due to the high indirect cost of provider-assigned IP addresses.  This indirect cost comprises both difficulties in multihoming and tedious, largely manual renumbering upon provider changes.

Name-based sockets reduce the indirect cost of provider-assigned IP addresses in three ways, and hence make the use of provider-assigned IP addresses more acceptable: (1) They enable fine-grained and responsive multihoming.  (2) They simplify renumbering by offering an easy means to replace IP addresses in referrals with domain names.  This helps avoid updates to application and operating system configurations, scripts, and databases during renumbering.  (3) They facilitate low-cost solutions that eliminate renumbering altogether.  One such low-cost solution is IP address translation, which in combination with name-based sockets loses its adverse impact on applications.
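To make the API concept concrete, the following Python sketch shows what a name-based connection setup might look like to an application.  The helper below is purely hypothetical; it is built on today's address-based sockets and getaddrinfo() and is not the interface defined in [Name Based Sockets].  Its point is only that the application deals exclusively in names, while address discovery and selection happen below it.

   import socket

   def connect_by_name(name, port):
       """Open a TCP connection identified only by a domain name.

       The caller never sees an IP address: the helper resolves the
       name, tries each candidate address in turn (IPv6 or IPv4, PA or
       PI, it makes no difference to the application), and returns the
       first socket that connects.
       """
       last_error = None
       for family, type_, proto, _, sockaddr in socket.getaddrinfo(
               name, port, type=socket.SOCK_STREAM):
           sock = socket.socket(family, type_, proto)
           try:
               sock.connect(sockaddr)
               return sock
           except OSError as err:
               last_error = err
               sock.close()
       raise OSError(f"could not connect to {name}: {last_error}")

   if __name__ == "__main__":
       s = connect_by_name("www.example.com", 80)   # a name, not an address
       s.close()

A real name-based sockets implementation would go further, allowing the stack to re-select addresses during an ongoing session, which a user-space helper like this cannot do.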
The prerequisite for a positive effect of name-based sockets on Internet routing scalability is their adoption in operating systems and applications.  Operating systems should be augmented to offer name-based sockets as a new alternative to the existing address-based sockets, and applications should use name-based sockets for their communications.  Neither an instantaneous nor an eventually complete transition to name-based sockets is required, yet the positive effect on Internet routing scalability will grow with the extent of this transition.

Name-based sockets were hence designed with a focus on deployment incentives, comprising both immediate deployment benefits and low deployment costs.  Name-based sockets provide a benefit to application developers because relieving applications of IP address management responsibilities simplifies and expedites application development.  This benefit is immediate, owing to the backwards compatibility of name-based sockets with legacy applications and legacy peers.  The appeal to application developers, in turn, is an immediate benefit for operating system vendors that adopt name-based sockets.

Name-based sockets furthermore minimize deployment costs: alternative techniques to separate naming and addressing provide applications with "surrogate IP addresses" that dynamically map onto regular IP addresses.  A surrogate IP address is indistinguishable from a regular IP address for applications, but does not have the topological significance of a regular IP address.  Mobile IP and the Host Identity Protocol are examples of such separation techniques.  Mobile IP uses "home IP addresses" as surrogate IP addresses with reduced topological significance.  The Host Identity Protocol uses "host identifiers" as surrogate IP addresses without topological significance.  A disadvantage of surrogate IP addresses is the cost they incur in terms of extra administrative overhead and, for some techniques, extra infrastructure.  Since surrogate IP addresses must be resolvable to the corresponding regular IP addresses, they must be provisioned in the DNS or similar infrastructure.  Mobile IP uses a new infrastructure of home agents for this purpose, while the Host Identity Protocol populates DNS servers with host identities.  Name-based sockets avoid this cost because they function without surrogate IP addresses, and hence without the provisioning and infrastructure requirements that accompany surrogate addresses.

Certainly, some edge networks will continue to use provider-independent addresses despite name-based sockets, perhaps simply due to inertia.  But name-based sockets will help reduce the number of those networks, and thus have a positive impact on Internet routing scalability.

A more comprehensive description of name-based sockets can be found in [Name Based Sockets].

15.1.1.  References

[Name Based Sockets]

15.2.  Critique

Name-based sockets' contribution to the routing scalability problem is to decrease the reliance on PI addresses, allowing greater use of PA addresses and thus a less fragmented routing table.  They provide end hosts with an API that makes applications address-agnostic.  The name abstraction allows hosts to use any type of locator, independent of format or provider.
This increases the motivation for, and the usability of, PA addresses.  Some applications, in particular bootstrapping applications, may still require hard-coded IP addresses, and as such will still motivate the use of PI addresses.

15.2.1.  Deployment

The main incentives and drivers are geared towards the transition of applications to name-based sockets.  Adoption by applications will be driven by benefits in terms of reduced application development cost.  Legacy applications are expected to migrate to the new API at a slower pace; because name-based sockets are backwards compatible, this can happen in a per-host fashion.  Also, not all applications can be ported to an FQDN-dependent infrastructure, e.g., DNS functions.  This hurdle is manageable, and may not be a definite obstacle for the transition of a whole domain, but it needs to be taken into account when striving for mobility/multihoming of an entire site.  The transition of functions on individual hosts may be trivial, either through upgrades/changes to the OS or as linked libraries.  This can still happen incrementally and independently, as compatibility is not affected by the use of name-based sockets.

15.2.2.  Edge-networks

Name-based sockets rely on the transition of individual applications and are backwards compatible, so they do not require bilateral upgrades.  This allows each host to migrate its applications independently.  Name-based sockets may make an individual client agnostic to the networking medium, be it PA/PI IP addresses or, in the future, an entirely different networking medium.  However, an entire edge network, with internal and external services, will not be able to make a complete transition in the near future.  Hence, even if a substantial fraction of the hosts in an edge network use name-based sockets, PI addresses may still be required by the edge network.  In short, new services may be implemented using name-based sockets, while old services may be ported.  Name-based sockets provide increased motivation to move to PA addresses, as actual provider independence relies less and less on PI addressing.

15.3.  Rebuttal

No rebuttal was submitted for this proposal.

16.  Routing and Addressing in Networks with Global Enterprise Recursion (IRON-RANGER)

16.1.  Summary

RANGER is a locator-identifier separation approach that uses IP-in-IP encapsulation to connect edge networks across transit networks such as the global Internet.  End systems use endpoint interface identifier (EID) addresses that may be routable within edge networks but do not appear in transit network routing tables.  EID to Routing Locator (RLOC) address bindings are instead maintained in mapping tables and also cached in default router FIBs (i.e., very much the same as for the global DNS and its associated caching resolvers).  RANGER enterprise networks are organized in a recursive hierarchy, with default mappers connecting lower layers to the next higher layer in the hierarchy.  Default mappers forward initial packets and push mapping information to lower-tier routers and end systems through secure redirection.

RANGER is an architectural framework derived from the Intra-Site Automatic Tunnel Addressing Protocol (ISATAP).
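The summary above describes default mappers that forward an initial packet and then push the EID-to-RLOC binding down through secure redirection.  The following Python sketch illustrates only that caching behavior in the abstract; the class and method names, the prefixes, and the omission of the security machinery are simplifications assumed for illustration and do not correspond to the RANGER specifications.

   import ipaddress

   class DefaultMapper:
       """Holds complete EID-prefix to RLOC bindings for its layer."""
       def __init__(self, mappings):
           self.mappings = {ipaddress.ip_network(p): rloc
                            for p, rloc in mappings.items()}

       def redirect(self, eid):
           """Return the (prefix, RLOC) binding covering this EID, if any."""
           for prefix, rloc in self.mappings.items():
               if eid in prefix:
                   return prefix, rloc
           return None

   class LowerTierRouter:
       def __init__(self, mapper):
           self.mapper = mapper
           self.fib_cache = {}   # prefix -> RLOC, learned via redirects

       def forward(self, dst):
           dst = ipaddress.ip_address(dst)
           for prefix, rloc in self.fib_cache.items():
               if dst in prefix:
                   return f"encapsulate to {rloc} (cached)"
           # Cache miss: the initial packet goes via the default mapper,
           # which answers with a redirect that we cache in the FIB.
           binding = self.mapper.redirect(dst)
           if binding is None:
               return "no mapping; drop"
           prefix, rloc = binding
           self.fib_cache[prefix] = rloc
           return f"initial packet via default mapper; cached {prefix} -> {rloc}"

   if __name__ == "__main__":
       mapper = DefaultMapper({"2001:db8:100::/48": "192.0.2.1"})
       router = LowerTierRouter(mapper)
       print(router.forward("2001:db8:100::7"))   # miss, triggers a redirect
       print(router.forward("2001:db8:100::8"))   # hit in the FIB cache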
16.1.1.  Gains

o  provides a scalable routing system alternative in instances where dynamic routing protocols are impractical

o  naturally supports a recursively-nested "network-of-networks" (or "enterprise-within-enterprise") hierarchy

o  uses asymmetric security mechanisms (i.e., secure neighbor discovery) to secure router discovery and the redirection mechanism

o  can quickly detect path failures and pick alternate routes

o  naturally supports provider-independent addressing

o  supports site multihoming and traffic engineering

o  supports ingress filtering for multihomed sites

o  is mobility-agile through explicit cache invalidation (much more reactive than DynDNS)

o  supports neighbor discovery and neighbor unreachability detection over tunnels

o  requires no changes to end systems

o  requires no changes to most routers

o  supports IPv6 transition

o  is compatible with true identity/locator split mechanisms such as HIP (i.e., packets contain a HIP Host Identity Tag (HIT) as an end system identifier, an IPv6 address as the endpoint Interface iDentifier (EID) in the inner IP header, and an IPv4 address as the Routing LOCator (RLOC) in the outer IP header)

o  prototype code is available

16.1.2.  Costs

o  new code needed in enterprise border routers

o  locator/path liveness detection using RFC 4861 neighbor unreachability detection (i.e., extra control messages, but data-driven) [RFC4861]

16.1.3.  References

[I-D.templin-iron] [I-D.russert-rangers] [I-D.templin-intarea-vet] [I-D.templin-intarea-seal] [RFC5201] [RFC5214] [RFC5720]

16.2.  Critique

The RANGER architectural framework is intended to be applicable to a Core-Edge Separation (CES) architecture for scalable routing, using either IPv4 or IPv6 - or using both in an integrated system that may carry one protocol over the other.

However, despite the Internet-Draft being readied for publication as an experimental RFC, the framework falls well short of the level of detail required to envisage how it could be used to implement a practical scalable routing solution.  For instance, the draft contains no specification for a mapping protocol, nor for how the mapping lookup system would work on a global scale.

There is no provision for RANGER's ITR-like routers to probe the reachability of end-user networks via multiple ETR-like routers, nor for any other approach to multihoming service restoration.

Nor is there any provision for inbound TE or for support of mobile devices that frequently change their point of attachment.

Therefore, in its current form, RANGER cannot be considered a superior scalable routing solution compared to other proposals that are specified in sufficient detail and that appear to be feasible.

RANGER uses its own tunneling and PMTUD management protocol: SEAL.  Adoption of SEAL in its current form would prevent the proper utilization of jumbo frame paths in the DFZ, which will become the norm in the future.  SEAL sends RFC 1191 PTB messages to the sending host only in order to enforce a preset maximum packet length.  To avoid the need for the SEAL layer to fragment packets of this length, this MTU value (for the input of the tunnel) needs to be set significantly below 1500 bytes, assuming the typically ~1500 byte MTU values for paths across the DFZ today.
In order to avoid this excessive fragmentation, this value could only be raised to a ~9k byte value at some time in the future when essentially all paths between ITRs and ETRs are jumbo-frame capable.

16.3.  Rebuttal

The Internet Routing Overlay Network (IRON) [I-D.templin-iron] is a scalable Internet routing architecture that builds on the RANGER recursive enterprise network hierarchy [RFC5720].  IRON bonds together participating RANGER networks using VET [I-D.templin-intarea-vet] and SEAL [I-D.templin-intarea-seal] to enable secure and scalable routing through automatic tunneling within the Internet core.  The IRON-RANGER automatic tunneling abstraction views the entire global Internet DFZ as a virtual NBMA link, similar to ISATAP [RFC5214].

IRON-RANGER is an example of a Core-Edge Separation (CES) system.  Instead of a classical mapping database, however, IRON-RANGER uses a hybrid combination of a proactive dynamic routing protocol for distributing highly aggregated Virtual Prefixes (VPs) and an on-demand, data-driven protocol for distributing more-specific Provider Independent (PI) prefixes derived from the VPs.

The IRON-RANGER hierarchy consists of recursively-nested RANGER enterprise networks joined together by IRON routers that participate in a global BGP instance.  The IRON BGP instance is maintained separately from the current Internet BGP Routing LOCator (RLOC) address space (i.e., the set of all public IPv4 prefixes in the Internet).  Instead, the IRON BGP instance maintains VPs taken from Endpoint Interface iDentifier (EID) address space, e.g., the IPv6 global unicast address space.  To accommodate scaling, only O(10k) to O(100k) VPs are allocated, e.g., using /20 or shorter IPv6 prefixes.

IRON routers lease portions of their VPs as Provider Independent (PI) prefixes for customer equipment (CEs), thereby creating a sustainable business model.  CEs that lease PI prefixes propagate address mapping(s) throughout their attached RANGER networks and up to the VP-owning IRON router(s) through periodic transmission of "bubbles" with authentication and PI prefix information.  Routers in RANGER networks and IRON routers that receive and forward the bubbles securely install the PI prefixes in their FIBs, but do not inject them into the RIB.  IRON routers therefore track only their customer base via the FIB entries and only the Internet-wide VP database in the RIB.

IRON routers propagate more-specific prefixes using secure redirection to update router FIBs.  Prefix redirection is driven by the data plane and does not affect the control plane.  Redirected prefixes are not injected into the RIB, but rather are maintained as FIB soft state that is purged after expiration or route failure.  Neighbor unreachability detection is used to detect failure.

Secure prefix registrations and redirections are accommodated through the mechanisms of SEAL.  Tunnel endpoints using SEAL synchronize sequence numbers, and can therefore discard any packets they receive that are outside of the current sequence number window.  Hence, off-path attacks are defeated.
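As a rough illustration of the window check just described, a receiving tunnel endpoint might apply logic like the Python sketch below.  The window size and sequence-number arithmetic are assumptions made for illustration; SEAL's actual header format and window handling are defined in [I-D.templin-intarea-seal] and are not reproduced here.

   WINDOW = 256          # acceptable distance ahead of the last sequence seen
   SEQ_SPACE = 2 ** 32   # a 32-bit sequence number space, assumed here

   def in_window(last_accepted, received):
       """Accept only packets whose sequence number falls within the
       current window; anything else is presumed off-path and dropped."""
       distance = (received - last_accepted) % SEQ_SPACE
       return 0 < distance <= WINDOW

   if __name__ == "__main__":
       print(in_window(1000, 1001))    # True: the next expected packet
       print(in_window(1000, 500000))  # False: outside the window, dropped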
These synchronized tunnel endpoints can therefore exchange prefixes with signed certificates that prove prefix ownership; because off-path attacks are prevented, DoS vectors that target the overhead of cryptographic calculations are eliminated.

CEs can move from old RANGER networks and re-inject their PI prefixes into new RANGER networks.  This would be accommodated by IRON-RANGER as a site multihoming event, while host mobility and true locator-ID separation are accommodated via HIP [RFC5201].

17.  Recommendation

As can be seen from the extensive list of proposals above, the group explored a number of possible solutions.  Unfortunately, the group did not reach rough consensus on a single best approach.  Accordingly, the recommendation has been left to the co-chairs.  The remainder of this section describes the rationale and decision of the co-chairs.

As a reminder, the goal of the research group was to develop a recommendation for an approach to a routing and addressing architecture for the Internet.  The primary goal of the architecture is to provide improved scalability for the routing subsystem.  Specifically, this implies that we should be able to continue to grow the routing subsystem to meet the needs of the Internet without requiring drastic and continuous increases in the amount of state or the processing requirements for routers.

17.1.  Motivation

There is a general concern that the cost and structure of the routing and addressing architecture as we know it today may become prohibitively expensive with continued growth, with repercussions for the health of the Internet.  As such, there is an urgent need to examine and evaluate potential scalability enhancements.

For the long-term future of the Internet, it has become apparent that IPv6 is going to play a significant role.  It has taken more than a decade, but IPv6 is starting to see a non-trivial amount of deployment, in part due to the depletion of IPv4 addresses.  It therefore seems apparent that the new architecture must be applicable to IPv6.  It may or may not be applicable to IPv4, but not addressing the IPv6 portion of the network would simply lead to recreating the routing scalability problem in the IPv6 domain, because the two share a common routing architecture.

Whatever change we make, we should expect it to be very long-lived.  The routing architecture of the entire Internet is a loosely coordinated, complex, expensive subsystem, and permanent, pervasive changes to it will require difficult choices during deployment and integration.  These cannot be undertaken lightly.

By extension, if we are going to the trouble, pain, and expense of making major architectural changes, it follows that we want to make the best changes possible.  We should regard any such changes as permanent, and we should therefore aim for long-term solutions that place the network in the best possible position for ongoing growth.  These changes should be cleanly integrated, first-class citizens within the architecture.
That is to say, any new elements that are integrated into the architecture should be fundamental primitives, on par with the existing legacy primitives in the architecture, that interact naturally and logically with the other elements of the architecture.

Over the history of the Internet, we have been very good at creating temporary, ad hoc changes, both to the routing architecture and to other aspects of the network layer.  However, many of these band-aid solutions have come with a significant overhead in terms of long-term maintenance and architectural complexity.  This is to be avoided, and short-term improvements should eventually be replaced by long-term, permanent solutions.

In the particular instance of the routing and addressing architecture today, we feel that the situation requires that we pursue both short-term improvements and long-term solutions.  These are not incompatible, because we truly intend for the short-term improvements to be completely localized and temporary.  The short-term improvements are necessary to give us the time to develop, test, and deploy the long-term solution.  As the long-term solution is rolled out and gains traction, the short-term improvements should provide less benefit and can subsequently be withdrawn.

17.2.  Recommendation to the IETF

The group explored a number of proposed solutions but did not reach consensus on a single best approach.  Therefore, in fulfillment of the Routing Research Group's charter, the co-chairs recommend that the IETF pursue work in the following areas:

Evolution [I-D.zhang-evolution]

Identifier/Locator Network Protocol (ILNP) [ILNP Site]

Renumbering [RFC5887]

17.3.  Rationale

We recommend Evolution because it is a short-term improvement.  It can be applied on a per-domain basis, under local administration, and has immediate effect.  While there is some complexity involved, we feel that this option is constructive for service providers who find the additional complexity to be less painful than upgrading hardware.  This improvement can be deployed by domains that feel it necessary, for as long as they feel it is necessary.  If this deployment lasts longer than expected, the implications of that decision are wholly local to the domain.

We recommend ILNP because we find it to be a clean solution for the architecture.  It separates location from identity in a clear, straightforward way that is consistent with the remainder of the Internet architecture and makes both first-class citizens.  Unlike the many map-and-encap proposals, there are no complications due to tunneling, indirection, or semantics that shift over the lifetime of a packet's delivery.

We recommend further work on automating renumbering because, even with ILNP, the ability of a domain to change its locators at minimal cost is fundamentally necessary.  No routing architecture will be able to scale without some form of abstraction, and domains that change their point of attachment must fundamentally be prepared to change their locators in line with this abstraction.  We recognize that [RFC5887] is not a solution so much as a problem statement, and we are simply recommending that the IETF create effective and convenient mechanisms for site renumbering.

18.
Acknowledgments 3053 This document presents a small portion of the overall work product of 3054 the Routing Research Group, who have developed all of these 3055 architectural approaches and many specific proposals within this 3056 solution space. 3058 19. IANA Considerations 3060 This memo includes no requests to IANA. 3062 20. Security Considerations 3064 Space precludes a full treatment of security considerations for all 3065 proposals summarized herein. [RFC3552] However, it was a requirement 3066 of the research group to provide security that is at least as strong 3067 as the existing Internet routing and addressing architecture. Each 3068 technical proposal has slightly different security considerations, 3069 the details of which are in many of the references cited. 3071 21. Informative References 3073 [CRM] Flinck, H., "Compact routing in locator identifier mapping 3074 system", . 3077 [DNSnBIND] 3078 Liu, C. and P. Albitz, "DNS & BIND", 2006. 3080 5th Edition, O'Reilly & Associates, Sebastopol, CA, USA. 3081 ISBN 0-596-10057-4 3083 [EEMDP Considerations] 3084 Sriram, K., Kim, Y., and D. Montgomery, "Enhanced 3085 Efficiency of Mapping Distribution Protocols in Scalable 3086 Routing and Addressing Architectures", Proceedings of the 3087 ICCCN, August 2010, 3088 . 3090 Zurich, Switzerland 3092 [EEMDP Presentation] 3093 Sriram, K., Gleichmann, P., Kim, Y., and D. Montgomery, 3094 "Enhanced Efficiency of Mapping Distribution Protocols in 3095 Scalable Routing and Addressing Architectures", 3096 . 3098 Presented at the LISP WG meeting, IETF-78, July 2010. 3099 Originally presented at the RRG meeting at IETF-72. 3101 [Evolution Grow Presentation] 3102 Francis, P., Xu, X., Ballani, H., Jen, D., Raszuk, R., and 3103 L. Zhang, "Virtual Aggregation (VA)", 3104 . 3106 [FIBAggregatability] 3107 Zhang, B., Wang, L., Zhao, X., Liu, Y., and L. Zhang, "An 3108 Evaluation Study of Router FIB Aggregatability", 3109 . 3111 [GLI] Menth, M., Hartmann, M., and D. Klein, "Global Locator, 3112 Local Locator, and Identifier Split (GLI-Split)", 3113 . 3115 [I-D.adan-idr-tidr] 3116 Adan, J., "Tunneled Inter-domain Routing (TIDR)", 3117 draft-adan-idr-tidr-01 (work in progress), December 2006. 3119 [I-D.carpenter-behave-referral-object] 3120 Carpenter, B., Boucadair, M., Halpern, J., Jiang, S., and 3121 K. Moore, "A Generic Referral Object for Internet 3122 Entities", draft-carpenter-behave-referral-object-01 (work 3123 in progress), October 2009. 3125 [I-D.farinacci-lisp-lig] 3126 Farinacci, D. and D. Meyer, "LISP Internet Groper (LIG)", 3127 draft-farinacci-lisp-lig-02 (work in progress), 3128 February 2010. 3130 [I-D.ford-mptcp-architecture] 3131 Ford, A., Raiciu, C., Barre, S., Iyengar, J., and B. Ford, 3132 "Architectural Guidelines for Multipath TCP Development", 3133 draft-ford-mptcp-architecture-01 (work in progress), 3134 February 2010. 3136 [I-D.frejborg-hipv4] 3137 Frejborg, P., "Hierarchical IPv4 Framework", 3138 draft-frejborg-hipv4-10 (work in progress), October 2010. 3140 [I-D.ietf-lisp] 3141 Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, 3142 "Locator/ID Separation Protocol (LISP)", 3143 draft-ietf-lisp-09 (work in progress), October 2010. 3145 [I-D.ietf-lisp-alt] 3146 Fuller, V., Farinacci, D., Meyer, D., and D. Lewis, "LISP 3147 Alternative Topology (LISP+ALT)", draft-ietf-lisp-alt-05 3148 (work in progress), October 2010. 3150 [I-D.ietf-lisp-interworking] 3151 Lewis, D., Meyer, D., Farinacci, D., and V. 
Fuller, 3152 "Interworking LISP with IPv4 and IPv6", 3153 draft-ietf-lisp-interworking-01 (work in progress), 3154 August 2010. 3156 [I-D.ietf-lisp-ms] 3157 Fuller, V. and D. Farinacci, "LISP Map Server", 3158 draft-ietf-lisp-ms-06 (work in progress), October 2010. 3160 [I-D.irtf-rrg-design-goals] 3161 Li, T., "Design Goals for Scalable Internet Routing", 3162 draft-irtf-rrg-design-goals-04 (work in progress), 3163 November 2010. 3165 [I-D.meyer-lisp-mn] 3166 Meyer, D., Lewis, D., and D. Farinacci, "LISP Mobile 3167 Node", draft-meyer-lisp-mn-04 (work in progress), 3168 October 2010. 3170 [I-D.meyer-loc-id-implications] 3171 Meyer, D. and D. Lewis, "Architectural Implications of 3172 Locator/ID Separation", draft-meyer-loc-id-implications-01 3173 (work in progress), January 2009. 3175 [I-D.narten-radir-problem-statement] 3176 Narten, T., "On the Scalability of Internet Routing", 3177 draft-narten-radir-problem-statement-05 (work in 3178 progress), February 2010. 3180 [I-D.rja-ilnp-nonce] 3181 Atkinson, R., "ILNP Nonce Destination Option", 3182 draft-rja-ilnp-nonce-06 (work in progress), August 2010. 3184 [I-D.russert-rangers] 3185 Russert, S., Fleischman, E., and F. Templin, "RANGER 3186 Scenarios", draft-russert-rangers-05 (work in progress), 3187 July 2010. 3189 [I-D.templin-intarea-seal] 3190 Templin, F., "The Subnetwork Encapsulation and Adaptation 3191 Layer (SEAL)", draft-templin-intarea-seal-23 (work in 3192 progress), October 2010. 3194 [I-D.templin-intarea-vet] 3195 Templin, F., "Virtual Enterprise Traversal (VET)", 3196 draft-templin-intarea-vet-16 (work in progress), 3197 July 2010. 3199 [I-D.templin-iron] 3200 Templin, F., "The Internet Routing Overlay Network 3201 (IRON)", draft-templin-iron-13 (work in progress), 3202 October 2010. 3204 [I-D.whittle-ivip-drtm] 3205 Whittle, R., "DRTM - Distributed Real Time Mapping for 3206 Ivip and LISP", draft-whittle-ivip-drtm-01 (work in 3207 progress), March 2010. 3209 [I-D.whittle-ivip-glossary] 3210 Whittle, R., "Glossary of some Ivip and scalable routing 3211 terms", draft-whittle-ivip-glossary-01 (work in progress), 3212 March 2010. 3214 [I-D.whittle-ivip4-etr-addr-forw] 3215 Whittle, R., "Ivip4 ETR Address Forwarding", 3216 draft-whittle-ivip4-etr-addr-forw-02 (work in progress), 3217 January 2010. 3219 [I-D.xu-rangi] 3220 Xu, X., "Routing Architecture for the Next Generation 3221 Internet (RANGI)", draft-xu-rangi-04 (work in progress), 3222 August 2010. 3224 [I-D.xu-rangi-proxy] 3225 Xu, X., "Transition Mechanisms for Routing Architecture 3226 for the Next Generation Internet (RANGI)", 3227 draft-xu-rangi-proxy-01 (work in progress), July 2009. 3229 [I-D.zhang-evolution] 3230 Zhang, B. and L. Zhang, "Evolution Towards Global Routing 3231 Scalability", draft-zhang-evolution-02 (work in progress), 3232 October 2009. 3234 [ILNP Site] 3235 Atkinson, R., Bhatti, S., Hailes, S., Rehunathan, D., and 3236 M. Lad, "ILNP - Identifier/Locator Network Protocol", 3237 . 3239 [Ivip Constraints] 3240 Whittle, R., "List of constraints on a successful scalable 3241 routing solution which result from the need for widespread 3242 voluntary adoption", 3243 . 3245 [Ivip Mobility] 3246 Whittle, R., "TTR Mobility Extensions for Core-Edge 3247 Separation Solutions to the Internet's Routing Scaling 3248 Problem", 3249 . 3251 [Ivip PMTUD] 3252 Whittle, R., "IPTM - Ivip's approach to solving the 3253 problems with encapsulation overhead, MTU, fragmentation 3254 and Path MTU Discovery", 3255 . 
3257 [Ivip6] Whittle, R., "Ivip6 - instead of map-and-encap, use the 20 3258 bit Flow Label as a Forwarding Label", 3259 . 3261 [LISP-TREE] 3262 Jakab, L., Cabellos-Aparicio, A., Coras, F., Saucez, D., 3263 and O. Bonaventure, "LISP-TREE: A DNS Hierarchy to Support 3264 the LISP Mapping System", . 3267 [LMS] Letong, S., Xia, Y., ZhiLiang, W., and W. Jianping, "A 3268 Layered Mapping System For Scalable Routing", . 3273 [LMS Summary] 3274 Sun, C., "A Layered Mapping System (Summary)", . 3278 [MILCOM1] Atkinson, R. and S. Bhatti, "Site-Controlled Secure Multi- 3279 homing and Traffic Engineering for IP", IEEE Military 3280 Communications Conference (MILCOM) 28, Boston, MA, USA, 3281 October 2009. 3283 [MILCOM2] Atkinson, R., Bhatti, S., and S. Hailes, "Harmonised 3284 Resilience, Multi-homing and Mobility Capability for IP", 3285 IEEE Military Communications Conference (MILCOM) 27, San 3286 Diego, CA, USA, November 2008. 3288 [MobiArch1] 3289 Atkinson, R., Bhatti, S., and S. Hailes, "Mobility as an 3290 Integrated Service through the Use of Naming", ACM 3291 International Workshop on Mobility in the Evolving 3292 Internet (MobiArch) 2, Kyoto, Japan, August 2007. 3294 [MobiArch2] 3295 Atkinson, R., Bhatti, S., and S. Hailes, "Mobility Through 3296 Naming: Impact on DNS", ACM International Workshop on 3297 Mobility in the Evolving Internet (MobiArch) 3, Seattle, 3298 USA, August 2008. 3300 [Name Based Sockets] 3301 Vogt, C., "Simplifying Internet Applications Development 3302 With A Name-Based Sockets Interface", . 3306 [RANGI] Xu, X., "Routing Architecture for the Next-Generation 3307 Internet (RANGI)", 3308 . 3310 [RFC3007] Wellington, B., "Secure Domain Name System (DNS) Dynamic 3311 Update", RFC 3007, November 2000. 3313 [RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC 3314 Text on Security Considerations", BCP 72, RFC 3552, 3315 July 2003. 3317 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 3318 Rose, "DNS Security Introduction and Requirements", 3319 RFC 4033, March 2005. 3321 [RFC4034] Arends, R., Austein, R., Larson, M., Massey, D., and S. 3322 Rose, "Resource Records for the DNS Security Extensions", 3323 RFC 4034, March 2005. 3325 [RFC4035] Arends, R., Austein, R., Larson, M., Massey, D., and S. 3326 Rose, "Protocol Modifications for the DNS Security 3327 Extensions", RFC 4035, March 2005. 3329 [RFC4423] Moskowitz, R. and P. Nikander, "Host Identity Protocol 3330 (HIP) Architecture", RFC 4423, May 2006. 3332 [RFC4861] Narten, T., Nordmark, E., Simpson, W., and H. Soliman, 3333 "Neighbor Discovery for IP version 6 (IPv6)", RFC 4861, 3334 September 2007. 3336 [RFC4960] Stewart, R., "Stream Control Transmission Protocol", 3337 RFC 4960, September 2007. 3339 [RFC5201] Moskowitz, R., Nikander, P., Jokela, P., and T. Henderson, 3340 "Host Identity Protocol", RFC 5201, April 2008. 3342 [RFC5214] Templin, F., Gleeson, T., and D. Thaler, "Intra-Site 3343 Automatic Tunnel Addressing Protocol (ISATAP)", RFC 5214, 3344 March 2008. 3346 [RFC5534] Arkko, J. and I. van Beijnum, "Failure Detection and 3347 Locator Pair Exploration Protocol for IPv6 Multihoming", 3348 RFC 5534, June 2009. 3350 [RFC5720] Templin, F., "Routing and Addressing in Networks with 3351 Global Enterprise Recursion (RANGER)", RFC 5720, 3352 February 2010. 3354 [RFC5887] Carpenter, B., Atkinson, R., and H. Flinck, "Renumbering 3355 Still Needs Work", RFC 5887, May 2010. 3357 [RFC5902] Thaler, D., Zhang, L., and G. 
Lebovitz, "IAB Thoughts on 3358 IPv6 Network Address Translation", RFC 5902, July 2010. 3360 [TIDR AS forwarding] 3361 Adan, J., "yetAnotherProposal: AS-number forwarding", 3362 . 3364 [TIDR and LISP] 3365 Adan, J., "LISP etc architecture", 3366 . 3368 [TIDR identifiers] 3369 Adan, J., "TIDR using the IDENTIFIERS attribute", . 3372 [Valiant] Zhang-Shen, R. and N. McKeown, "Designing a Predictable 3373 Internet Backbone Network", . 3376 Author's Address 3378 Tony Li (editor) 3379 Cisco Systems 3380 170 West Tasman Dr. 3381 San Jose, CA 95134 3382 USA 3384 Phone: +1 408 853 9317 3385 Email: tony.li@tony.li