Network Working Group                                          P. Karimi
Internet-Draft                                              S. Mukherjee
Intended status: Informational                        Rutgers University
Expires: September 13, 2017                               March 12, 2017

                     Global Name Resolution Service
                       draft-karimi-ideas-gnrs-00

Abstract

   This document describes the requirements for a new mapping system,
   explains why the DNS was not chosen to fill this role, and
   introduces a few proposed mapping system designs.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 13, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1.  Introduction
   2.  Specification of Requirements
   3.  Functional Requirements for a Mapping System
   4.  The Domain Name System
   5.  MobilityFirst Name Resolution Service
     5.1.  Separation of names, addresses and flat IDs
     5.2.  Different Implementations of the GNRS
       5.2.1.  Auspice
       5.2.2.  Direct Map
       5.2.3.  G Map
     5.3.  GNRS summary
   6.  Security Considerations
   7.  Acknowledgements
   8.  IANA Considerations
   9.  Normative References
   Authors' Addresses

1.  Introduction

   The current Internet architecture, which was designed with fixed
   hosts in mind, uses IP addresses to identify both users and their
   locations.  This overloading of the namespace, or location-identity
   conflation [RFC1498], makes deploying basic mobility services such
   as session continuity and multi-homing challenging.  In order for
   future networks to natively support these services, location-
   independent communication using fixed names for the various endpoint
   principals (hosts, content, or services) is a crucial underlying
   requirement.

   Separation of names/identities from addressing/location has been
   proposed in multiple architectures to facilitate location-
   independent communication [MF], [RFC6830], [RFC4423], [XIA].  There
   is therefore a need for an efficient resolution system that can
   provide this identity-to-location translation for all network-
   attached objects.  In the current Internet, a similar resolution of
   identities (domain names) to network locations (IP addresses) is
   provided by the Domain Name System (DNS).  Although the DNS has
   evolved significantly from its text-file origins to today's
   sophisticated, hierarchically distributed resolvers, it still does
   not meet the requirements of next-generation networks, i.e., a
   distributed mapping infrastructure that can scale to orders of
   magnitude higher update rates with orders of magnitude lower user-
   perceived latency.

2.  Specification of Requirements

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

3.  Functional Requirements for a Mapping System

   The following lists the requirements of a mapping system:

   o  Low Propagation and Response Latency: A typical user-perceived
      network latency in the ballpark of 100 milliseconds is considered
      acceptable [VMWARE-WP].  Depending on the application and QoS
      metrics, this requirement can be stricter.  For example, future
      5G applications such as vehicle-to-vehicle safety applications or
      real-time mobile control may require latencies of less than one
      millisecond [HUAWEI-WP].  This implies that future mapping
      systems should be able to return up-to-date responses within
      milliseconds.  Note that this latency includes both the time to
      update a mapping and the time to return a correct response to an
      entity querying for the mapping.
   o  Scalability: There are approximately 4.9 billion mobile users
      globally, and according to a recent study, over 20% of these
      users change network addresses more than 10 times a day
      [LOC-INDEP-ARCH].  Cisco has predicted that by 2021 the number of
      mobile users will grow to 5.5 billion, whereas the total number
      of mobile connected devices could be as high as 12 billion
      [CISCO-VNI].  Therefore, the proposed mapping system should scale
      to update rates orders of magnitude higher than those of existing
      systems.

   o  Distribution and performance: The mapping system should be
      logically centralized but physically distributed.  The
      distribution of the mapping system should be optimized to find
      the sweet spot that minimizes resource cost, lookup latency, and
      update latency.  These parameters are tied to one another, and
      the performance of the mapping system is governed by them.  An
      optimized geo-distribution of the mapping system can be achieved
      through reactive approaches, such as distributing mapping entries
      based on their popularity and demand, or through proactive
      approaches that use predicted mobility patterns to determine
      future demand for entries.

   o  Extensibility and Flexibility: The mapping service should be
      extensible enough to allow new applications to be deployed in the
      near future.  The service should go beyond a simple ID-to-
      location repository toward a distributed network service that
      simplifies the deployment of a richer set of ID-based services.
      As an example, the mapping system should capture relationships
      such as grouping between names by providing name-to-name mapping
      and recursive resolution of names.  The syntax and semantics
      should also be flexible enough to allow a multitude of name-based
      architectures, such as LISP [RFC6830], MF [MF], ILA [ILA], and
      HIP [RFC4423], to easily utilize the mapping service as a common
      control plane accessible through well-defined standard API calls.
      Structure-free and extensible fields in the mapping system
      provide the desired flexibility.  The structure of the mapping
      system should not be bound to the structure of names.  This
      alleviates restrictions on the structure of names, which
      eventually leads to a more generalized mapping system usable by
      any prevailing name-based Internet architecture.  The
      extensibility of the fields in the mapping system also allows
      certain service-specific information to be stored in the mapping
      system, for example mobility-related information for a user.

   o  Security and Reliability: Being a database mapping identities of
      network-attached objects to their network locations (which may be
      correlated to physical locations), the mapping system should be
      resilient to attacks and robust to failures.  Local or private
      instantiations and confidential mappings should also be
      provisioned for.  However, there should not be a single root of
      trust.  Access control is an important security aspect of a name
      resolution service.  Some of the attributes bound to a name might
      contain sensitive information, in which case the principal in
      charge of that instance of the mapping system can maintain access
      control for each row-column pair and type of operation (read or
      write).  The access control policy can be specified in the form
      of a blacklist or whitelist of names that are allowed to perform
      the corresponding operation on that attribute; a minimal sketch
      of such a record follows this list.
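   To make the last two requirements concrete, the following Python
   sketch (a hypothetical illustration, not part of any cited design)
   models a structure-free mapping record with extensible attributes
   and a per-attribute whitelist that is checked on every read or
   write:

      # Hypothetical sketch: a structure-free mapping record with
      # per-attribute, per-operation access control.
      class MappingRecord:
          def __init__(self, name):
              self.name = name        # flat, structure-free identifier
              self.attributes = {}    # attribute -> value (extensible)
              self.acl = {}           # (attribute, op) -> set of names

          def allow(self, attribute, op, principal):
              # Whitelist 'principal' for 'op' ("read" or "write").
              self.acl.setdefault((attribute, op), set()).add(principal)

          def write(self, principal, attribute, value):
              if principal not in self.acl.get((attribute, "write"),
                                               set()):
                  raise PermissionError("write denied")
              self.attributes[attribute] = value

          def read(self, principal, attribute):
              if principal not in self.acl.get((attribute, "read"),
                                               set()):
                  raise PermissionError("read denied")
              return self.attributes[attribute]

      rec = MappingRecord("GUID_A")
      rec.allow("location", "write", "GUID_A")  # owner updates itself
      rec.allow("location", "read", "GUID_B")   # whitelist one peer
      rec.write("GUID_A", "location", ["NA_1", "NA_2"])
      rec.allow("members", "write", "GUID_A")
      rec.write("GUID_A", "members", ["GUID_C"])  # name-to-name mapping

   Because the attribute space is an open key/value set, the same
   record can hold network addresses, other names, or service-specific
   state without changing the schema.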
   This list of functional requirements is not comprehensive.  Please
   refer to [IDEAS-PS] for a detailed discussion of the requirements
   for next-generation mapping and resolution services.  The goal of
   this document is to highlight key functional requirements that are
   not well handled by existing mapping systems such as the DNS and the
   proposed LISP DDT.

4.  The Domain Name System

   The DNS is a distributed mapping service ubiquitously used for
   translating human-readable names/URLs to IP addresses.  The DNS
   database is stored in structured zone files that hierarchically
   divide the name space into zones and distribute it over a collection
   of DNS servers.  The top-down hierarchy of DNS servers is as
   follows: root servers (13 of them), Top-Level Domain (TLD) servers
   (covering generic, country-code, and professionally managed
   domains), and authoritative DNS servers (providing public records
   for domain names).

   The resolution of domain names follows the same hierarchy.  Each
   network host is affiliated with a local DNS server (via DHCP or
   configuration) that is configured with an initial cache of publicly
   known addresses for the root name servers (the top of the
   hierarchy).  This functionality can also be placed in the
   application, which then queries the DNS directly.  The root servers
   usually do not conduct the resolution directly and instead refer the
   resolver to TLD and authoritative servers iteratively.  This
   iterative back-and-forth messaging results in large DNS query
   latency and a massive amount of unnecessary traffic.  In order to
   reduce DNS traffic overhead, decrease the latency for the end-user
   application, and improve efficiency, caching of DNS query results is
   performed by local DNS servers or browsers.

   The DNS suffers from high propagation and response latency.  DNS
   name resolution is initiated by the client process calling the
   resolver, which returns a cached record or, in case of a cache miss,
   requests the record for the given domain name from the DNS.  The
   request is sent to a DNS root server, and referrals are then
   forwarded iteratively between the local DNS server and the chain of
   TLD and authoritative DNS servers along the DNS hierarchy.  In
   summary, the lookup latency for a DNS query consists of a) the
   latency from the client to the resolver and b) the latency from the
   resolver through the DNS hierarchy, if there is no cached entry for
   that name.

   a) Client-to-resolver latency: [DNS-MEASURE1] provides thorough
   latency measurements from global vantage points to the 9 most
   commonly used public DNS providers.  It shows that the average
   latency for cached queries (no cache misses were counted) varies
   from 38 to 159 milliseconds depending on the provider.  The latency
   and its variation are governed by the number and location of data
   centers, anycast routing latency, additional caching [GOOGLE-EDGE],
   congestion and load on servers, etc.

   b) Iterative lookup latency along the DNS hierarchy: This latency
   consists of the back-and-forth latency between the resolvers and the
   root servers, TLDs, and authoritative name servers, specifically
   when there is a cache miss at the resolver.  It can be exacerbated
   by under-provisioning, long queues at the DNS resolvers, and
   malicious traffic [GOOGLE-DNS].  In [DNS-MEASURE2], the global
   average latency to the fastest root server within each country is
   reported as 70 ms.  To the best of our knowledge, there has been no
   recent comprehensive measurement of latencies to access TLDs and
   authoritative name servers.  This is understandable, as the geo-
   distribution and number of TLDs and authoritative name servers are
   very diverse.  However, considering the static placement of
   authoritative name servers in most cases, unless the domain name is
   served by services like Google's Cloud DNS [GOOGLE-CLOUDDNS], the
   latency from the resolvers to the authoritative name servers will
   not respond to changes in popularity of or demand for the domain
   names they are responsible for, and can hence be expected to be high
   in some cases.  In [GOOGLE-DNS], the actual end-to-end resolution
   time is estimated to be around 300-400 ms, with high variance and a
   long tail.

   There have been many proposals focused on improving the lookup
   latency of DNS resolvers [CODONS], as well as a number of public DNS
   resolvers, such as Google Public DNS [GOOGLE-DNS], that perform more
   optimized lookups (measurements of which were discussed above).
   However, low lookup latency can be achieved only if an optimally
   located DNS resolver has a cache hit for the lookup query.
   Unfortunately, cache misses are fundamentally difficult to avoid due
   to the growth and size of the Internet, low TTLs, and cache
   isolation.  The heavy reliance of DNS performance on TTL-based
   caching is a known issue.  Despite the benefits of TTL-based caching
   for static content, which is what the DNS was designed for, caching
   becomes completely ineffective for highly mobile users or CDNs,
   which require close-to-zero TTL values for mobility or load-
   balancing purposes.  This is discussed in more detail in the next
   section.
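   The client-to-resolver component in a) above is straightforward to
   observe directly.  The sketch below, which assumes the third-party
   dnspython package is installed and uses an illustrative public
   resolver address and query name, times repeated queries so that all
   but the first are likely served from the resolver's cache:

      # Measurement sketch using dnspython (pip install dnspython).
      import time
      import dns.resolver

      resolver = dns.resolver.Resolver(configure=False)
      resolver.nameservers = ["8.8.8.8"]  # illustrative public resolver

      samples = []
      for _ in range(10):
          start = time.perf_counter()
          resolver.resolve("example.com", "A")  # repeated, likely cached
          samples.append((time.perf_counter() - start) * 1000.0)

      print("min/avg/max (ms): %.1f / %.1f / %.1f"
            % (min(samples), sum(samples) / len(samples), max(samples)))

   The first iteration may include a full iterative lookup through the
   hierarchy (component b), while the remaining iterations approximate
   the pure client-to-resolver latency discussed in [DNS-MEASURE1].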
5.  MobilityFirst Name Resolution Service

5.1.  Separation of names, addresses and flat IDs

   MobilityFirst is a clean-slate future Internet architecture whose
   principal design goals are mobility and trustworthiness.  The vision
   of MobilityFirst is that, given the abundance of mobile devices,
   ranging from cellphones to drones, mobility should be treated as a
   first-class service.  Current approaches to providing mobility, such
   as Mobile IP [RFC5944], suffer from routing inefficiency (both in
   terms of latency and overhead) because all data is tunneled through
   an anchor point.  In MobilityFirst these goals are achieved by a
   clean separation of names and addresses.  Every network entity is
   represented by a flat, self-certifying, globally unique identifier
   (GUID), which is location-independent and allows network-layer
   authentication through a bilateral procedure.  In addition to the
   GUID, which is a statically assigned identifier, each point of
   attachment of a network entity to the network is assigned a network
   address (similar to an IP address), which can change dynamically.
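   To make the self-certifying property concrete, the sketch below (an
   illustration under stated assumptions, not MobilityFirst's exact
   scheme) derives a flat GUID as the cryptographic hash of an entity's
   public key, so that any party holding the GUID can check, without a
   trusted third party, that a presented key belongs to the GUID's
   owner:

      # Illustrative derivation of a self-certifying, flat GUID as the
      # SHA-256 hash of a public key.  The random bytes below are only
      # a stand-in for a real DER-encoded public key.
      import hashlib
      import os

      public_key = os.urandom(32)   # stand-in for a real public key

      def derive_guid(pub_key_bytes):
          # The GUID is location-independent and self-certifying: it
          # can be verified against the key itself.
          return hashlib.sha256(pub_key_bytes).hexdigest()

      guid = derive_guid(public_key)

      # Bilateral authentication (simplified): a verifier re-derives
      # the GUID from the presented key and compares the two values.
      assert derive_guid(public_key) == guid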
   Binding the GUID to network addresses is facilitated by a global
   name resolution service that is logically centralized but physically
   distributed, called the GNRS (Global Name Resolution Service).

   In today's Internet, the DNS provides the functionality of going
   from a human-readable name to an IP address that determines where
   the name is located, and it provides this service to end-point
   applications.  In MobilityFirst, however, a human-readable name is
   translated to the corresponding GUID through the ORS (Object
   Resolution Service).  It is noteworthy that this operation happens
   infrequently, as GUIDs are statically assigned.

   Within the network, the tuple [GUID, Network Address (NA)] is a
   routable destination identifier carried in packet headers.  So,
   after obtaining the corresponding GUID, the next step is to discover
   the location of that GUID.  The GNRS is the service that performs
   this name-to-address resolution.  This allows entities to retain
   their long-lasting globally unique identifiers and maintain
   reachability and session continuity more effectively.

5.2.  Different Implementations of the GNRS

   MobilityFirst relies heavily on the name service for advanced
   network-layer functionality.  This reliance necessitates high
   performance from the name service, which depends on resolving
   identifiers to dynamic attributes in a fast, consistent, and cost-
   effective manner at Internet scale.  As mentioned before, a name
   resolution service should support two main functionalities: insert/
   update and lookup.  These operations, which involve any node in the
   network (an end host or a router) querying the massively distributed
   name resolution service, should not induce a large overhead.

   To achieve this goal, a large geo-distributed deployment of name
   servers is necessary.  The challenge is to ensure the placement of a
   consistent replica close to its demand regions, while taking into
   account frequent updates due to mobility.  There have been two major
   proposals for the efficient implementation and deployment of the
   GNRS: 1) a DHT-based name service, in which the hash of a name
   determines where the mapping entry for that name is stored [DMAP];
   in a more advanced version of this implementation, popularity and
   locality are integrated into the storage of mapping entries as well
   [GMAP]; and 2) a demand-aware mapping entry replica placement engine
   that intelligently replicates name records to provide low lookup
   latency, low update cost, and high availability [AUSPICE].

   All of these design and deployment proposals argue that a DNS-like
   design is ill-suited to enabling a fast, consistent, and cost-
   effective query-update mechanism.  The TTL-based caching mechanism
   is one of the strengths of the DNS, with long TTLs reducing client-
   perceived latency and the overhead on the infrastructure.  However,
   this very strength can pose serious challenges in the face of
   frequent node mobility, which requires near-zero TTL values to
   ensure consistent responses.  Low TTL values make caching
   ineffective and exacerbate update propagation times.  Moreover,
   authoritative name servers need heavy provisioning to keep lookup
   latencies low, which increases the maintenance cost of the mapping
   system.
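   Whatever the deployment, the control-plane interface reduces to the
   two operations named above.  A minimal sketch, assuming a single
   in-memory store (all names hypothetical), before the distributed
   designs are described in the following subsections:

      # Minimal sketch of the two GNRS operations over an in-memory
      # store; a real deployment distributes this state as described
      # in Sections 5.2.1-5.2.3.
      class Gnrs:
          def __init__(self):
              self.store = {}       # GUID -> list of network addresses

          def update(self, guid, network_addresses):
              # insert/update: called when an entity attaches or moves
              self.store[guid] = list(network_addresses)

          def lookup(self, guid):
              # lookup: returns routable [GUID, NA] tuples for a GUID
              return [(guid, na) for na in self.store.get(guid, [])]

      gnrs = Gnrs()
      gnrs.update("GUID_A", ["NA_17", "NA_42"])   # multi-homed endpoint
      assert gnrs.lookup("GUID_A")[0] == ("GUID_A", "NA_17")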
5.2.1.  Auspice

   The main design goal of Auspice is to provide an automated
   infrastructure for the placement of geo-distributed name resolvers.
   The two main components of Auspice are the replica controllers,
   which determine the number and geo-location of name resolvers, and
   the name resolvers, which are responsible for maintaining an
   identifier's attributes and replying to user-requested read or write
   operations.  Each name is associated with a fixed number, k, of
   replica controllers and a variable number of active replicas of the
   corresponding resolver.

   Replica controllers: These have a fixed number of locations,
   computed using k well-known consistent hash functions.  Replica
   controllers take into account the popularity and frequency of
   queries, which can change dynamically over short and long
   timescales.  Providing automated resolver placement as an
   infrastructure service removes the manual and redundant effort that
   authoritative name servers would otherwise spend on this task.
   Paxos [PAXOS] is executed for consistency, coordination, and fault
   tolerance between replica controllers.

   Replica controllers are in charge of responding to clients' requests
   for a GUID.  Specifically, if a sender wants to communicate with
   GUID_A, it applies the consistent hash functions to GUID_A.  The
   result of this hashing is the list of current replica controllers
   responsible for GUID_A.  The request is then redirected to the name
   resolvers, where the attributes for GUID_A are stored.

   Name resolvers: The resolvers host active replicas for identifiers.
   The decision to distribute/migrate the active replicas for
   identifiers is made at a pre-determined time period called an epoch.
   In each epoch, the replica controllers receive a summarized load
   report, which can be a spatial vector of request rates for an
   identifier from the different regions seen by the active replica.
   By aggregating these load reports, the replica controller develops a
   concise spatial distribution of requests for identifiers.

   Given this distribution of requests and the capacity constraints of
   the mapping servers, the replica controllers use a mapping algorithm
   to determine the number and location of active replicas for an
   identifier.

   The number of active replicas is proportional to the ratio of the
   read rate to the write rate for an identifier.  Active replicas are
   placed at the locations with the highest number of requests, plus
   some random locations for load balancing.  Clients' requests are
   redirected to the corresponding active replicas taking into account
   name server load and latency.

   One important challenge when updating the name servers is
   maintaining write consistency between the various active replicas of
   a GUID.  Consistency is maintained by Paxos: any write to a GUID
   attribute is forwarded to the current Paxos coordinator node.  After
   sequencing the request and confirming with a majority of replicas,
   the coordinator sends a commit notification to all replicas.
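   The placement logic can be illustrated as follows.  This sketch
   (hypothetical names, heavily simplified relative to [AUSPICE]) sizes
   the active-replica set by the read/write ratio and places replicas
   in the highest-demand regions plus a random one for load balancing:

      # Simplified, illustrative version of Auspice's demand-aware
      # replica placement; not the published algorithm.
      import random

      def place_active_replicas(read_rate_by_region, write_rate,
                                k_min=3, n_random=1):
          total_reads = sum(read_rate_by_region.values())
          # Number of active replicas grows with the read/write ratio.
          n = max(k_min, int(total_reads / max(write_rate, 1.0)))
          # Prefer the regions generating the most lookups ...
          ranked = sorted(read_rate_by_region,
                          key=read_rate_by_region.get, reverse=True)
          chosen = ranked[:n - n_random]
          # ... plus some random locations for load balancing.
          rest = [r for r in read_rate_by_region if r not in chosen]
          chosen += random.sample(rest, min(n_random, len(rest)))
          return chosen

      demand = {"us-east": 120.0, "eu-west": 80.0, "ap-south": 10.0,
                "us-west": 5.0}
      print(place_active_replicas(demand, write_rate=40.0))

   A write-heavy identifier thus keeps few replicas (cheap updates),
   while a read-heavy one spreads out toward its demand regions.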
5.2.2.  Direct Map

   The direct mapping (DMap) design was the first proposed
   implementation.  It is an in-network approach, wherein every
   autonomous system (AS) in the world participates in a hashmap-based
   name resolution service and shares the workload of hosting GUID-to-
   network-address mappings.  Assuming the underlying routing to be
   stable and all networks to be reachable, DMap hashes every GUID to K
   network addresses (which are IP addresses in this case) and then
   stores the mapping at those K addresses.  Every time the mapping
   changes, K update messages are sent, one to the server at each of
   these locations.  Correspondingly, every query for the current
   mapping of a GUID is anycast to the nearest of the K locations.

   DMap is the simplest of the three designs, and it balances the
   workload across all ASes efficiently.  Since uniform hash functions
   decide where a mapping is stored, the basic DMap implementation is
   not suitable for optimizing mapping placement based on service
   requirements.  However, the focus of this work was on providing a
   globally available mapping system with high availability and
   moderate latencies, making it well suited to handling basic mobility
   and services with medium latency requirements.  A detailed Internet-
   scale simulation of DMap shows that with 3 replicas per GUID, the
   98th-percentile latency is around 100 milliseconds [DMAP], which is
   reasonable for most user-mobility-centric applications.
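   A simplified sketch of DMap-style placement follows.  Hashing into
   the live IP address space (and rehashing on unassigned prefixes) is
   elided, so a flat list of illustrative per-AS server addresses
   stands in for the real scheme:

      # Illustrative DMap-style placement: K salted hashes of the GUID
      # pick K storage locations (duplicates possible in this toy
      # version); lookups are anycast to the nearest of the K.
      import hashlib

      SERVERS = ["198.51.100.1", "203.0.113.7", "192.0.2.33",
                 "198.51.100.99", "203.0.113.250"]  # stand-in servers

      def dmap_locations(guid, k=3):
          locs = []
          for i in range(k):
              digest = hashlib.sha1(
                  (guid + ":" + str(i)).encode()).digest()
              locs.append(SERVERS[int.from_bytes(digest[:4], "big")
                                  % len(SERVERS)])
          return locs

      # An update touches all K locations; a query goes to the nearest.
      print(dmap_locations("GUID_A"))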
5.2.3.  G Map

   GMap [GMAP] is an updated version of DMap in which the GUID-to-
   address mapping is distributed considering geo-location and local
   popularity.  For each GUID, similar consistent hash functions are
   used to assign resolution servers.  However, for each mapping, the
   servers are categorized into local, regional, and global sets based
   on geo-locality.  Each mapping is then replicated onto K1 local
   servers, K2 regional servers, and K3 global servers.  Therefore,
   unlike Auspice, GMap does not require per-GUID replica optimization,
   but it still achieves better latency goals than DMap, at the cost of
   a higher storage workload due to the increased number of replicas
   per GUID.  In addition, GMap allows temporary in-network caching of
   the mapping along the route between a resolution server and a
   querying entity, to ensure that future mapping requests for the same
   GUID can be resolved faster.  Internet-scale simulations show that
   GMap achieves latency goals of tens of milliseconds, similar to
   Auspice, but with lower complexity and computation overhead.
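   The tiered replication can be sketched as below.  The tier
   assignment (local/regional/global relative to the GUID's home
   region) is taken as input, and the K1/K2/K3 picks within each tier
   use salted hashes in the spirit of DMap; all names are hypothetical:

      # Illustrative GMap-style tiered placement: K1 local, K2
      # regional, and K3 global replicas, chosen by hash rank per tier.
      import hashlib

      def pick(guid, candidates, k, salt):
          ranked = sorted(candidates, key=lambda s: hashlib.sha1(
              ("%s:%s:%s" % (guid, salt, s)).encode()).digest())
          return ranked[:k]

      def gmap_replicas(guid, local, regional, global_,
                        k1=2, k2=2, k3=1):
          return (pick(guid, local, k1, "L")
                  + pick(guid, regional, k2, "R")
                  + pick(guid, global_, k3, "G"))

      tiers = {"local": ["nj-1", "nj-2", "ny-1"],
               "regional": ["us-east-1", "us-east-2"],
               "global": ["eu-1", "ap-1"]}
      print(gmap_replicas("GUID_A", tiers["local"], tiers["regional"],
                          tiers["global"]))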
5.3.  GNRS summary

   A summary of the different functionalities and features of these
   name resolution implementations is shown in Table 1.

   +-----------+----------------+--------------------+-----------------+
   |           | Auspice        | GMap               | DMap            |
   +-----------+----------------+--------------------+-----------------+
   | Location  | overlaid on    | in-network         | in-network      |
   | relative  | top of the     |                    |                 |
   | to the    | network        |                    |                 |
   | network   |                |                    |                 |
   +-----------+----------------+--------------------+-----------------+
   | Algorithm | demand-aware   | distributed hash   | distributed     |
   | type      | replicated     | table              | hash table      |
   |           | state machine  |                    |                 |
   +-----------+----------------+--------------------+-----------------+
   | Record    | GUID to an     | GUID to arbitrary  | GUID to up to 5 |
   | content   | arbitrary      | values             | NAs, each with  |
   |           | number of      | (recursively other | an expiration   |
   |           | values         | GUIDs or Network   | time and a      |
   |           |                | Addresses)         | prioritization  |
   |           |                |                    | weight          |
   +-----------+----------------+--------------------+-----------------+
   | Name      | geo-located    | geo-located based  | not geo-        |
   | server    | based on       | on the GUID's      | located; one    |
   | placement | requests       | physical location  | name server in  |
   |           |                |                    | the GUID's AS   |
   +-----------+----------------+--------------------+-----------------+
   | Number of | based on       | fixed number; each | fixed number;   |
   | replicas  | recent demand  | GUID has K1 local, | each GUID has K |
   | per GUID  | and update     | K2 regional, and   | global and 1    |
   |           | frequency      | K3 global replicas | local replicas  |
   +-----------+----------------+--------------------+-----------------+
   | Caching   | no caching;    | caches responses   | future work     |
   |           | load balancing | along the path     |                 |
   |           | by adjusting   | between the        |                 |
   |           | the number of  | querying entity    |                 |
   |           | name servers   | and the name       |                 |
   |           |                | server             |                 |
   +-----------+----------------+--------------------+-----------------+

        Table 1: Summary of the name resolution implementations

6.  Security Considerations

   TBD.

7.  Acknowledgements

   TBD.

8.  IANA Considerations

   TBD.

9.  Normative References

   [AUSPICE]  Sharma, A., Tie, X., Uppal, H., Venkataramani, A.,
              Westbrook, D., and A. Yadav, "A global name service for a
              highly mobile internetwork", 2014.

   [CISCO-VNI]
              "Cisco Visual Networking Index: Global Mobile Data
              Traffic Forecast Update, 2016-2021", 2017.

   [CODONS]   Ramasubramanian, V. and E. Sirer, "The design and
              implementation of a next generation name service for the
              internet", 2004.

   [DMAP]     Vu, T., Baid, A., Zhang, Y., Nguyen, T., Fukuyama, J.,
              Martin, R., and D. Raychaudhuri, "DMap: A Shared Hosting
              Scheme for Dynamic Identifier to Locator Mappings in the
              Global Internet", 2012.

   [DNS-MEASURE1]
              "Comparing Latency of the Top Public DNS Providers",
              2015.

   [DNS-MEASURE2]
              "Comparing Root Server Performance Around the World",
              2015.

   [GMAP]     Hu, Y., Yates, R., and D. Raychaudhuri, "A Hierarchically
              Aggregated In-Network Global Name Resolution Service for
              the Mobile Internet", WINLAB TR 442, March 2015.

   [GOOGLE-CLOUDDNS]
              "Reliable, resilient, low-latency DNS serving from
              Google's worldwide network".

   [GOOGLE-DNS]
              "Google Public DNS".

   [GOOGLE-EDGE]
              "Google Edge Caching Project".

   [HUAWEI-WP]
              "5G: A Technology Vision", 2013.

   [IDEAS-PS] Pillay-Esnault, P., Boucadair, M., Jacquenet, C.,
              Fioccola, M., and A. Nennker, "Problem Statement for
              Mapping Systems in Identity Enabled Networks", March
              2017.

   [ILA]      Herbert, T., "Identifier-locator addressing for network
              virtualization", March 2016.

   [LOC-INDEP-ARCH]
              Gao, Z., Venkataramani, A., Kurose, J., and S.
              Heimlicher, "Towards a Quantitative Comparison of
              Location-Independent Network Architectures", 2014.

   [MF]       Venkataramani, A., Kurose, J., Raychaudhuri, D.,
              Nagaraja, K., Mao, M., and S. Banerjee, "MobilityFirst: a
              mobility-centric and trustworthy internet architecture",
              2014.

   [PAXOS]    Lamport, L., "The part-time parliament", 1998.

   [RFC1498]  Saltzer, J., "On the Naming and Binding of Network
              Destinations", RFC 1498, August 1993.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC4423]  Moskowitz, R. and P. Nikander, "Host Identity Protocol
              (HIP) Architecture", RFC 4423, May 2006.

   [RFC5944]  Perkins, C., "IP Mobility Support for IPv4, Revised",
              RFC 5944, November 2010.

   [RFC6830]  Farinacci, D., Fuller, V., Meyer, D., and D. Lewis, "The
              Locator/ID Separation Protocol (LISP)", RFC 6830, January
              2013.

   [VMWARE-WP]
              "VMware View 5 with PCoIP, Network Optimization Guide
              White Paper", 2011.

   [XIA]      Anand, A., Dogar, F., Han, D., Li, B., Lim, H., Wu, W.,
              Akella, A., Andersen, D., Byers, J., and S. Seshan, "XIA:
              Efficient Support for Evolvable Internetworking", 2012.

Authors' Addresses

   Parishad Karimi
   Rutgers University

   Email: parishad@winlab.rutgers.edu

   Shreyasee Mukherjee
   Rutgers University

   Email: shreya@winlab.rutgers.edu