DNSOP Working Group                                             G. Moura
Internet-Draft                                        SIDN Labs/TU Delft
Intended status: Informational                               W. Hardaker
Expires: June 1, 2019                                       J. Heidemann
                                      USC/Information Sciences Institute
                                                               M. Davids
                                                               SIDN Labs
                                                       November 28, 2018

           Recommendations for Authoritative Server Operators
           draft-moura-dnsop-authoritative-recommendations-00

Abstract

   This document summarizes recent research work exploring DNS
   configurations and offers specific, tangible recommendations to
   operators for configuring authoritative servers.

   This document is not an Internet Standards Track specification; it
   is published for informational purposes.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on June 1, 2019.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  R1: All authoritative servers should have similar latency
   3.  R2: Routing Can Matter More Than Locations
   4.  R3: Collecting Detailed Anycast Catchment Maps Ahead of Actual
       Deployment Can Improve Engineering Designs
   5.  R4: When under stress, employ two strategies
   6.  R5: Choose your records' time-to-live values carefully
   7.  R6: Shared Infrastructure Risks Collateral Damage During Attacks
   8.  Security considerations
   9.  IANA considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The domain name system (DNS) has two main types of DNS servers:
   authoritative servers and recursive resolvers.  Figure 1 shows their
   relationship.  An authoritative server knows the content of a DNS
   zone from local knowledge, and thus can answer queries about that
   zone without needing to query other servers [RFC2181].  A recursive
   resolver is a program that extracts information from name servers in
   response to client requests [RFC1034].  A client in Figure 1 is
   shown as a stub, which is shorthand for a stub resolver [RFC1034],
   typically located within the client software.

      +-----+   +-----+   +-----+   +-----+
      | AT1 |   | AT2 |   | AT3 |   | AT4 |
      +--+--+   +--+--+   +--+--+   +--+--+
         ^         ^         ^         ^
         |         |         |         |
         |      +--+--+      |         |
         +------+ Rn  +------+         |
         |      +--^--+                |
         |         |                   |
         |      +--+---+   +------+    |
         +------+ R1_1 |   | R1_2 +----+
                +---+--+   +----+-+
                    ^           ^
                    |           |
                    |  +------+ |
                    +--+ stub +-+
                       +------+

        Figure 1: Relationship between recursive resolvers (R) and
                     authoritative name servers (AT)

   DNS queries contribute to web latency and affect user experience
   [Sigla2014], and the DNS system has been subject to repeated Denial
   of Service (DoS) attacks (for example, in November 2015 [Moura16b])
   that aim to degrade user experience.  To reduce latency and improve
   resiliency against DoS attacks, DNS uses several types of server
   replication.  Replication at the authoritative server level can be
   achieved by deploying multiple servers for the same zone [RFC1035]
   (AT1--AT4 in Figure 1), by using IP anycast [RFC1546][RFC7094], and
   by using load balancers to support multiple servers inside a single
   (potentially anycasted) site.  As a consequence, there are many
   possible ways a DNS provider can engineer its production
   authoritative server network, with multiple viable choices and no
   single optimal design.

   This document summarizes recent research work exploring DNS
   configurations and offers specific, tangible recommendations to
   authoritative DNS server operators (DNS operators hereafter).
   It presents recommendations derived from multiple studies with the
   goal of improving DNS engineering: how anycast reacts to DoS attacks
   [Moura16b], how anycast affects query latency [Schmidt17a], how to
   accurately map anycast network reach [Vries17b], how recursive
   resolvers and authoritative servers interact [Mueller17b], and how
   recursive resolver caching and retries help clients during DDoS
   attacks on authoritative servers [Moura18b].  The recommendations
   (R1-R6) presented in this document are backed by these studies,
   which drew their conclusions from wide-scale Internet measurements.
   This document describes the key engineering options and points
   readers to the pertinent papers for details.

2.  R1: All authoritative servers should have similar latency

   Authoritative DNS server operators, such as top-level domain (TLD)
   operators (e.g., .org and .nl), announce their authoritative servers
   in the form of Name Server (NS) records.  Different authoritative
   servers should return the same content, typically by staying
   synchronized using DNS zone transfers (AXFR [RFC5936] and IXFR
   [RFC1995]) so that they serve the same zone data to their clients.

   DNS relies heavily upon replication to provide high reliability and
   capacity and to reduce latency [Moura16b].  DNS has two
   complementary mechanisms to replicate the service.  First, the
   protocol itself supports nameserver replication of DNS service for a
   DNS zone through the use of multiple nameservers that each operate
   on different IP addresses, listed by a zone's NS records.  Second,
   each of these network addresses can be served from multiple physical
   locations through the use of IP anycast [RFC1546], by announcing the
   same IP address from each site and allowing Internet routing (BGP
   [RFC4271]) to associate clients with their topologically nearest
   anycast site.  Outside the DNS protocol, replication can also be
   achieved by deploying load balancers at each physical location.
   Nameserver replication is recommended for all zones, and IP anycast
   is used by most large zones such as the DNS Root, most top-level
   domains [Moura16b], and large commercial enterprises, governments,
   and other organizations.

   Most DNS operators strive to reduce latency for users of their
   service.  However, because they control only their authoritative
   servers, and not the recursive resolvers communicating with those
   servers, it is difficult to ensure that recursives will be served by
   the closest authoritative server.  Server selection is up to the
   recursive resolver's software implementation, and different software
   vendors and releases employ different criteria to choose which
   authoritative servers to communicate with.

   Knowing how recursives choose authoritative servers is a key step
   toward better engineering the deployment of authoritative servers.
   [Mueller17b] evaluates this with a measurement study in which the
   authors deployed seven unicast authoritative name servers in
   different global locations and queried these authoritative servers
   from more than 9,000 RIPE Atlas probes (Vantage Points--VPs) and
   their respective recursive resolvers.

   In the wild, [Mueller17b] found that recursives query all available
   authoritative servers, regardless of latency.  However, the
   distribution of queries tends to be skewed towards authoritative
   servers with lower latency: the lower the latency between a
   recursive resolver and an authoritative server, the more often the
   recursive will send queries to that authoritative.  Our hypothesis
   is that this behavior is a consequence of two main criteria employed
   by resolvers when choosing authoritative servers: performance (lower
   latency) and diversity, where a resolver probes all authoritative
   servers to determine which is closest and to have alternatives
   available if one becomes unavailable.
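   As an illustration only (not part of [Mueller17b]), the sketch below
   shows how an operator might compare, from a single vantage point,
   the round-trip time to every authoritative server listed in a zone's
   NS records.  It assumes the dnspython package is available; the zone
   name is a placeholder.

      # Illustrative sketch, not from [Mueller17b]: measure the RTT of
      # one UDP SOA query to each authoritative server of a zone, from
      # this vantage point.  Assumes the dnspython package is installed.
      import time
      import dns.message
      import dns.query
      import dns.resolver

      ZONE = "example.nl"   # placeholder zone name

      def authoritative_rtts(zone):
          """Return a dict mapping each NS name to one measured RTT (ms)."""
          rtts = {}
          for ns in dns.resolver.resolve(zone, "NS"):
              ns_name = str(ns.target)
              addr = next(iter(dns.resolver.resolve(ns_name, "A"))).address
              query = dns.message.make_query(zone, "SOA")
              start = time.monotonic()
              dns.query.udp(query, addr, timeout=2)
              rtts[ns_name] = (time.monotonic() - start) * 1000.0
          return rtts

      if __name__ == "__main__":
          for name, rtt in sorted(authoritative_rtts(ZONE).items(),
                                  key=lambda item: item[1]):
              print("%-30s %6.1f ms" % (name, rtt))

   Running such a check from several vantage points gives a rough view
   of whether every NS offers comparable latency, which is the property
   R1 asks operators to aim for.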
   For a DNS operator, this policy means that the latency of all
   authoritative servers matters, so all must be similarly capable,
   since most recursives will query all available authoritative
   servers.  Since unicast cannot deliver good latency worldwide (a
   site in Europe will always have high latency to resolvers in
   California, for example), [Mueller17b] recommends that DNS operators
   deploy equally strong IP anycast for every authoritative server (and
   thus phase out unicast), so that they can deliver good latency to
   recursives wherever they are.  Keeping even one or a few unicast-
   only authoritative servers in the NS set raises the worst-case
   latency for users far from them, because resolvers will continue to
   direct part of their queries to those servers.  Note that DNS
   operators should also take architectural considerations into account
   when planning to deploy anycast [RFC7094].

   This recommendation was deployed at the ".nl" TLD zone, which
   originally had a mixed unicast/anycast setup; since early 2018 it
   has used four anycast authoritative name servers.

3.  R2: Routing Can Matter More Than Locations

   A common metric when choosing an anycast DNS provider or setting up
   an anycast service is the number of anycast sites, i.e., the number
   of global locations from which the same address is announced with
   BGP.  Intuitively, one could think that more sites will lead to
   shorter response times.

   However, this is not necessarily true.  In fact, [Schmidt17a] found
   that routing can matter more than the total number of locations.
   The authors analyzed the relationship between the number of anycast
   sites and the performance of a service (latency-wise, RTT) and
   measured the overall performance of four DNS Root servers, namely C,
   F, K, and L, from more than 7.9K RIPE Atlas probes.

   [Schmidt17a] found that C-Root, a smaller anycast deployment
   consisting of only 8 sites, provided overall performance very
   similar to that of the much larger deployments of K and L, with 33
   and 144 sites respectively.  The median RTT measured for C, K, and L
   Root was between 30 ms and 32 ms, and 25 ms for F.

   Their recommendation for DNS operators when engineering anycast
   services is to consider factors other than just the number of sites
   (such as local routing connectivity) when designing for performance.
   They showed that 12 sites can provide reasonable latency, provided
   they are globally distributed and have good local interconnectivity.
   However, more sites can be useful for other reasons, such as when
   handling DoS attacks [Mueller17b].
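   As a simple illustration (not data from [Schmidt17a]), the sketch
   below compares two hypothetical deployments by the distribution of
   RTTs measured from many vantage points, rather than by their number
   of sites; the RTT samples are placeholders for measurements such as
   those collected with RIPE Atlas.

      # Illustrative sketch, not from [Schmidt17a]: compare deployments
      # by their measured latency distribution, not their site count.
      import statistics

      rtt_ms = {                      # placeholder per-probe RTT samples
          "deployment-8-sites":   [18, 22, 25, 31, 33, 40, 55, 61, 70, 120],
          "deployment-100-sites": [17, 20, 26, 30, 34, 42, 58, 66, 75, 130],
      }

      for name, samples in rtt_ms.items():
          ordered = sorted(samples)
          median = statistics.median(ordered)
          p90 = ordered[int(0.9 * (len(ordered) - 1))]
          print("%s: median=%.1f ms, 90th percentile=%d ms"
                % (name, median, p90))

   With numbers like these, the larger deployment buys little: the
   medians are nearly identical, which mirrors the C/K/L Root
   observation above.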
4.  R3: Collecting Detailed Anycast Catchment Maps Ahead of Actual
    Deployment Can Improve Engineering Designs

   An anycast DNS service may have several dozen or even hundreds of
   sites (as L-Root does).  Anycast leverages Internet routing to
   distribute incoming queries across a service's distributed anycast
   sites; in theory, BGP (the Internet's de facto routing protocol)
   forwards incoming queries to a nearby anycast site (in terms of BGP
   distance).  However, queries are usually not evenly distributed
   across all anycast sites, as found in the case of L-Root
   [IcannHedge18].

   Adding new sites to an anycast service may change the load
   distribution across all sites, leading to suboptimal use of the
   service or even stressing some sites while others remain
   underutilized.  This is a scenario that operators constantly face
   when expanding an anycast service.  Moreover, when setting up a new
   anycast service instance, operators cannot directly estimate the
   query distribution among the sites in advance of enabling the site.

   To estimate query loads across the sites of an expanding service, or
   when setting up an entirely new service, operators need detailed
   anycast maps and catchment estimates (i.e., they need to know which
   prefixes will be matched to which anycast site).  To that end,
   [Vries17b] developed a new active measurement technique and tool
   called Verfploeter.  Verfploeter maps a large portion of the IPv4
   address space, allowing DNS operators to predict both query
   distribution and client catchments before deploying new anycast
   sites.

   [Vries17b] shows how this technique was used to predict both the
   catchment and the query load distribution for the new anycast
   service of B-Root.  Using two anycast sites in Miami (MIA) and Los
   Angeles (LAX) of the operational B-Root server, they sent ICMP echo
   packets, with a source address within the anycast prefix, to an IP
   address in each IPv4 /24 on the Internet.  They then recorded the
   site at which the ICMP echo replies arrived, as determined by the
   Internet's BGP routing.

   This analysis resulted in an Internet-wide catchment map.  The
   prefixes in this map were then weighted based on one day of B-Root
   traffic (2017-04-12, DITL datasets [Ditl17]).  The combination of
   the catchment mapping and the load per prefix produced an estimate
   predicting that 81.6% of the traffic would go to the LAX site.  The
   actual value was 81.4% of traffic going to LAX, showing that the
   estimate was very close and that the Verfploeter technique is an
   excellent method of predicting traffic loads in advance of a new
   anycast instance deployment.

   Verfploeter can also be used to estimate how traffic shifts among
   sites when BGP manipulations are carried out, such as the AS Path
   prepending that is frequently used by production networks during
   DDoS attacks.  [Vries17b] computed a new catchment mapping for each
   prepending configuration (no prepending, and prepending with 1, 2,
   or 3 hops at each site) and shows that this mapping can accurately
   estimate the load distribution for each configuration.
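   The load estimation step described above can be expressed compactly.
   The following sketch is only an illustration of the idea (it is not
   the Verfploeter implementation): it combines a catchment map, which
   assigns each /24 prefix to the site that answered its probe, with a
   per-prefix query count taken from a traffic sample, and reports the
   fraction of load each site would receive.

      # Illustrative sketch, not the Verfploeter implementation: weight
      # a catchment map (prefix -> site) by observed queries per prefix
      # to estimate the share of traffic each anycast site would get.
      from collections import defaultdict

      catchment = {                   # placeholder catchment map
          "192.0.2.0/24": "LAX",
          "198.51.100.0/24": "MIA",
          "203.0.113.0/24": "LAX",
      }
      queries_per_prefix = {          # placeholder one-day traffic sample
          "192.0.2.0/24": 12000,
          "198.51.100.0/24": 3500,
          "203.0.113.0/24": 9200,
      }

      def estimate_site_load(catchment, queries):
          load = defaultdict(int)
          for prefix, count in queries.items():
              site = catchment.get(prefix)
              if site is not None:    # unmapped prefixes are ignored
                  load[site] += count
          total = sum(load.values()) or 1
          return {site: count / total for site, count in load.items()}

      print(estimate_site_load(catchment, queries_per_prefix))
      # {'LAX': 0.858..., 'MIA': 0.141...}

   A real run would hold one entry per responsive /24 on the Internet;
   repeating the computation with a catchment map collected for each
   candidate routing configuration gives the per-configuration load
   estimates mentioned above.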
   An important operational takeaway from [Vries17b] is that DNS
   operators can make informed choices when engineering new anycast
   sites, or when expanding existing ones, by carrying out active
   measurements with Verfploeter in advance of operationally enabling
   the full anycast service.  Operators can spot suboptimal routing
   situations early, with fine granularity and with significantly
   better coverage than traditional measurement platforms such as RIPE
   Atlas provide.

   Deploying a small Verfploeter-enabled test platform at a potential
   anycast site in advance may reveal the realizable benefits of using
   that site as an anycast instance, potentially saving the significant
   financial and labor costs of deploying hardware to a new site that
   turns out to be less effective than had been hoped.

5.  R4: When under stress, employ two strategies

   DDoS attacks are becoming bigger, cheaper, and more frequent
   [Mueller17b].  The most powerful DDoS attack against DNS servers
   recorded to date reached 1.2 Tbps by using IoT devices [Perlroth16].
   Such attacks call for an answer to the following question: how
   should a DNS operator engineer its anycast authoritative DNS service
   to react to the stress of a DDoS attack?  This question is
   investigated in [Moura16b], which grounds empirical observations in
   a theoretical evaluation of the available options.

   An authoritative DNS server deployed using anycast will have many
   server instances distributed over many networks and sites.
   Ultimately, the relationship between the DNS provider's network and
   a client's ISP will determine which anycast site answers queries for
   a given client.  As a consequence, when an anycast authoritative
   server is under attack, the load that each anycast site receives is
   likely to be unevenly distributed (a function of the source of the
   attacks), and thus some sites may be more overloaded than others,
   which is what was observed in the analysis of the Root DNS events of
   November 2015 [Moura16b].  Given that different sites may have
   different capacity (bandwidth, CPU, etc.), deciding how to react to
   stress becomes even more difficult.

   In practice, an anycast site under stress, overloaded with incoming
   traffic, has two options:

   o  It can withdraw or prepend its route to some or all of its
      neighbors, shrinking its catchment (the set of clients that BGP
      maps to it) and shifting both legitimate and attack traffic to
      other anycast sites.  The other sites will hopefully have greater
      capacity and be able to service the queries.

   o  Alternatively, it can become a degraded absorber, continuing to
      operate but with overloaded ingress routers, dropping some
      incoming legitimate requests due to queue overflow.  However,
      continued operation will also absorb traffic from attackers in
      its catchment, protecting the other anycast sites.

   [Moura16b] saw both of these behaviors in practice in the Root DNS
   events, observed through site reachability and RTTs.  These options
   represent different uses of an anycast deployment.  The withdrawal
   strategy causes anycast to respond like a waterbed, with stress
   displacing queries from one site to others.  The absorption strategy
   behaves like a conventional mattress, compressing under load, with
   some queries getting delayed or dropped.
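   To make the trade-off concrete, the toy model below (an illustration
   only, not taken from [Moura16b]) redistributes a withdrawn site's
   load to the remaining sites in proportion to their capacity.  Real
   catchments follow BGP rather than capacity, so the numbers are
   placeholders; the point is that withdrawing an overloaded site can
   simply push another site over its limit, which is the "waterbed"
   effect described above.

      # Toy model, not from [Moura16b]: what happens if an overloaded
      # anycast site withdraws its route?  Its load is shifted to the
      # remaining sites in proportion to their capacity (real traffic
      # would follow BGP catchments instead).  All numbers are made up.
      load = {"site-A": 90, "site-B": 20, "site-C": 35}       # ingress, Gb/s
      capacity = {"site-A": 40, "site-B": 100, "site-C": 60}  # limits, Gb/s

      def withdraw(load, capacity, site):
          remaining = {s: l for s, l in load.items() if s != site}
          cap_sum = sum(capacity[s] for s in remaining)
          shifted = load[site]
          return {s: l + shifted * capacity[s] / cap_sum
                  for s, l in remaining.items()}

      print({s: l > capacity[s] for s, l in load.items()})
      # site-A is overloaded (90 > 40)
      print(withdraw(load, capacity, "site-A"))
      # site-C would now exceed its 60 Gb/s capacity as well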
   Although described as strategies and policies, these outcomes are
   the result of several factors: the combination of operator and host
   ISP routing policies, routing implementations withdrawing under
   load, the nature of the attack, and the locations of the sites and
   the attackers.  Some policies are explicit, such as the choice of
   local-only anycast sites, or operators removing a site for
   maintenance or modifying routing to manage load.  However, under
   stress, the choices of withdrawal and absorption can also emerge
   from a mix of explicit choices and implementation details, such as
   BGP timeout values.

   [Moura16b] speculates that more careful, explicit, and automated
   management of policies may provide stronger defenses against
   overload, an area currently under study.  For DNS operators, this
   means that besides traditional filtering, two other options are
   available (withdrawing or prepending routes to shift load, or
   absorbing the attack at the affected sites), and the best choice
   depends on the specifics of the attack.

6.  R5: Choose your records' time-to-live values carefully

   In a DNS response, each resource record is accompanied by a time-to-
   live (TTL) value, which "describes how long a RR can be cached
   before it should be discarded" [RFC1034].  TTL values are set by
   zone owners in their zone files, either specifically per record or
   by using default values for the entire zone.  Sometimes the same
   resource record may have different TTL values, one from the parent
   and one from the child DNS server.  In these cases, resolvers are
   expected to prioritize the answer according to Section 5.4.1 of
   [RFC2181].

   While set by authoritative server operators (labeled "AT" in
   Figure 1), the TTL value in fact influences the behavior of
   recursive resolvers (and their operators, "Rn" in the same figure)
   by setting an upper limit on how long a record may be cached before
   it is discarded.  In this sense, caching can be seen as a sort of
   "ephemeral replication": the contents of an authoritative server are
   held in a recursive resolver's cache for a period of time up to the
   TTL value.  Caching improves response times by avoiding repeated
   queries between recursive resolvers and authoritative servers.

   Besides improving performance, caching may play a significant role
   during DoS attacks against authoritative servers.  To investigate
   this, [Moura18b] evaluates the role of caching (and retries) in DNS
   resiliency to DDoS attacks.  Two authoritative servers were
   configured for a newly registered domain, and a series of
   experiments was carried out using various TTL values (60, 1800,
   3600, and 86400 seconds) for its records.  Unique DNS queries were
   sent from roughly 15,000 vantage points, using RIPE Atlas.

   [Moura18b] found that caching in the wild works as expected 70% of
   the time, across the various TTL values.  It is believed that
   complex recursive infrastructure (such as anycast recursives with
   fragmented caches), besides cache flushing and cache hierarchies,
   explains the other 30% of answers that were not served from cache.
   The results of the experiments were confirmed by analyzing
   authoritative traffic for the .nl TLD, which showed similar figures.
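   A simple client-side check of this caching behavior is sketched
   below; it is an illustration only (not the methodology of
   [Moura18b]).  It sends the same query to a recursive resolver twice
   and looks at whether the second answer's TTL has counted down, which
   normally indicates that the answer came from the resolver's cache.
   The resolver address and query name are placeholders, and dnspython
   is assumed to be installed.

      # Illustrative sketch, not the [Moura18b] methodology: ask a
      # recursive resolver the same question twice and see whether the
      # remaining TTL counts down, a sign the answer came from cache.
      import time
      import dns.message
      import dns.query

      RESOLVER = "192.0.2.53"         # placeholder recursive resolver
      QNAME = "example.nl"            # placeholder query name

      def answer_ttl(resolver, qname):
          """Return the TTL of the first answer RRset, or None if empty."""
          query = dns.message.make_query(qname, "A")
          response = dns.query.udp(query, resolver, timeout=2)
          return response.answer[0].ttl if response.answer else None

      first = answer_ttl(RESOLVER, QNAME)
      time.sleep(5)
      second = answer_ttl(RESOLVER, QNAME)

      if first is not None and second is not None:
          cached = second < first     # a fresh lookup returns the full TTL
          print("first=%ds second=%ds -> %s"
                % (first, second, "likely cached" if cached else "re-fetched"))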
   DDoS attacks on authoritative servers were then emulated by dropping
   all incoming packets for various TTL values.  The results showed:

   o  When 100% of requests were dropped, the TTL value of the record
      set by the zone owner, together with the state of the caches at
      the time of the attack, determined how long clients kept
      receiving responses.  Because a record's remaining TTL counts
      down while it sits in a cache, a cached record protected clients
      for at most the TTL value.

      *  Once the TTL values expired, there was some evidence of some
         recursives serving stale content
         [I-D.ietf-dnsop-terminology-bis].  Serving stale is the only
         viable option when TTL values expire in recursive caches and
         the authoritative servers become completely unavailable.

   Partial-failure DDoS attacks were also emulated (similar to the Dyn
   attack of 2016 [Perlroth16]), simulating scenarios in which the
   authoritative servers are partially available, by dropping packets
   at rates of 50-90% for various TTL values.  The results showed:

   o  For the various TTL values, caching was a key component in the
      success of queries.  For example, with a 50% packet drop rate at
      the authoritative servers, most clients eventually got an answer.

   o  When caching could not help (for a scenario with a TTL of 60 s
      and an interval of 10 minutes between probes), recursive servers
      kept retrying queries to the authoritative servers: at a 90%
      packet drop rate with a TTL of 60 s, 27% of clients still got an
      answer, at the price of increased response times.

   o  The study also showed that these retries have a significant
      effect on the authoritative side: an 8.1x increase over normal
      traffic was seen during a 90% packet drop with a TTL of 60 s, as
      recursives kept attempting to resolve queries, effectively
      creating "friendly fire".

   Therefore, given the important role of the TTL, it is recommended
   that DNS zone owners set their TTL values carefully, knowing that
   the values will influence (i) the success of clients' queries and
   (ii) the amount of "friendly fire" traffic their servers will
   receive.  Operators who size their servers with a rule of thumb such
   as 10x overprovisioning should also verify that this margin covers
   not only attack traffic but also the roughly 8x growth in legitimate
   retry traffic that can occur during a DDoS.
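   As a back-of-the-envelope illustration of why the TTL matters (a
   simplified model, not a result from [Moura18b]): if a record is in
   constant demand, the cache ages at the moment an attack starts are
   spread roughly uniformly over the TTL, so the fraction of resolvers
   that can still answer from cache t seconds into a total outage is
   about 1 - t/TTL, ignoring serving stale and retries.

      # Simplified model, not from [Moura18b]: fraction of recursive
      # caches that still hold a record t seconds into a complete
      # outage, assuming cache ages are uniform over [0, TTL) and
      # ignoring stale answers and retries.
      def fraction_still_cached(ttl_seconds, seconds_into_outage):
          if seconds_into_outage >= ttl_seconds:
              return 0.0
          return 1.0 - seconds_into_outage / ttl_seconds

      for ttl in (60, 1800, 3600, 86400):
          print(ttl, [round(fraction_still_cached(ttl, t), 2)
                      for t in (60, 600, 3600)])
      # A 60 s TTL gives no cache protection after one minute, while a
      # one-day TTL still covers about 96% of caches an hour into an
      # outage.

   This kind of estimate can help when weighing a short TTL (fast
   changes, little protection) against a long one (slower changes, more
   protection during an attack).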
7.  R6: Shared Infrastructure Risks Collateral Damage During Attacks

   Co-locating services, such as authoritative servers, creates some
   degree of shared risk, in that stress on one service may spill over
   into another, resulting in collateral damage.  Collateral damage is
   a common side effect of DDoS, and data centers and operators strive
   to minimize it through redundancy, overcapacity, and isolation.

   This has been seen in practice during the DDoS attack against the
   Root DNS system in November 2015 [Moura16b].  That study showed that
   two services not directly targeted by the attack, namely D-Root and
   the .nl TLD, suffered collateral damage.  These services showed
   reduced end-to-end performance (i.e., higher latency and reduced
   reachability) with timing consistent with the DDoS event, strongly
   suggesting a resource shared with the original targets of the
   attack.

   Another example of collateral damage was the 1.2 Tbps attack against
   Dyn, a major DNS provider, in October 2016 [Perlroth16].  As a
   result, many of Dyn's customers, including Airbnb, HBO, Netflix, and
   Twitter, experienced issues with clients failing to resolve their
   domains, since those domains were partially served from the same
   shared infrastructure.

   It is therefore recommended that operators be aware of shared
   infrastructure risks when choosing third-party DNS providers:
   sharing infrastructure increases the attack surface and the
   potential for collateral damage.

8.  Security considerations

   o  to be added

9.  IANA considerations

   This document has no IANA actions.

10.  Acknowledgements

   This document is a summary of the main lessons of the research works
   cited in each recommendation.  As such, each author of each paper
   made a clear contribution.  Here we list the papers' co-authors and
   thank them for their work: Ricardo de O. Schmidt, Wouter B. de
   Vries, Moritz Mueller, Lan Wei, Cristian Hesselman, Jan Harm
   Kuipers, Pieter-Tjerk de Boer, and Aiko Pras.

   Besides those, we would like to thank the people individually
   thanked in each research work, RIPE NCC and DNS-OARC for the tools
   and datasets used in this research, as well as the funding agencies
   sponsoring the individual research works.

11.  References

11.1.  Normative References

   [I-D.ietf-dnsop-terminology-bis]
              Hoffman, P., Sullivan, A., and K. Fujiwara, "DNS
              Terminology", draft-ietf-dnsop-terminology-bis-14 (work
              in progress), September 2018.

   [RFC1034]  Mockapetris, P., "Domain names - concepts and
              facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034,
              November 1987.

   [RFC1035]  Mockapetris, P., "Domain names - implementation and
              specification", STD 13, RFC 1035, DOI 10.17487/RFC1035,
              November 1987.

   [RFC1546]  Partridge, C., Mendez, T., and W. Milliken, "Host
              Anycasting Service", RFC 1546, DOI 10.17487/RFC1546,
              November 1993.

   [RFC1995]  Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995,
              DOI 10.17487/RFC1995, August 1996.

   [RFC2181]  Elz, R. and R. Bush, "Clarifications to the DNS
              Specification", RFC 2181, DOI 10.17487/RFC2181, July
              1997.

   [RFC4271]  Rekhter, Y., Ed., Li, T., Ed., and S. Hares, Ed., "A
              Border Gateway Protocol 4 (BGP-4)", RFC 4271,
              DOI 10.17487/RFC4271, January 2006.

   [RFC5936]  Lewis, E. and A. Hoenes, Ed., "DNS Zone Transfer Protocol
              (AXFR)", RFC 5936, DOI 10.17487/RFC5936, June 2010.

   [RFC7094]  McPherson, D., Oran, D., Thaler, D., and E. Osterweil,
              "Architectural Considerations of IP Anycast", RFC 7094,
              DOI 10.17487/RFC7094, January 2014.

11.2.  Informative References

   [Ditl17]   OARC, D., "2017 DITL data", October 2018.

   [IcannHedge18]
              ICANN, "DNS-STATS - Hedgehog 2.4.1", October 2018.

   [Moura16b]
              Moura, G., Schmidt, R., Heidemann, J., Vries, W.,
              Mueller, M., Wei, L., and C. Hesselman, "Anycast vs.
              DDoS: Evaluating the November 2015 Root DNS Events", ACM
              2016 Internet Measurement Conference,
              DOI 10.1145/2987443.2987446, October 2016.

   [Moura18b]
              Moura, G., Heidemann, J., Mueller, M., Schmidt, R., and
              M. Davids, "When the Dike Breaks: Dissecting DNS Defenses
              During DDoS", ACM 2018 Internet Measurement Conference,
              DOI 10.1145/3278532.3278534, October 2018.
   [Mueller17b]
              Mueller, M., Moura, G., Schmidt, R., and J. Heidemann,
              "Recursives in the Wild: Engineering Authoritative DNS
              Servers", ACM 2017 Internet Measurement Conference,
              DOI 10.1145/3131365.3131366, October 2017.

   [Perlroth16]
              Perlroth, N., "Hackers Used New Weapons to Disrupt Major
              Websites Across U.S.", October 2016.

   [Schmidt17a]
              Schmidt, R., Heidemann, J., and J. Kuipers, "Anycast
              Latency: How Many Sites Are Enough?", Proceedings of the
              Passive and Active Measurement (PAM) Conference, March
              2017.

   [Sigla2014]
              Singla, A., Chandrasekaran, B., Godfrey, P., and B.
              Maggs, "The Internet at the Speed of Light", Proceedings
              of the 13th ACM Workshop on Hot Topics in Networks,
              October 2014.

   [Vries17b]
              Vries, W., Schmidt, R., Hardaker, W., Heidemann, J.,
              Boer, P., and A. Pras, "Verfploeter: Broad and Load-Aware
              Anycast Mapping", ACM 2017 Internet Measurement
              Conference, DOI 10.1145/3131365.3131371, October 2017.

Authors' Addresses

   Giovane C. M. Moura
   SIDN Labs/TU Delft
   Meander 501
   Arnhem  6825 MD
   The Netherlands

   Phone: +31 26 352 5500
   Email: giovane.moura@sidn.nl


   Wes Hardaker
   USC/Information Sciences Institute
   PO Box 382
   Davis  95617-0382
   U.S.A.

   Phone: +1 (530) 404-0099
   Email: ietf@hardakers.net


   John Heidemann
   USC/Information Sciences Institute
   4676 Admiralty Way
   Marina Del Rey  90292-6695
   U.S.A.

   Phone: +1 (310) 448-8708
   Email: johnh@isi.edu


   Marco Davids
   SIDN Labs
   Meander 501
   Arnhem  6825 MD
   The Netherlands

   Phone: +31 26 352 5500
   Email: marco.davids@sidn.nl