DNSOP Working Group                                          D. Lawrence
Internet-Draft                                                    Oracle
Updates: 1034, 1035, 2181 (if approved)                        W. Kumari
Intended status: Standards Track                                 P. Sood
Expires: June 11, 2020                                            Google
                                                       December 09, 2019

             Serving Stale Data to Improve DNS Resiliency
                    draft-ietf-dnsop-serve-stale-10

Abstract

   This draft defines a method (serve-stale) for recursive resolvers to
   use stale DNS data to avoid outages when authoritative nameservers
   cannot be reached to refresh expired data.  One of the motivations
   for serve-stale is to make the DNS more resilient to DoS attacks,
   and thereby make them less attractive as an attack vector.  This
   document updates the definitions of TTL from RFC 1034 and RFC 1035
   so that data can be kept in the cache beyond the TTL expiry, updates
   RFC 2181 by interpreting values with the high order bit set as being
   positive, rather than 0, and suggests a cap of 7 days.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on June 11, 2020.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Background
   4.  Standards Action
   5.  Example Method
   6.  Implementation Considerations
   7.  Implementation Caveats
   8.  Implementation Status
   9.  EDNS Option
   10. Security Considerations
   11. Privacy Considerations
   12. NAT Considerations
   13. IANA Considerations
   14. Acknowledgements
   15. References
     15.1.  Normative References
     15.2.  Informative References
   Authors' Addresses

1.  Introduction

   Traditionally the Time To Live (TTL) of a DNS resource record has
   been understood to represent the maximum number of seconds that a
   record can be used before it must be discarded, based on its
   description and usage in [RFC1035] and clarifications in [RFC2181].

   This document expands the definition of the TTL to explicitly allow
   for expired data to be used in the exceptional circumstance that a
   recursive resolver is unable to refresh the information.  It is
   predicated on the observation that authoritative answer
   unavailability can cause outages even when the underlying data those
   servers would return is typically unchanged.

   We describe a method below for this use of stale data, balancing the
   competing needs of resiliency and freshness.

   This document updates the definitions of TTL from [RFC1034] and
   [RFC1035] so that data can be kept in the cache beyond the TTL
   expiry, updates [RFC2181] by interpreting values with the high order
   bit set as being positive, rather than 0, and suggests a cap of 7
   days.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all
   capitals, as shown here.

   For a glossary of DNS terms, please see [RFC8499].

3.  Background

   There are a number of reasons why an authoritative server may become
   unreachable, including Denial of Service (DoS) attacks, network
   issues, and so on.  If a recursive server is unable to contact the
   authoritative servers for a query but still has relevant data that
   has aged past its TTL, that information can still be useful for
   generating an answer under the metaphorical assumption that "stale
   bread is better than no bread."

   [RFC1035] Section 3.2.1 says that the TTL "specifies the time
   interval that the resource record may be cached before the source
   of the information should again be consulted", and Section 4.1.3
   further says the TTL "specifies the time interval (in seconds) that
   the resource record may be cached before it should be discarded."

   A natural English interpretation of these remarks would seem to be
   clear enough that records past their TTL expiration must not be
   used.  However, [RFC1035] predates the more rigorous terminology of
   [RFC2119], which softened the interpretation of "may" and "should".

   [RFC2181] aimed to provide "the precise definition of the Time to
   Live", but in Section 8 was mostly concerned with the numeric range
   of values rather than data expiration behavior.  It does, however,
   close that section by noting, "The TTL specifies a maximum time to
   live, not a mandatory time to live."  This wording again does not
   contain BCP 14 [RFC2119] key words, but does convey the natural
   language connotation that data becomes unusable past TTL expiry.

   As of the time of this writing, several large-scale operators use
   stale data for answers in some way.  A number of recursive resolver
   packages, including BIND, Knot Resolver, OpenDNS, and Unbound,
   provide options to use stale data.  Apple macOS can also use stale
   data as part of the Happy Eyeballs algorithms in mDNSResponder.  The
   collective operational experience is that using stale data can
   provide significant benefit with minimal downside.

4.  Standards Action

   The definition of TTL in [RFC1035] Sections 3.2.1 and 4.1.3 is
   amended to read:

      TTL  a 32-bit unsigned integer number of seconds that specifies
      the duration that the resource record MAY be cached before the
      source of the information MUST again be consulted.  Zero values
      are interpreted to mean that the RR can only be used for the
      transaction in progress, and should not be cached.  Values SHOULD
      be capped on the order of days to weeks, with a recommended cap
      of 604,800 seconds (seven days).  If the data is unable to be
      authoritatively refreshed when the TTL expires, the record MAY be
      used as though it is unexpired.  See [RFC Editor: replace by RFC
      number] Section 5 and Section 6 for details.
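
   As a non-normative illustration only, the following Python sketch
   shows one way a resolver might apply this amended definition when
   deciding whether a cached record is usable.  The record fields, the
   helper names, and the 30-second stale TTL (which anticipates the
   recommendation below) are assumptions of the sketch, not
   requirements of this document.

      import time

      TTL_CAP = 604800         # recommended cap: seven days
      STALE_ANSWER_TTL = 30    # TTL placed on stale records in responses

      def effective_ttl(raw_ttl):
          """Interpret the 32-bit TTL field per the amended definition."""
          # A value with the high-order bit set is treated as a large
          # positive number rather than 0, then capped like any other
          # over-large value.
          return min(raw_ttl & 0xFFFFFFFF, TTL_CAP)

      def usable_answer(record, refresh_failed, now=None):
          """Return (usable, ttl_to_send) for a cached record.

          'record' is assumed to carry 'stored_at' and 'raw_ttl' fields;
          'refresh_failed' is True when the authorities could not be
          reached to refresh the data.
          """
          now = time.time() if now is None else now
          ttl = effective_ttl(record.raw_ttl)
          if ttl == 0:
              return False, 0    # usable only in the original transaction
          remaining = record.stored_at + ttl - now
          if remaining > 0:
              return True, int(remaining)    # still fresh, decremented TTL
          if refresh_failed:
              return True, STALE_ANSWER_TTL  # stale, served with TTL > 0
          return False, 0        # expired but refreshable: do not use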

   Interpreting values which have the high-order bit set as being
   positive, rather than 0, is a change from [RFC2181], the rationale
   for which is explained in Section 6.  Suggesting a cap of seven
   days, rather than the 68 years allowed by [RFC2181], reflects the
   current practice of major modern DNS resolvers.

   When returning a response containing stale records, a recursive
   resolver MUST set the TTL of each expired record in the message to a
   value greater than 0, with a RECOMMENDED value of 30 seconds.  See
   Section 6 for explanation.

   Answers from authoritative servers that have a DNS Response Code of
   either 0 (NoError) or 3 (NXDomain) and the Authoritative Answer (AA)
   bit set MUST be considered to have refreshed the data at the
   resolver.  Answers from authoritative servers that have any other
   response code SHOULD be considered a failure to refresh the data and
   therefore leave any previous state intact.  See Section 6 for a
   discussion.

5.  Example Method

   There is more than one way a recursive resolver could responsibly
   implement this resiliency feature while still respecting the intent
   of the TTL as a signal for when data is to be refreshed.

   In this example method, four notable timers drive considerations for
   the use of stale data:

   o  A client response timer, which is the maximum amount of time a
      recursive resolver should allow between the receipt of a
      resolution request and sending its response.

   o  A query resolution timer, which caps the total amount of time a
      recursive resolver spends processing the query.

   o  A failure recheck timer, which limits the frequency at which a
      failed lookup will be attempted again.

   o  A maximum stale timer, which caps the amount of time that records
      will be kept past their expiration.

   Most recursive resolvers already have the query resolution timer
   and, effectively, some kind of failure recheck timer.  The client
   response timer and maximum stale timer are new concepts for this
   mechanism.

   When a recursive resolver receives a request, it should start the
   client response timer.  This timer is used to avoid client timeouts.
   It should be configurable, with a recommended value of 1.8 seconds,
   as being just under a common timeout value of 2 seconds while still
   giving the resolver a fair shot at resolving the name.

   The resolver then checks its cache for any unexpired records that
   satisfy the request and returns them if available.  If it finds no
   relevant unexpired data and the Recursion Desired flag is not set in
   the request, it should immediately return the response without
   consulting the cache for expired records.  Typically this response
   would be a referral to authoritative nameservers covering the zone,
   but the specifics are implementation-dependent.

   If iterative lookups will be done, then the failure recheck timer is
   consulted.  Attempts to refresh from non-responsive or otherwise
   failing authoritative nameservers are recommended to be done no more
   frequently than every 30 seconds.  If this request was received
   within this period, the cache may be immediately consulted for stale
   data to satisfy the request.

   Outside the period of the failure recheck timer, the resolver should
   start the query resolution timer and begin the iterative resolution
   process.
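
   As a rough, non-normative sketch of that gating decision, a resolver
   might do something like the following; the resolver object and its
   methods (last_refresh_failure, answer_from_stale_cache,
   start_query_resolution_timer, iterate) are hypothetical stand-ins
   for implementation-specific machinery.

      import time

      FAILURE_RECHECK_SECONDS = 30   # minimum spacing of refresh attempts

      def begin_lookup(resolver, question, now=None):
          """Gate refresh attempts behind the failure recheck timer."""
          now = time.time() if now is None else now
          last_failure = resolver.last_refresh_failure.get(question)
          recently_failed = (last_failure is not None and
                             now - last_failure < FAILURE_RECHECK_SECONDS)
          if recently_failed:
              # Within the failure recheck window: do not hammer failing
              # authorities; try to satisfy the request from stale data.
              return resolver.answer_from_stale_cache(question)
          # Otherwise, start the query resolution timer and iterate.
          resolver.start_query_resolution_timer(question)
          return resolver.iterate(question)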

   The query resolution timer bounds the work done by the resolver when
   contacting external authorities, and is commonly around 10 to 30
   seconds.  If this timer expires on an attempted lookup that is still
   being processed, the resolution effort is abandoned.

   If the answer has not been completely determined by the time the
   client response timer has elapsed, the resolver should then check
   its cache to see whether there is expired data that would satisfy
   the request.  If so, it adds that data to the response message with
   a TTL greater than 0 (as specified in Section 4).  The response is
   then sent to the client while the resolver continues its attempt to
   refresh the data.

   When no authorities can be reached during a resolution attempt, the
   resolver should attempt to refresh the delegation and restart the
   iterative lookup process with the remaining time on the query
   resolution timer.  This resumption should be done only once per
   resolution effort.

   The maximum stale timer is used for cache management and is
   independent of the query resolution process.  This timer is
   conceptually different from the maximum cache TTL that exists in
   many resolvers, the latter being a clamp on the value of TTLs as
   received from authoritative servers and recommended to be seven days
   in the TTL definition in Section 4.  The maximum stale timer should
   be configurable, and defines the length of time after a record
   expires that it should be retained in the cache.  The suggested
   value is between 1 and 3 days.

6.  Implementation Considerations

   This document mainly describes the issues behind serving stale data
   and intentionally does not provide a formal algorithm.  The concept
   is not overly complex, and the details are best left to resolver
   authors to implement in their codebases.  The processing of
   serve-stale is a local operation, and consistent variables between
   deployments are not needed for interoperability.  However, we would
   like to highlight the impact of various implementation choices,
   starting with the timers involved.

   The most obvious of these is the maximum stale timer.  If this
   variable is too large, it could cause excessive cache memory usage,
   but if it is too small, the serve-stale technique becomes less
   effective, as the record may not be in the cache to be used if
   needed.  Shorter values, even less than a day, can effectively
   handle the vast majority of outages.  Longer values, as much as a
   week, give time for monitoring systems to notice a resolution
   problem and for human intervention to fix it; operational experience
   has been that sometimes the right people can be hard to track down
   and unfortunately slow to remedy the situation.

   Increased memory consumption could be mitigated by prioritizing
   removal of stale records over non-expired records during cache
   exhaustion.  Implementations may also wish to consider whether to
   track the names in requests for their last time of use or their
   popularity, using that as an additional factor when considering
   cache eviction.  A feature to manually flush only stale records
   could also be useful.

   The client response timer is another variable which deserves
   consideration.
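
   To make its role concrete, here is a minimal, non-normative sketch
   using Python's asyncio of racing the client response timer against
   an ongoing refresh; the resolve() and stale_answer() calls are
   hypothetical, and the 1.8-second value follows the recommendation in
   Section 5.

      import asyncio

      CLIENT_RESPONSE_TIMEOUT = 1.8   # just under a 2-second client timeout

      async def answer_query(resolver, question):
          """Answer with fresh data if it arrives in time, else fall back
          to stale data while the refresh continues in the background."""
          refresh = asyncio.create_task(resolver.resolve(question))
          try:
              return await asyncio.wait_for(asyncio.shield(refresh),
                                            timeout=CLIENT_RESPONSE_TIMEOUT)
          except asyncio.TimeoutError:
              # Client response timer elapsed: the shielded refresh task
              # keeps running; answer now from stale data if any exists.
              stale = resolver.stale_answer(question)
              if stale is not None:
                  return stale
              return await refresh   # no stale data; wait for resolution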

   If this value is too short, there is a risk that stale answers may
   be used even when the authoritative server is actually reachable but
   slow; this may result in undesirable answers being returned.
   Conversely, waiting too long will negatively impact user experience.

   The balance for the failure recheck timer is responsiveness in
   detecting the renewed availability of authorities versus the extra
   resource use for resolution.  If this variable is set too large,
   stale answers may continue to be returned even after the
   authoritative server is reachable; per [RFC2308], Section 7, this
   should be no more than five minutes.  If this variable is too small,
   authoritative servers may be targeted with a significant amount of
   excess traffic.

   Regarding the TTL to set on stale records in the response,
   historically TTLs of zero seconds have been problematic for some
   implementations, and negative values can't effectively be
   communicated to existing software.  Other very short TTLs could lead
   to congestive collapse as TTL-respecting clients rapidly try to
   refresh.  The recommended value of 30 seconds not only sidesteps
   those potential problems with no practical negative consequences,
   but it also rate limits further queries from any client that honors
   the TTL, such as a forwarding resolver.

   As for the change to treat a TTL with the high-order bit set as
   positive and then clamp it, as opposed to [RFC2181] treating it as
   zero, the rationale here is basically one of engineering simplicity
   versus an inconsequential operational history.  Negative TTLs had no
   rational intentional meaning that wouldn't have been satisfied by
   just sending 0 instead, and similarly there was realistically no
   practical purpose for sending TTLs of 2^25 seconds (just over one
   year) or more.  There's also no record of TTLs in the wild having
   the most significant bit set in DNS-OARC's "Day in the Life" samples
   [DITL].  With no apparent reason for operators to use them
   intentionally, that leaves either errors or non-standard experiments
   as explanations as to why such TTLs might be encountered, with
   neither providing an obviously compelling reason as to why having
   the leading bit set should be treated differently from having any of
   the next eleven bits set and then capped per Section 4.

   Another implementation consideration is the use of stale nameserver
   addresses for lookups.  This is mentioned explicitly because, in
   some resolvers, getting the addresses for nameservers is a separate
   path from a normal cache lookup.  If authoritative server addresses
   cannot be refreshed, resolution can possibly still be successful if
   the authoritative servers themselves are up.  For instance, consider
   an attack on a top-level domain that takes its nameservers offline;
   serve-stale resolvers that had expired glue addresses for subdomains
   within that TLD would still be able to resolve names within those
   subdomains, even those they had not previously looked up.

   The directive in Section 4 that only NoError and NXDomain responses
   should invalidate any previously associated answer stems from the
   fact that no other RCODEs that a resolver normally encounters make
   any assertions regarding the name in the question or any data
   associated with it.
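
   Reduced to a small, non-normative sketch (RCODE values per
   [RFC1035]; the cache calls are hypothetical), the refresh rule looks
   like this:

      NOERROR, NXDOMAIN = 0, 3

      def process_auth_response(cache, question, response):
          """Apply an authoritative response to the cache per Section 4."""
          if response.aa and response.rcode in (NOERROR, NXDOMAIN):
              # An authoritative answer or an authoritative denial of
              # existence refreshes (replaces) whatever was cached before.
              cache.replace(question, response)
              return True
          # ServFail, Refused, and other RCODEs: treat as a failed refresh
          # attempt and leave any previously cached, possibly stale, state
          # intact.
          cache.note_refresh_failure(question)
          return False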

   This comports with existing resolver behavior where a failed lookup
   (say, during pre-fetching) doesn't impact the existing cache state.
   Some authoritative server operators have said that they would prefer
   stale answers to be used in the event that their servers are
   responding with errors like ServFail instead of giving true
   authoritative answers.  Implementers MAY decide to return stale
   answers in this situation.

   Since the goal of serve-stale is to provide resiliency for all
   obvious failures to refresh data, these other RCODEs are treated as
   though they are equivalent to not getting an authoritative response.
   Although NXDomain for a previously existing name might well be an
   error, it is not handled that way because there is no effective way
   to distinguish operator intent for legitimate cases versus error
   cases.

   During discussion in the IETF, it was suggested that, if all
   authorities return responses with an RCODE of Refused, it may be an
   explicit signal to take down the zone from servers that still have
   the zone's delegation pointed to them.  Refused, however, is also
   overloaded to mean multiple possible failures, which could represent
   transient configuration problems.  Operational experience has shown
   that purposely returning Refused is a poor way to achieve an
   explicit takedown of a zone compared to either updating the
   delegation or returning NXDomain with a suitable SOA for extended
   negative caching.  Implementers MAY nonetheless consider whether to
   treat all authorities returning Refused as preempting the use of
   stale data.

7.  Implementation Caveats

   Stale data is used only when refreshing has failed in order to
   adhere to the original intent of the design of the DNS and the
   behavior expected by operators.  If stale data were to always be
   used immediately and then a cache refresh attempted after the client
   response has been sent, the resolver would frequently be sending
   data that it would have had no trouble refreshing.  Because modern
   resolvers use techniques like pre-fetching and request coalescing
   for efficiency, not every client request needs to trigger a new
   lookup flow in the presence of stale data; rather, a good-faith
   effort should have recently been made to refresh the stale data
   before it is delivered to any client.

   It is important to continue the resolution attempt after the stale
   response has been sent, until the query resolution timeout, because
   some pathological resolutions can take many seconds to succeed as
   they cope with unavailable servers, bad networks, and other
   problems.  Stopping the resolution attempt when the response with
   expired data has been sent would mean that answers in these
   pathological cases would never be refreshed.

   The continuing prohibition against using data with a 0-second TTL
   beyond the current transaction explicitly extends to it being
   unusable even for stale fallback, as it is not to be cached at all.

   Be aware that Canonical Name (CNAME) and DNAME [RFC6672] records
   mingled in the expired cache with other records at the same owner
   name can cause surprising results.  This was observed with an
   initial implementation in BIND when a hostname changed from having
   an IPv4 Address (A) record to a CNAME.

   The version of BIND being used did not evict other types in the
   cache when a CNAME was received, which in normal operations is not a
   significant issue.  However, after both records expired and the
   authorities became unavailable, the fallback to stale answers
   returned the older A instead of the newer CNAME.

8.  Implementation Status

   The algorithm described in Section 5 was originally implemented as a
   patch to BIND 9.7.0.  It has been in use on Akamai's production
   network since 2011, and effectively smoothed over transient failures
   and longer outages that would have resulted in major incidents.  The
   patch was contributed to Internet Systems Consortium, and the
   functionality is now available in BIND 9.12 and later via the
   options stale-answer-enable, stale-answer-ttl, and max-stale-ttl.

   Unbound has a similar feature for serving stale answers, and will
   respond with stale data immediately if it has recently tried and
   failed to refresh the answer by pre-fetching.

   Knot Resolver has a demo module here:
   https://knot-resolver.readthedocs.io/en/stable/modules.html#serve-stale

   Apple's system resolvers are also known to use stale answers, but
   the details are not readily available.

   In the research paper "When the Dike Breaks: Dissecting DNS Defenses
   During DDoS" [DikeBreaks], the authors detected some use of stale
   answers by resolvers when authorities came under attack.  Their
   research results suggest that more widespread adoption of the
   technique would significantly improve resiliency for the large
   number of requests that fail or experience abnormally long
   resolution times during an attack.

9.  EDNS Option

   During the discussion of serve-stale in the IETF, it was suggested
   that an EDNS option should be available, either to explicitly opt in
   to getting data that is possibly stale or at least as a debugging
   tool to indicate when stale data has been used for a response.

   The opt-in use case was rejected as the technique was meant to be
   immediately useful in improving DNS resiliency for all clients.

   The reporting case was ultimately also rejected because even the
   simpler version of a proposed option was still too much bother to
   implement for too little perceived value.

10.  Security Considerations

   The most obvious security issue is the increased likelihood of
   DNSSEC validation failures when using stale data because signatures
   could be returned outside their validity period.  Stale negative
   records can increase the time window where newly published TLSA or
   DS RRs may not be used due to cached NSEC or NSEC3 records.  These
   scenarios would only be an issue if the authoritative servers are
   unreachable (the only time the techniques in this document are
   used), and thus serve-stale does not introduce a new failure in
   place of what would have otherwise been success.

   Additionally, bad actors have been known to use DNS caches to keep
   records alive even after their authorities have gone away.  The
   serve-stale feature potentially makes the attack easier, although
   without introducing a new risk.  In addition, attackers could
   combine this with a DDoS attack on authoritative servers with the
   explicit intent of having stale information cached for longer.  But
   if attackers have this capacity, they probably could do much worse
   than prolonging the life of old data.

   In [CloudStrife], it was demonstrated how stale DNS data, namely
   hostnames pointing to addresses that are no longer in use by the
   owner of the name, can be used to co-opt security, such as to get
   domain-validated certificates fraudulently issued to an attacker.
   While this document does not create a new vulnerability in this
   area, it does potentially enlarge the window in which such an attack
   could be made.  A proposed mitigation is that certificate
   authorities (CAs) should fully look up each name starting at the DNS
   root for every name lookup.  Alternatively, CAs should use a
   resolver that is not serving stale data.

11.  Privacy Considerations

   This document does not add any practical new privacy issues.

12.  NAT Considerations

   The method described here is not affected by the use of NAT devices.

13.  IANA Considerations

   There are no IANA considerations.

14.  Acknowledgements

   The authors wish to thank Brian Carpenter, Robert Edmonds, Tony
   Finch, Bob Harold, Tatuya Jinmei, Matti Klock, Jason Moreau, Giovane
   Moura, Jean Roy, Mukund Sivaraman, Davey Song, Paul Vixie, Ralf
   Weber, and Paul Wouters for their review and feedback.  Paul Hoffman
   deserves special thanks for submitting a number of Pull Requests.

   Thank you also to the following members of the IESG for their final
   review: Roman Danyliw, Benjamin Kaduk, Suresh Krishnan, Mirja
   Kuehlewind, and Adam Roach.

15.  References

15.1.  Normative References

   [RFC1034]  Mockapetris, P., "Domain names - concepts and
              facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034,
              November 1987.

   [RFC1035]  Mockapetris, P., "Domain names - implementation and
              specification", STD 13, RFC 1035, DOI 10.17487/RFC1035,
              November 1987.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

   [RFC2181]  Elz, R. and R. Bush, "Clarifications to the DNS
              Specification", RFC 2181, DOI 10.17487/RFC2181, July
              1997.

   [RFC2308]  Andrews, M., "Negative Caching of DNS Queries (DNS
              NCACHE)", RFC 2308, DOI 10.17487/RFC2308, March 1998.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017.

15.2.  Informative References

   [CloudStrife]
              Borgolte, K., Fiebig, T., Hao, S., Kruegel, C., and G.
              Vigna, "Cloud Strife: Mitigating the Security Risks of
              Domain-Validated Certificates", ACM 2018 Applied
              Networking Research Workshop,
              DOI 10.1145/3232755.3232859, July 2018.

   [DikeBreaks]
              Moura, G., Heidemann, J., Mueller, M., Schmidt, R., and
              M. Davids, "When the Dike Breaks: Dissecting DNS Defenses
              During DDoS", ACM 2018 Internet Measurement Conference,
              DOI 10.1145/3278532.3278534, October 2018.

   [DITL]     "DITL Traces and Analysis | DNS-OARC", n.d.

   [RFC6672]  Rose, S. and W. Wijngaards, "DNAME Redirection in the
              DNS", RFC 6672, DOI 10.17487/RFC6672, June 2012.

   [RFC8499]  Hoffman, P., Sullivan, A., and K. Fujiwara, "DNS
              Terminology", BCP 219, RFC 8499, DOI 10.17487/RFC8499,
              January 2019.

Authors' Addresses

   David C Lawrence
   Oracle

   Email: tale@dd.org

   Warren "Ace" Kumari
   Google
   1600 Amphitheatre Parkway
   Mountain View, CA  94043
   USA

   Email: warren@kumari.net

   Puneet Sood
   Google

   Email: puneets@google.com