2 dprive S. Dickinson 3 Internet-Draft Sinodun IT 4 Intended status: Best Current Practice B. Overeinder 5 Expires: January 14, 2021 R. van Rijswijk-Deij 6 NLnet Labs 7 A. Mankin 8 Salesforce 9 July 13, 2020 11 Recommendations for DNS Privacy Service Operators 12 draft-ietf-dprive-bcp-op-14 14 Abstract 16 This document presents operational, policy, and security 17 considerations for DNS recursive resolver operators who choose to 18 offer DNS Privacy services. With these recommendations, the operator 19 can make deliberate decisions regarding which services to provide, 20 and how the decisions and alternatives impact the privacy of users. 22 This document also presents a non-normative framework to assist 23 writers of a Recursive operator Privacy Statement (analogous to DNS 24 Security Extensions (DNSSEC) Policies and DNSSEC Practice Statements 25 described in RFC6841). 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF).
Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at http://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on January 14, 2021. 44 Copyright Notice 46 Copyright (c) 2020 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 Table of Contents 61 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 62 2. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 63 3. Privacy-related documents . . . . . . . . . . . . . . . . . . 5 64 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 65 5. Recommendations for DNS privacy services . . . . . . . . . . 6 66 5.1. On the wire between client and server . . . . . . . . . . 7 67 5.1.1. Transport recommendations . . . . . . . . . . . . . . 7 68 5.1.2. Authentication of DNS privacy services . . . . . . . 8 69 5.1.3. Protocol recommendations . . . . . . . . . . . . . . 9 70 5.1.4. DNSSEC . . . . . . . . . . . . . . . . . . . . . . . 11 71 5.1.5. Availability . . . . . . . . . . . . . . . . . . . . 12 72 5.1.6. Service options . . . . . . . . . . . . . . . . . . . 12 73 5.1.7. Impact of Encryption on Monitoring by DNS Privacy 74 Service Operators . . . . . . . . . . . . . . . . . . 13 75 5.1.8. Limitations of fronting a DNS privacy service with a 76 pure TLS proxy . . . . . . . . . . . . . . . . . . . 13 77 5.2. Data at rest on the server . . . . . . . . . . . . . . . 14 78 5.2.1. Data handling . . . . . . . . . . . . . . . . . . . . 14 79 5.2.2. Data minimization of network traffic . . . . . . . . 15 80 5.2.3. IP address pseudonymization and anonymization methods 16 81 5.2.4. Pseudonymization, anonymization, or discarding of 82 other correlation data . . . . . . . . . . . . . . . 16 83 5.2.5. Cache snooping . . . . . . . . . . . . . . . . . . . 17 84 5.3. Data sent onwards from the server . . . . . . . . . . . . 17 85 5.3.1. Protocol recommendations . . . . . . . . . . . . . . 17 86 5.3.2. Client query obfuscation . . . . . . . . . . . . . . 18 87 5.3.3. Data sharing . . . . . . . . . . . . . . . . . . . . 19 88 6. Recursive operator Privacy Statement (RPS) . . . . . . . . . 20 89 6.1. Outline of an RPS . . . . . . . . . . . . . . . . . . . . 20 90 6.1.1. Policy . . . . . . . . . . . . . . . . . . . . . . . 20 91 6.1.2. Practice . . . . . . . . . . . . . . . . . . . . . . 21 92 6.2. Enforcement/accountability . . . . . . . . . . . . . . . 22 93 7. IANA considerations . . . . . . . . . . . . . . . . . . . . . 23 94 8. Security considerations . . . . . . . . . . . . . . . . . . . 23 95 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 23 96 10. 
Contributors . . . . . . . . . . . . . . . . . . . . . . . . 23 97 11. Changelog . . . . . . . . . . . . . . . . . . . . . . . . . . 24 98 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 27 99 12.1. Normative References . . . . . . . . . . . . . . . . . . 27 100 12.2. Informative References . . . . . . . . . . . . . . . . . 29 101 Appendix A. Documents . . . . . . . . . . . . . . . . . . . . . 34 102 A.1. Potential increases in DNS privacy . . . . . . . . . . . 34 103 A.2. Potential decreases in DNS privacy . . . . . . . . . . . 34 104 A.3. Related operational documents . . . . . . . . . . . . . . 35 105 Appendix B. IP address techniques . . . . . . . . . . . . . . . 35 106 B.1. Categorization of techniques . . . . . . . . . . . . . . 36 107 B.2. Specific techniques . . . . . . . . . . . . . . . . . . . 37 108 B.2.1. Google Analytics non-prefix filtering . . . . . . . . 37 109 B.2.2. dnswasher . . . . . . . . . . . . . . . . . . . . . . 38 110 B.2.3. Prefix-preserving map . . . . . . . . . . . . . . . . 38 111 B.2.4. Cryptographic Prefix-Preserving Pseudonymization . . 38 112 B.2.5. Top-hash Subtree-replicated Anonymization . . . . . . 39 113 B.2.6. ipcipher . . . . . . . . . . . . . . . . . . . . . . 39 114 B.2.7. Bloom filters . . . . . . . . . . . . . . . . . . . . 39 115 Appendix C. Current policy and privacy statements . . . . . . . 40 116 Appendix D. Example RPS . . . . . . . . . . . . . . . . . . . . 40 117 D.1. Policy . . . . . . . . . . . . . . . . . . . . . . . . . 40 118 D.2. Practice . . . . . . . . . . . . . . . . . . . . . . . . 43 119 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 44 121 1. Introduction 123 The Domain Name System (DNS) is at the core of the Internet; almost 124 every activity on the Internet starts with a DNS query (and often 125 several). However the DNS was not originally designed with strong 126 security or privacy mechanisms. A number of developments have taken 127 place in recent years which aim to increase the privacy of the DNS 128 system and these are now seeing some deployment. This latest 129 evolution of the DNS presents new challenges to operators and this 130 document attempts to provide an overview of considerations for 131 privacy focused DNS services. 133 In recent years there has also been an increase in the availability 134 of "public resolvers" [RFC8499] which users may prefer to use instead 135 of the default network resolver either because they offer a specific 136 feature (e.g., good reachability or encrypted transport) or because 137 the network resolver lacks a specific feature (e.g., strong privacy 138 policy or unfiltered responses). These public resolvers have tended 139 to be at the forefront of adoption of privacy-related enhancements 140 but it is anticipated that operators of other resolver services will 141 follow. 143 Whilst protocols that encrypt DNS messages on the wire provide 144 protection against certain attacks, the resolver operator still has 145 (in principle) full visibility of the query data and transport 146 identifiers for each user. Therefore, a trust relationship (whether 147 explicit or implicit) is assumed to exist between each user and the 148 operator of the resolver(s) used by that user. The ability of the 149 operator to provide a transparent, well documented, and secure 150 privacy service will likely serve as a major differentiating factor 151 for privacy conscious users if they make an active selection of which 152 resolver to use. 
154 It should also be noted that the choice of a user to configure a 155 single resolver (or a fixed set of resolvers) and an encrypted 156 transport to use in all network environments has both advantages and 157 disadvantages. For example, the user has a clear expectation of 158 which resolvers have visibility of their query data. However, this 159 resolver/transport selection may provide an added mechanism to track 160 them as they move across network environments. Commitments from 161 resolver operators to minimize such tracking as users move between 162 networks are also likely to play a role in user selection of 163 resolvers. 165 More recently, the global legislative landscape with regard to 166 personal data collection, retention, and pseudonymization has seen 167 significant activity. Providing detailed practice advice about these 168 areas to the operator is out of scope, but Section 5.3.3 describes 169 some mitigations of data sharing risk. 171 This document has two main goals: 173 o To provide operational and policy guidance related to DNS over 174 encrypted transports and to outline recommendations for data 175 handling for operators of DNS privacy services. 177 o To introduce the Recursive operator Privacy Statement (RPS) and 178 present a framework to assist writers of an RPS. An RPS is a 179 document that an operator should publish which outlines their 180 operational practices and commitments with regard to privacy, 181 thereby providing a means for clients to evaluate both the 182 measurable and claimed privacy properties of a given DNS privacy 183 service. The framework identifies a set of elements and specifies 184 an outline order for them. This document does not, however, 185 define a particular privacy statement, nor does it seek to provide 186 legal advice as to the contents. 188 A desired operational impact is that all operators (both those 189 providing resolvers within networks and those operating large public 190 services) can demonstrate their commitment to user privacy thereby 191 driving all DNS resolution services to a more equitable footing. 192 Choices for users would (in this ideal world) be driven by other 193 factors, e.g., differing security policies or minor differences in 194 operator policy, rather than gross disparities in privacy concerns. 196 Community insight about operational practices can 197 change quickly, and experience shows that a Best Current Practice 198 (BCP) document about privacy and security is a point-in-time 199 statement. Readers are advised to seek out any updates that apply to 200 this document. 202 2. Scope 204 "DNS Privacy Considerations" [RFC7626] describes the general privacy 205 issues and threats associated with the use of the DNS by Internet 206 users and much of the threat analysis here is lifted from that 207 document and from [RFC6973]. However, this document is limited in 208 scope to best practice considerations for the provision of DNS 209 privacy services by servers (recursive resolvers) to clients (stub 210 resolvers or forwarders). Choices that are made exclusively by the 211 end user, or those for operators of authoritative nameservers, are out 212 of scope. 214 This document includes (but is not limited to) considerations in the 215 following areas: 217 1. Data "on the wire" between a client and a server. 219 2. Data "at rest" on a server (e.g., in logs). 221 3. Data "sent onwards" from the server (either on the wire or shared 222 with a third party).
224 Whilst the issues raised here are targeted at those operators who 225 choose to offer a DNS privacy service, considerations for areas 2 and 226 3 could equally apply to operators who only offer DNS over 227 unencrypted transports but who would otherwise like to align with 228 privacy best practice. 230 3. Privacy-related documents 232 There are various documents that describe protocol changes that have 233 the potential to either increase or decrease the privacy properties 234 of the DNS in various ways. Note that this does not imply that some 235 documents are good or bad, better or worse, just that (for example) 236 some features may bring functional benefits at the price of a 237 reduction in privacy and conversely some features increase privacy 238 with an accompanying increase in complexity. A selection of the most 239 relevant documents is listed in Appendix A for reference. 241 4. Terminology 243 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 244 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 245 "OPTIONAL" in this document are to be interpreted as described in BCP 246 14 [RFC2119] [RFC8174] when, and only when, they appear in all 247 capitals, as shown here. 249 DNS terminology is as described in [RFC8499] with one modification: 250 we restate the clause in the original definition of Privacy-enabling 251 DNS server in [RFC8310] to include the requirement that a DNS over 252 (D)TLS server should also offer at least one of the credentials 253 described in Section 8 of [RFC8310] and implement the (D)TLS profile 254 described in Section 9 of [RFC8310]. 256 Other Terms: 258 o RPS: Recursive operator Privacy Statement, see Section 6. 260 o DNS privacy service: The service that is offered via a privacy- 261 enabling DNS server and is documented either in an informal 262 statement of policy and practice with regard to users' privacy or a 263 formal RPS. 265 5. Recommendations for DNS privacy services 267 In the following sections we first outline the threats relevant to 268 the specific topic and then discuss the potential actions that can be 269 taken to mitigate them. 271 We describe two classes of threats: 273 o Threats described in [RFC6973] 'Privacy Considerations for 274 Internet Protocols' 276 * Privacy terminology, threats to privacy, and mitigations as 277 described in Sections 3, 5, and 6 of [RFC6973]. 279 o DNS Privacy Threats 281 * These are threats to the users and operators of DNS privacy 282 services that are not directly covered by [RFC6973]. These may 283 be more operational in nature such as certificate management or 284 service availability issues. 286 We describe three classes of actions that operators of DNS privacy 287 services can take: 289 o Threat mitigation for well understood and documented privacy 290 threats to the users of the service and in some cases to the 291 operators of the service. 293 o Optimization of privacy services from an operational or management 294 perspective. 296 o Additional options that could further enhance the privacy and 297 usability of the service. 299 This document does not specify policy, only best practice; however, 300 for DNS privacy services to be considered compliant with these best 301 practice guidelines they SHOULD implement (where appropriate) all: 303 o Threat mitigations to be minimally compliant. 305 o Optimizations to be moderately compliant. 307 o Additional options to be maximally compliant.
309 The rest of this document does not use normative language but instead 310 refers only to the three differing classes of action which correspond 311 to the three named levels of compliance stated above. However, 312 compliance (to the indicated level) remains a normative requirement. 314 5.1. On the wire between client and server 316 In this section we consider both data on the wire and the service 317 provided to the client. 319 5.1.1. Transport recommendations 321 [RFC6973] Threats: 323 o Surveillance: 325 * Passive surveillance of traffic on the wire 327 DNS Privacy Threats: 329 o Active injection of spurious data or traffic. 331 Mitigations: 333 A DNS privacy service can mitigate these threats by providing service 334 over one or more of the following transports 336 o DNS over TLS (DoT) [RFC7858] and [RFC8310]. 338 o DNS over HTTPS (DoH) [RFC8484]. 340 It is noted that a DNS privacy service can also be provided over DNS 341 over DTLS [RFC8094], however this is an Experimental specification 342 and there are no known implementations at the time of writing. 344 It is also noted that DNS privacy service might be provided over 345 IPSec, DNSCrypt, or VPNs. However, there are no specific RFCs that 346 cover the use of these transports for DNS and any discussion of best 347 practice for providing such a service is out of scope for this 348 document. 350 Whilst encryption of DNS traffic can protect against active injection 351 on the paths traversed by the encrypted connection this does not 352 diminish the need for DNSSEC, see Section 5.1.4. 354 5.1.2. Authentication of DNS privacy services 356 [RFC6973] Threats: 358 o Surveillance: 360 * Active attacks on client resolver configuration 362 Mitigations: 364 DNS privacy services should ensure clients can authenticate the 365 server. Note that this, in effect, commits the DNS privacy service 366 to a public identity users will trust. 368 When using DoT, clients that select a 'Strict Privacy' usage profile 369 [RFC8310] (to mitigate the threat of active attack on the client) 370 require the ability to authenticate the DNS server. To enable this, 371 DNS privacy services that offer DNS over TLS need to provide 372 credentials that will be accepted by the client's trust model, in the 373 form of either X.509 certificates [RFC5280] or Subject Public Key 374 Info (SPKI) pin sets [RFC8310]. 376 When offering DoH [RFC8484], HTTPS requires authentication of the 377 server as part of the protocol. 379 Server operators should also follow the best practices with regard to 380 certificate revocation as described in [RFC7525]. 382 5.1.2.1. Certificate management 384 Anecdotal evidence to date highlights the management of certificates 385 as one of the more challenging aspects for operators of traditional 386 DNS resolvers that choose to additionally provide a DNS privacy 387 service as management of such credentials is new to those DNS 388 operators. 390 It is noted that SPKI pin set management is described in [RFC7858] 391 but that key pinning mechanisms in general have fallen out of favor 392 operationally for various reasons such as the logistical overhead of 393 rolling keys. 395 DNS Privacy Threats: 397 o Invalid certificates, resulting in an unavailable service which 398 might force a user to fallback to cleartext. 400 o Mis-identification of a server by a client e.g., typos in DoH URL 401 templates [RFC8484] or authentication domain names [RFC8310] which 402 accidentally direct clients to attacker controlled servers. 
404 Mitigations: 406 It is recommended that operators: 408 o Follow the guidance in Section 6.5 of [RFC7525] with regards to 409 certificate revocation. 411 o Automate the generation, publication, and renewal of certificates. 412 For example, ACME [RFC8555] provides a mechanism to actively 413 manage certificates through automation and has been implemented by 414 a number of certificate authorities. 416 o Monitor certificates to prevent accidental expiration of 417 certificates. 419 o Choose a short, memorable authentication domain name for the 420 service. 422 5.1.3. Protocol recommendations 424 5.1.3.1. DoT 426 DNS Privacy Threats: 428 o Known attacks on TLS such as those described in [RFC7457]. 430 o Traffic analysis, for example: [Pitfalls-of-DNS-Encryption]. 432 o Potential for client tracking via transport identifiers. 434 o Blocking of well known ports (e.g., 853 for DoT). 436 Mitigations: 438 In the case of DoT, TLS profiles from Section 9 of [RFC8310] and the 439 Countermeasures to DNS Traffic Analysis from section 11.1 of 440 [RFC8310] provide strong mitigations. This includes but is not 441 limited to: 443 o Adhering to [RFC7525]. 445 o Implementing only (D)TLS 1.2 or later as specified in [RFC8310]. 447 o Implementing EDNS(0) Padding [RFC7830] using the guidelines in 448 [RFC8467] or a successor specification. 450 o Servers should not degrade in any way the query service level 451 provided to clients that do not use any form of session resumption 452 mechanism, such as TLS session resumption [RFC5077] with TLS 1.2, 453 section 2.2 of [RFC8446], or Domain Name System (DNS) Cookies 454 [RFC7873]. 456 o A DoT privacy service on both port 853 and 443. If the operator 457 deploys DoH on the same IP address this requires the use of the 458 'dot' ALPN value [dot-ALPN]. 460 Optimizations: 462 o Concurrent processing of pipelined queries, returning responses as 463 soon as available, potentially out of order as specified in 464 [RFC7766]. This is often called 'OOOR' - out-of-order responses 465 (providing processing performance similar to HTTP multiplexing). 467 o Management of TLS connections to optimize performance for clients 468 using [RFC7766] and EDNS(0) Keepalive [RFC7828] 470 Additional Options: 472 Management of TLS connections to optimize performance for clients 473 using DNS Stateful Operations [RFC8490]. 475 5.1.3.2. DoH 477 DNS Privacy Threats: 479 o Known attacks on TLS such as those described in [RFC7457]. 481 o Traffic analysis, for example: [DNS-Privacy-not-so-private]. 483 o Potential for client tracking via transport identifiers. 485 Mitigations: 487 o Clients must be able to forgo the use of HTTP Cookies [RFC6265] 488 and still use the service. 490 o Use of HTTP/2 padding and/or EDNS(0) padding as described in 491 Section 9 of [RFC8484] 493 o Clients should not be required to include any headers beyond the 494 absolute minimum to obtain service from a DoH server. (See 495 Section 6.1 of [I-D.ietf-httpbis-bcp56bis].) 497 5.1.4. DNSSEC 499 DNS Privacy Threats: 501 o Users may be directed to bogus IP addresses which, depending on 502 the application, protocol and authentication method, might lead 503 users to reveal personal information to attackers. One example is 504 a website that doesn't use TLS or its TLS authentication can 505 somehow be subverted. 507 Mitigations: 509 o All DNS privacy services must offer a DNS privacy service that 510 performs Domain Name System Security Extensions (DNSSEC) 511 validation. 
In addition they must be able to provide the DNSSEC 512 RRs to the client so that it can perform its own validation. 514 The addition of encryption to DNS does not remove the need for DNSSEC 515 [RFC4033] - they are independent and fully compatible protocols, each 516 solving different problems. The use of one does not diminish the 517 need nor the usefulness of the other. 519 While the use of an authenticated and encrypted transport protects 520 origin authentication and data integrity between a client and a DNS 521 privacy service, it provides no proof (for a non-validating client) 522 that the data provided by the DNS privacy service was actually DNSSEC 523 authenticated. As with cleartext DNS the user is still solely 524 trusting the AD bit (if present) set by the resolver. 526 It should also be noted that the use of an encrypted transport for 527 DNS actually solves many of the practical issues encountered by DNS 528 validating clients, e.g., interference by middleboxes with cleartext 529 DNS payloads is completely avoided. In this sense a validating 530 client that uses a DNS privacy service which supports DNSSEC has a 531 far simpler task in terms of DNSSEC Roadblock avoidance [RFC8027]. 533 5.1.5. Availability 535 DNS Privacy Threats: 537 o A failed DNS privacy service could force the user to switch 538 providers, fall back to cleartext or accept no DNS service for the 539 outage. 541 Mitigations: 543 A DNS privacy service should strive to engineer encrypted services to 544 the same availability level as any unencrypted services they provide. 545 Particular care should be taken to protect DNS privacy services 546 against denial-of-service attacks, as experience has shown that 547 unavailability of DNS resolving because of attacks is a significant 548 motivation for users to switch services. See, for example, 549 Section IV-C of [Passive-Observations-of-a-Large-DNS]. 551 Techniques such as those described in Section 10 of [RFC7766] can be 552 of use to operators to defend against such attacks. 554 5.1.6. Service options 556 DNS Privacy Threats: 558 o Unfairly disadvantaging users of the privacy service with respect 559 to the services available. This could force the user to switch 560 providers, fall back to cleartext or accept no DNS service for the 561 outage. 563 Mitigations: 565 A DNS privacy service should deliver the same level of service as 566 offered on unencrypted channels in terms of options such as 567 filtering (or lack thereof), DNSSEC validation, etc. 569 5.1.7. Impact of Encryption on Monitoring by DNS Privacy Service 570 Operators 572 DNS Privacy Threats: 574 o Increased use of encryption can impact DNS privacy service 575 operators' ability to monitor traffic and therefore manage their DNS 576 servers [RFC8404]. 578 Many monitoring solutions for DNS traffic rely on the plain text 579 nature of this traffic and work by intercepting traffic on the wire, 580 either using a separate view on the connection between clients and 581 the resolver, or as a separate process on the resolver system that 582 inspects network traffic. Such solutions will no longer function 583 when traffic between clients and resolvers is encrypted. Many DNS 584 privacy service operators still have a need to inspect DNS traffic, 585 e.g., to monitor for network security threats. Operators may 586 therefore need to invest in alternative means of monitoring that 587 relies on either the resolver software directly, or exporting DNS 588 traffic from the resolver using e.g., [dnstap]. 590 Optimization: 592 When implementing alternative means for traffic monitoring, operators 593 of a DNS privacy service should consider using privacy-conscious 594 means to do so (see Section 5.2 for more details on data 595 handling and also the discussion on the use of Bloom Filters in 596 Appendix B).
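As a non-normative illustration of such privacy-conscious processing, the following sketch (in Python) derives only aggregate, in-memory statistics from exported query records: counts per query type and per truncated client prefix, rather than a log of full client addresses and query names. The record format and field names are assumptions made for illustration (for example, records decoded from a [dnstap] stream); they are not a recommendation of any particular tool.

   # Non-normative sketch: aggregate statistics from exported query
   # records without retaining per-client query logs.  The input record
   # format ({"client_ip": ..., "qtype": ...}) is an illustrative
   # assumption, e.g. the result of decoding a dnstap stream.
   import ipaddress
   from collections import Counter

   def aggregate(records, v4_prefix=24, v6_prefix=56):
       """Count queries per QTYPE and per truncated client prefix.

       Only coarse, in-memory counters are kept; full client addresses
       and query names are never written out.
       """
       qtype_counts = Counter()
       prefix_counts = Counter()
       for rec in records:
           qtype_counts[rec["qtype"]] += 1
           addr = ipaddress.ip_address(rec["client_ip"])
           plen = v4_prefix if addr.version == 4 else v6_prefix
           net = ipaddress.ip_network(f"{addr}/{plen}", strict=False)
           prefix_counts[str(net)] += 1
       return qtype_counts, prefix_counts

   if __name__ == "__main__":
       sample = [{"client_ip": "192.0.2.10", "qtype": "A"},
                 {"client_ip": "2001:db8::1", "qtype": "AAAA"}]
       print(aggregate(sample))

Even aggregated counters of this kind should be reviewed against the data handling guidance in Section 5.2 before being retained or shared.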
598 5.1.8. Limitations of fronting a DNS privacy service with a pure TLS 599 proxy 601 DNS Privacy Threats: 603 o Limited ability to manage or monitor incoming connections using 604 DNS-specific techniques. 606 o Misconfiguration (e.g., of the target server address in the proxy 607 configuration) could lead to data leakage if the proxy to target 608 server path is not encrypted. 610 Optimization: 612 Some operators may choose to implement DoT using a TLS proxy (e.g., 613 [nginx], [haproxy], or [stunnel]) in front of a DNS nameserver 614 because of proven robustness and capacity when handling large numbers 615 of client connections, load balancing capabilities and good tooling. 616 Currently, however, because such proxies typically have no specific 617 handling of DNS as a protocol over TLS or DTLS, using them can 618 restrict traffic management at the proxy layer and at the DNS server. 619 For example, all traffic received by a nameserver behind such a proxy 620 will appear to originate from the proxy and DNS techniques such as 621 ACLs, RRL, or DNS64 will be hard or impossible to implement in the 622 nameserver. 624 Operators may choose to use a DNS-aware proxy such as [dnsdist] which 625 offers custom options (similar to those proposed in 626 [I-D.bellis-dnsop-xpf]) to add source information to packets to 627 address this shortcoming. It should be noted that such options 628 potentially significantly increase the leaked information in the 629 event of a misconfiguration. 631 5.2. Data at rest on the server 633 5.2.1. Data handling 635 [RFC6973] Threats: 637 o Surveillance. 639 o Stored data compromise. 641 o Correlation. 643 o Identification. 645 o Secondary use. 647 o Disclosure. 649 Other Threats 651 o Contravention of legal requirements not to process user data. 653 Mitigations: 655 The following are recommendations relating to common activities for 656 DNS service operators, and in all cases data retention should be 657 minimized or completely avoided if possible for DNS privacy services. 658 If data is retained it should be encrypted and either aggregated, 659 pseudonymized, or anonymized whenever possible. In general, the 660 principle of data minimization described in [RFC6973] should be 661 applied. 663 o Transient data (e.g., that is used for real time monitoring and 664 threat analysis which might be held only in memory) should be 665 retained for the shortest possible period deemed operationally 666 feasible. 668 o The retention period of DNS traffic logs should be only that 669 required to sustain operation of the service and, to the extent 670 that such exists, meet regulatory requirements (a retention check of this kind is sketched at the end of this section). 672 o DNS privacy services should not track users except for the 673 particular purpose of detecting and remedying technically 674 malicious (e.g., DoS) or anomalous use of the service. 676 o Data access should be minimized to only those personnel who 677 require access to perform operational duties. It should also be 678 limited to anonymized or pseudonymized data where operationally 679 feasible, with access to full logs (if any are held) only 680 permitted when necessary. 682 Optimizations: 684 o Consider use of full disk encryption for logs and data capture 685 storage.
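The retention guidance above can be partly automated. The following non-normative sketch (in Python) removes capture files that are older than a configured retention window; the directory, file extension, and retention period are illustrative assumptions and would need to match the operator's own logging arrangements.

   # Non-normative sketch: enforce a short retention window for any
   # traffic captures that are kept on disk.  Path, file extension and
   # retention period are illustrative assumptions.
   import time
   from pathlib import Path

   RETENTION_SECONDS = 7 * 24 * 3600   # e.g. one week

   def purge_old_captures(directory="/var/log/dns-captures"):
       base = Path(directory)
       if not base.is_dir():
           return
       cutoff = time.time() - RETENTION_SECONDS
       for f in base.glob("*.pcap"):
           if f.stat().st_mtime < cutoff:
               f.unlink()   # delete captures older than the window

   if __name__ == "__main__":
       purge_old_captures()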
687 5.2.2. Data minimization of network traffic 689 Data minimization refers to collecting, using, disclosing, and 690 storing the minimal data necessary to perform a task, and this can be 691 achieved by removing or obfuscating privacy-sensitive information in 692 network traffic logs. This is typically personal data, or data that 693 can be used to link a record to an individual, but it may also reveal 694 other confidential information, for example about the 695 structure of an internal corporate network. 697 The problem of effectively ensuring that DNS traffic logs contain no 698 or minimal privacy-sensitive information is not one that currently 699 has a generally agreed solution or any standards to inform this 700 discussion. This section presents an overview of current techniques 701 simply to provide a reference on the current status of this work. 703 Research into data minimization techniques (and particularly IP 704 address pseudonymization/anonymization) was sparked in the late 705 1990s/early 2000s, partly driven by the desire to share significant 706 corpuses of traffic captures for research purposes. Several 707 techniques reflecting different requirements in this area and 708 different performance/resource tradeoffs emerged over the course of 709 the decade. Developments over the last decade have been both a 710 blessing and a curse; the large increase in size between an IPv4 and 711 an IPv6 address, for example, renders some techniques impractical, 712 but also makes available a much larger amount of input entropy, the 713 better to resist brute force re-identification attacks that have 714 grown in practicality over the period. 716 Techniques employed may be broadly categorized as either 717 anonymization or pseudonymization. The following discussion uses the 718 definitions from [RFC6973] Section 3, with additional observations 719 from [van-Dijkhuizen-et-al.]. 721 o Anonymization. To enable anonymity of an individual, there must 722 exist a set of individuals that appear to have the same 723 attribute(s) as the individual. To the attacker or the observer, 724 these individuals must appear indistinguishable from each other. 726 o Pseudonymization. The true identity is deterministically replaced 727 with an alternate identity (a pseudonym). When the 728 pseudonymization schema is known, the process can be reversed, so 729 the original identity becomes known again. 731 In practice there is a fine line between the two; for example, how to 732 categorize a deterministic algorithm for data minimization of IP 733 addresses that produces a group of pseudonyms for a single given 734 address. 736 5.2.3. IP address pseudonymization and anonymization methods 738 A major privacy risk in DNS is connecting DNS queries to an 739 individual, and the major vector for this in DNS traffic is the client 740 IP address. 742 There is active discussion in the space of effective pseudonymization 743 of IP addresses in DNS traffic logs; however, there seems to be no 744 single solution that is widely recognized as suitable for all or most 745 use cases. There are also as yet no standards for this that are 746 unencumbered by patents. 748 Appendix B provides a more detailed survey of various techniques 749 employed or under development in 2019.
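As one non-normative illustration of the pseudonymization end of this spectrum, the sketch below (in Python) replaces a client IP address with a keyed HMAC-SHA-256 token. This is pseudonymization rather than anonymization - anyone holding the key can recompute the mapping - and it is not format or prefix preserving; Appendix B surveys published schemes such as Crypto-PAn and ipcipher that retain the IP address format. The key handling shown is purely illustrative.

   # Non-normative sketch: keyed pseudonymization of client IP
   # addresses before they are written to a traffic log.
   import hmac
   import hashlib
   import ipaddress

   def pseudonymize_ip(ip: str, key: bytes) -> str:
       """Return a stable, keyed pseudonym for an IP address."""
       packed = ipaddress.ip_address(ip).packed
       return hmac.new(key, packed, hashlib.sha256).hexdigest()[:16]

   if __name__ == "__main__":
       key = b"illustrative-key-store-and-rotate-securely"
       print(pseudonymize_ip("192.0.2.10", key))
       print(pseudonymize_ip("2001:db8::1", key))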
751 5.2.4. Pseudonymization, anonymization, or discarding of other 752 correlation data 754 DNS Privacy Threats: 756 o Fingerprinting of the client OS via various means including: IP 757 TTL/Hoplimit, TCP parameters (e.g., window size, ECN support, 758 SACK), OS-specific DNS query patterns (e.g., for network 759 connectivity, captive portal detection, or OS-specific updates). 761 o Fingerprinting of the client application or TLS library by, e.g., 762 HTTP headers (e.g., User-Agent, Accept, Accept-Encoding), TLS 763 version/Cipher suite combinations, or other connection parameters. 765 o Correlation of queries on multiple TCP sessions originating from 766 the same IP address. 768 o Correlation of queries on multiple TLS sessions originating from 769 the same client, including via session resumption mechanisms. 771 o Resolvers _might_ receive client identifiers, e.g., MAC addresses 772 in EDNS(0) options - some Customer-premises equipment (CPE) 773 devices are known to add them [MAC-address-EDNS]. 775 Mitigations: 777 o Data minimization or discarding of such correlation data. 779 5.2.5. Cache snooping 781 [RFC6973] Threats: 783 o Surveillance: 785 * Profiling of client queries by malicious third parties. 787 Mitigations: 789 o See [ISC-Knowledge-database-on-cache-snooping] for an example 790 discussion on defending against cache snooping. Options proposed 791 include limiting access to a server and limiting non-recursive 792 queries. 794 5.3. Data sent onwards from the server 796 In this section we consider both data sent on the wire in upstream 797 queries and data shared with third parties. 799 5.3.1. Protocol recommendations 801 [RFC6973] Threats: 803 o Surveillance: 805 * Transmission of identifying data upstream. 807 Mitigations: 809 As specified in [RFC8310] for DoT, but applicable to any DNS privacy 810 service, the server should: 812 o Implement QNAME minimization [RFC7816]. 814 o Honor a SOURCE PREFIX-LENGTH set to 0 in a query containing the 815 EDNS(0) Client Subnet (ECS) option ([RFC7871] Section 7.1.2). 817 Optimizations: 819 o As per Section 2 of [RFC7871] the server should either: 821 * not use the ECS option in upstream queries at all, or 823 * offer alternative services, one that sends ECS and one that 824 does not. 826 If operators do offer a service that sends the ECS option upstream, 827 they should use the shortest prefix that is operationally feasible 828 and ideally use a policy of allowlisting the upstream servers to which 829 ECS is sent, in order to reduce data leakage (a sketch of this approach appears at the end of this section). Operators should make clear in 830 any policy statement what prefix length they actually send and the 831 specific policy used. 833 Allowlisting has the benefit that not only does the operator know 834 which upstream servers can use ECS, but the operator can also 835 decide which upstream servers apply privacy policies that the 836 operator is happy with. However, some operators consider allowlisting 837 to incur significant operational overhead compared to dynamic 838 detection of ECS support on authoritative servers. 840 Additional options: 842 o Aggressive Use of DNSSEC-Validated Cache [RFC8198] and [RFC8020] 843 (NXDOMAIN: There Really Is Nothing Underneath) to reduce the 844 number of queries to authoritative servers and thereby increase privacy. 846 o Run a copy of the root zone on loopback [RFC8806] to avoid making 847 queries to the root servers that might leak information.
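The ECS guidance above lends itself to a simple illustration. The following non-normative sketch (in Python) computes the client-subnet value, if any, to use in an outgoing ECS option, combining an allowlist of upstream servers with truncation of the client address to a short prefix. The prefix lengths and the allowlist contents are assumptions for illustration; how the resulting prefix is actually placed in an EDNS(0) option depends entirely on the resolver implementation.

   # Non-normative sketch: decide what client-subnet information, if
   # any, to send upstream.  Allowlist contents and prefix lengths are
   # illustrative assumptions.
   import ipaddress

   ECS_ALLOWLIST = {"ns.example.net"}

   def ecs_prefix_for(client_ip, upstream, v4_len=24, v6_len=56):
       """Return (network_address, prefix_length) for an outgoing ECS
       option, or None if no client subnet should be sent upstream."""
       if upstream not in ECS_ALLOWLIST:
           return None
       addr = ipaddress.ip_address(client_ip)
       plen = v4_len if addr.version == 4 else v6_len
       net = ipaddress.ip_network(f"{addr}/{plen}", strict=False)
       return str(net.network_address), plen

   if __name__ == "__main__":
       print(ecs_prefix_for("192.0.2.10", "ns.example.net"))  # ('192.0.2.0', 24)
       print(ecs_prefix_for("192.0.2.10", "other.example"))   # None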
849 5.3.2. Client query obfuscation 851 Additional options: 853 Since queries from recursive resolvers to authoritative servers are 854 performed using cleartext (at the time of writing), resolver services 855 need to consider the extent to which they may be directly leaking 856 information about their client community via these upstream queries 857 and what they can do to mitigate this further. Note that even when 858 all the relevant techniques described above are employed, there may 859 still be attacks possible, e.g., [Pitfalls-of-DNS-Encryption]. For 860 example, a resolver with a very small community of users risks 861 exposing data in this way and ought to obfuscate this traffic by 862 mixing it with 'generated' traffic to make client characterization 863 harder. The resolver could also employ aggressive pre-fetch 864 techniques as a further measure to counter traffic analysis. 866 At the time of writing, there are no standardized or widely recognized 867 techniques to perform such obfuscation or bulk pre-fetches. 869 Another technique that particularly small operators may consider is 870 forwarding local traffic to a larger resolver (with a privacy policy 871 that aligns with their own practices) over an encrypted protocol so 872 that the upstream queries are obfuscated among those of the large 873 resolver. 875 5.3.3. Data sharing 877 [RFC6973] Threats: 879 o Surveillance. 881 o Stored data compromise. 883 o Correlation. 885 o Identification. 887 o Secondary use. 889 o Disclosure. 891 DNS Privacy Threats: 893 o Contravention of legal requirements not to process user data. 895 Mitigations: 897 Operators should not share identifiable data with third parties. 899 If operators choose to share identifiable data with third parties in 900 specific circumstances, they should publish the terms under which data 901 is shared. 903 Operators should consider including specific guidelines for the 904 collection of aggregated and/or anonymized data for research 905 purposes, within or outside of their own organization. This can 906 benefit not only the operator (through inclusion in novel research) 907 but also the wider Internet community. See the policy published by 908 SURFnet [SURFnet-policy] on data sharing for research as an example. 910 6. Recursive operator Privacy Statement (RPS) 912 To be compliant with this Best Current Practice document, a DNS 913 recursive operator SHOULD publish a Recursive operator Privacy 914 Statement (RPS). Adopting the outline, and including the headings in 915 the order provided, is a benefit to persons comparing RPSs from 916 multiple operators. 918 Appendix C provides a comparison of some existing policy and privacy 919 statements. 921 6.1. Outline of an RPS 923 The contents of Section 6.1.1 and Section 6.1.2 are non-normative, 924 other than the order of the headings. Material under each topic is 925 present to assist the operator developing their own RPS and: 927 o Relates _only_ to matters around the technical operation of DNS 928 privacy services, and not to any other matters. 930 o Does not attempt to offer an exhaustive list for the contents of 931 an RPS. 933 o Is not intended to form the basis of any legal/compliance 934 documentation. 936 Appendix D provides an example (also non-normative) of an RPS 937 statement for a specific operator scenario. 939 6.1.1. Policy 941 1. Treatment of IP addresses. Make an explicit statement that IP 942 addresses are treated as personal data. 944 2. Data collection and sharing.
Specify clearly what data 945 (including IP addresses) is: 947 * Collected and retained by the operator, and for what period it 948 is retained. 950 * Shared with partners. 952 * Shared, sold, or rented to third parties. 954 and in each case whether it is aggregated, pseudonymized, or 955 anonymized and the conditions of data transfer. Where possible 956 provide details of the techniques used for the above data 957 minimizations. 959 3. Exceptions. Specify any exceptions to the above, for example, 960 technically malicious or anomalous behavior. 962 4. Associated entities. Declare and explicitly enumerate any 963 partners, third-party affiliations, or sources of funding. 965 5. Correlation. Whether user DNS data is correlated or combined 966 with any other personal information held by the operator. 968 6. Result filtering. This section should explain whether the 969 operator filters, edits or alters in any way the replies that it 970 receives from the authoritative servers for each DNS zone, before 971 forwarding them to the clients. For each category listed below, 972 the operator should also specify how the filtering lists are 973 created and managed, whether it employs any third-party sources 974 for such lists, and which ones. 976 * Specify if any replies are being filtered out or altered for 977 network and computer security reasons (e.g., preventing 978 connections to malware-spreading websites or botnet control 979 servers). 981 * Specify if any replies are being filtered out or altered for 982 mandatory legal reasons, due to applicable legislation or 983 binding orders by courts and other public authorities. 985 * Specify if any replies are being filtered out or altered for 986 voluntary legal reasons, due to an internal policy by the 987 operator aiming at reducing potential legal risks. 989 * Specify if any replies are being filtered out or altered for 990 any other reason, including commercial ones. 992 6.1.2. Practice 994 [NOTE FOR RFC EDITOR: Please update this section to use letters for 995 the sub-bullet points instead of numbers. This was not done during 996 review because the markdown tool used to write the document did not 997 support it.] 999 Communicate the current operational practices of the service. 1001 1. Deviations. Specify any temporary or permanent deviations from 1002 the policy for operational reasons. 1004 2. Client-facing capabilities. With reference to each subsection of 1005 Section 5.1, provide specific details of which capabilities 1006 (transport, DNSSEC, padding, etc.) are provided on which client- 1007 facing address/port combination or DoH URI template. For 1008 Section 5.1.2, clearly specify which specific authentication 1009 mechanisms are supported for each endpoint that offers DoT: 1011 1. The authentication domain name to be used (if any). 1013 2. The SPKI pin sets to be used (if any) and policy for rolling 1014 keys. 1016 3. Upstream capabilities. With reference to Section 5.3, 1017 provide specific details of which capabilities are provided 1018 upstream for data sent to authoritative servers. 1020 4. Support. Provide contact/support information for the service. 1022 5. Data Processing. This section can optionally communicate links 1023 to and the high-level contents of any separate statements the 1024 operator has published which cover applicable data processing 1025 legislation or agreements with regard to the location(s) of 1026 service provision. 1028 6.2.
Enforcement/accountability 1030 Transparency reports may help with building user trust that operators 1031 adhere to their policies and practices. 1033 Independent monitoring or analysis could be performed where possible 1034 of: 1036 o ECS, QNAME minimization, EDNS(0) padding, etc. 1038 o Filtering. 1040 o Uptime. 1042 This is by analogy with several TLS or website analysis tools that 1043 are currently available e.g., [SSL-Labs] or [Internet.nl]. 1045 Additionally operators could choose to engage the services of a third 1046 party auditor to verify their compliance with their published RPS. 1048 7. IANA considerations 1050 None 1052 8. Security considerations 1054 Security considerations for DNS over TCP are given in [RFC7766], many 1055 of which are generally applicable to session based DNS. Guidance on 1056 operational requirements for DNS over TCP are also available in [I- 1057 D.dnsop-dns-tcp-requirements]. Security considerations for DoT are 1058 given in [RFC7858] and [RFC8310], those for DoH in [RFC8484]. 1060 Security considerations for DNSSEC are given in [RFC4033], [RFC4034] 1061 and [RFC4035]. 1063 9. Acknowledgements 1065 Many thanks to Amelia Andersdotter for a very thorough review of the 1066 first draft of this document and Stephen Farrell for a thorough 1067 review at WGLC and for suggesting the inclusion of an example RPS. 1068 Thanks to John Todd for discussions on this topic, and to Stephane 1069 Bortzmeyer, Puneet Sood and Vittorio Bertola for review. Thanks to 1070 Daniel Kahn Gillmor, Barry Green, Paul Hoffman, Dan York, Jon Reed, 1071 Lorenzo Colitti for comments at the mic. Thanks to Loganaden 1072 Velvindron for useful updates to the text. 1074 Sara Dickinson thanks the Open Technology Fund for a grant to support 1075 the work on this document. 1077 10. Contributors 1079 The below individuals contributed significantly to the document: 1081 John Dickinson 1082 Sinodun Internet Technologies 1083 Magdalen Centre 1084 Oxford Science Park 1085 Oxford OX4 4GA 1086 United Kingdom 1088 Jim Hague 1089 Sinodun Internet Technologies 1090 Magdalen Centre 1091 Oxford Science Park 1092 Oxford OX4 4GA 1093 United Kingdom 1095 11. Changelog 1097 draft-ietf-dprive-bcp-op-13 1099 o Minor edits 1101 draft-ietf-dprive-bcp-op-12 1103 o Change DROP to RPS throughout 1105 draft-ietf-dprive-bcp-op-11 1107 o Improve text around use of normative language 1109 o Fix section 5.1.3.2 bullets 1111 o Improve text in 6.1.2. item 2. 1113 o Rework text of 6.1.2. item 5 and update example DROP 1115 o Various editorial improvements 1117 draft-ietf-dprive-bcp-op-10 1119 o Remove direct references to draft-ietf-dprive-rfc7626-bis, instead 1120 have one general reference RFC7626 1122 o Clarify that the DROP statement outline is non-normative and add 1123 some further qualifications about content 1125 o Update wording on data sharing to remove explicit discussion of 1126 consent 1128 o Move table in section 5.2.3 to an appendix 1130 o Move section 6.2 to an appendix 1132 o Corrections to references, typos and editorial updates from 1133 initial IESG comments. 1135 draft-ietf-dprive-bcp-op-09 1137 o Fix references so they match the correct section numbers in draft- 1138 ietf-dprive-rfc7626-bis-05 1140 draft-ietf-dprive-bcp-op-08 1142 o Address IETF Last call comments. 1144 o Editorial changes following AD review. 1146 o Change all URIs to Informational References. 1148 draft-ietf-dprive-bcp-op-06 1150 o Final minor changes from second WGLC. 
1152 draft-ietf-dprive-bcp-op-05 1154 o Remove some text on consent: 1156 * Paragraph 2 in section 5.3.3 1158 * Item 6 in the DROP Practice statement (and example) 1160 o Remove .onion and TLSA options 1162 o Include ACME as a reference for certificate management 1164 o Update text on session resumption usage 1166 o Update section 5.2.4 on client fingerprinting 1168 draft-ietf-dprive-bcp-op-04 1170 o Change DPPPS to DROP (DNS Recursive Operator Privacy) statement 1172 o Update structure of DROP slightly 1174 o Add example DROP statement 1176 o Add text about restricting access to full logs 1178 o Move table in section 5.2.3 from SVG to inline table 1180 o Fix many editorial and reference nits 1182 draft-ietf-dprive-bcp-op-03 1184 o Add paragraph about operational impact 1186 o Move DNSSEC requirement out of the Appendix into main text as a 1187 privacy threat that should be mitigated 1189 o Add TLS version/Cipher suite as tracking threat 1190 o Add reference to Mozilla TRR policy 1192 o Remove several TODOs and QUESTIONS. 1194 draft-ietf-dprive-bcp-op-02 1196 o Change 'open resolver' for 'public resolver' 1198 o Minor editorial changes 1200 o Remove recommendation to run a separate TLS 1.3 service 1202 o Move TLSA to purely a optimization in Section 5.2.1 1204 o Update reference on minimal DoH headers. 1206 o Add reference on user switching provider after service issues in 1207 Section 5.1.4 1209 o Add text in Section 5.1.6 on impact on operators. 1211 o Add text on additional threat to TLS proxy use (Section 5.1.7) 1213 o Add reference in Section 5.3.1 on example policies. 1215 draft-ietf-dprive-bcp-op-01 1217 o Many minor editorial fixes 1219 o Update DoH reference to RFC8484 and add more text on DoH 1221 o Split threat descriptions into ones directly referencing RFC6973 1222 and other DNS Privacy threats 1224 o Improve threat descriptions throughout 1226 o Remove reference to the DNSSEC TLS Chain Extension draft until new 1227 version submitted. 1229 o Clarify use of allowlisting for ECS 1231 o Re-structure the DPPPS, add Result filtering section. 1233 o Remove the direct inclusion of privacy policy comparison, now just 1234 reference dnsprivacy.org and an example of such work. 1236 o Add an appendix briefly discussing DNSSEC 1237 o Update affiliation of 1 author 1239 draft-ietf-dprive-bcp-op-00 1241 o Initial commit of re-named document after adoption to replace 1242 draft-dickinson-dprive-bcp-op-01 1244 12. References 1246 12.1. Normative References 1248 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1249 Requirement Levels", BCP 14, RFC 2119, 1250 DOI 10.17487/RFC2119, March 1997, . 1253 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1254 Rose, "DNS Security Introduction and Requirements", 1255 RFC 4033, DOI 10.17487/RFC4033, March 2005, 1256 . 1258 [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., 1259 Housley, R., and W. Polk, "Internet X.509 Public Key 1260 Infrastructure Certificate and Certificate Revocation List 1261 (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, 1262 . 1264 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 1265 Morris, J., Hansen, M., and R. Smith, "Privacy 1266 Considerations for Internet Protocols", RFC 6973, 1267 DOI 10.17487/RFC6973, July 2013, . 1270 [RFC7457] Sheffer, Y., Holz, R., and P. Saint-Andre, "Summarizing 1271 Known Attacks on Transport Layer Security (TLS) and 1272 Datagram TLS (DTLS)", RFC 7457, DOI 10.17487/RFC7457, 1273 February 2015, . 
1275 [RFC7525] Sheffer, Y., Holz, R., and P. Saint-Andre, 1276 "Recommendations for Secure Use of Transport Layer 1277 Security (TLS) and Datagram Transport Layer Security 1278 (DTLS)", BCP 195, RFC 7525, DOI 10.17487/RFC7525, May 1279 2015, . 1281 [RFC7766] Dickinson, J., Dickinson, S., Bellis, R., Mankin, A., and 1282 D. Wessels, "DNS Transport over TCP - Implementation 1283 Requirements", RFC 7766, DOI 10.17487/RFC7766, March 2016, 1284 . 1286 [RFC7816] Bortzmeyer, S., "DNS Query Name Minimisation to Improve 1287 Privacy", RFC 7816, DOI 10.17487/RFC7816, March 2016, 1288 . 1290 [RFC7828] Wouters, P., Abley, J., Dickinson, S., and R. Bellis, "The 1291 edns-tcp-keepalive EDNS0 Option", RFC 7828, 1292 DOI 10.17487/RFC7828, April 2016, . 1295 [RFC7830] Mayrhofer, A., "The EDNS(0) Padding Option", RFC 7830, 1296 DOI 10.17487/RFC7830, May 2016, . 1299 [RFC7858] Hu, Z., Zhu, L., Heidemann, J., Mankin, A., Wessels, D., 1300 and P. Hoffman, "Specification for DNS over Transport 1301 Layer Security (TLS)", RFC 7858, DOI 10.17487/RFC7858, May 1302 2016, . 1304 [RFC7871] Contavalli, C., van der Gaast, W., Lawrence, D., and W. 1305 Kumari, "Client Subnet in DNS Queries", RFC 7871, 1306 DOI 10.17487/RFC7871, May 2016, . 1309 [RFC8020] Bortzmeyer, S. and S. Huque, "NXDOMAIN: There Really Is 1310 Nothing Underneath", RFC 8020, DOI 10.17487/RFC8020, 1311 November 2016, . 1313 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1314 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1315 May 2017, . 1317 [RFC8198] Fujiwara, K., Kato, A., and W. Kumari, "Aggressive Use of 1318 DNSSEC-Validated Cache", RFC 8198, DOI 10.17487/RFC8198, 1319 July 2017, . 1321 [RFC8310] Dickinson, S., Gillmor, D., and T. Reddy, "Usage Profiles 1322 for DNS over TLS and DNS over DTLS", RFC 8310, 1323 DOI 10.17487/RFC8310, March 2018, . 1326 [RFC8467] Mayrhofer, A., "Padding Policies for Extension Mechanisms 1327 for DNS (EDNS(0))", RFC 8467, DOI 10.17487/RFC8467, 1328 October 2018, . 1330 [RFC8484] Hoffman, P. and P. McManus, "DNS Queries over HTTPS 1331 (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018, 1332 . 1334 [RFC8490] Bellis, R., Cheshire, S., Dickinson, J., Dickinson, S., 1335 Lemon, T., and T. Pusateri, "DNS Stateful Operations", 1336 RFC 8490, DOI 10.17487/RFC8490, March 2019, 1337 . 1339 [RFC8499] Hoffman, P., Sullivan, A., and K. Fujiwara, "DNS 1340 Terminology", BCP 219, RFC 8499, DOI 10.17487/RFC8499, 1341 January 2019, . 1343 [RFC8806] Kumari, W. and P. Hoffman, "Running a Root Server Local to 1344 a Resolver", RFC 8806, DOI 10.17487/RFC8806, June 2020, 1345 . 1347 12.2. Informative References 1349 [Bloom-filter] 1350 van Rijswijk-Deij, R., Rijnders, G., Bomhoff, M., and L. 1351 Allodi, "Privacy-Conscious Threat Intelligence Using 1352 DNSBLOOM", 2019, 1353 . 1355 [Brenker-and-Arnes] 1356 Brekne, T. and A. Arnes, "CIRCUMVENTING IP-ADDRESS 1357 PSEUDONYMIZATION", 2005, . 1360 [Crypto-PAn] 1361 CESNET, "Crypto-PAn", 2015, 1362 . 1365 [DNS-Privacy-not-so-private] 1366 Silby, S., Juarez, M., Vallina-Rodriguez, N., and C. 1367 Troncosol, "DNS Privacy not so private: the traffic 1368 analysis perspective.", 2019, 1369 . 1371 [dnsdist] PowerDNS, "dnsdist Overview", 2019, . 1373 [dnstap] dnstap.info, "DNSTAP", 2019, . 1375 [DoH-resolver-policy] 1376 Mozilla, "Security/DOH-resolver-policy", 2019, 1377 . 1379 [dot-ALPN] 1380 IANA (iana.org), "TLS Application-Layer Protocol 1381 Negotiation (ALPN) Protocol IDs", 2020, 1382 . 
1385 [Geolocation-Impact-Assessement] 1386 Conversion Works, "Anonymize IP Geolocation Accuracy 1387 Impact Assessment", 2017, 1388 . 1391 [haproxy] haproxy.org, "HAPROXY", 2019, . 1393 [Harvan] Harvan, M., "Prefix- and Lexicographical-order-preserving 1394 IP Address Anonymization", 2006, 1395 . 1397 [I-D.bellis-dnsop-xpf] 1398 Bellis, R., Dijk, P., and R. Gacogne, "DNS X-Proxied-For", 1399 draft-bellis-dnsop-xpf-04 (work in progress), March 2018. 1401 [I-D.ietf-dnsop-dns-tcp-requirements] 1402 Kristoff, J. and D. Wessels, "DNS Transport over TCP - 1403 Operational Requirements", draft-ietf-dnsop-dns-tcp- 1404 requirements-06 (work in progress), May 2020. 1406 [I-D.ietf-httpbis-bcp56bis] 1407 Nottingham, M., "Building Protocols with HTTP", draft- 1408 ietf-httpbis-bcp56bis-09 (work in progress), November 1409 2019. 1411 [Internet.nl] 1412 Internet.nl, "Internet.nl Is Your Internet Up To Date?", 1413 2019, . 1415 [IP-Anonymization-in-Analytics] 1416 Google, "IP Anonymization in Analytics", 2019, 1417 . 1420 [ipcipher1] 1421 Hubert, B., "On IP address encryption: security analysis 1422 with respect for privacy", 2017, 1423 . 1426 [ipcipher2] 1427 PowerDNS, "ipcipher", 2017, . 1430 [ipcrypt] veorq, "ipcrypt: IP-format-preserving encryption", 2015, 1431 . 1433 [ipcrypt-analysis] 1434 Aumasson, J., "Analysis of ipcrypt?", 2018, 1435 . 1438 [ISC-Knowledge-database-on-cache-snooping] 1439 ISC Knowledge Database, "DNS Cache snooping - should I be 1440 concerned?", 2018, . 1442 [MAC-address-EDNS] 1443 DNS-OARC mailing list, "Embedding MAC address in DNS 1444 requests for selective filtering IDs", 2016, 1445 . 1448 [nginx] nginx.org, "NGINX", 2019, . 1450 [Passive-Observations-of-a-Large-DNS] 1451 de Vries, W., van Rijswijk-Deij, R., de Boer, P., and A. 1452 Pras, "Passive Observations of a Large DNS Service: 2.5 1453 Years in the Life of Google", 2018, 1454 . 1457 [pcap] tcpdump.org, "PCAP", 2016, . 1459 [Pitfalls-of-DNS-Encryption] 1460 Shulman, H., "Pretty Bad Privacy: Pitfalls of DNS 1461 Encryption", 2014, . 1464 [policy-comparison] 1465 dnsprivacy.org, "Comparison of policy and privacy 1466 statements 2019", 2019, 1467 . 1470 [PowerDNS-dnswasher] 1471 PowerDNS, "dnswasher", 2019, 1472 . 1475 [Ramaswamy-and-Wolf] 1476 Ramaswamy, R. and T. Wolf, "High-Speed Prefix-Preserving 1477 IP Address Anonymization for Passive Measurement Systems", 1478 2007, 1479 . 1481 [RFC4034] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1482 Rose, "Resource Records for the DNS Security Extensions", 1483 RFC 4034, DOI 10.17487/RFC4034, March 2005, 1484 . 1486 [RFC4035] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1487 Rose, "Protocol Modifications for the DNS Security 1488 Extensions", RFC 4035, DOI 10.17487/RFC4035, March 2005, 1489 . 1491 [RFC5077] Salowey, J., Zhou, H., Eronen, P., and H. Tschofenig, 1492 "Transport Layer Security (TLS) Session Resumption without 1493 Server-Side State", RFC 5077, DOI 10.17487/RFC5077, 1494 January 2008, . 1496 [RFC6235] Boschi, E. and B. Trammell, "IP Flow Anonymization 1497 Support", RFC 6235, DOI 10.17487/RFC6235, May 2011, 1498 . 1500 [RFC6265] Barth, A., "HTTP State Management Mechanism", RFC 6265, 1501 DOI 10.17487/RFC6265, April 2011, . 1504 [RFC7626] Bortzmeyer, S., "DNS Privacy Considerations", RFC 7626, 1505 DOI 10.17487/RFC7626, August 2015, . 1508 [RFC7873] Eastlake 3rd, D. and M. Andrews, "Domain Name System (DNS) 1509 Cookies", RFC 7873, DOI 10.17487/RFC7873, May 2016, 1510 . 1512 [RFC8027] Hardaker, W., Gudmundsson, O., and S. 
   [RFC8094]  Reddy, T., Wing, D., and P. Patil, "DNS over Datagram
              Transport Layer Security (DTLS)", RFC 8094,
              DOI 10.17487/RFC8094, February 2017.

   [RFC8404]  Moriarty, K., Ed. and A. Morton, Ed., "Effects of
              Pervasive Encryption on Operators", RFC 8404,
              DOI 10.17487/RFC8404, July 2018.

   [RFC8446]  Rescorla, E., "The Transport Layer Security (TLS)
              Protocol Version 1.3", RFC 8446, DOI 10.17487/RFC8446,
              August 2018.

   [RFC8555]  Barnes, R., Hoffman-Andrews, J., McCarney, D., and
              J. Kasten, "Automatic Certificate Management Environment
              (ACME)", RFC 8555, DOI 10.17487/RFC8555, March 2019.

   [RFC8618]  Dickinson, J., Hague, J., Dickinson, S., Manderson, T.,
              and J. Bond, "Compacted-DNS (C-DNS): A Format for DNS
              Packet Capture", RFC 8618, DOI 10.17487/RFC8618,
              September 2019.

   [SSL-Labs]
              SSL Labs, "SSL Server Test", 2019.

   [stunnel]  ISC Knowledge Database, "DNS-over-TLS", 2018.

   [SURFnet-policy]
              SURFnet, "SURFnet Data Sharing Policy", 2016.

   [TCPdpriv]
              Ipsilon Networks, Inc., "TCPdpriv", 2005.

   [van-Dijkhuizen-et-al.]
              Van Dijkhuizen, N. and J. Van Der Ham, "A Survey of
              Network Traffic Anonymisation Techniques and
              Implementations", 2018.

   [Xu-et-al.]
              Fan, J., Xu, J., Ammar, M., and S. Moon,
              "Prefix-preserving IP address anonymization:
              measurement-based security evaluation and a new
              cryptography-based scheme", 2004.

Appendix A.  Documents

   This section provides an overview of some DNS privacy-related
   documents; however, it is neither an exhaustive list nor a
   definitive statement on the characteristics of each document.

A.1.  Potential increases in DNS privacy

   These documents are limited in scope to communications between stub
   clients and recursive resolvers:

   o  'Specification for DNS over Transport Layer Security (TLS)'
      [RFC7858].

   o  'DNS over Datagram Transport Layer Security (DTLS)' [RFC8094].
      Note that this document has the Category of Experimental.

   o  'DNS Queries over HTTPS (DoH)' [RFC8484].

   o  'Usage Profiles for DNS over TLS and DNS over DTLS' [RFC8310].

   o  'The EDNS(0) Padding Option' [RFC7830] and 'Padding Policies for
      Extension Mechanisms for DNS (EDNS(0))' [RFC8467].

   These documents apply to recursive and authoritative DNS but are
   relevant when considering the operation of a recursive server:

   o  'DNS Query Name Minimisation to Improve Privacy' [RFC7816].

A.2.  Potential decreases in DNS privacy

   These documents relate to functionality that could provide increased
   tracking of user activity as a side effect:

   o  'Client Subnet in DNS Queries' [RFC7871].

   o  'Domain Name System (DNS) Cookies' [RFC7873].

   o  'Transport Layer Security (TLS) Session Resumption without
      Server-Side State' [RFC5077], referred to here as simply TLS
      session resumption.

   o  [RFC8446] Appendix C.4, which describes Client Tracking
      Prevention in TLS 1.3.

   o  'Compacted-DNS (C-DNS): A Format for DNS Packet Capture'
      [RFC8618].

   o  Passive DNS [RFC8499].

   o  Section 8 of [RFC8484], which outlines the privacy considerations
      of DoH.
   Note that (while that document advises exposing the minimal set of
      data needed to achieve the desired feature set) depending on the
      specifics of a DoH implementation there may be increased
      identification and tracking compared to other DNS transports.

A.3.  Related operational documents

   o  'DNS Transport over TCP - Implementation Requirements' [RFC7766].

   o  'Operational requirements for DNS over TCP'
      [I-D.ietf-dnsop-dns-tcp-requirements].

   o  'The edns-tcp-keepalive EDNS0 Option' [RFC7828].

   o  'DNS Stateful Operations' [RFC8490].

Appendix B.  IP address techniques

   The following table presents a high-level comparison of various
   techniques employed or under development in 2019, and classifies
   them according to categorization of technique and other properties.
   Both the specific techniques and the categorisations are described
   in more detail in the following sections.  The list of techniques
   includes the main techniques in current use, but does not claim to
   be comprehensive.

   +---------------------------+----+---+----+---+----+---+---+
   | Categorization/Property   | GA | d | TC | C | TS | i | B |
   +---------------------------+----+---+----+---+----+---+---+
   | Anonymization             | X  | X | X  |   |    |   | X |
   | Pseudonymization          |    |   |    | X | X  | X |   |
   | Format preserving         | X  | X | X  | X | X  | X |   |
   | Prefix preserving         |    |   | X  | X | X  |   |   |
   | Replacement               |    |   | X  |   |    |   |   |
   | Filtering                 | X  |   |    |   |    |   |   |
   | Generalization            |    |   |    |   |    |   | X |
   | Enumeration               |    | X |    |   |    |   |   |
   | Reordering/Shuffling      |    |   | X  |   |    |   |   |
   | Random substitution       |    |   | X  |   |    |   |   |
   | Cryptographic permutation |    |   |    | X | X  | X |   |
   | IPv6 issues               |    |   |    |   | X  |   |   |
   | CPU intensive             |    |   |    | X |    |   |   |
   | Memory intensive          |    |   | X  |   |    |   |   |
   | Security concerns         |    |   |    |   |    | X |   |
   +---------------------------+----+---+----+---+----+---+---+

                Table 1: Classification of techniques

   Legend of techniques: GA = Google Analytics, d = dnswasher,
   TC = TCPdpriv, C = Crypto-PAn, TS = TSA, i = ipcipher,
   B = Bloom filter

   The choice of which method to use for a particular application will
   depend on the requirements of that application and consideration of
   the threat analysis of the particular situation.

   For example, a common goal is that distributed packet captures must
   be in an existing data format such as PCAP [pcap] or C-DNS [RFC8618]
   that can be used as input to existing analysis tools.  In that case,
   use of a format-preserving technique is essential.  This, though, is
   not cost-free: several authors (e.g., [Brenker-and-Arnes]) have
   observed that, as the entropy in an IPv4 address is limited, if an
   attacker can

   o  ensure packets are captured by the target,

   o  send forged traffic with arbitrary source and destination
      addresses to that target, and

   o  obtain a de-identified log of said traffic from that target,

   then any format-preserving pseudonymization is vulnerable to an
   attack along the lines of a cryptographic chosen plaintext attack.
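   As a purely illustrative, non-normative sketch of why this matters,
   the following Python fragment uses an invented keyed-hash mapping to
   stand in for any deterministic, format-preserving pseudonymization.
   An attacker who can inject traffic from chosen addresses and later
   read the de-identified log can build a reverse mapping without ever
   learning the operator's secret:

      # Toy model only: the secret, addresses, and mapping are invented.
      import hashlib
      import ipaddress

      SECRET = b"operator-secret"  # unknown to the attacker

      def pseudonymize(addr: str) -> str:
          """Deterministic, format-preserving IPv4 -> IPv4 mapping."""
          packed = ipaddress.IPv4Address(addr).packed
          digest = hashlib.sha256(SECRET + packed).digest()
          return str(ipaddress.IPv4Address(digest[:4]))

      # The attacker injects traffic from addresses of their choosing...
      probes = ["192.0.2.1", "192.0.2.2", "198.51.100.7"]
      # ...then obtains the de-identified log and inverts those entries.
      reverse_map = {pseudonymize(a): a for a in probes}

      # Any pseudonym matching a probed address is now re-identified.
      print(reverse_map.get(pseudonymize("198.51.100.7")))  # 198.51.100.7

   The same approach scales to sweeping whole prefixes, which is
   practical precisely because the entropy in an IPv4 address is
   limited.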
B.1.  Categorization of techniques

   Data minimization methods may be categorized by the processing used
   and the properties of their outputs.  The following builds on the
   categorization employed in [RFC6235]:

   o  Format-preserving.  Normally when encrypting, the original data
      length and patterns in the data should be hidden from an
      attacker.  Some applications of de-identification, such as
      network capture de-identification, require that the de-identified
      data is of the same form as the original data, to allow the data
      to be parsed in the same way as the original.

   o  Prefix preservation.  Values such as IP addresses and MAC
      addresses contain prefix information that can be valuable in
      analysis, e.g., manufacturer ID in MAC addresses, subnet in IP
      addresses.  Prefix preservation ensures that prefixes are
      de-identified consistently; e.g., if two IP addresses are from
      the same subnet, a prefix-preserving de-identification will
      ensure that their de-identified counterparts will also share a
      subnet.  Prefix preservation may be fixed (i.e., based on a
      user-selected prefix length identified in advance to be
      preserved) or general.

   o  Replacement.  A one-to-one replacement of a field with a new
      value of the same type, for example, using a regular expression.

   o  Filtering.  Removing or replacing data in a field.  Field data
      can be overwritten, often with zeros, either partially
      (truncation or reverse truncation) or completely (black-marker
      anonymization).

   o  Generalization.  Data is replaced by more general data with
      reduced specificity.  One example would be to replace all TCP/UDP
      port numbers with one of two fixed values indicating whether the
      original port was ephemeral (>=1024) or non-ephemeral (<1024).
      Another example, precision degradation, reduces the accuracy of,
      e.g., a numeric value or a timestamp.

   o  Enumeration.  With data from a well-ordered set, replace the
      first data item using a random initial value and then allocate
      ordered values for subsequent data items.  When used with
      timestamp data, this preserves ordering but loses precision and
      distance.

   o  Reordering/shuffling.  Preserving the original data, but
      rearranging its order, often in a random manner.

   o  Random substitution.  As replacement, but using randomly
      generated replacement values.

   o  Cryptographic permutation.  Using a permutation function, such as
      a hash function or cryptographic block cipher, to generate a
      replacement de-identified value.

B.2.  Specific techniques

B.2.1.  Google Analytics non-prefix filtering

   Since May 2010, Google Analytics has provided a facility
   [IP-Anonymization-in-Analytics] that allows website owners to
   request that all their users' IP addresses are anonymized within
   Google Analytics processing.  This very basic anonymization simply
   sets to zero the least significant 8 bits of IPv4 addresses, and the
   least significant 80 bits of IPv6 addresses.  The level of
   anonymization this produces is perhaps questionable.  There are some
   analysis results [Geolocation-Impact-Assessement] which suggest that
   the impact of this on reducing the accuracy of determining the
   user's location from their IP address is less than might be hoped;
   the average discrepancy in identification of the user city for UK
   users is no more than 17%.

   Anonymization: Format-preserving, Filtering (truncation).
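   A minimal, non-normative sketch of this truncation, using Python's
   ipaddress module (the addresses shown are illustrative only):

      import ipaddress

      def truncate(addr: str) -> str:
          """Zero the low 8 bits (IPv4) or low 80 bits (IPv6)."""
          ip = ipaddress.ip_address(addr)
          prefix = 24 if ip.version == 4 else 48
          network = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
          return str(network.network_address)

      print(truncate("192.0.2.130"))                 # -> 192.0.2.0
      print(truncate("2001:db8:1234:5678:9abc::1"))  # -> 2001:db8:1234::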
B.2.2.  dnswasher

   Since 2006, PowerDNS have included a de-identification tool,
   dnswasher [PowerDNS-dnswasher], with their PowerDNS product.  This
   is a PCAP filter that performs a one-to-one mapping of end-user IP
   addresses to anonymized addresses.  A table of user IP addresses and
   their de-identified counterparts is kept; the first IPv4 user
   address is translated to 0.0.0.1, the second to 0.0.0.2, and so on.
   The de-identified address therefore depends on the order in which
   addresses arrive in the input, and, when running over a large amount
   of data, the address translation tables can grow to a significant
   size.

   Anonymization: Format-preserving, Enumeration.

B.2.3.  Prefix-preserving map

   Used in [TCPdpriv], this algorithm stores a set of original and
   anonymised IP address pairs.  When a new IP address arrives, it is
   compared with previous addresses to determine the longest prefix
   match.  The new address is anonymized by using the same prefix, with
   the remainder of the address anonymized with a random value.  The
   use of a random value means that TCPdpriv is not deterministic;
   different anonymized values will be generated on each run.  The need
   to store previous addresses means that TCPdpriv has significant and
   unbounded memory requirements, and, because anonymized addresses
   must be allocated sequentially, it cannot be used in parallel
   processing.

   Anonymization: Format-preserving, prefix preservation (general).

B.2.4.  Cryptographic Prefix-Preserving Pseudonymization

   Cryptographic prefix-preserving pseudonymization was originally
   proposed as an improvement to the prefix-preserving map implemented
   in TCPdpriv, described in [Xu-et-al.] and implemented in the
   [Crypto-PAn] tool.  Crypto-PAn is now frequently used as an acronym
   for the algorithm.  Initially it was described for IPv4 addresses
   only; extension for IPv6 addresses was proposed in [Harvan].  This
   uses a cryptographic algorithm rather than a random value, and thus
   the mapping is determined uniquely by the encryption key and is
   deterministic.  It requires a separate AES encryption for each
   output bit, so it has a non-trivial calculation overhead.  This can
   be mitigated to some extent (for IPv4, at least) by pre-calculating
   results for some number of prefix bits.

   Pseudonymization: Format-preserving, prefix preservation (general).
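   As a non-normative illustration of the prefix-preserving
   construction described in [Xu-et-al.], the following Python sketch
   substitutes an HMAC-based pseudorandom function for the per-bit AES
   encryption used by Crypto-PAn; the key and addresses are invented:

      import hashlib
      import hmac
      import ipaddress

      KEY = b"illustrative-key"  # a real deployment manages this carefully

      def _prf_bit(prefix_bits: str) -> int:
          """One pseudorandom bit derived from the already-seen prefix."""
          mac = hmac.new(KEY, prefix_bits.encode(), hashlib.sha256)
          return mac.digest()[0] >> 7

      def pseudonymize_v4(addr: str) -> str:
          bits = format(int(ipaddress.IPv4Address(addr)), "032b")
          out = "".join(
              str(int(b) ^ _prf_bit(bits[:i]))  # bit i depends on bits 0..i-1
              for i, b in enumerate(bits)
          )
          return str(ipaddress.IPv4Address(int(out, 2)))

      # Addresses sharing a /24 keep a common pseudonymized /24 prefix.
      print(pseudonymize_v4("192.0.2.1"))
      print(pseudonymize_v4("192.0.2.200"))

   Because each output bit depends only on the corresponding input bit
   and the preceding input bits, two addresses that share a prefix of
   any length also share a pseudonymized prefix of the same length.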
B.2.5.  Top-hash Subtree-replicated Anonymization

   Proposed in [Ramaswamy-and-Wolf], Top-hash Subtree-replicated
   Anonymization (TSA) originated in response to the requirement for
   faster processing than Crypto-PAn.  It uses hashing for the most
   significant byte of an IPv4 address, and a pre-calculated binary
   tree structure for the remainder of the address.  To save memory
   space, replication is used within the tree structure, reducing the
   size of the pre-calculated structures to a few megabytes for IPv4
   addresses.  Address pseudonymization is done via hash and table
   lookup, and so requires minimal computation.  However, due to the
   much increased address space for IPv6, TSA is not memory efficient
   for IPv6.

   Pseudonymization: Format-preserving, prefix preservation (general).

B.2.6.  ipcipher

   A recently released proposal from PowerDNS, ipcipher [ipcipher1]
   [ipcipher2], is a simple pseudonymization technique for IPv4 and
   IPv6 addresses.  IPv6 addresses are encrypted directly with AES-128
   using a key (which may be derived from a passphrase).  IPv4
   addresses are similarly encrypted, but using a recently proposed
   encryption [ipcrypt] suitable for 32-bit block lengths.  However,
   the author of ipcrypt has since indicated [ipcrypt-analysis] that it
   has low security, and further analysis has revealed it is vulnerable
   to attack.

   Pseudonymization: Format-preserving, cryptographic permutation.

B.2.7.  Bloom filters

   van Rijswijk-Deij et al. have recently described work using Bloom
   filters [Bloom-filter] to categorize query traffic and record the
   traffic as the state of multiple filters.  The goal of this work is
   to allow operators to identify so-called Indicators of Compromise
   (IOCs) originating from specific subnets without storing information
   about, or being able to monitor, the DNS queries of an individual
   user.  By using a Bloom filter, it is possible to determine with a
   high probability if, for example, a particular query was made, but
   the set of queries made cannot be recovered from the filter.
   Similarly, by mixing queries from a sufficient number of users in a
   single filter, it becomes practically impossible to determine if a
   particular user performed a particular query.  Large numbers of
   queries can be tracked in a memory-efficient way.  As only the
   filter state is stored, this approach cannot be used to regenerate
   traffic, and so cannot be used with tools that process live traffic.

   Anonymization: Generalization.
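   As a non-normative illustration of the general idea (not of the
   DNSBLOOM design itself), the following Python sketch records
   (subnet, query name) observations in a small Bloom filter and later
   tests an Indicator of Compromise against it; the names, subnet, and
   parameters are invented:

      import hashlib

      class BloomFilter:
          """Minimal Bloom filter: k hashed positions in m bits."""

          def __init__(self, m: int = 2 ** 20, k: int = 4):
              self.m, self.k, self.bits = m, k, bytearray(m // 8)

          def _positions(self, item: bytes):
              for i in range(self.k):
                  h = hashlib.sha256(bytes([i]) + item).digest()
                  yield int.from_bytes(h[:8], "big") % self.m

          def add(self, item: bytes):
              for p in self._positions(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, item: bytes) -> bool:
              return all(self.bits[p // 8] & (1 << (p % 8))
                         for p in self._positions(item))

      # Record that some client in a subnet queried a name, without
      # storing the query stream itself.
      bf = BloomFilter()
      bf.add(b"192.0.2.0/24|bad.example.com")

      # Later: test an IOC against the filter.
      print(b"192.0.2.0/24|bad.example.com" in bf)   # True (no false negatives)
      print(b"192.0.2.0/24|good.example.com" in bf)  # False with high probability

   The filter supports probabilistic membership tests but cannot be
   enumerated, which is what prevents reconstruction of the original
   query stream.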
Appendix C.  Current policy and privacy statements

   A tabular comparison of policy and privacy statements from various
   DNS Privacy service operators, based loosely on the proposed RPS
   structure, can be found at [policy-comparison].  The analysis is
   based on the data available in December 2019.

   We note that existing policies vary widely in style, content, and
   detail, and it is not uncommon for the full text for a given
   operator to amount to more than 10 pages of moderately sized A4
   text.  It is a non-trivial task today for a user to extract a
   meaningful overview of the different services on offer.

   It is also noted that Mozilla has published a DoH resolver policy
   [DoH-resolver-policy], which describes the minimum set of policy
   requirements that a party must satisfy to be considered as a
   potential partner for Mozilla's Trusted Recursive Resolver (TRR)
   program.

Appendix D.  Example RPS

   The following example RPS is very loosely based on some elements of
   published privacy statements for some public resolvers, with
   additional fields populated to illustrate what the full contents of
   an RPS might look like.  This should not be interpreted as

   o  having been reviewed or approved by any operator in any way

   o  having any legal standing or validity at all

   o  being complete or exhaustive

   This is a purely hypothetical example of an RPS to outline example
   contents - in this case, for a public resolver operator providing a
   basic DNS Privacy service via one IP address and one DoH URI, with
   security-based filtering.  It does aim to meet minimal compliance as
   specified in Section 5.

D.1.  Policy

   1.  Treatment of IP addresses.  Many nations classify IP addresses
       as personal data, and we take a conservative approach in
       treating IP addresses as personal data in all jurisdictions in
       which our systems reside.

   2.  Data collection and sharing.

       1.  IP addresses.  In our normal course of data management, no
           IP address information or other personal data is logged to
           disk or transmitted out of the location in which the query
           was received.  We may aggregate certain counters to larger
           network block levels for statistical collection purposes,
           but those counters do not maintain specific IP address data,
           nor is the format or model of data stored capable of being
           reverse-engineered to ascertain what specific IP addresses
           made what queries.

       2.  Data collected in logs.  We do keep some generalized
           location information (at the city/metropolitan area level)
           so that we can conduct debugging and analyze abuse
           phenomena.  We also use the collected information for the
           creation and sharing of telemetry (timestamp, geolocation,
           number of hits, first seen, last seen) for contributors and
           for the public publishing of general statistics of system
           use (protections, threat types, counts, etc.).  When you use
           our DNS Services, here is the full list of items that are
           included in our logs:

           +  Request domain name, e.g., example.net

           +  Record type of requested domain, e.g., A, AAAA, NS, MX,
              TXT, etc.

           +  Transport protocol on which the request arrived, i.e.,
              UDP, TCP, DoT, or DoH

           +  Origin IP general geolocation information: i.e., geocode,
              region ID, city ID, and metro code

           +  IP protocol version - IPv4 or IPv6

           +  Response code sent, e.g., SUCCESS, SERVFAIL, NXDOMAIN,
              etc.

           +  Absolute arrival time, with millisecond precision

           +  Name of the specific instance that processed this request

           +  IP address of the specific instance to which this request
              was addressed (no relation to the requestor's IP address)

           We may keep the following data as summary information,
           including all the above EXCEPT for data about the DNS record
           requested:

           +  Currently advertised BGP-summarized IP prefix/netmask of
              apparent client origin

           +  Autonomous system number (BGP ASN) of apparent client
              origin

           All the above data may be kept in full or partial form in
           permanent archives.

       3.  Sharing of data.  Except as described in this document, we
           do not intentionally share, sell, or rent individual
           personal information associated with the requestor (i.e.,
           source IP address or any other information that can
           positively identify the client using our infrastructure)
           with anyone without your consent.  We generate and share
           high-level anonymized aggregate statistics, including threat
           metrics on threat type, geolocation, and, if available,
           sector, as well as other vertical metrics including
           performance metrics on our DNS Services (i.e., number of
           threats blocked, infrastructure uptime), when available,
           with our threat intelligence (TI) partners, academic
           researchers, or the public.  Our DNS Services share
           anonymized data on specific domains queried (records such as
           domain, timestamp, geolocation, number of hits, first seen,
           last seen) with our threat intelligence partners.  Our DNS
           Services also build, store, and may share certain DNS data
           streams which store high-level information about domains
           resolved, query types, result codes, and timestamps.  These
           streams do not contain IP address information of the
           requestor and cannot be correlated to an IP address or other
           personal data.
We do not and never will 1983 share any of its data with marketers, nor will it use this 1984 data for demographic analysis. 1986 3. Exceptions. There are exceptions to this storage model: In the 1987 event of actions or observed behaviors which we deem malicious or 1988 anomalous, we may utilize more detailed logging to collect more 1989 specific IP address data in the process of normal network defence 1990 and mitigation. This collection and transmission off-site will 1991 be limited to IP addresses that we determine are involved in the 1992 event. 1994 4. Associated entities. Details of our Threat Intelligence partners 1995 can be found at our website page (insert link). 1997 5. Correlation of Data. We do not correlate or combine information 1998 from our logs with any personal information that you have 1999 provided us for other services, or with your specific IP address. 2001 6. Result filtering. 2003 1. Filtering. We utilise cyber threat intelligence about 2004 malicious domains from a variety of public and private 2005 sources and blocks access to those malicious domains when 2006 your system attempts to contact them. An NXDOMAIN is 2007 returned for blocked sites. 2009 1. Censorship. We will not provide a censoring component 2010 and will limit our actions solely to the blocking of 2011 malicious domains around phishing, malware, and exploit 2012 kit domains. 2014 2. Accidental blocking. We implement allowlisting 2015 algorithms to make sure legitimate domains are not 2016 blocked by accident. However, in the rare case of 2017 blocking a legitimate domain, we work with the users to 2018 quickly allowlist that domain. Please use our support 2019 form (insert link) if you believe we are blocking a 2020 domain in error. 2022 D.2. Practice 2024 1. Deviations from Policy. None in place since (insert date). 2026 2. Client facing capabilities. 2028 1. We offer UDP and TCP DNS on port 53 on (insert IP address) 2030 2. We offer DNS over TLS as specified in RFC7858 on (insert IP 2031 address). It is available on port 853 and port 443. We also 2032 implement RFC7766. 2034 1. The DoT authentication domain name used is (insert domain 2035 name). 2037 2. We do not publish SPKI pin sets. 2039 3. We offer DNS over HTTPS as specified in RFC8484 on (insert 2040 URI template). 2042 4. Both services offer TLS 1.2 and TLS 1.3. 2044 5. Both services pad DNS responses according to RFC8467. 2046 6. Both services provide DNSSEC validation. 2048 3. Upstream capabilities. 2050 1. Our servers implement QNAME minimization. 2052 2. Our servers do not send ECS upstream. 2054 4. Support. Support information for this service is available at 2055 (insert link). 2057 5. Data Processing. We operate as the legal entity (insert entity) 2058 registered in (insert country); as such we operate under (insert 2059 country/region) law. Our separate statement regarding the 2060 specifics of our data processing policy, practice, and agreements 2061 can be found here (insert link). 2063 Authors' Addresses 2065 Sara Dickinson 2066 Sinodun IT 2067 Magdalen Centre 2068 Oxford Science Park 2069 Oxford OX4 4GA 2070 United Kingdom 2072 Email: sara@sinodun.com 2074 Benno J. Overeinder 2075 NLnet Labs 2076 Science Park 400 2077 Amsterdam 1098 XH 2078 The Netherlands 2080 Email: benno@nlnetLabs.nl 2082 Roland M. van Rijswijk-Deij 2083 NLnet Labs 2084 Science Park 400 2085 Amsterdam 1098 XH 2086 The Netherlands 2088 Email: roland@nlnetLabs.nl 2090 Allison Mankin 2091 Salesforce 2093 Email: allison.mankin@gmail.com