2 dprive S. Dickinson 3 Internet-Draft Sinodun IT 4 Intended status: Best Current Practice B. Overeinder 5 Expires: December 20, 2020 R. van Rijswijk-Deij 6 NLnet Labs 7 A. Mankin 8 Salesforce 9 June 18, 2020 11 Recommendations for DNS Privacy Service Operators 12 draft-ietf-dprive-bcp-op-10 14 Abstract 16 This document presents operational, policy, and security 17 considerations for DNS recursive resolver operators who choose to 18 offer DNS Privacy services. With these recommendations, the operator 19 can make deliberate decisions regarding which services to provide, 20 and how the decisions and alternatives impact the privacy of users.
22 This document also presents a non-normative framework to assist 23 writers of a DNS Recursive Operator Privacy Statement (analogous to 24 DNS Security Extensions (DNSSEC) Policies and DNSSEC Practice 25 Statements described in RFC6841). 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at http://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on December 20, 2020. 44 Copyright Notice 46 Copyright (c) 2020 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 Table of Contents 61 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 62 2. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 63 3. Privacy-related documents . . . . . . . . . . . . . . . . . . 5 64 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 65 5. Recommendations for DNS privacy services . . 
. . . . . . . . 6 66 5.1. On the wire between client and server . . . . . . . . . . 7 67 5.1.1. Transport recommendations . . . . . . . . . . . . . . 7 68 5.1.2. Authentication of DNS privacy services . . . . . . . 8 69 5.1.3. Protocol recommendations . . . . . . . . . . . . . . 9 70 5.1.4. DNSSEC . . . . . . . . . . . . . . . . . . . . . . . 11 71 5.1.5. Availability . . . . . . . . . . . . . . . . . . . . 12 72 5.1.6. Service options . . . . . . . . . . . . . . . . . . . 12 73 5.1.7. Impact of Encryption on Monitoring by DNS Privacy 74 Service Operators . . . . . . . . . . . . . . . . . . 12 75 5.1.8. Limitations of fronting a DNS privacy service with a 76 pure TLS proxy . . . . . . . . . . . . . . . . . . . 13 77 5.2. Data at rest on the server . . . . . . . . . . . . . . . 14 78 5.2.1. Data handling . . . . . . . . . . . . . . . . . . . . 14 79 5.2.2. Data minimization of network traffic . . . . . . . . 15 80 5.2.3. IP address pseudonymization and anonymization methods 16 81 5.2.4. Pseudonymization, anonymization, or discarding of 82 other correlation data . . . . . . . . . . . . . . . 16 83 5.2.5. Cache snooping . . . . . . . . . . . . . . . . . . . 17 84 5.3. Data sent onwards from the server . . . . . . . . . . . . 17 85 5.3.1. Protocol recommendations . . . . . . . . . . . . . . 17 86 5.3.2. Client query obfuscation . . . . . . . . . . . . . . 18 87 5.3.3. Data sharing . . . . . . . . . . . . . . . . . . . . 19 88 6. DNS Recursive Operator Privacy (DROP) statement . . . . . . . 19 89 6.1. Outline of a DROP statement . . . . . . . . . . . . . . . 20 90 6.1.1. Policy . . . . . . . . . . . . . . . . . . . . . . . 20 91 6.1.2. Practice . . . . . . . . . . . . . . . . . . . . . . 21 92 6.2. Enforcement/accountability . . . . . . . . . . . . . . . 22 93 7. IANA considerations . . . . . . . . . . . . . . . . . . . . . 23 94 8. Security considerations . . . . . . . . . . . . . . . . . . . 23 95 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 
23 96 10. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 23 97 11. Changelog . . . . . . . . . . . . . . . . . . . . . . . . . . 24 98 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 26 99 12.1. Normative References . . . . . . . . . . . . . . . . . . 26 100 12.2. Informative References . . . . . . . . . . . . . . . . . 28 101 Appendix A. Documents . . . . . . . . . . . . . . . . . . . . . 33 102 A.1. Potential increases in DNS privacy . . . . . . . . . . . 33 103 A.2. Potential decreases in DNS privacy . . . . . . . . . . . 34 104 A.3. Related operational documents . . . . . . . . . . . . . . 34 105 Appendix B. IP address techniques . . . . . . . . . . . . . . . 35 106 B.1. Categorization of techniques . . . . . . . . . . . . . . 36 107 B.2. Specific techniques . . . . . . . . . . . . . . . . . . . 37 108 B.2.1. Google Analytics non-prefix filtering . . . . . . . . 37 109 B.2.2. dnswasher . . . . . . . . . . . . . . . . . . . . . . 37 110 B.2.3. Prefix-preserving map . . . . . . . . . . . . . . . . 37 111 B.2.4. Cryptographic Prefix-Preserving Pseudonymization . . 38 112 B.2.5. Top-hash Subtree-replicated Anonymization . . . . . . 38 113 B.2.6. ipcipher . . . . . . . . . . . . . . . . . . . . . . 38 114 B.2.7. Bloom filters . . . . . . . . . . . . . . . . . . . . 39 115 Appendix C. Current policy and privacy statements . . . . . . . 39 116 Appendix D. Example DROP statement . . . . . . . . . . . . . . . 40 117 D.1. Policy . . . . . . . . . . . . . . . . . . . . . . . . . 40 118 D.2. Practice . . . . . . . . . . . . . . . . . . . . . . . . 43 119 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 44 121 1. Introduction 123 The Domain Name System (DNS) is at the core of the Internet; almost 124 every activity on the Internet starts with a DNS query (and often 125 several). However the DNS was not originally designed with strong 126 security or privacy mechanisms. 
A number of developments have taken 127 place in recent years that aim to increase the privacy of the DNS, 128 and these are now seeing some deployment. This latest 129 evolution of the DNS presents new challenges to operators, and this 130 document attempts to provide an overview of considerations for 131 privacy-focused DNS services. 133 In recent years there has also been an increase in the availability 134 of "public resolvers" [RFC8499], which users may prefer to use instead 135 of the default network resolver, either because they offer a specific 136 feature (e.g., good reachability or encrypted transport) or because 137 the network resolver lacks a specific feature (e.g., strong privacy 138 policy or unfiltered responses). These open resolvers have tended to 139 be at the forefront of adoption of privacy-related enhancements, but 140 it is anticipated that operators of other resolver services will 141 follow. 143 Whilst protocols that encrypt DNS messages on the wire provide 144 protection against certain attacks, the resolver operator still has 145 (in principle) full visibility of the query data and transport 146 identifiers for each user. Therefore, a trust relationship exists. 147 The ability of the operator to provide a transparent, 148 well-documented, and secure privacy service will likely serve as a major 149 differentiating factor for privacy-conscious users if they make an 150 active selection of which resolver to use. 152 It should also be noted that the choice of a user to configure a 153 single resolver (or a fixed set of resolvers) and an encrypted 154 transport to use in all network environments has both advantages and 155 disadvantages. For example, the user has a clear expectation of 156 which resolvers have visibility of their query data. However, this 157 resolver/transport selection may provide an added mechanism to track 158 them as they move across network environments.
Commitments from 159 resolver operators to minimize such tracking as users move between 160 networks are also likely to play a role in user selection of 161 resolvers. 163 More recently, the global legislative landscape with regard to 164 personal data collection, retention, and pseudonymization has seen 165 significant activity. Providing detailed practice advice about these 166 areas to the operator is out of scope, but Section 5.3.3 describes 167 some mitigations of data sharing risk. 169 This document has two main goals: 171 o To provide operational and policy guidance related to DNS over 172 encrypted transports and to outline recommendations for data 173 handling for operators of DNS privacy services. 175 o To introduce the DNS Recursive Operator Privacy (DROP) statement 176 and present a framework to assist writers of a DROP statement. A 177 DROP statement is a document that an operator should publish, which 178 outlines their operational practices and commitments with regard 179 to privacy, thereby providing a means for clients to evaluate the 180 measurable and claimed privacy properties of a given DNS privacy 181 service. The framework identifies a set of elements and specifies 182 an outline order for them. This document does not, however, 183 define a particular Privacy statement, nor does it seek to provide 184 legal advice as to the contents. 186 A desired operational impact is that all operators (both those 187 providing resolvers within networks and those operating large public 188 services) can demonstrate their commitment to user privacy, thereby 189 driving all DNS resolution services to a more equitable footing. 190 Choices for users would (in this ideal world) be driven by other 191 factors, e.g., differing security policies or minor differences in 192 operator policy, rather than gross disparities in privacy concerns. 194 Community insight
about operational practices can 195 change quickly, and experience shows that a Best Current Practice 196 (BCP) document about privacy and security is a point-in-time 197 statement. Readers are advised to seek out any updates that apply to 198 this document. 200 2. Scope 202 "DNS Privacy Considerations" [RFC7626] describes the general privacy 203 issues and threats associated with the use of the DNS by Internet 204 users, and much of the threat analysis here is lifted from that 205 document and from [RFC6973]. However, this document is limited in 206 scope to best practice considerations for the provision of DNS 207 privacy services by servers (recursive resolvers) to clients (stub 208 resolvers or forwarders). Choices that are made exclusively by the 209 end user, or those for operators of authoritative nameservers, are out 210 of scope. 212 This document includes (but is not limited to) considerations in the 213 following areas: 215 1. Data "on the wire" between a client and a server. 217 2. Data "at rest" on a server (e.g., in logs). 219 3. Data "sent onwards" from the server (either on the wire or shared 220 with a third party). 222 Whilst the issues raised here are targeted at those operators who 223 choose to offer a DNS privacy service, considerations for areas 2 and 224 3 could equally apply to operators who only offer DNS over 225 unencrypted transports, but who would otherwise like to align with 226 privacy best practice. 228 3. Privacy-related documents 230 There are various documents that describe protocol changes that have 231 the potential to either increase or decrease the privacy properties 232 of the DNS. Note that this does not imply that some documents are good or 233 bad, better or worse, just that (for example) some features may bring 234 functional benefits at the price of a reduction in privacy, and 235 conversely some features increase privacy with an accompanying 236 increase in complexity.
A selection of the most relevant documents 237 is listed in Appendix A for reference. 239 4. Terminology 241 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 242 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 243 "OPTIONAL" in this document are to be interpreted as described in BCP 244 14 [RFC2119] [RFC8174] when, and only when, they appear in all 245 capitals, as shown here. 247 DNS terminology is as described in [RFC8499] with one modification: 248 we restate the clause in the original definition of Privacy-enabling 249 DNS server in [RFC8310] to include the requirement that a DNS over 250 (D)TLS server should also offer at least one of the credentials 251 described in Section 8 of [RFC8310] and implement the (D)TLS profile 252 described in Section 9 of [RFC8310]. 254 Other Terms: 256 o DROP: DNS Recursive Operator Privacy statement, see Section 6. 258 o DNS privacy service: The service that is offered via a 259 privacy-enabling DNS server and is documented either in an informal 260 statement of policy and practice with regard to users' privacy or a 261 formal DROP statement. 263 5. Recommendations for DNS privacy services 265 In the following sections, we first outline the threats relevant to 266 the specific topic and then discuss the potential actions that can be 267 taken to mitigate them. 269 We describe two classes of threats: 271 o Threats described in [RFC6973] 'Privacy Considerations for 272 Internet Protocols' 274 * Privacy terminology, threats to privacy, and mitigations as 275 described in Sections 3, 5, and 6 of [RFC6973]. 277 o DNS Privacy Threats 279 * These are threats to the users and operators of DNS privacy 280 services that are not directly covered by [RFC6973]. These may 281 be more operational in nature, such as certificate management or 282 service availability issues.
284 We describe three classes of actions that operators of DNS privacy 285 services can take: 287 o Threat mitigation for well-understood and documented privacy 288 threats to the users of the service and, in some cases, to the 289 operators of the service. 291 o Optimization of privacy services from an operational or management 292 perspective. 294 o Additional options that could further enhance the privacy and 295 usability of the service. 297 This document does not specify policy, only best practice. However, 298 for DNS privacy services to be considered compliant with these 299 best-practice guidelines, they SHOULD implement (where appropriate) all: 301 o Threat mitigations to be minimally compliant. 303 o Optimizations to be moderately compliant. 305 o Additional options to be maximally compliant. 307 In other words, these requirements are specified here as all being 308 normative requirements, and are classified only by different levels 309 of compliance in the rest of the document. 311 5.1. On the wire between client and server 313 In this section we consider both data on the wire and the service 314 provided to the client. 316 5.1.1. Transport recommendations 318 [RFC6973] Threats: 320 o Surveillance: 322 * Passive surveillance of traffic on the wire 324 DNS Privacy Threats: 326 o Active injection of spurious data or traffic. 328 Mitigations: 330 A DNS privacy service can mitigate these threats by providing service 331 over one or more of the following transports: 333 o DNS over TLS (DoT) [RFC7858] and [RFC8310]. 335 o DNS over HTTPS (DoH) [RFC8484]. 337 It is noted that a DNS privacy service can also be provided over 338 DNS-over-DTLS [RFC8094]; however, this is an Experimental specification 339 and there are no known implementations at the time of writing. 341 It is also noted that a DNS privacy service might be provided over 342 IPsec, DNSCrypt, or VPNs.
However, use of these transports for DNS 343 is not standardized in DNS-specific RFCs, and any discussion of best 344 practice for providing such a service is out of scope for this 345 document. 347 Whilst encryption of DNS traffic can protect against active injection, 348 this does not diminish the need for DNSSEC, see Section 5.1.4. 350 5.1.2. Authentication of DNS privacy services 352 [RFC6973] Threats: 354 o Surveillance: 356 * Active attacks on client resolver configuration 358 Mitigations: 360 DNS privacy services should ensure clients can authenticate the 361 server. Note that this, in effect, commits the DNS privacy service 362 to a public identity users will trust. 364 When using DoT, clients that select a 'Strict Privacy' usage profile 365 [RFC8310] (to mitigate the threat of an active attack on the client) 366 require the ability to authenticate the DNS server. To enable this, 367 DNS privacy services that offer DNS-over-TLS need to provide 368 credentials in the form of either X.509 certificates [RFC5280] or 369 Subject Public Key Info (SPKI) pin sets [RFC8310]. 371 When offering DoH [RFC8484], HTTPS requires authentication of the 372 server as part of the protocol. 374 Server operators should also follow the best practices with regard to 375 certificate revocation as described in [RFC7525]. 377 5.1.2.1. Certificate management 379 Anecdotal evidence to date highlights the management of certificates 380 as one of the more challenging aspects for operators of traditional 381 DNS resolvers that choose to additionally provide a DNS privacy 382 service, as management of such credentials is new to those DNS 383 operators. 385 It is noted that SPKI pin set management is described in [RFC7858] 386 but that key pinning mechanisms in general have fallen out of favor 387 operationally for various reasons, such as the logistical overhead of 388 rolling keys.
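One part of certificate management, checking for impending expiry, is straightforward to automate. The sketch below is a minimal illustration in Python (the function names and the 5-second timeout are arbitrary choices of this example, not taken from any specification): it connects to a DoT endpoint on TCP/853, validates and retrieves the server certificate, and reports the number of days until that certificate expires.

```python
import socket
import ssl
import time

def days_remaining(not_after, now=None):
    # 'notAfter' strings from getpeercert() look like
    # "Jan  5 09:34:43 2018 GMT"; ssl.cert_time_to_seconds()
    # converts them to seconds since the epoch.
    expiry = ssl.cert_time_to_seconds(not_after)
    if now is None:
        now = time.time()
    return (expiry - now) / 86400.0

def check_dot_certificate(host, port=853):
    # Open a TLS connection to a DoT endpoint (TCP/853, RFC 7858),
    # validating the certificate chain against the default trust
    # store, and report how many days remain before expiry.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_remaining(tls.getpeercert()["notAfter"])
```

A monitoring system could run such a check periodically and alert when the returned value drops below the operator's renewal margin.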
390 DNS Privacy Threats: 392 o Invalid certificates, resulting in an unavailable service, which 393 might force a user to fall back to cleartext. 395 o Mis-identification of a server by a client, e.g., typos in DoH URL 396 templates [RFC8484] or authentication domain names [RFC8310], which 397 accidentally direct clients to attacker-controlled servers. 399 Mitigations: 401 It is recommended that operators: 403 o Follow the guidance in Section 6.5 of [RFC7525] with regard to 404 certificate revocation. 406 o Automate the generation, publication, and renewal of certificates. 407 For example, ACME [RFC8555] provides a mechanism to actively 408 manage certificates through automation and has been implemented by 409 a number of certificate authorities. 411 o Monitor certificates to prevent accidental expiration of 412 certificates. 414 o Choose a short, memorable authentication domain name for the 415 service. 417 5.1.3. Protocol recommendations 419 5.1.3.1. DoT 421 DNS Privacy Threats: 423 o Known attacks on TLS, such as those described in [RFC7457]. 425 o Traffic analysis, for example: [Pitfalls-of-DNS-Encryption]. 427 o Potential for client tracking via transport identifiers. 429 o Blocking of well-known ports (e.g., 853 for DoT). 431 Mitigations: 433 In the case of DoT, TLS profiles from Section 9 of [RFC8310] and the 434 Countermeasures to DNS Traffic Analysis from Section 11.1 of 435 [RFC8310] provide strong mitigations. This includes but is not 436 limited to: 438 o Adhering to [RFC7525]. 440 o Implementing only (D)TLS 1.2 or later as specified in [RFC8310]. 442 o Implementing EDNS(0) Padding [RFC7830] using the guidelines in 443 [RFC8467] or a successor specification. 445 o Servers should not degrade in any way the query service level 446 provided to clients that do not use any form of session resumption 447 mechanism, such as TLS session resumption [RFC5077] with TLS 1.2, 448 Section 2.2 of [RFC8446], or Domain Name System (DNS) Cookies 449 [RFC7873].
451 o A DoT privacy service on both port 853 and 443. If the operator 452 deploys DoH on the same IP address, this requires the use of the 453 'dot' ALPN value [dot-ALPN]. 455 Optimizations: 457 o Concurrent processing of pipelined queries, returning responses as 458 soon as available, potentially out of order as specified in 459 [RFC7766]. This is often called 'OOOR', out-of-order responses 460 (providing processing performance similar to HTTP multiplexing). 462 o Management of TLS connections to optimize performance for clients 463 using [RFC7766] and EDNS(0) Keepalive [RFC7828]. 465 Additional Options: 467 Management of TLS connections to optimize performance for clients 468 using DNS Stateful Operations [RFC8490]. 470 5.1.3.2. DoH 472 DNS Privacy Threats: 474 o Known attacks on TLS, such as those described in [RFC7457]. 476 o Traffic analysis, for example: [DNS-Privacy-not-so-private]. 478 o Potential for client tracking via transport identifiers. 480 Mitigations: 482 o Use of HTTP/2 padding and/or EDNS(0) padding as described in 483 Section 9 of [RFC8484]. 485 o Clients must be able to forgo the use of HTTP Cookies [RFC6265] 486 and still use the service. 488 o Clients should not be required to include any headers beyond the 489 absolute minimum to obtain service from a DoH server. (See 490 Section 6.1 of [I-D.ietf-httpbis-bcp56bis].) 492 5.1.4. DNSSEC 494 DNS Privacy Threats: 496 o Users may be directed to bogus IP addresses which, depending on 497 the application, protocol, and authentication method, might lead 498 users to reveal personal information to attackers. One example is 499 a website that does not use TLS or whose TLS authentication can 500 somehow be subverted. 502 Mitigations: 504 o All DNS privacy services must offer a DNS privacy service that 505 performs Domain Name System Security Extensions (DNSSEC) 506 validation. In addition, they must be able to provide the DNSSEC 507 RRs to the client so that it can perform its own validation.
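The ability of a client to obtain DNSSEC RRs for its own validation hinges on the DO ("DNSSEC OK") bit it sets in the EDNS(0) OPT record of its queries. As a rough wire-format sketch (assuming Python; the query ID 0x1234 and UDP payload size 4096 are arbitrary example values, not recommendations):

```python
import struct

def build_query(qname, qtype=1, dnssec_ok=True):
    # Minimal DNS query in wire format with an EDNS(0) OPT record.
    # Header: ID, flags (RD=1), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0,
    # ARCOUNT=1 (for the OPT pseudo-RR).
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    labels = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    )
    question = labels + b"\x00" + struct.pack("!HH", qtype, 1)  # QTYPE, IN
    # OPT pseudo-RR (RFC 6891): root name, TYPE=41, CLASS carries the
    # UDP payload size, and the TTL field carries extended RCODE (0),
    # version (0), and flags; the DO bit is the top flag bit, 0x8000.
    flags = 0x8000 if dnssec_ok else 0x0000
    opt = b"\x00" + struct.pack("!HHBBHH", 41, 4096, 0, 0, flags, 0)
    return header + question + opt
```

A validating stub would send such a query (e.g., over a DoT connection) and then check the RRSIG records in the response itself, rather than trusting the resolver's AD bit.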
509 The addition of encryption to DNS does not remove the need for DNSSEC 510 [RFC4033]; they are independent and fully compatible protocols, each 511 solving different problems. The use of one does not diminish the 512 need nor the usefulness of the other. 514 While the use of an authenticated and encrypted transport protects 515 origin authentication and data integrity between a client and a DNS 516 privacy service, it provides no proof (for a non-validating client) 517 that the data provided by the DNS privacy service was actually DNSSEC 518 authenticated. As with cleartext DNS, the user is still solely 519 trusting the AD bit (if present) set by the resolver. 521 It should also be noted that the use of an encrypted transport for 522 DNS actually solves many of the practical issues encountered by DNS 523 validating clients, e.g., interference by middleboxes with cleartext 524 DNS payloads is completely avoided. In this sense, a validating 525 client that uses a DNS privacy service which supports DNSSEC has a 526 far simpler task in terms of DNSSEC Roadblock avoidance [RFC8027]. 528 5.1.5. Availability 530 DNS Privacy Threats: 532 o A failed DNS privacy service could force the user to switch 533 providers, fall back to cleartext, or accept no DNS service for the 534 outage. 536 Mitigations: 538 A DNS privacy service should strive to engineer encrypted services to 539 the same availability level as any unencrypted services they provide. 540 Particular care should be taken to protect DNS privacy services 541 against denial-of-service attacks, as experience has shown that 542 unavailability of DNS resolving because of attacks is a significant 543 motivation for users to switch services. See, for example, 544 Section IV-C of [Passive-Observations-of-a-Large-DNS]. 546 Techniques such as those described in Section 10 of [RFC7766] can be 547 of use to operators to defend against such attacks. 549 5.1.6.
Service options 551 DNS Privacy Threats: 553 o Unfairly disadvantaging users of the privacy service with respect 554 to the services available. This could force the user to switch 555 providers, fall back to cleartext, or accept no DNS service for the 556 outage. 558 Mitigations: 560 A DNS privacy service should deliver the same level of service as 561 offered on unencrypted channels in terms of options such as 562 filtering (or lack thereof), DNSSEC validation, etc. 564 5.1.7. Impact of Encryption on Monitoring by DNS Privacy Service 565 Operators 567 DNS Privacy Threats: 569 o Increased use of encryption can impact the ability of DNS privacy 570 service operators to monitor traffic and therefore manage their DNS 571 servers [RFC8404]. 573 Many monitoring solutions for DNS traffic rely on the plaintext 574 nature of this traffic and work by intercepting traffic on the wire, 575 either using a separate view on the connection between clients and 576 the resolver, or as a separate process on the resolver system that 577 inspects network traffic. Such solutions will no longer function 578 when traffic between clients and resolvers is encrypted. Many DNS 579 privacy service operators still need to inspect DNS traffic, 580 e.g., to monitor for network security threats. Operators may 581 therefore need to invest in alternative means of monitoring that 582 rely on either the resolver software directly, or exporting DNS 583 traffic from the resolver using, e.g., [dnstap]. 585 Optimization: 587 When implementing alternative means for traffic monitoring, operators 588 of a DNS privacy service should consider using privacy-conscious 589 means to do so (see Section 5.2 for more details on data 590 handling, and also the discussion of the use of Bloom filters in 591 Appendix B). 593 5.1.8.
Limitations of fronting a DNS privacy service with a pure TLS 594 proxy 596 DNS Privacy Threats: 598 o Limited ability to manage or monitor incoming connections using 599 DNS-specific techniques. 601 o Misconfiguration (e.g., of the target server address in the proxy 602 configuration) could lead to data leakage if the proxy-to-target 603 server path is not encrypted. 605 Optimization: 607 Some operators may choose to implement DoT using a TLS proxy (e.g., 608 [nginx], [haproxy], or [stunnel]) in front of a DNS nameserver 609 because of proven robustness and capacity when handling large numbers 610 of client connections, load-balancing capabilities, and good tooling. 611 Currently, however, because such proxies typically have no specific 612 handling of DNS as a protocol over TLS or DTLS, using them can 613 restrict traffic management at the proxy layer and at the DNS server. 614 For example, all traffic received by a nameserver behind such a proxy 615 will appear to originate from the proxy, and DNS techniques such as 616 ACLs, RRL, or DNS64 will be hard or impossible to implement in the 617 nameserver. 619 Operators may choose to use a DNS-aware proxy such as [dnsdist], which 620 offers custom options (similar to that proposed in 621 [I-D.bellis-dnsop-xpf]) to add source information to packets to 622 address this shortcoming. It should be noted that such options 623 potentially significantly increase the leaked information in the 624 event of a misconfiguration. 626 5.2. Data at rest on the server 628 5.2.1. Data handling 630 [RFC6973] Threats: 632 o Surveillance. 634 o Stored data compromise. 636 o Correlation. 638 o Identification. 640 o Secondary use. 642 o Disclosure. 644 Other Threats: 646 o Contravention of legal requirements not to process user data.
648 Mitigations: 650 The following are recommendations relating to common activities for 651 DNS service operators, and in all cases such activities should be 652 minimized or completely avoided if possible for DNS privacy services. 653 If data is retained, it should be encrypted and either aggregated, 654 pseudonymized, or anonymized whenever possible. In general, the 655 principle of data minimization described in [RFC6973] should be 656 applied. 658 o Transient data (e.g., that is used for real-time monitoring and 659 threat analysis, which might be held only in memory) should be 660 retained for the shortest possible period deemed operationally 661 feasible. 663 o The retention period of DNS traffic logs should be only as long as 664 is required to sustain operation of the service and, to the extent 665 that such exists, meet regulatory requirements. 667 o DNS privacy services should not track users except for the 668 particular purpose of detecting and remedying technically 669 malicious (e.g., DoS) or anomalous use of the service. 671 o Data access should be minimized to only those personnel who 672 require access to perform operational duties. It should also be 673 limited to anonymized or pseudonymized data where operationally 674 feasible, with access to full logs (if any are held) only 675 permitted when necessary. 677 Optimizations: 679 o Consider use of full-disk encryption for logs and data capture 680 storage. 682 5.2.2. Data minimization of network traffic 684 Data minimization refers to collecting, using, disclosing, and 685 storing the minimal data necessary to perform a task, and this can be 686 achieved by removing or obfuscating privacy-sensitive information in 687 network traffic logs. This is typically personal data, or data that 688 can be used to link a record to an individual, but may also include 689 revealing other confidential information, for example on the 690 structure of an internal corporate network.
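As a toy illustration of such minimization (the record field names here are hypothetical, not taken from any standard log format), a per-query log record can be reduced to aggregate-friendly fields, dropping the client address and port and coarsening the timestamp:

```python
def minimize_log_entry(entry):
    # Keep only the fields needed for aggregate query statistics:
    # drop the client IP address and port entirely, and round the
    # timestamp down to the hour so that individual records are
    # harder to link back to an individual.
    return {
        "hour": entry["timestamp"] - (entry["timestamp"] % 3600),
        "qname": entry["qname"],
        "qtype": entry["qtype"],
    }
```

Which fields can safely be kept is a policy decision for the operator; the point of the sketch is that minimization happens at write time, so the sensitive fields never reach long-term storage.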
692 The problem of effectively ensuring that DNS traffic logs contain no 693 or minimal privacy-sensitive information is not one that currently 694 has a generally agreed solution or any standards to inform this 695 discussion. This section simply presents an overview of current 696 techniques, as a reference on the current status of this work. 698 Research into data minimization techniques (and particularly IP 699 address pseudonymization/anonymization) was sparked in the late 700 1990s/early 2000s, partly driven by the desire to share significant 701 corpora of traffic captures for research purposes. Several 702 techniques reflecting different requirements in this area and 703 different performance/resource tradeoffs emerged over the course of 704 the decade. Developments over the last decade have been both a 705 blessing and a curse; the large increase in size between an IPv4 and 706 an IPv6 address, for example, renders some techniques impractical, 707 but also makes available a much larger amount of input entropy, the 708 better to resist brute-force re-identification attacks that have 709 grown in practicality over the period. 711 Techniques employed may be broadly categorized as either 712 anonymization or pseudonymization. The following discussion uses the 713 definitions from [RFC6973] Section 3, with additional observations 714 from [van-Dijkhuizen-et-al.]. 716 o Anonymization. To enable anonymity of an individual, there must 717 exist a set of individuals that appear to have the same 718 attribute(s) as the individual. To the attacker or the observer, 719 these individuals must appear indistinguishable from each other. 721 o Pseudonymization. The true identity is deterministically replaced 722 with an alternate identity (a pseudonym). When the 723 pseudonymization schema is known, the process can be reversed, so 724 the original identity becomes known again.
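To make the pseudonymization definition concrete, the following toy sketch (in Python; it is not one of the reviewed techniques in Appendix B and is deliberately not prefix-preserving) deterministically replaces an IPv4 address with a pseudonym derived from a keyed hash:

```python
import hashlib
import hmac
import ipaddress

def pseudonymize_ipv4(addr, key):
    # Deterministically map an IPv4 address to a pseudonym address:
    # the same (address, key) pair always yields the same output, so
    # records about one client remain linkable to each other without
    # exposing the true address in the logs.
    digest = hmac.new(key, ipaddress.IPv4Address(addr).packed,
                      hashlib.sha256).digest()
    return str(ipaddress.IPv4Address(digest[:4]))
```

Because the mapping is deterministic under a fixed key, and because the keyholder can invert it by enumerating the IPv4 space, this is pseudonymization rather than anonymization in the sense of the definitions above. It also destroys prefix structure (two addresses in the same /24 map to unrelated pseudonyms), which is precisely the property the prefix-preserving schemes surveyed in Appendix B trade off differently.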
726 In practice there is a fine line between the two; for example, how to 727 categorize a deterministic algorithm for data minimization of IP 728 addresses that produces a group of pseudonyms for a single given 729 address. 731 5.2.3. IP address pseudonymization and anonymization methods 733 A major privacy risk in DNS is connecting DNS queries to an 734 individual and the major vector for this in DNS traffic is the client 735 IP address. 737 There is active discussion in the space of effective pseudonymization 738 of IP addresses in DNS traffic logs; however, there seems to be no 739 single solution that is widely recognized as suitable for all or most 740 use cases. There are also as yet no standards for this that are 741 unencumbered by patents. 743 Appendix B provides a more detailed survey of various techniques 744 employed or under development in 2019. 746 5.2.4. Pseudonymization, anonymization, or discarding of other 747 correlation data 749 DNS Privacy Threats: 751 o Fingerprinting of the client OS via various means including: IP 752 TTL/Hoplimit, TCP parameters (e.g., window size, ECN support, 753 SACK), OS-specific DNS query patterns (e.g., for network 754 connectivity, captive portal detection, or OS-specific updates). 756 o Fingerprinting of the client application or TLS library by, e.g., 757 HTTP headers (e.g., User-Agent, Accept, Accept-Encoding), TLS 758 version/cipher suite combinations, or other connection parameters. 760 o Correlation of queries on multiple TCP sessions originating from 761 the same IP address. 763 o Correlation of queries on multiple TLS sessions originating from 764 the same client, including via session resumption mechanisms. 766 o Resolvers _might_ receive client identifiers, e.g., MAC addresses 767 in EDNS(0) options - some Customer-premises equipment (CPE) 768 devices are known to add them [MAC-address-EDNS]. 770 Mitigations: 772 o Data minimization or discarding of such correlation data. 774 5.2.5.
Cache snooping 776 [RFC6973] Threats: 778 o Surveillance: 780 * Profiling of client queries by malicious third parties. 782 Mitigations: 784 o See [ISC-Knowledge-database-on-cache-snooping] for an example 785 discussion on defending against cache snooping. 787 5.3. Data sent onwards from the server 789 In this section we consider both data sent on the wire in upstream 790 queries and data shared with third parties. 792 5.3.1. Protocol recommendations 794 [RFC6973] Threats: 796 o Surveillance: 798 * Transmission of identifying data upstream. 800 Mitigations: 802 As specified in [RFC8310] for DoT, but applicable to any DNS Privacy 803 service, the server should: 805 o Implement QNAME minimization [RFC7816]. 807 o Honor a SOURCE PREFIX-LENGTH set to 0 in a query containing the 808 EDNS(0) Client Subnet (ECS) option ([RFC7871] Section 7.1.2). 810 Optimizations: 812 o As per Section 2 of [RFC7871] the server should either: 814 * not use the ECS option in upstream queries at all, or 816 * offer alternative services, one that sends ECS and one that 817 does not. 819 If operators do offer a service that sends the ECS option upstream 820 they should use the shortest prefix that is operationally feasible 821 and ideally use an allowlist of upstream servers to which ECS is 822 sent, in order to minimize data leakage. Operators should make clear in 823 any policy statement what prefix length they actually send and the 824 specific policy used. 826 Allowlisting has the benefit that the operator not only knows 827 which upstream servers can use ECS but can also 828 decide which upstream servers apply privacy policies that the 829 operator is happy with. However, some operators consider allowlisting 830 to incur significant operational overhead compared to dynamic 831 detection of ECS on authoritative servers.
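For reference, the ECS wire format ([RFC7871] Section 6) is simple enough to sketch. The helper below is illustrative only (the function name and example prefix are assumptions, not taken from any implementation); it shows both the SOURCE PREFIX-LENGTH 0 form that a privacy service must honor and the prefix truncation an operator might apply before sending ECS upstream:

```python
import struct

OPTION_CODE_ECS = 8  # EDNS(0) option code assigned to ECS in RFC 7871

def build_ecs_option(family: int, source_prefix: int, address: bytes) -> bytes:
    """Encode an EDNS(0) Client Subnet option (RFC 7871, Section 6):
    OPTION-CODE, OPTION-LENGTH, FAMILY, SOURCE PREFIX-LENGTH,
    SCOPE PREFIX-LENGTH (0 in queries), then only ceil(prefix/8)
    address octets.  A SOURCE PREFIX-LENGTH of 0 therefore carries
    no address bytes at all."""
    addr_octets = address[: (source_prefix + 7) // 8]
    data = struct.pack("!HBB", family, source_prefix, 0) + addr_octets
    return struct.pack("!HH", OPTION_CODE_ECS, len(data)) + data

# SOURCE PREFIX-LENGTH 0: the form used to signal "do not add my subnet"
opt_out = build_ecs_option(family=1, source_prefix=0, address=b"")

# A /24 truncation: only the first three octets of the IPv4 address leak upstream
truncated = build_ecs_option(family=1, source_prefix=24,
                             address=bytes([198, 51, 100, 7]))
```

The truncation in the second call is one concrete way to realize the "shortest prefix that is operationally feasible" recommendation above.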
833 Additional options: 835 o Aggressive Use of DNSSEC-Validated Cache [RFC8198] and [RFC8020] 836 (NXDOMAIN: There Really Is Nothing Underneath) to reduce the 837 number of queries to authoritative servers, thereby increasing privacy. 839 o Run a copy of the root zone on loopback [RFC7706] to avoid making 840 queries to the root servers that might leak information. 842 5.3.2. Client query obfuscation 844 Additional options: 846 Since queries from recursive resolvers to authoritative servers are 847 performed using cleartext (at the time of writing), resolver services 848 need to consider the extent to which they may be directly leaking 849 information about their client community via these upstream queries 850 and what they can do to mitigate this further. Note that even when 851 all the relevant techniques described above are employed there may 852 still be attacks possible, e.g., [Pitfalls-of-DNS-Encryption]. For 853 example, a resolver with a very small community of users risks 854 exposing data in this way and ought to obfuscate this traffic by 855 mixing it with 'generated' traffic to make client characterization 856 harder. The resolver could also employ aggressive pre-fetch 857 techniques as a further measure to counter traffic analysis. 859 At the time of writing there are no standardized or widely recognized 860 techniques to perform such obfuscation or bulk pre-fetches. 862 Another technique that particularly small operators may consider is 863 forwarding local traffic to a larger resolver (with a privacy policy 864 that aligns with their own practices) over an encrypted protocol so 865 that the upstream queries are obfuscated among those of the large 866 resolver. 868 5.3.3. Data sharing 870 [RFC6973] Threats: 872 o Surveillance. 874 o Stored data compromise. 876 o Correlation. 878 o Identification. 880 o Secondary use. 882 o Disclosure. 884 DNS Privacy Threats: 886 o Contravention of legal requirements not to process user data.
888 Mitigations: 890 Operators should not share identifiable data with third-parties. 892 If operators choose to share identifiable data with third-parties in 893 specific circumstances they should publish the terms under which data 894 is shared. 896 Operators should consider including specific guidelines for the 897 collection of aggregated and/or anonymized data for research 898 purposes, within or outside of their own organization. This can 899 benefit not only the operator (through inclusion in novel research) 900 but also the wider Internet community. See the policy published by 901 SURFnet [SURFnet-policy] on data sharing for research as an example. 903 6. DNS Recursive Operator Privacy (DROP) statement 905 To be compliant with this Best Current Practice document, a DNS 906 Recursive Operator SHOULD publish a DNS Recursive Operator Privacy 907 Statement. Adopting the outline, and including the headings in the 908 order provided, is a benefit to persons comparing multiple operators' 909 DROP statements. 911 Appendix C provides a comparison of some existing policy and privacy 912 statements. 914 6.1. Outline of a DROP statement 916 The contents of Section 6.1.1 and Section 6.1.2 are non-normative, 917 other than the order of the headings. Material under each topic is 918 present to assist the operator developing their own DROP statement 919 and: 921 o Relates _only_ to matters around the technical operation of DNS 922 privacy services, and not to any other matters. 924 o Does not attempt to offer an exhaustive list for the contents of a 925 DROP statement. 927 o Is not intended to form the basis of any legal/compliance 928 documentation. 930 Appendix D provides an example (also non-normative) of a DROP 931 statement for a specific operator scenario. 933 6.1.1. Policy 935 1. Treatment of IP addresses. Make an explicit statement that IP 936 addresses are treated as personal data. 938 2. Data collection and sharing.
Specify clearly what data 939 (including IP addresses) is: 941 * Collected and retained by the operator, and for what period it 942 is retained. 944 * Shared with partners. 946 * Shared, sold, or rented to third-parties. 948 and in each case whether it is aggregated, pseudonymized, or 949 anonymized and the conditions of data transfer. Where possible 950 provide details of the techniques used for the above data 951 minimizations. 953 3. Exceptions. Specify any exceptions to the above, for example, 954 technically malicious or anomalous behavior. 956 4. Associated entities. Declare and explicitly enumerate any 957 partners, third-party affiliations, or sources of funding. 959 5. Correlation. Whether user DNS data is correlated or combined 960 with any other personal information held by the operator. 962 6. Result filtering. This section should explain whether the 963 operator filters, edits or alters in any way the replies that it 964 receives from the authoritative servers for each DNS zone, before 965 forwarding them to the clients. For each category listed below, 966 the operator should also specify how the filtering lists are 967 created and managed, whether it employs any third-party sources 968 for such lists, and which ones. 970 * Specify if any replies are being filtered out or altered for 971 network and computer security reasons (e.g., preventing 972 connections to malware-spreading websites or botnet control 973 servers). 975 * Specify if any replies are being filtered out or altered for 976 mandatory legal reasons, due to applicable legislation or 977 binding orders by courts and other public authorities. 979 * Specify if any replies are being filtered out or altered for 980 voluntary legal reasons, due to an internal policy by the 981 operator aiming at reducing potential legal risks. 983 * Specify if any replies are being filtered out or altered for 984 any other reason, including commercial ones. 986 6.1.2. 
Practice 988 [NOTE FOR RFC EDITOR: Please update this section to use letters for 989 the sub-bullet points instead of numbers. This was not done during 990 review because the markdown tool used to write the document did not 991 support it.] 993 Communicate the current operational practices of the service. 995 1. Deviations. Specify any temporary or permanent deviations from 996 the policy for operational reasons. 998 2. Client facing capabilities. With reference to Section 5, provide 999 specific details of which capabilities are provided on which 1000 client facing addresses and ports: 1002 1. For DoT, specify the authentication domain name to be used 1003 (if any). 1005 2. For DoT, specify the SPKI pin sets to be used (if any) and 1006 policy for rolling keys. 1008 3. Upstream capabilities. With reference to Section 5.3, 1009 provide specific details of which capabilities are provided 1010 upstream for data sent to authoritative servers. 1012 4. Support. Provide contact/support information for the service. 1014 5. Jurisdiction. This section should communicate the applicable 1015 jurisdictions and law enforcement regimes under which the service 1016 is being provided. 1018 1. Specify the operator entity or entities that will control the 1019 data and be responsible for their treatment, and their legal 1020 place of business. 1022 2. Specify, either directly or by pointing to the applicable 1023 privacy policy, the relevant privacy laws that apply to the 1024 treatment of the data, the rights that users enjoy in regard 1025 to their own personal information that is treated by the 1026 service, and how they can contact the operator to exercise 1027 them. 1029 3. Additionally specify the countries in which the servers 1030 handling the DNS requests and the data are located (if the 1031 operator applies a geolocation policy so that requests from 1032 certain countries are only served by certain servers, this 1033 should be specified as well). 1035 4.
Specify whether the operator has any agreement in place with 1036 law enforcement agencies, or other public and private 1037 parties, to give them access to the servers and/or to the 1038 data. 1040 6.2. Enforcement/accountability 1042 Transparency reports may help with building user trust that operators 1043 adhere to their policies and practices. 1045 Where possible, independent monitoring or analysis could be performed 1046 of: 1048 o ECS, QNAME minimization, EDNS(0) padding, etc. 1050 o Filtering. 1052 o Uptime. 1054 This is by analogy with several TLS or website analysis tools that 1055 are currently available, e.g., [SSL-Labs] or [Internet.nl]. 1057 Additionally, operators could choose to engage the services of a third 1058 party auditor to verify their compliance with their published DROP 1059 statement. 1061 7. IANA considerations 1063 None 1065 8. Security considerations 1067 Security considerations for DNS-over-TCP are given in [RFC7766], many 1068 of which are generally applicable to session based DNS. Guidance on 1069 operational requirements for DNS-over-TCP is also available in 1070 [I-D.ietf-dnsop-dns-tcp-requirements]. 1072 9. Acknowledgements 1074 Many thanks to Amelia Andersdotter for a very thorough review of the 1075 first draft of this document and Stephen Farrell for a thorough 1076 review at WGLC and for suggesting the inclusion of an example DROP 1077 statement. Thanks to John Todd for discussions on this topic, and to 1078 Stephane Bortzmeyer, Puneet Sood and Vittorio Bertola for review. 1079 Thanks to Daniel Kahn Gillmor, Barry Green, Paul Hoffman, Dan York, 1080 Jon Reed, Lorenzo Colitti for comments at the mic. Thanks to 1081 Loganaden Velvindron for useful updates to the text. 1083 Sara Dickinson thanks the Open Technology Fund for a grant to support 1084 the work on this document. 1086 10.
Contributors 1088 The below individuals contributed significantly to the document: 1090 John Dickinson 1091 Sinodun Internet Technologies 1092 Magdalen Centre 1093 Oxford Science Park 1094 Oxford OX4 4GA 1095 United Kingdom 1097 Jim Hague 1098 Sinodun Internet Technologies 1099 Magdalen Centre 1100 Oxford Science Park 1101 Oxford OX4 4GA 1102 United Kingdom 1104 11. Changelog 1106 draft-ietf-dprive-bcp-op-10 1108 o Remove direct references to draft-ietf-dprive-rfc7626-bis, instead 1109 have one general reference RFC7626 1111 o Clarify that the DROP statement outline is non-normative and add 1112 some further qualifications about content 1114 o Update wording on data sharing to remove explicit discussion of 1115 consent 1117 o Move table in section 5.2.3 to an appendix 1119 o Move section 6.2 to an appendix 1121 o Corrections to references, typos and editorial updates from 1122 initial IESG comments. 1124 draft-ietf-dprive-bcp-op-09 1126 o Fix references so they match the correct section numbers in draft- 1127 ietf-dprive-rfc7626-bis-05 1129 draft-ietf-dprive-bcp-op-08 1131 o Address IETF Last call comments. 1133 draft-ietf-dprive-bcp-op-07 1135 o Editorial changes following AD review. 1137 o Change all URIs to Informational References. 1139 draft-ietf-dprive-bcp-op-06 1141 o Final minor changes from second WGLC. 
1143 draft-ietf-dprive-bcp-op-05 1145 o Remove some text on consent: 1147 * Paragraph 2 in section 5.3.3 1149 * Item 6 in the DROP Practice statement (and example) 1151 o Remove .onion and TLSA options 1152 o Include ACME as a reference for certificate management 1154 o Update text on session resumption usage 1156 o Update section 5.2.4 on client fingerprinting 1158 draft-ietf-dprive-bcp-op-04 1160 o Change DPPPS to DROP (DNS Recursive Operator Privacy) statement 1162 o Update structure of DROP slightly 1164 o Add example DROP statement 1166 o Add text about restricting access to full logs 1168 o Move table in section 5.2.3 from SVG to inline table 1170 o Fix many editorial and reference nits 1172 draft-ietf-dprive-bcp-op-03 1174 o Add paragraph about operational impact 1176 o Move DNSSEC requirement out of the Appendix into main text as a 1177 privacy threat that should be mitigated 1179 o Add TLS version/Cipher suite as tracking threat 1181 o Add reference to Mozilla TRR policy 1183 o Remove several TODOs and QUESTIONS. 1185 draft-ietf-dprive-bcp-op-02 1187 o Change 'open resolver' to 'public resolver' 1189 o Minor editorial changes 1191 o Remove recommendation to run a separate TLS 1.3 service 1193 o Move TLSA to purely an optimization in Section 5.2.1 1195 o Update reference on minimal DoH headers. 1197 o Add reference on user switching provider after service issues in 1198 Section 5.1.4 1200 o Add text in Section 5.1.6 on impact on operators. 1202 o Add text on additional threat to TLS proxy use (Section 5.1.7) 1204 o Add reference in Section 5.3.1 on example policies. 1206 draft-ietf-dprive-bcp-op-01 1208 o Many minor editorial fixes 1210 o Update DoH reference to RFC8484 and add more text on DoH 1212 o Split threat descriptions into ones directly referencing RFC6973 1213 and other DNS Privacy threats 1215 o Improve threat descriptions throughout 1217 o Remove reference to the DNSSEC TLS Chain Extension draft until new 1218 version submitted.
1220 o Clarify use of allowlisting for ECS 1222 o Re-structure the DPPPS, add Result filtering section. 1224 o Remove the direct inclusion of privacy policy comparison, now just 1225 reference dnsprivacy.org and an example of such work. 1227 o Add an appendix briefly discussing DNSSEC 1229 o Update affiliation of 1 author 1231 draft-ietf-dprive-bcp-op-00 1233 o Initial commit of re-named document after adoption to replace 1234 draft-dickinson-dprive-bcp-op-01 1236 12. References 1238 12.1. Normative References 1240 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1241 Requirement Levels", BCP 14, RFC 2119, 1242 DOI 10.17487/RFC2119, March 1997, . 1245 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1246 Rose, "DNS Security Introduction and Requirements", 1247 RFC 4033, DOI 10.17487/RFC4033, March 2005, 1248 . 1250 [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., 1251 Housley, R., and W. Polk, "Internet X.509 Public Key 1252 Infrastructure Certificate and Certificate Revocation List 1253 (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, 1254 . 1256 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 1257 Morris, J., Hansen, M., and R. Smith, "Privacy 1258 Considerations for Internet Protocols", RFC 6973, 1259 DOI 10.17487/RFC6973, July 2013, . 1262 [RFC7525] Sheffer, Y., Holz, R., and P. Saint-Andre, 1263 "Recommendations for Secure Use of Transport Layer 1264 Security (TLS) and Datagram Transport Layer Security 1265 (DTLS)", BCP 195, RFC 7525, DOI 10.17487/RFC7525, May 1266 2015, . 1268 [RFC7766] Dickinson, J., Dickinson, S., Bellis, R., Mankin, A., and 1269 D. Wessels, "DNS Transport over TCP - Implementation 1270 Requirements", RFC 7766, DOI 10.17487/RFC7766, March 2016, 1271 . 1273 [RFC7816] Bortzmeyer, S., "DNS Query Name Minimisation to Improve 1274 Privacy", RFC 7816, DOI 10.17487/RFC7816, March 2016, 1275 . 1277 [RFC7828] Wouters, P., Abley, J., Dickinson, S., and R. 
Bellis, "The 1278 edns-tcp-keepalive EDNS0 Option", RFC 7828, 1279 DOI 10.17487/RFC7828, April 2016, . 1282 [RFC7830] Mayrhofer, A., "The EDNS(0) Padding Option", RFC 7830, 1283 DOI 10.17487/RFC7830, May 2016, . 1286 [RFC7858] Hu, Z., Zhu, L., Heidemann, J., Mankin, A., Wessels, D., 1287 and P. Hoffman, "Specification for DNS over Transport 1288 Layer Security (TLS)", RFC 7858, DOI 10.17487/RFC7858, May 1289 2016, . 1291 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1292 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1293 May 2017, . 1295 [RFC8310] Dickinson, S., Gillmor, D., and T. Reddy, "Usage Profiles 1296 for DNS over TLS and DNS over DTLS", RFC 8310, 1297 DOI 10.17487/RFC8310, March 2018, . 1300 [RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of 1301 Pervasive Encryption on Operators", RFC 8404, 1302 DOI 10.17487/RFC8404, July 2018, . 1305 [RFC8467] Mayrhofer, A., "Padding Policies for Extension Mechanisms 1306 for DNS (EDNS(0))", RFC 8467, DOI 10.17487/RFC8467, 1307 October 2018, . 1309 [RFC8484] Hoffman, P. and P. McManus, "DNS Queries over HTTPS 1310 (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018, 1311 . 1313 [RFC8499] Hoffman, P., Sullivan, A., and K. Fujiwara, "DNS 1314 Terminology", BCP 219, RFC 8499, DOI 10.17487/RFC8499, 1315 January 2019, . 1317 12.2. Informative References 1319 [Bloom-filter] 1320 van Rijswijk-Deij, R., Rijnders, G., Bomhoff, M., and L. 1321 Allodi, "Privacy-Conscious Threat Intelligence Using 1322 DNSBLOOM", 2019, 1323 . 1325 [Brenker-and-Arnes] 1326 Brekne, T. and A. Arnes, "CIRCUMVENTING IP-ADDRESS 1327 PSEUDONYMIZATION", 2005, . 1330 [Crypto-PAn] 1331 CESNET, "Crypto-PAn", 2015, 1332 . 1335 [DNS-Privacy-not-so-private] 1336 Silby, S., Juarez, M., Vallina-Rodriguez, N., and C. 1337 Troncosol, "DNS Privacy not so private: the traffic 1338 analysis perspective.", 2019, 1339 . 1341 [dnsdist] PowerDNS, "dnsdist Overview", 2019, . 1343 [dnstap] dnstap.info, "DNSTAP", 2019, . 
1345 [DoH-resolver-policy] 1346 Mozilla, "Security/DOH-resolver-policy", 2019, 1347 . 1349 [dot-ALPN] 1350 IANA (iana.org), "TLS Application-Layer Protocol 1351 Negotiation (ALPN) Protocol IDs", 2020, 1352 . 1355 [Geolocation-Impact-Assessement] 1356 Conversion Works, "Anonymize IP Geolocation Accuracy 1357 Impact Assessment", 2017, 1358 . 1361 [haproxy] haproxy.org, "HAPROXY", 2019, . 1363 [Harvan] Harvan, M., "Prefix- and Lexicographical-order-preserving 1364 IP Address Anonymization", 2006, 1365 . 1367 [I-D.bellis-dnsop-xpf] 1368 Bellis, R., Dijk, P., and R. Gacogne, "DNS X-Proxied-For", 1369 draft-bellis-dnsop-xpf-04 (work in progress), March 2018. 1371 [I-D.ietf-dnsop-dns-tcp-requirements] 1372 Kristoff, J. and D. Wessels, "DNS Transport over TCP - 1373 Operational Requirements", draft-ietf-dnsop-dns-tcp- 1374 requirements-06 (work in progress), May 2020. 1376 [I-D.ietf-httpbis-bcp56bis] 1377 Nottingham, M., "Building Protocols with HTTP", draft- 1378 ietf-httpbis-bcp56bis-09 (work in progress), November 1379 2019. 1381 [Internet.nl] 1382 Internet.nl, "Internet.nl Is Your Internet Up To Date?", 1383 2019, . 1385 [IP-Anonymization-in-Analytics] 1386 Google, "IP Anonymization in Analytics", 2019, 1387 . 1390 [ipcipher1] 1391 Hubert, B., "On IP address encryption: security analysis 1392 with respect for privacy", 2017, 1393 . 1396 [ipcipher2] 1397 PowerDNS, "ipcipher", 2017, . 1400 [ipcrypt] veorq, "ipcrypt: IP-format-preserving encryption", 2015, 1401 . 1403 [ipcrypt-analysis] 1404 Aumasson, J., "Analysis of ipcrypt?", 2018, 1405 . 1408 [ISC-Knowledge-database-on-cache-snooping] 1409 ISC Knowledge Database, "DNS Cache snooping - should I be 1410 concerned?", 2018, . 1412 [MAC-address-EDNS] 1413 DNS-OARC mailing list, "Embedding MAC address in DNS 1414 requests for selective filtering IDs", 2016, 1415 . 1418 [nginx] nginx.org, "NGINX", 2019, . 1420 [Passive-Observations-of-a-Large-DNS] 1421 de Vries, W., van Rijswijk-Deij, R., de Boer, P., and A. 
1422 Pras, "Passive Observations of a Large DNS Service: 2.5 1423 Years in the Life of Google", 2018, 1424 . 1427 [pcap] tcpdump.org, "PCAP", 2016, . 1429 [Pitfalls-of-DNS-Encryption] 1430 Shulman, H., "Pretty Bad Privacy: Pitfalls of DNS 1431 Encryption", 2014, . 1434 [policy-comparison] 1435 dnsprivacy.org, "Comparison of policy and privacy 1436 statements 2019", 2019, 1437 . 1440 [PowerDNS-dnswasher] 1441 PowerDNS, "dnswasher", 2019, 1442 . 1445 [Ramaswamy-and-Wolf] 1446 Ramaswamy, R. and T. Wolf, "High-Speed Prefix-Preserving 1447 IP Address Anonymization for Passive Measurement Systems", 1448 2007, 1449 . 1451 [RFC5077] Salowey, J., Zhou, H., Eronen, P., and H. Tschofenig, 1452 "Transport Layer Security (TLS) Session Resumption without 1453 Server-Side State", RFC 5077, DOI 10.17487/RFC5077, 1454 January 2008, . 1456 [RFC6235] Boschi, E. and B. Trammell, "IP Flow Anonymization 1457 Support", RFC 6235, DOI 10.17487/RFC6235, May 2011, 1458 . 1460 [RFC6265] Barth, A., "HTTP State Management Mechanism", RFC 6265, 1461 DOI 10.17487/RFC6265, April 2011, . 1464 [RFC7457] Sheffer, Y., Holz, R., and P. Saint-Andre, "Summarizing 1465 Known Attacks on Transport Layer Security (TLS) and 1466 Datagram TLS (DTLS)", RFC 7457, DOI 10.17487/RFC7457, 1467 February 2015, . 1469 [RFC7626] Bortzmeyer, S., "DNS Privacy Considerations", RFC 7626, 1470 DOI 10.17487/RFC7626, August 2015, . 1473 [RFC7706] Kumari, W. and P. Hoffman, "Decreasing Access Time to Root 1474 Servers by Running One on Loopback", RFC 7706, 1475 DOI 10.17487/RFC7706, November 2015, . 1478 [RFC7871] Contavalli, C., van der Gaast, W., Lawrence, D., and W. 1479 Kumari, "Client Subnet in DNS Queries", RFC 7871, 1480 DOI 10.17487/RFC7871, May 2016, . 1483 [RFC7873] Eastlake 3rd, D. and M. Andrews, "Domain Name System (DNS) 1484 Cookies", RFC 7873, DOI 10.17487/RFC7873, May 2016, 1485 . 1487 [RFC8020] Bortzmeyer, S. and S. 
Huque, "NXDOMAIN: There Really Is 1488 Nothing Underneath", RFC 8020, DOI 10.17487/RFC8020, 1489 November 2016, . 1491 [RFC8027] Hardaker, W., Gudmundsson, O., and S. Krishnaswamy, 1492 "DNSSEC Roadblock Avoidance", BCP 207, RFC 8027, 1493 DOI 10.17487/RFC8027, November 2016, . 1496 [RFC8094] Reddy, T., Wing, D., and P. Patil, "DNS over Datagram 1497 Transport Layer Security (DTLS)", RFC 8094, 1498 DOI 10.17487/RFC8094, February 2017, . 1501 [RFC8198] Fujiwara, K., Kato, A., and W. Kumari, "Aggressive Use of 1502 DNSSEC-Validated Cache", RFC 8198, DOI 10.17487/RFC8198, 1503 July 2017, . 1505 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 1506 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 1507 . 1509 [RFC8490] Bellis, R., Cheshire, S., Dickinson, J., Dickinson, S., 1510 Lemon, T., and T. Pusateri, "DNS Stateful Operations", 1511 RFC 8490, DOI 10.17487/RFC8490, March 2019, 1512 . 1514 [RFC8555] Barnes, R., Hoffman-Andrews, J., McCarney, D., and J. 1515 Kasten, "Automatic Certificate Management Environment 1516 (ACME)", RFC 8555, DOI 10.17487/RFC8555, March 2019, 1517 . 1519 [RFC8618] Dickinson, J., Hague, J., Dickinson, S., Manderson, T., 1520 and J. Bond, "Compacted-DNS (C-DNS): A Format for DNS 1521 Packet Capture", RFC 8618, DOI 10.17487/RFC8618, September 1522 2019, . 1524 [SSL-Labs] 1525 SSL Labs, "SSL Server Test", 2019, 1526 . 1528 [stunnel] ISC Knowledge Database, "DNS-over-TLS", 2018, 1529 . 1531 [SURFnet-policy] 1532 SURFnet, "SURFnet Data Sharing Policy", 2016, 1533 . 1535 [TCPdpriv] 1536 Ipsilon Networks, Inc., "TCPdpriv", 2005, 1537 . 1539 [van-Dijkhuizen-et-al.] 1540 Van Dijkhuizen , N. and J. Van Der Ham, "A Survey of 1541 Network Traffic Anonymisation Techniques and 1542 Implementations", 2018, . 1544 [Xu-et-al.] 1545 Fan, J., Xu, J., Ammar, M., and S. Moon, "Prefix- 1546 preserving IP address anonymization: measurement-based 1547 security evaluation and a new cryptography-based scheme", 1548 2004, . 1551 Appendix A. 
Documents 1553 This section provides an overview of some DNS privacy-related 1554 documents; however, this is neither an exhaustive list nor a 1555 definitive statement on the characteristics of the documents. 1557 A.1. Potential increases in DNS privacy 1559 These documents are limited in scope to communications between stub 1560 clients and recursive resolvers: 1562 o 'Specification for DNS over Transport Layer Security (TLS)' 1563 [RFC7858]. 1565 o 'DNS over Datagram Transport Layer Security (DTLS)' [RFC8094]. 1566 Note that this document has the Category of Experimental. 1568 o 'DNS Queries over HTTPS (DoH)' [RFC8484]. 1570 o 'Usage Profiles for DNS over TLS and DNS over DTLS' [RFC8310]. 1572 o 'The EDNS(0) Padding Option' [RFC7830] and 'Padding Policies for 1573 Extension Mechanisms for DNS (EDNS(0))' [RFC8467]. 1575 These documents apply to recursive and authoritative DNS but are 1576 relevant when considering the operation of a recursive server: 1578 o 'DNS Query Name Minimisation to Improve Privacy' [RFC7816]. 1580 A.2. Potential decreases in DNS privacy 1582 These documents relate to functionality that could provide increased 1583 tracking of user activity as a side effect: 1585 o 'Client Subnet in DNS Queries' [RFC7871]. 1587 o 'Domain Name System (DNS) Cookies' [RFC7873]. 1589 o 'Transport Layer Security (TLS) Session Resumption without Server- 1590 Side State' [RFC5077], referred to here as simply TLS session 1591 resumption. 1593 o [RFC8446] Appendix C.4 describes Client Tracking Prevention in TLS 1594 1.3. 1596 o 'Compacted-DNS (C-DNS): A Format for DNS Packet Capture' [RFC8618]. 1598 o Passive DNS [RFC8499]. 1600 o Section 8 of [RFC8484] outlines the privacy considerations of DoH. 1601 Note that depending on the specifics of a DoH implementation there 1602 may be increased identification and tracking compared to other DNS 1603 transports. 1605 A.3. Related operational documents 1607 o 'DNS Transport over TCP - Implementation Requirements' [RFC7766].
1609 o 'Operational requirements for DNS-over-TCP' 1610 [I-D.ietf-dnsop-dns-tcp-requirements]. 1612 o 'The edns-tcp-keepalive EDNS0 Option' [RFC7828]. 1614 o 'DNS Stateful Operations' [RFC8490]. 1616 Appendix B. IP address techniques 1618 The following table presents a high level comparison of various 1619 techniques employed or under development in 2019, and classifies them 1620 according to categorization of technique and other properties. Both 1621 the specific techniques and the categorisations are described in more 1622 detail in the following sections. The list of techniques includes 1623 the main techniques in current use, but does not claim to be 1624 comprehensive. 1626 +---------------------------+----+---+----+---+----+---+---+ 1627 | Categorization/Property | GA | d | TC | C | TS | i | B | 1628 +---------------------------+----+---+----+---+----+---+---+ 1629 | Anonymization | X | X | X | | | | X | 1630 | Pseudoanonymization | | | | X | X | X | | 1631 | Format preserving | X | X | X | X | X | X | | 1632 | Prefix preserving | | | X | X | X | | | 1633 | Replacement | | | X | | | | | 1634 | Filtering | X | | | | | | | 1635 | Generalization | | | | | | | X | 1636 | Enumeration | | X | | | | | | 1637 | Reordering/Shuffling | | | X | | | | | 1638 | Random substitution | | | X | | | | | 1639 | Cryptographic permutation | | | | X | X | X | | 1640 | IPv6 issues | | | | | X | | | 1641 | CPU intensive | | | | X | | | | 1642 | Memory intensive | | | X | | | | | 1643 | Security concerns | | | | | | X | | 1644 +---------------------------+----+---+----+---+----+---+---+ 1646 Table 1: Classification of techniques 1648 Legend of techniques: GA = Google Analytics, d = dnswasher, TC = 1649 TCPdpriv, C = CryptoPAn, TS = TSA, i = ipcipher, B = Bloom filter 1651 The choice of which method to use for a particular application will 1652 depend on the requirements of that application and consideration of 1653 the threat analysis of the particular situation. 
1655 For example, a common goal is that distributed packet captures must 1656 be in an existing data format such as PCAP [pcap] or C-DNS [RFC8618] 1657 that can be used as input to existing analysis tools. In that case, 1658 use of a format-preserving technique is essential. This, though, is 1659 not cost-free - several authors (e.g., [Brenker-and-Arnes]) have 1660 observed that, as the entropy in an IPv4 address is limited, given a 1661 de-identified log from a target, if an attacker is capable of 1662 ensuring packets are captured by the target and the attacker can send 1663 forged traffic with arbitrary source and destination addresses to 1664 that target, any format-preserving pseudonymization is vulnerable to 1665 an attack along the lines of a cryptographic chosen-plaintext attack. 1667 B.1. Categorization of techniques 1669 Data minimization methods may be categorized by the processing used 1670 and the properties of their outputs. The following builds on the 1671 categorization employed in [RFC6235]: 1673 o Format-preserving. Normally when encrypting, the original data 1674 length and patterns in the data should be hidden from an attacker. 1675 Some applications of de-identification, such as network capture 1676 de-identification, require that the de-identified data is of the 1677 same form as the original data, to allow the data to be parsed in 1678 the same way as the original. 1680 o Prefix preservation. Values such as IP addresses and MAC 1681 addresses contain prefix information that can be valuable in 1682 analysis, e.g., manufacturer ID in MAC addresses, subnet in IP 1683 addresses. Prefix preservation ensures that prefixes are de- 1684 identified consistently; e.g., if two IP addresses are from the 1685 same subnet, a prefix preserving de-identification will ensure 1686 that their de-identified counterparts will also share a subnet. 1687 Prefix preservation may be fixed (i.e.
based on a user-selected 1688 prefix length identified in advance to be preserved) or general. 1690 o Replacement. A one-to-one replacement of a field to a new value 1691 of the same type, for example, using a regular expression. 1693 o Filtering. Removing (and thus truncating) or replacing data in a 1694 field. Field data can be overwritten, often with zeros, either 1695 partially (grey marking) or completely (black marking). 1697 o Generalization. Data is replaced by more general data with 1698 reduced specificity. One example would be to replace all TCP/UDP 1699 port numbers with one of two fixed values indicating whether the 1700 original port was ephemeral (>=1024) or non-ephemeral (<1024). 1701 Another example, precision degradation, reduces the accuracy of, 1702 e.g., a numeric value or a timestamp. 1704 o Enumeration. With data from a well-ordered set, replace the first 1705 data item using a random initial value and then allocate 1706 ordered values for subsequent data items. When used with 1707 timestamp data, this preserves ordering but loses precision and 1708 distance. 1710 o Reordering/shuffling. Preserving the original data, but 1711 rearranging its order, often in a random manner. 1713 o Random substitution. As replacement, but using randomly generated 1714 replacement values. 1716 o Cryptographic permutation. Using a permutation function, such as 1717 a hash function or cryptographic block cipher, to generate a 1718 replacement de-identified value. 1720 B.2. Specific techniques 1722 B.2.1. Google Analytics non-prefix filtering 1724 Since May 2010, Google Analytics has provided a facility 1725 [IP-Anonymization-in-Analytics] that allows website owners to request 1726 that all their users' IP addresses are anonymized within Google 1727 Analytics processing. This very basic anonymization simply sets to 1728 zero the least significant 8 bits of IPv4 addresses, and the least 1729 significant 80 bits of IPv6 addresses.
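This zeroing can be sketched with Python's standard ipaddress module (an illustrative sketch only, not Google's implementation; the /24 and /48 masks correspond to clearing the low 8 and 80 bits respectively):

```python
import ipaddress

def truncate_address(addr: str) -> str:
    """Zero the host portion of an address: the least significant
    8 bits of an IPv4 address or 80 bits of an IPv6 address, i.e.
    apply a /24 or /48 network mask respectively."""
    ip = ipaddress.ip_address(addr)
    masked_len = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{addr}/{masked_len}", strict=False)
    return str(net.network_address)

# truncate_address("192.0.2.133")          -> "192.0.2.0"
# truncate_address("2001:db8:aaaa:bbbb::1") -> "2001:db8:aaaa::"
```

Note that the result is still a syntactically valid address, i.e. the technique is format-preserving as well as filtering.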
The level of anonymization 1730 this produces is perhaps questionable. There are some analysis 1731 results [Geolocation-Impact-Assessement] which suggest that the 1732 impact of this on reducing the accuracy of determining the user's 1733 location from their IP address is less than might be hoped; the 1734 average discrepancy in identification of the user city for UK users 1735 is no more than 17%. 1737 Anonymization: Format-preserving, Filtering (grey marking). 1739 B.2.2. dnswasher 1741 Since 2006, PowerDNS have included a de-identification tool, dnswasher 1742 [PowerDNS-dnswasher], with their PowerDNS product. This is a PCAP 1743 filter that performs a one-to-one mapping of end user IP addresses 1744 with an anonymized address. A table of user IP addresses and their 1745 de-identified counterparts is kept; the first IPv4 user address is 1746 translated to 0.0.0.1, the second to 0.0.0.2, and so on. The de- 1747 identified address therefore depends on the order in which addresses 1748 arrive in the input, and when running over a large amount of data the 1749 address translation tables can grow to a significant size. 1751 Anonymization: Format-preserving, Enumeration. 1753 B.2.3. Prefix-preserving map 1755 Used in [TCPdpriv], this algorithm stores a set of original and 1756 anonymized IP address pairs. When a new IP address arrives, it is 1757 compared with previous addresses to determine the longest prefix 1758 match. The new address is anonymized by using the same prefix, with 1759 the remainder of the address anonymized with a random value. The use 1760 of a random value means that TCPdpriv is not deterministic; different 1761 anonymized values will be generated on each run. The need to store 1762 previous addresses means that TCPdpriv has significant and unbounded 1763 memory requirements, and because of the need to allocate anonymized 1764 addresses sequentially it cannot be used in parallel processing.
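The prefix-preserving behaviour can be illustrated with a toy version of such a map (a sketch of the general approach only, not TCPdpriv's actual algorithm or code): a random value is stored for every bit-prefix seen, so addresses sharing a prefix share an anonymized prefix, the mapping differs on every run, and the stored table grows without bound, mirroring the properties described above.

```python
import random

class PrefixPreservingMap:
    """Toy prefix-preserving IPv4 map (illustrative sketch only).

    Output bit i is input bit i XORed with a random 'flip' bit
    stored for the preceding i-bit prefix, so addresses sharing an
    n-bit prefix map to outputs sharing an n-bit prefix.  As with
    TCPdpriv, the mapping is random (different on each run) and the
    table grows without bound as new prefixes arrive."""

    def __init__(self):
        self.flips = {}  # bit-prefix (str) -> random flip bit

    def anonymize_v4(self, addr: str) -> str:
        n = int.from_bytes(bytes(int(o) for o in addr.split(".")), "big")
        out, prefix = 0, ""
        for i in range(31, -1, -1):
            bit = (n >> i) & 1
            # Allocate a flip bit the first time this prefix is seen.
            flip = self.flips.setdefault(prefix, random.getrandbits(1))
            out = (out << 1) | (bit ^ flip)
            prefix += str(bit)
        return ".".join(str((out >> s) & 0xFF) for s in (24, 16, 8, 0))
```

For example, two addresses from the same /24 always emerge in the same anonymized /24 within one run, while a fresh map instance produces an entirely different mapping.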
1766 Anonymization: Format-preserving, prefix preservation (general). 1768 B.2.4. Cryptographic Prefix-Preserving Pseudonymization 1770 Cryptographic prefix-preserving pseudonymization was originally 1771 proposed as an improvement to the prefix-preserving map implemented 1772 in TCPdpriv, described in [Xu-et-al.] and implemented in the 1773 [Crypto-PAn] tool. Crypto-PAn is now frequently used as an acronym 1774 for the algorithm. Initially it was described for IPv4 addresses 1775 only; extension for IPv6 addresses was proposed in [Harvan]. This 1776 technique uses a cryptographic algorithm rather than a random value, and thus 1777 the pseudonymization is determined uniquely by the encryption key and is 1778 deterministic. It requires a separate AES encryption for each output 1779 bit, so it has a non-trivial calculation overhead. This can be 1780 mitigated to some extent (for IPv4, at least) by pre-calculating 1781 results for some number of prefix bits. 1783 Pseudonymization: Format-preserving, prefix preservation (general). 1785 B.2.5. Top-hash Subtree-replicated Anonymization 1787 Proposed in [Ramaswamy-and-Wolf], Top-hash Subtree-replicated 1788 Anonymization (TSA) originated in response to the requirement for 1789 faster processing than Crypto-PAn. It uses hashing for the most 1790 significant byte of an IPv4 address, and a pre-calculated binary tree 1791 structure for the remainder of the address. To save memory space, 1792 replication is used within the tree structure, reducing the size of 1793 the pre-calculated structures to a few megabytes for IPv4 addresses. 1794 Address pseudonymization is done via hash and table lookup, and so 1795 requires minimal computation. However, due to the much increased 1796 address space for IPv6, TSA is not memory efficient for IPv6. 1798 Pseudonymization: Format-preserving, prefix preservation (general). 1800 B.2.6.
ipcipher 1802 A recently-released proposal from PowerDNS, ipcipher [ipcipher1] 1803 [ipcipher2] is a simple pseudonymization technique for IPv4 and IPv6 1804 addresses. IPv6 addresses are encrypted directly with AES-128 using 1805 a key (which may be derived from a passphrase). IPv4 addresses are 1806 similarly encrypted, but using a recently proposed cipher 1807 [ipcrypt] suitable for 32-bit block lengths. However, the author of 1808 ipcrypt has since indicated [ipcrypt-analysis] that it has low 1809 security, and further analysis has revealed it is vulnerable to 1810 attack. 1812 Pseudonymization: Format-preserving, cryptographic permutation. 1814 B.2.7. Bloom filters 1816 van Rijswijk-Deij et al. have recently described work using Bloom 1817 filters [Bloom-filter] to categorize query traffic and record the 1818 traffic as the state of multiple filters. The goal of this work is 1819 to allow operators to identify so-called Indicators of Compromise 1820 (IOCs) originating from specific subnets without storing information 1821 about, or being able to monitor, the DNS queries of an individual user. 1822 By using a Bloom filter, it is possible to determine with a high 1823 probability if, for example, a particular query was made, but the set 1824 of queries made cannot be recovered from the filter. Similarly, by 1825 mixing queries from a sufficient number of users in a single filter, 1826 it becomes practically impossible to determine if a particular user 1827 performed a particular query. Large numbers of queries can be 1828 tracked in a memory-efficient way. As only filter state is stored, this 1829 approach cannot be used to regenerate traffic, and so cannot be used 1830 with tools that process live traffic. 1832 Anonymization: Generalization. 1834 Appendix C.
Current policy and privacy statements 1836 A tabular comparison of policy and privacy statements from various 1837 DNS Privacy service operators, based loosely on the proposed DROP 1838 structure, can be found at [policy-comparison]. The analysis is based 1839 on the data available in December 2019. 1841 We note that the existing set of policies vary widely in style, 1842 content, and detail, and it is not uncommon for the full text for a 1843 given operator to equate to more than 10 pages of A4 text in a 1844 moderate font size. It is a non-trivial task today for a user to extract a 1845 meaningful overview of the different services on offer. 1847 It is also noted that Mozilla have published a DoH resolver policy 1848 [DoH-resolver-policy], which describes the minimum set of policy 1849 requirements that a party must satisfy to be considered as a 1850 potential partner for Mozilla's Trusted Recursive Resolver (TRR) 1851 program. 1853 Appendix D. Example DROP statement 1855 The following example DROP statement is very loosely based on some 1856 elements of published privacy statements for some public resolvers, 1857 with additional fields populated to illustrate what the full 1858 contents of a DROP statement might look like. This should not be 1859 interpreted as 1861 o having been reviewed or approved by any operator in any way 1863 o having any legal standing or validity at all 1865 o being complete or exhaustive 1867 This is a purely hypothetical example of a DROP statement to outline 1868 example contents - in this case, for a public resolver operator 1869 providing a basic DNS Privacy service via one IP address and one DoH 1870 URI, with security-based filtering. It does aim to meet minimal 1871 compliance as specified in Section 5. 1873 D.1. Policy 1875 1. Treatment of IP addresses.
Many nations classify IP addresses as 1876 personal data, and we take a conservative approach in treating IP 1877 addresses as personal data in all jurisdictions in which our 1878 systems reside. 1880 2. Data collection and sharing. 1882 1. IP addresses. Our normal course of data management does not 1883 have any IP address information or other personal data logged 1884 to disk or transmitted out of the location in which the query 1885 was received. We may aggregate certain counters to larger 1886 network block levels for statistical collection purposes, but 1887 those counters do not maintain specific IP address data, nor 1888 is the format or model of data stored capable of being 1889 reverse-engineered to ascertain what specific IP addresses 1890 made what queries. 1892 2. Data collected in logs. We do keep some generalized location 1893 information (at the city/metropolitan area level) so that we 1894 can conduct debugging and analyze abuse phenomena. We also 1895 use the collected information for the creation and sharing of 1896 telemetry (timestamp, geolocation, number of hits, first 1897 seen, last seen) for contributors, and for public publishing of 1898 general statistics of system use (protections, threat types, 1899 counts, etc.). When you use our DNS Services, here is the 1900 full list of items that are included in our logs: 1902 + Request domain name, e.g., example.net 1904 + Record type of requested domain, e.g., A, AAAA, NS, MX, 1905 TXT, etc. 1907 + Transport protocol on which the request arrived, i.e., UDP, 1908 TCP, DoT, or 1909 DoH 1911 + Origin IP general geolocation information, i.e., geocode, 1912 region ID, city ID, and metro code 1914 + IP protocol version - IPv4 or IPv6 1916 + Response code sent, e.g., SUCCESS, SERVFAIL, NXDOMAIN, 1917 etc.
1919 + Absolute arrival time 1921 + Name of the specific instance that processed this request 1923 + IP address of the specific instance to which this request 1924 was addressed (no relation to the requestor's IP address) 1926 We may keep the following data as summary information, 1927 including all the above EXCEPT for data about the DNS record 1928 requested: 1930 + Currently-advertised BGP-summarized IP prefix/netmask of 1931 apparent client origin 1933 + Autonomous system number (BGP ASN) of apparent client 1934 origin 1936 All the above data may be kept in full or partial form in 1937 permanent archives. 1939 3. Sharing of data. Except as described in this document, we do 1940 not intentionally share, sell, or rent individual personal 1941 information associated with the requestor (i.e., source IP 1942 address or any other information that can positively identify 1943 the client using our infrastructure) with anyone without your 1944 consent. We generate and share high-level anonymized 1945 aggregate statistics, including threat metrics on threat type, 1946 geolocation, and, if available, sector, as well as other 1947 vertical metrics, including performance metrics on our DNS 1948 Services (i.e., number of threats blocked, infrastructure 1949 uptime), when available, with our threat intelligence (TI) 1950 partners, academic researchers, or the public. Our DNS 1951 Services share anonymized data on specific domains queried 1952 (records such as domain, timestamp, geolocation, number of 1953 hits, first seen, last seen) with our threat intelligence 1954 partners. Our DNS Services also build, store, and may 1955 share certain DNS data streams that store high-level 1956 information about domains resolved, query types, result codes, 1957 and timestamps. These streams do not contain IP address 1958 information of the requestor and cannot be correlated to IP 1959 address or other personal data.
We do not, and never will, 1960 share any of this data with marketers, nor will we use this 1961 data for demographic analysis. 1963 3. Exceptions. There are exceptions to this storage model: in the 1964 event of actions or observed behaviors which we deem malicious or 1965 anomalous, we may utilize more detailed logging to collect more 1966 specific IP address data in the process of normal network defence 1967 and mitigation. This collection and transmission off-site will 1968 be limited to IP addresses that we determine are involved in the 1969 event. 1971 4. Associated entities. Details of our Threat Intelligence partners 1972 can be found at our website page (insert link). 1974 5. Correlation of Data. We do not correlate or combine information 1975 from our logs with any personal information that you have 1976 provided us for other services, or with your specific IP address. 1978 6. Result filtering. 1980 1. Filtering. We utilise cyber threat intelligence about 1981 malicious domains from a variety of public and private 1982 sources and block access to those malicious domains when 1983 your system attempts to contact them. An NXDOMAIN is 1984 returned for blocked sites. 1986 1. Censorship. We will not provide a censoring component 1987 and will limit our actions solely to the blocking of 1988 malicious domains around phishing, malware, and exploit 1989 kit domains. 1991 2. Accidental blocking. We implement allowlisting 1992 algorithms to make sure legitimate domains are not 1993 blocked by accident. However, in the rare case of 1994 blocking a legitimate domain, we work with the users to 1995 quickly allowlist that domain. Please use our support 1996 form (insert link) if you believe we are blocking a 1997 domain in error. 1999 D.2. Practice 2001 1. Deviations from Policy. None in place since (insert date). 2003 2. Client-facing capabilities. 2005 1. We offer UDP and TCP DNS on port 53 on (insert IP address) 2007 2.
We offer DNS over TLS as specified in RFC7858 on (insert IP 2008 address). It is available on port 853 and port 443. We also 2009 implement RFC7766. 2011 1. The DoT authentication domain name used is (insert domain 2012 name). 2014 2. We do not publish SPKI pin sets. 2016 3. We offer DNS over HTTPS as specified in RFC8484 on (insert 2017 URI template). Both POST and GET are supported. 2019 4. Both services offer TLS 1.2 and TLS 1.3. 2021 5. Both services pad DNS responses according to RFC8467. 2023 6. Both services provide DNSSEC validation. 2025 3. Upstream capabilities. 2027 1. Our servers implement QNAME minimization. 2029 2. Our servers do not send ECS upstream. 2031 4. Support. Support information for this service is available at 2032 (insert link). 2034 5. Jurisdiction. 2036 1. We operate as the legal entity (insert entity) registered in 2037 (insert country) as (insert company identifier, e.g., Company 2038 Number). Our headquarters are located at (insert address). 2040 2. As such we operate under (insert country) law. For details 2041 of our company privacy policy see (insert link). For 2042 questions on this policy and enforcement contact our Data 2043 Protection Officer on (insert email address). 2045 3. We operate servers in the following countries (insert list). 2047 4. We have no agreements in place with law enforcement agencies 2048 to give them access to the data. Apart from as stated in the 2049 Policy section of this document with regard to cyber threat 2050 intelligence, we have no agreements in place with other 2051 public and private parties dealing with security and 2052 intelligence, to give them access to the servers and/or to 2053 the data. 2055 Authors' Addresses 2057 Sara Dickinson 2058 Sinodun IT 2059 Magdalen Centre 2060 Oxford Science Park 2061 Oxford OX4 4GA 2062 United Kingdom 2064 Email: sara@sinodun.com 2066 Benno J.
Overeinder 2067 NLnet Labs 2068 Science Park 400 2069 Amsterdam 1098 XH 2070 The Netherlands 2072 Email: benno@nlnetLabs.nl 2074 Roland M. van Rijswijk-Deij 2075 NLnet Labs 2076 Science Park 400 2077 Amsterdam 1098 XH 2078 The Netherlands 2080 Email: roland@nlnetLabs.nl 2082 Allison Mankin 2083 Salesforce 2085 Email: allison.mankin@gmail.com