dprive                                                      S. Dickinson
Internet-Draft                                                Sinodun IT
Intended status: Best Current Practice                     B. Overeinder
Expires: July 27, 2020                              R. van Rijswijk-Deij
                                                              NLnet Labs
                                                               A. Mankin
                                                              Salesforce
                                                        January 24, 2020

          Recommendations for DNS Privacy Service Operators
                      draft-ietf-dprive-bcp-op-08

Abstract

This document presents operational, policy, and security considerations for DNS recursive resolver operators who choose to offer DNS Privacy services. With these recommendations, the operator can make deliberate decisions regarding which services to provide, and how the decisions and alternatives impact the privacy of users.

This document also presents a framework to assist writers of a DNS Recursive Operator Privacy Statement (analogous to DNS Security Extensions (DNSSEC) Policies and DNSSEC Practice Statements described in RFC6841).
27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at http://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on July 27, 2020. 44 Copyright Notice 46 Copyright (c) 2020 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 Table of Contents 61 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 62 2. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 63 3. Privacy related documents . . . . . . . . . . . . . . . . . . 5 64 4. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 65 5. Recommendations for DNS privacy services . . . . . . . . . . 6 66 5.1. On the wire between client and server . . . . . . . . . . 7 67 5.1.1. Transport recommendations . . . . . . . . . . . . . . 7 68 5.1.2. Authentication of DNS privacy services . . . . . . . 8 69 5.1.3. Protocol recommendations . . . . . . . . . . . . . . 9 70 5.1.4. DNSSEC . . . . . . . . . . . . . . . . . . . . . . . 11 71 5.1.5. Availability . . . . . . . . . . . . . . . . . . . . 11 72 5.1.6. Service options . . . . . . . . . . . . . . . . . . . 12 73 5.1.7. Impact of Encryption on DNS Monitoring . . . . . . . 12 74 5.1.8. Limitations of fronting a DNS privacy service with a 75 pure TLS proxy . . . . . . . . . . . . . . . . . . . 13 76 5.2. Data at rest on the server . . . . . . . . . . . . . . . 13 77 5.2.1. Data handling . . . . . . . . . . . . . . . . . . . . 13 78 5.2.2. Data minimization of network traffic . . . . . . . . 15 79 5.2.3. IP address pseudonymization and anonymization methods 16 80 5.2.4. Pseudonymization, anonymization, or discarding of 81 other correlation data . . . . . . . . . . . . . . . 17 82 5.2.5. Cache snooping . . . . . . . . . . . . . . . . . . . 17 83 5.3. Data sent onwards from the server . . . . . . . . . . . . 18 84 5.3.1. Protocol recommendations . . . . . . . . . . . . . . 18 85 5.3.2. Client query obfuscation . . . . . . . . . . . . . . 19 86 5.3.3. Data sharing . . . . . . . . . . . . . . . . . . . . 19 87 6. DNS Recursive Operator Privacy (DROP) statement . . . . . . . 20 88 6.1. Recommended contents of a DROP statement . . . . . . . . 20 89 6.1.1. Policy . . . . . . . . . . . . . . . . . . . . . . . 20 90 6.1.2. Practice . . . . . . . . . . . . . . . . . . . . . . 21 91 6.2. Current policy and privacy statements . . . . . . . . . . 22 92 6.3. Enforcement/accountability . . . . . . . . . . . . . . . 
23 93 7. IANA considerations . . . . . . . . . . . . . . . . . . . . . 23 94 8. Security considerations . . . . . . . . . . . . . . . . . . . 23 95 9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 24 96 10. Contributors . . . . . . . . . . . . . . . . . . . . . . . . 24 97 11. Changelog . . . . . . . . . . . . . . . . . . . . . . . . . . 24 98 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 27 99 12.1. Normative References . . . . . . . . . . . . . . . . . . 27 100 12.2. Informative References . . . . . . . . . . . . . . . . . 28 101 Appendix A. Documents . . . . . . . . . . . . . . . . . . . . . 33 102 A.1. Potential increases in DNS privacy . . . . . . . . . . . 33 103 A.2. Potential decreases in DNS privacy . . . . . . . . . . . 34 104 A.3. Related operational documents . . . . . . . . . . . . . . 34 105 Appendix B. IP address techniques . . . . . . . . . . . . . . . 34 106 B.1. Google Analytics non-prefix filtering . . . . . . . . . . 35 107 B.2. dnswasher . . . . . . . . . . . . . . . . . . . . . . . . 36 108 B.3. Prefix-preserving map . . . . . . . . . . . . . . . . . . 36 109 B.4. Cryptographic Prefix-Preserving Pseudonymization . . . . 36 110 B.5. Top-hash Subtree-replicated Anonymization . . . . . . . . 37 111 B.6. ipcipher . . . . . . . . . . . . . . . . . . . . . . . . 37 112 B.7. Bloom filters . . . . . . . . . . . . . . . . . . . . . . 37 113 Appendix C. Example DROP statement . . . . . . . . . . . . . . . 38 114 C.1. Policy . . . . . . . . . . . . . . . . . . . . . . . . . 38 115 C.2. Practice . . . . . . . . . . . . . . . . . . . . . . . . 41 116 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 42 118 1. Introduction 120 The Domain Name System (DNS) is at the core of the Internet; almost 121 every activity on the Internet starts with a DNS query (and often 122 several). However the DNS was not originally designed with strong 123 security or privacy mechanisms. A number of developments have taken 124 place in recent years which aim to increase the privacy of the DNS 125 system and these are now seeing some deployment. This latest 126 evolution of the DNS presents new challenges to operators and this 127 document attempts to provide an overview of considerations for 128 privacy focused DNS services. 130 In recent years there has also been an increase in the availability 131 of "public resolvers" [RFC8499] which users may prefer to use instead 132 of the default network resolver because they offer a specific feature 133 (e.g. good reachability, encrypted transport, strong privacy policy, 134 filtering (or lack of), etc.). These open resolvers have tended to 135 be at the forefront of adoption of privacy related enhancements but 136 it is anticipated that operators of other resolver services will 137 follow. 139 Whilst protocols that encrypt DNS messages on the wire provide 140 protection against certain attacks, the resolver operator still has 141 (in principle) full visibility of the query data and transport 142 identifiers for each user. Therefore, a trust relationship exists. 143 The ability of the operator to provide a transparent, well 144 documented, and secure privacy service will likely serve as a major 145 differentiating factor for privacy conscious users if they make an 146 active selection of which resolver to use. 
It should also be noted that the choice of a user to configure a single resolver (or a fixed set of resolvers) and an encrypted transport to use in all network environments has both advantages and disadvantages. For example, the user has a clear expectation of which resolvers have visibility of their query data; however, this resolver/transport selection may provide an added mechanism to track them as they move across network environments. Commitments from operators to minimize such tracking are also likely to play a role in user selection of resolvers.

More recently, the global legislative landscape with regard to personal data collection, retention, and pseudonymization has seen significant activity. It is as yet untested whether simply using a DNS resolution service constitutes consent from the user for the operator to process their query data. The impact of recent legislative changes on data pertaining to the users of both Internet Service Providers and public DNS resolvers is not fully understood at the time of writing.

This document has two main goals:

o  To provide operational and policy guidance related to DNS over encrypted transports and to outline recommendations for data handling for operators of DNS privacy services.

o  To introduce the DNS Recursive Operator Privacy (DROP) statement and present a framework to assist writers of such statements. A DROP statement is a document that an operator can publish outlining their operational practices and commitments with regard to privacy, thereby providing a means for clients to evaluate the privacy properties of a given DNS privacy service. In particular, the framework identifies the elements that should be considered in formulating a DROP statement. This document does not, however, define a particular privacy statement, nor does it seek to provide legal advice or recommendations as to the contents.

A desired operational impact is that all operators (both those providing resolvers within networks and those operating large public services) can demonstrate their commitment to user privacy, thereby driving all DNS resolution services to a more equitable footing. Choices for users would (in this ideal world) be driven by other factors, e.g. differing security policies or minor differences in operator policy, rather than gross disparities in privacy concerns.

Community insight about operational practices can change quickly, and experience shows that a Best Current Practice (BCP) document about privacy and security is a point-in-time statement. Readers are advised to seek out any errata or updates that apply to this document.

2. Scope

"DNS Privacy Considerations" [I-D.ietf-dprive-rfc7626-bis] describes the general privacy issues and threats associated with the use of the DNS by Internet users, and much of the threat analysis here is lifted from that document and from [RFC6973]. However, this document is limited in scope to best practice considerations for the provision of DNS privacy services by servers (recursive resolvers) to clients (stub resolvers or forwarders). Privacy considerations specifically from the perspective of an end user, or those for operators of authoritative nameservers, are out of scope.
210 This document includes (but is not limited to) considerations in the 211 following areas (taken from [I-D.ietf-dprive-rfc7626-bis]): 213 1. Data "on the wire" between a client and a server. 215 2. Data "at rest" on a server (e.g. in logs). 217 3. Data "sent onwards" from the server (either on the wire or shared 218 with a third party). 220 Whilst the issues raised here are targeted at those operators who 221 choose to offer a DNS privacy service, considerations for areas 2 and 222 3 could equally apply to operators who only offer DNS over 223 unencrypted transports but who would like to align with privacy best 224 practice. 226 3. Privacy related documents 228 There are various documents that describe protocol changes that have 229 the potential to either increase or decrease the privacy of the DNS. 230 Note this does not imply that some documents are good or bad, better 231 or worse, just that (for example) some features may bring functional 232 benefits at the price of a reduction in privacy and conversely some 233 features increase privacy with an accompanying increase in 234 complexity. A selection of the most relevant documents are listed in 235 Appendix A for reference. 237 4. Terminology 239 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 240 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 241 "OPTIONAL" in this document are to be interpreted as described in BCP 242 14 [RFC2119] [RFC8174] when, and only when, they appear in all 243 capitals, as shown here. 245 DNS terminology is as described in [RFC8499] with one modification: 246 we restate the clause in the original definition of Privacy-enabling 247 DNS server in [RFC8310] to include the requirement that a DNS over 248 (D)TLS server should also offer at least one of the credentials 249 described in Section 8 of [RFC8310] and implement the (D)TLS profile 250 described in Section 9 of [RFC8310]. 252 Other Terms: 254 o DROP: DNS Recursive Operator Privacy statement, see Section 6. 256 o DNS privacy service: The service that is offered via a privacy- 257 enabling DNS server and is documented either in an informal 258 statement of policy and practice with regard to users privacy or a 259 formal DROP statement. 261 5. Recommendations for DNS privacy services 263 In the following sections we first outline the threats relevant to 264 the specific topic and then discuss the potential actions that can be 265 taken to mitigate them. 267 We describe two classes of threats: 269 o Threats described in [RFC6973] 'Privacy Considerations for 270 Internet Protocols' 272 * Privacy terminology, threats to privacy, and mitigations as 273 described in Sections 3, 5, and 6 of [RFC6973]. 275 o DNS Privacy Threats 277 * These are threats to the users and operators of DNS privacy 278 services that are not directly covered by [RFC6973]. These may 279 be more operational in nature such as certificate management or 280 service availability issues. 282 We describe three classes of actions that operators of DNS privacy 283 services can take: 285 o Threat mitigation for well understood and documented privacy 286 threats to the users of the service and in some cases to the 287 operators of the service. 289 o Optimization of privacy services from an operational or management 290 perspective. 292 o Additional options that could further enhance the privacy and 293 usability of the service. 
This document does not specify policy, only best practice. However, for DNS privacy services to be considered compliant with these best practice guidelines they SHOULD implement (where appropriate) all of the following:

o  Threat mitigations to be minimally compliant.

o  Optimizations to be moderately compliant.

o  Additional options to be maximally compliant.

5.1. On the wire between client and server

In this section we consider both data on the wire and the service provided to the client.

5.1.1. Transport recommendations

[RFC6973] Threats:

o  Surveillance:

   *  Passive surveillance of traffic on the wire [I-D.ietf-dprive-rfc7626-bis] Section 2.4.2.

DNS Privacy Threats:

o  Active injection of spurious data or traffic.

Mitigations:

A DNS privacy service can mitigate these threats by providing service over one or more of the following transports:

o  DNS-over-TLS [RFC7858] and [RFC8310].

o  DoH [RFC8484].

It is noted that a DNS privacy service can also be provided over DNS-over-DTLS [RFC8094]; however, this is an Experimental specification and there are no known implementations at the time of writing.

It is also noted that a DNS privacy service might be provided over IPsec, DNSCrypt, or VPNs. However, the use of these transports for DNS is not standardized and any discussion of best practice for providing such a service is out of scope for this document.

Whilst encryption of DNS traffic can protect against active injection, this does not diminish the need for DNSSEC; see Section 5.1.4.

5.1.2. Authentication of DNS privacy services

[RFC6973] Threats:

o  Surveillance:

   *  Active attacks that can redirect traffic to rogue servers [I-D.ietf-dprive-rfc7626-bis] Section 2.5.3.

Mitigations:

DNS privacy services should ensure clients can authenticate the server. Note that this, in effect, commits the DNS privacy service to a public identity users will trust.

When using DNS-over-TLS, clients that select a 'Strict Privacy' usage profile [RFC8310] (to mitigate the threat of active attack on the client) require the ability to authenticate the DNS server. To enable this, DNS privacy services that offer DNS-over-TLS should provide credentials in the form of either X.509 certificates [RFC5280] or Subject Public Key Info (SPKI) pin sets [RFC8310].

When offering DoH [RFC8484], HTTPS requires authentication of the server as part of the protocol.

Server operators should also follow the best practices with regard to the Online Certificate Status Protocol (OCSP) [RFC2560] as described in [RFC7525].

5.1.2.1. Certificate management

Anecdotal evidence to date highlights the management of certificates as one of the more challenging aspects for operators of traditional DNS resolvers that choose to additionally provide a DNS privacy service, since management of such credentials is new to those DNS operators.

It is noted that SPKI pin set management is described in [RFC7858] but that key pinning mechanisms in general have fallen out of favor operationally for various reasons, such as the logistical overhead of rolling keys.

DNS Privacy Threats:

o  Invalid certificates, resulting in an unavailable service.

o  Mis-identification of a server by a client, e.g. due to typos in URLs or authentication domain names [RFC8310].
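The first of these threats is commonly addressed by monitoring certificate lifetimes (one of the mitigations recommended below). As a purely illustrative sketch, and not part of the recommendations, the following Python fragment checks how many days remain before the certificate presented on a DoT endpoint expires; it uses only the standard library, and the service name and alerting threshold are hypothetical.

import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 853) -> float:
    """Return the number of days until the certificate presented on
    host:port expires (853 is the default DNS-over-TLS port)."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    not_after = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (not_after - datetime.now(timezone.utc)).total_seconds() / 86400

# Hypothetical service name; alert well before expiry to avoid an outage.
remaining = days_until_expiry("dot.example.net")
if remaining < 14:
    print(f"WARNING: certificate expires in {remaining:.1f} days")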
393 Mitigations: 395 It is recommended that operators: 397 o Follow the guidance in Section 6.5 of [RFC7525] with regards to 398 certificate revocation. 400 o Automate the generation, publication, and renewal of certificates. 401 For example, ACME [RFC8555] provides a mechanism to actively 402 manage certificates through automation and has been implemented by 403 a number of certificate authorities. 405 o Monitor certificates to prevent accidental expiration of 406 certificates. 408 o Choose a short, memorable authentication domain name for the 409 service. 411 5.1.3. Protocol recommendations 413 5.1.3.1. DNS-over-TLS 415 DNS Privacy Threats: 417 o Known attacks on TLS such as those described in [RFC7457]. 419 o Traffic analysis, for example: [Pitfalls-of-DNS-Encryption]. 421 o Potential for client tracking via transport identifiers. 423 o Blocking of well known ports (e.g. 853 for DNS-over-TLS). 425 Mitigations: 427 In the case of DNS-over-TLS, TLS profiles from Section 9 and the 428 Countermeasures to DNS Traffic Analysis from section 11.1 of 430 [RFC8310] provide strong mitigations. This includes but is not 431 limited to: 433 o Adhering to [RFC7525]. 435 o Implementing only (D)TLS 1.2 or later as specified in [RFC8310]. 437 o Implementing EDNS(0) Padding [RFC7830] using the guidelines in 438 [RFC8467] or a successor specification. 440 o Servers should not degrade in any way the query service level 441 provided to clients that do not use any form of session resumption 442 mechanism, such as TLS session resumption [RFC5077] with TLS 1.2, 443 section 2.2 of [RFC8446], or Domain Name System (DNS) Cookies 444 [RFC7873]. 446 o A DNS-over-TLS privacy service on both port 853 and 443. This 447 practice may not be possible if e.g. the operator deploys DoH on 448 the same IP address. 450 Optimizations: 452 o Concurrent processing of pipelined queries, returning responses as 453 soon as available, potentially out of order as specified in 454 [RFC7766]. This is often called 'OOOR' - out-of-order responses 455 (providing processing performance similar to HTTP multiplexing). 457 o Management of TLS connections to optimize performance for clients 458 using either: 460 * [RFC7766] and EDNS(0) Keepalive [RFC7828] and/or 462 * DNS Stateful Operations [RFC8490]. 464 5.1.3.2. DoH 466 DNS Privacy Threats: 468 o Known attacks on TLS such as those described in [RFC7457]. 470 o Traffic analysis, for example: [DNS-Privacy-not-so-private]. 472 o Potential for client tracking via transport identifiers. 474 Mitigations: 476 o Clients must be able to forego the use of HTTP Cookies [RFC6265] 477 and still use the service. 479 o Clients should not be required to include any headers beyond the 480 absolute minimum to obtain service from a DoH server. (See 481 Section 6.1 of [I-D.ietf-httpbis-bcp56bis].) 483 5.1.4. DNSSEC 485 DNS Privacy Threats: 487 o Users may be directed to bogus IP addresses for e.g. websites 488 where they might reveal personal information to attackers. 490 Mitigations: 492 o All DNS privacy services must offer a DNS privacy service that 493 performs Domain Name System Security Extensions (DNSSEC) 494 validation. In addition they must be able to provide the DNSSEC 495 RRs to the client so that it can perform its own validation. 497 The addition of encryption to DNS does not remove the need for DNSSEC 498 [RFC4033] - they are independent and fully compatible protocols, each 499 solving different problems. The use of one does not diminish the 500 need nor the usefulness of the other. 
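As an illustration of how encrypted transport and DNSSEC interact, the sketch below sends a query with the DO bit set over DNS-over-TLS and reports whether DNSSEC records were returned and whether the resolver set the AD bit. This is an example only, not part of the recommendations; it assumes the third-party dnspython library and uses a documentation IP address and a hypothetical authentication domain name.

import dns.flags
import dns.message
import dns.query
import dns.rdatatype

# Hypothetical DNS privacy service offering DNS-over-TLS on port 853.
SERVER_IP = "192.0.2.1"
AUTH_NAME = "dot.example.net"

# want_dnssec=True requests EDNS(0) and sets the DO bit.
query = dns.message.make_query("example.com.", dns.rdatatype.A,
                               want_dnssec=True)
response = dns.query.tls(query, SERVER_IP, port=853,
                         server_hostname=AUTH_NAME, timeout=5)

has_rrsig = any(rrset.rdtype == dns.rdatatype.RRSIG
                for rrset in response.answer)
ad_set = bool(response.flags & dns.flags.AD)

# A validating client can use the returned RRSIGs to validate locally;
# a non-validating client is simply trusting the AD bit set by the resolver.
print(f"RRSIGs returned: {has_rrsig}, AD bit set: {ad_set}")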
While the use of an authenticated and encrypted transport protects origin authentication and data integrity between a client and a DNS privacy service, it provides no proof (for a non-validating client) that the data provided by the DNS privacy service was actually DNSSEC authenticated. As with cleartext DNS, the user is still solely trusting the AD bit (if present) set by the resolver.

It should also be noted that the use of an encrypted transport for DNS actually solves many of the practical issues encountered by DNS validating clients, e.g. interference by middleboxes with cleartext DNS payloads is completely avoided. In this sense, a validating client that uses a DNS privacy service which supports DNSSEC has a far simpler task in terms of DNSSEC Roadblock avoidance [RFC8027].

5.1.5. Availability

DNS Privacy Threats:

o  A failed DNS privacy service could force the user to switch providers, fall back to cleartext, or accept no DNS service for the duration of the outage.

Mitigations:

A DNS privacy service should strive to engineer encrypted services to the same availability level as any unencrypted services it provides. Particular care should be taken to protect DNS privacy services against denial-of-service attacks, as experience has shown that unavailability of DNS resolution because of attacks is a significant motivation for users to switch services. See, for example, Section IV-C of [Passive-Observations-of-a-Large-DNS].

Techniques such as those described in Section 10 of [RFC7766] can be of use to operators to defend against such attacks.

5.1.6. Service options

DNS Privacy Threats:

o  Unfairly disadvantaging users of the privacy service with respect to the services available. This could force the user to switch providers, fall back to cleartext, or accept no DNS service.

Mitigations:

A DNS privacy service should deliver the same level of service as offered on unencrypted channels in terms of options such as filtering (or lack thereof), DNSSEC validation, etc.

5.1.7. Impact of Encryption on DNS Monitoring

DNS Privacy Threats:

o  Increased use of encryption impacts operators' ability to manage their networks [RFC8404].

Many monitoring solutions for DNS traffic rely on the plain text nature of this traffic and work by intercepting traffic on the wire, either using a separate view on the connection between clients and the resolver, or as a separate process on the resolver system that inspects network traffic. Such solutions will no longer function when traffic between clients and resolvers is encrypted. There are, however, legitimate reasons for DNS privacy service operators to inspect DNS traffic, e.g. to monitor for network security threats. Operators may therefore need to invest in alternative means of monitoring that rely on either the resolver software directly, or exporting DNS traffic from the resolver using e.g. [dnstap].

Optimization:

When implementing alternative means for traffic monitoring, operators of a DNS privacy service should consider using privacy-conscious means to do so (see Section 5.2 for more details on data handling, and also the discussion of the use of Bloom filters in Appendix B).
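As a purely illustrative sketch of such privacy-conscious monitoring (and not the approach of any particular tool), the following Python fragment records observed query names in a simple Bloom filter. Once names have been inserted, the underlying query log can be discarded: the filter can answer "was this domain queried?" with a tunable false-positive rate, but it cannot enumerate the names it holds or attribute them to clients. All sizes and names here are arbitrary.

import hashlib

class BloomFilter:
    """Minimal Bloom filter supporting insertion and membership tests only."""

    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

# Record observed QNAMEs, then later test for a domain of interest.
seen = BloomFilter()
for qname in ("example.com.", "www.example.org."):
    seen.add(qname.lower())
print("malicious.example." in seen)  # False (up to the false-positive rate)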
5.1.8. Limitations of fronting a DNS privacy service with a pure TLS proxy

DNS Privacy Threats:

o  Limited ability to manage or monitor incoming connections using DNS-specific techniques.

o  Misconfiguration of the target server could lead to data leakage if the path from the proxy to the target server is not encrypted.

Optimization:

Some operators may choose to implement DNS-over-TLS using a TLS proxy (e.g. [nginx], [haproxy], or [stunnel]) in front of a DNS nameserver because of proven robustness and capacity when handling large numbers of client connections, load-balancing capabilities, and good tooling. Currently, however, because such proxies typically have no specific handling of DNS as a protocol over TLS or DTLS, using them can restrict traffic management at the proxy layer and at the DNS server. For example, all traffic received by a nameserver behind such a proxy will appear to originate from the proxy, and DNS techniques such as ACLs, RRL, or DNS64 will be hard or impossible to implement in the nameserver.

Operators may choose to use a DNS-aware proxy such as [dnsdist], which offers custom options (similar to those proposed in [I-D.bellis-dnsop-xpf]) to add source information to packets, to address this shortcoming. It should be noted that such options potentially significantly increase the leaked information in the event of a misconfiguration.

5.2. Data at rest on the server

5.2.1. Data handling

[RFC6973] Threats:

o  Surveillance.

o  Stored data compromise.

o  Correlation.

o  Identification.

o  Secondary use.

o  Disclosure.

Other Threats:

o  Contravention of legal requirements not to process user data.

Mitigations:

The following are common activities for DNS service operators and in all cases should be minimized or completely avoided if possible for DNS privacy services. If data is retained, it should be encrypted and either aggregated, pseudonymized, or anonymized whenever possible. In general, the principle of data minimization described in [RFC6973] should be applied.

o  Transient data (e.g. data used for real-time monitoring and threat analysis, which might be held only in memory) should be retained for the shortest possible period deemed operationally feasible.

o  DNS traffic logs should be retained only for the period required to sustain operation of the service and, to the extent that such requirements exist, to meet regulatory requirements.

o  DNS privacy services should not track users except for the particular purpose of detecting and remedying technically malicious (e.g. DoS) or anomalous use of the service.

o  Data access should be minimized to only those personnel who require access to perform operational duties. It should also be limited to anonymized or pseudonymized data where operationally feasible, with access to full logs (if any are held) only permitted when necessary.

Optimizations:

o  Consider use of full disk encryption for logs and data capture storage.

5.2.2. Data minimization of network traffic

Data minimization refers to collecting, using, disclosing, and storing the minimal data necessary to perform a task, and this can be achieved by removing or obfuscating privacy-sensitive information in network traffic logs.
This is typically personal data, or data that 672 can be used to link a record to an individual, but may also include 673 revealing other confidential information, for example on the 674 structure of an internal corporate network. 676 The problem of effectively ensuring that DNS traffic logs contain no 677 or minimal privacy-sensitive information is not one that currently 678 has a generally agreed solution or any standards to inform this 679 discussion. This section presents an overview of current techniques 680 to simply provide reference on the current status of this work. 682 Research into data minimization techniques (and particularly IP 683 address pseudonymization/anonymization) was sparked in the late 684 1990s/early 2000s, partly driven by the desire to share significant 685 corpuses of traffic captures for research purposes. Several 686 techniques reflecting different requirements in this area and 687 different performance/resource tradeoffs emerged over the course of 688 the decade. Developments over the last decade have been both a 689 blessing and a curse; the large increase in size between an IPv4 and 690 an IPv6 address, for example, renders some techniques impractical, 691 but also makes available a much larger amount of input entropy, the 692 better to resist brute force re-identification attacks that have 693 grown in practicality over the period. 695 Techniques employed may be broadly categorized as either 696 anonymization or pseudonymization. The following discussion uses the 697 definitions from [RFC6973] Section 3, with additional observations 698 from [van-Dijkhuizen-et-al.] 700 o Anonymization. To enable anonymity of an individual, there must 701 exist a set of individuals that appear to have the same 702 attribute(s) as the individual. To the attacker or the observer, 703 these individuals must appear indistinguishable from each other. 705 o Pseudonymization. The true identity is deterministically replaced 706 with an alternate identity (a pseudonym). When the 707 pseudonymization schema is known, the process can be reversed, so 708 the original identity becomes known again. 710 In practice there is a fine line between the two; for example, how to 711 categorize a deterministic algorithm for data minimization of IP 712 addresses that produces a group of pseudonyms for a single given 713 address. 715 5.2.3. IP address pseudonymization and anonymization methods 717 As [I-D.ietf-dprive-rfc7626-bis] makes clear, the big privacy risk in 718 DNS is connecting DNS queries to an individual and the major vector 719 for this in DNS traffic is the client IP address. 721 There is active discussion in the space of effective pseudonymization 722 of IP addresses in DNS traffic logs, however there seems to be no 723 single solution that is widely recognized as suitable for all or most 724 use cases. There are also as yet no standards for this that are 725 unencumbered by patents. 727 The following table presents a high level comparison of various 728 techniques employed or under development in 2019 and classifies them 729 according to categorization of technique and other properties. 730 Appendix B provides a more detailed survey of these techniques and 731 definitions for the categories and properties listed below. The list 732 of techniques includes the main techniques in current use, but does 733 not claim to be comprehensive. 
+---------------------------+----+---+----+---+----+---+---+
| Categorization/Property   | GA | d | TC | C | TS | i | B |
+---------------------------+----+---+----+---+----+---+---+
| Anonymization             | X  | X | X  |   |    |   | X |
| Pseudonymization          |    |   |    | X | X  | X |   |
| Format preserving         | X  | X | X  | X | X  | X |   |
| Prefix preserving         |    |   | X  | X | X  |   |   |
| Replacement               |    |   | X  |   |    |   |   |
| Filtering                 | X  |   |    |   |    |   |   |
| Generalization            |    |   |    |   |    |   | X |
| Enumeration               |    | X |    |   |    |   |   |
| Reordering/Shuffling      |    |   | X  |   |    |   |   |
| Random substitution       |    |   | X  |   |    |   |   |
| Cryptographic permutation |    |   |    | X | X  | X |   |
| IPv6 issues               |    |   |    |   | X  |   |   |
| CPU intensive             |    |   |    | X |    |   |   |
| Memory intensive          |    |   | X  |   |    |   |   |
| Security concerns         |    |   |    |   |    | X |   |
+---------------------------+----+---+----+---+----+---+---+

              Table 1: Classification of techniques

GA = Google Analytics, d = dnswasher, TC = TCPdpriv, C = CryptoPAn, TS = TSA, i = ipcipher, B = Bloom filter

The choice of which method to use for a particular application will depend on the requirements of that application and consideration of the threat analysis of the particular situation.

For example, a common goal is that distributed packet captures must be in an existing data format such as PCAP [pcap] or C-DNS [RFC8618] that can be used as input to existing analysis tools. In that case, use of a format-preserving technique is essential. This, though, is not cost-free. Several authors (e.g. [Brenker-and-Arnes]) have observed that, because the entropy in an IPv4 address is limited, any format-preserving pseudonymization is vulnerable to an attack along the lines of a cryptographic chosen-plaintext attack: given a de-identified log from a target, an attacker who is capable of ensuring that packets are captured by the target, and who can send forged traffic with arbitrary source and destination addresses to that target, can mount such an attack.

5.2.4. Pseudonymization, anonymization, or discarding of other correlation data

DNS Privacy Threats:

o  Fingerprinting of the client OS via various means including: IP TTL/Hoplimit, TCP parameters (e.g. window size, ECN support, SACK), and OS-specific DNS query patterns (e.g. for network connectivity, captive portal detection, or OS-specific updates).

o  Fingerprinting of the client application or TLS library by e.g. TLS version/Cipher suite combinations or other connection parameters.

o  Correlation of queries on multiple TCP sessions originating from the same IP address.

o  Correlation of queries on multiple TLS sessions originating from the same client, including via session resumption mechanisms.

o  Resolvers _might_ receive client identifiers, e.g. MAC addresses in EDNS(0) options - some Customer-premises equipment (CPE) devices are known to add them.

o  HTTP headers (e.g., User-Agent, Accept, Accept-Encoding).

Mitigations:

o  Data minimization or discarding of such correlation data.

5.2.5. Cache snooping

[RFC6973] Threats:

o  Surveillance:

   *  Profiling of client queries by malicious third parties.

Mitigations:

o  See [ISC-Knowledge-database-on-cache-snooping] for an example discussion on defending against cache snooping.

5.3. Data sent onwards from the server

In this section we consider both data sent on the wire in upstream queries and data shared with third parties.

5.3.1.
Protocol recommendations 826 [RFC6973] Threats: 828 o Surveillance: 830 * Transmission of identifying data upstream. 832 Mitigations: 834 As specified in [RFC8310] for DNS-over-TLS but applicable to any DNS 835 Privacy services the server should: 837 o Implement QNAME minimization [RFC7816]. 839 o Honor a SOURCE PREFIX-LENGTH set to 0 in a query containing the 840 EDNS(0) Client Subnet (ECS) option and not send an ECS option in 841 upstream queries. 843 Optimizations: 845 o The server should either: 847 * not use the ECS option in upstream queries at all, or 849 * offer alternative services, one that sends ECS and one that 850 does not. 852 If operators do offer a service that sends the ECS options upstream 853 they should use the shortest prefix that is operationally feasible 854 and ideally use a policy of whitelisting upstream servers to send ECS 855 to in order to minimize data leakage. Operators should make clear in 856 any policy statement what prefix length they actually send and the 857 specific policy used. 859 Whitelisting has the benefit that not only does the operator know 860 which upstream servers can use ECS but also allows the operator to 861 decide which upstream servers apply privacy policies that the 862 operator is happy with. However some operators consider whitelisting 863 to incur significant operational overhead compared to dynamic 864 detection of ECS on authoritative servers. 866 Additional options: 868 o Aggressive Use of DNSSEC-Validated Cache [RFC8198] and [RFC8020] 869 (NXDOMAIN: There Really Is Nothing Underneath) to reduce the 870 number of queries to authoritative servers to increase privacy. 872 o Run a copy of the root zone on loopback [RFC7706] to avoid making 873 queries to the root servers that might leak information. 875 5.3.2. Client query obfuscation 877 Additional options: 879 Since queries from recursive resolvers to authoritative servers are 880 performed using cleartext (at the time of writing), resolver services 881 need to consider the extent to which they may be directly leaking 882 information about their client community via these upstream queries 883 and what they can do to mitigate this further. Note, that even when 884 all the relevant techniques described above are employed there may 885 still be attacks possible, e.g. [Pitfalls-of-DNS-Encryption]. For 886 example, a resolver with a very small community of users risks 887 exposing data in this way and ought to obfuscate this traffic by 888 mixing it with 'generated' traffic to make client characterization 889 harder. The resolver could also employ aggressive pre-fetch 890 techniques as a further measure to counter traffic analysis. 892 At the time of writing there are no standardized or widely recognized 893 techniques to perform such obfuscation or bulk pre-fetches. 895 Another technique that particularly small operators may consider is 896 forwarding local traffic to a larger resolver (with a privacy policy 897 that aligns with their own practices) over an encrypted protocol so 898 that the upstream queries are obfuscated among those of the large 899 resolver. 901 5.3.3. Data sharing 903 [RFC6973] Threats: 905 o Surveillance. 907 o Stored data compromise. 909 o Correlation. 911 o Identification. 913 o Secondary use. 915 o Disclosure. 917 DNS Privacy Threats: 919 o Contravention of legal requirements not to process user data. 
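The mitigations that follow recommend sharing only aggregated and/or anonymized data. As a purely illustrative sketch of that idea (not a recommendation of any particular format or tool), the following Python fragment reduces a list of (client IP, query name) records to per-name query counts, discarding client IP addresses entirely before the data leaves the operator. The record structure and names are hypothetical.

from collections import Counter

# Hypothetical in-memory log records: (client_ip, qname) tuples.
records = [
    ("198.51.100.7", "example.com."),
    ("198.51.100.7", "www.example.org."),
    ("203.0.113.42", "example.com."),
]

# Aggregate to per-QNAME counts; client IP addresses are dropped and
# cannot be recovered from the shared data.
per_qname_counts = Counter(qname.lower() for _ip, qname in records)

for qname, count in per_qname_counts.most_common():
    print(f"{qname}\t{count}")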
921 Mitigations: 923 Operators should not provide identifiable data to third-parties 924 without explicit consent from clients (we take the stance here that 925 simply using the resolution service itself does not constitute 926 consent). 928 Operators should consider including specific guidelines for the 929 collection of aggregated and/or anonymized data for research 930 purposes, within or outside of their own organization. This can 931 benefit not only the operator (through inclusion in novel research) 932 but also the wider Internet community. See the policy published by 933 SURFnet [SURFnet-policy] on data sharing for research as an example. 935 6. DNS Recursive Operator Privacy (DROP) statement 937 The following section outlines the recommended contents of a DROP 938 statement an operator might choose to publish. An example statement 939 for a specific scenario is provided for guidance only in Appendix C. 941 6.1. Recommended contents of a DROP statement 943 6.1.1. Policy 945 1. Treatment of IP addresses. Make an explicit statement that IP 946 addresses are treated as PII. 948 2. Data collection and sharing. Specify clearly what data 949 (including IP addresses) is: 951 * Collected and retained by the operator, and for what period it 952 is retained. 954 * Shared with partners. 956 * Shared, sold, or rented to third-parties. 958 and in each case whether it is aggregated, pseudonymized, or 959 anonymized and the conditions of data transfer. 961 3. Exceptions. Specify any exceptions to the above, for example 962 technically malicious or anomalous behavior. 964 4. Associated entities. Declare any partners, third-party 965 affiliations, or sources of funding. 967 5. Correlation. Whether user DNS data is correlated or combined 968 with any other personal information held by the operator. 970 6. Result filtering. This section should explain whether the 971 operator filters, edits or alters in any way the replies that it 972 receives from the authoritative servers for each DNS zone, before 973 forwarding them to the clients. For each category listed below, 974 the operator should also specify how the filtering lists are 975 created and managed, whether it employs any third-party sources 976 for such lists, and which ones. 978 * Specify if any replies are being filtered out or altered for 979 network and computer security reasons (e.g. preventing 980 connections to malware-spreading websites or botnet control 981 servers). 983 * Specify if any replies are being filtered out or altered for 984 mandatory legal reasons, due to applicable legislation or 985 binding orders by courts and other public authorities. 987 * Specify if any replies are being filtered out or altered for 988 voluntary legal reasons, due to an internal policy by the 989 operator aiming at reducing potential legal risks. 991 * Specify if any replies are being filtered out or altered for 992 any other reason, including commercial ones. 994 6.1.2. Practice 996 This section should explain the current operational practices of the 997 service. 999 1. Deviations. Specify any temporary or permanent deviations from 1000 the policy for operational reasons. 1002 2. Client facing capabilities. With reference to section Section 5 1003 provide specific details of which capabilities are provided on 1004 which client facing addresses and ports: 1006 1. For DoT, specify the authentication domain name to be used 1007 (if any). 1009 2. For DoT, specify the SPKI pin sets to be used (if any) and 1010 policy for rolling keys. 1012 3. 
Upstream capabilities. With reference to section Section 5.3 1013 provide specific details of which capabilities are provided 1014 upstream for data sent to authoritative servers. 1016 4. Support. Provide contact/support information for the service. 1018 5. Jurisdiction. This section should communicate the applicable 1019 jurisdictions and law enforcement regimes under which the service 1020 is being provided. 1022 1. Specify the operator entity or entities that will control the 1023 data and be responsible for their treatment, and their legal 1024 place of business. 1026 2. Specify, either directly or by pointing to the applicable 1027 privacy policy, the relevant privacy laws that apply to the 1028 treatment of the data, the rights that users enjoy in regard 1029 to their own personal information that is treated by the 1030 service, and how they can contact the operator to enforce 1031 them. 1033 3. Additionally specify the countries in which the servers 1034 handling the DNS requests and the data are located (if the 1035 operator applies a geolocation policy so that requests from 1036 certain countries are only served by certain servers, this 1037 should be specified as well). 1039 4. Specify whether the operator has any agreement in place with 1040 law enforcement agencies, or other public and private parties 1041 dealing with security and intelligence, to give them access 1042 to the servers and/or to the data. 1044 6.2. Current policy and privacy statements 1046 A tabular comparison of policy and privacy statements from various 1047 DNS Privacy service operators based loosely on the proposed DROP 1048 structure can be found at [policy-comparison]. The analysis is based 1049 on the data available in December 2019. 1051 We note that the existing set of policies vary widely in style, 1052 content and detail and it is not uncommon for the full text for a 1053 given operator to equate to more than 10 pages of moderate font sized 1054 A4 text. It is a non-trivial task today for a user to extract a 1055 meaningful overview of the different services on offer. 1057 It is also noted that Mozilla have published a DoH resolver policy 1058 [DoH-resolver-policy], which describes the minimum set of policy 1059 requirements that a party must satisfy to be considered as a 1060 potential partner for Mozilla's Trusted Recursive Resolver (TRR) 1061 program. 1063 6.3. Enforcement/accountability 1065 Transparency reports may help with building user trust that operators 1066 adhere to their policies and practices. 1068 Independent monitoring or analysis could be performed where possible 1069 of: 1071 o ECS, QNAME minimization, EDNS(0) padding, etc. 1073 o Filtering. 1075 o Uptime. 1077 This is by analogy with several TLS or website analysis tools that 1078 are currently available e.g. [SSL-Labs] or [Internet.nl]. 1080 Additionally operators could choose to engage the services of a third 1081 party auditor to verify their compliance with their published DROP 1082 statement. 1084 7. IANA considerations 1086 None 1088 8. Security considerations 1090 Security considerations for DNS-over-TCP are given in [RFC7766], many 1091 of which are generally applicable to session based DNS. Guidance on 1092 operational requirements for DNS-over-TCP are also available in [I- 1093 D.dnsop-dns-tcp-requirements]. 1095 9. 
Acknowledgements 1097 Many thanks to Amelia Andersdotter for a very thorough review of the 1098 first draft of this document and Stephen Farrell for a thorough 1099 review at WGLC and for suggesting the inclusion of an example DROP 1100 statement. Thanks to John Todd for discussions on this topic, and to 1101 Stephane Bortzmeyer, Puneet Sood and Vittorio Bertola for review. 1102 Thanks to Daniel Kahn Gillmor, Barry Green, Paul Hoffman, Dan York, 1103 John Reed, Lorenzo Colitti for comments at the mic. Thanks to 1104 Loganaden Velvindron for useful updates to the text. 1106 Sara Dickinson thanks the Open Technology Fund for a grant to support 1107 the work on this document. 1109 10. Contributors 1111 The below individuals contributed significantly to the document: 1113 John Dickinson 1114 Sinodun Internet Technologies 1115 Magdalen Centre 1116 Oxford Science Park 1117 Oxford OX4 4GA 1118 United Kingdom 1120 Jim Hague 1121 Sinodun Internet Technologies 1122 Magdalen Centre 1123 Oxford Science Park 1124 Oxford OX4 4GA 1125 United Kingdom 1127 11. Changelog 1129 draft-ietf-dprive-bcp-op-08 1131 o Address IETF Last call comments. 1133 draft-ietf-dprive-bcp-op-07 1135 o Editorial changes following AD review. 1137 o Change all URIs to Informational References. 1139 draft-ietf-dprive-bcp-op-06 1141 o Final minor changes from second WGLC. 1143 o Remove some text on consent: 1145 * Paragraph 2 in section 5.3.3 1147 * Item 6 in the DROP Practice statement (and example) 1149 o Remove .onion and TLSA options 1151 o Include ACME as a reference for certificate management 1153 o Update text on session resumption usage 1155 o Update section 5.2.4 on client fingerprinting 1157 draft-ietf-dprive-bcp-op-04 1159 o Change DPPPS to DROP (DNS Recursive Operator Privacy) statement 1161 o Update structure of DROP slightly 1163 o Add example DROP statement 1165 o Add text about restricting access to full logs 1167 o Move table in section 5.2.3 from SVG to inline table 1169 o Fix many editorial and reference nits 1171 draft-ietf-dprive-bcp-op-03 1173 o Add paragraph about operational impact 1175 o Move DNSSEC requirement out of the Appendix into main text as a 1176 privacy threat that should be mitigated 1178 o Add TLS version/Cipher suite as tracking threat 1180 o Add reference to Mozilla TRR policy 1182 o Remove several TODOs and QUESTIONS. 1184 draft-ietf-dprive-bcp-op-02 1186 o Change 'open resolver' for 'public resolver' 1188 o Minor editorial changes 1189 o Remove recommendation to run a separate TLS 1.3 service 1191 o Move TLSA to purely a optimization in Section 5.2.1 1193 o Update reference on minimal DoH headers. 1195 o Add reference on user switching provider after service issues in 1196 Section 5.1.4 1198 o Add text in Section 5.1.6 on impact on operators. 1200 o Add text on additional threat to TLS proxy use (Section 5.1.7) 1202 o Add reference in Section 5.3.1 on example policies. 1204 draft-ietf-dprive-bcp-op-01 1206 o Many minor editorial fixes 1208 o Update DoH reference to RFC8484 and add more text on DoH 1210 o Split threat descriptions into ones directly referencing RFC6973 1211 and other DNS Privacy threats 1213 o Improve threat descriptions throughout 1215 o Remove reference to the DNSSEC TLS Chain Extension draft until new 1216 version submitted. 1218 o Clarify use of whitelisting for ECS 1220 o Re-structure the DPPPS, add Result filtering section. 1222 o Remove the direct inclusion of privacy policy comparison, now just 1223 reference dnsprivacy.org and an example of such work. 
1225 o Add an appendix briefly discussing DNSSEC 1227 o Update affiliation of 1 author 1229 draft-ietf-dprive-bcp-op-00 1231 o Initial commit of re-named document after adoption to replace 1232 draft-dickinson-dprive-bcp-op-01 1234 12. References 1236 12.1. Normative References 1238 [I-D.ietf-dprive-rfc7626-bis] 1239 Bortzmeyer, S. and S. Dickinson, "DNS Privacy 1240 Considerations", draft-ietf-dprive-rfc7626-bis-04 (work in 1241 progress), January 2020. 1243 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1244 Requirement Levels", BCP 14, RFC 2119, 1245 DOI 10.17487/RFC2119, March 1997, . 1248 [RFC6265] Barth, A., "HTTP State Management Mechanism", RFC 6265, 1249 DOI 10.17487/RFC6265, April 2011, . 1252 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 1253 Morris, J., Hansen, M., and R. Smith, "Privacy 1254 Considerations for Internet Protocols", RFC 6973, 1255 DOI 10.17487/RFC6973, July 2013, . 1258 [RFC7525] Sheffer, Y., Holz, R., and P. Saint-Andre, 1259 "Recommendations for Secure Use of Transport Layer 1260 Security (TLS) and Datagram Transport Layer Security 1261 (DTLS)", BCP 195, RFC 7525, DOI 10.17487/RFC7525, May 1262 2015, . 1264 [RFC7766] Dickinson, J., Dickinson, S., Bellis, R., Mankin, A., and 1265 D. Wessels, "DNS Transport over TCP - Implementation 1266 Requirements", RFC 7766, DOI 10.17487/RFC7766, March 2016, 1267 . 1269 [RFC7816] Bortzmeyer, S., "DNS Query Name Minimisation to Improve 1270 Privacy", RFC 7816, DOI 10.17487/RFC7816, March 2016, 1271 . 1273 [RFC7828] Wouters, P., Abley, J., Dickinson, S., and R. Bellis, "The 1274 edns-tcp-keepalive EDNS0 Option", RFC 7828, 1275 DOI 10.17487/RFC7828, April 2016, . 1278 [RFC7830] Mayrhofer, A., "The EDNS(0) Padding Option", RFC 7830, 1279 DOI 10.17487/RFC7830, May 2016, . 1282 [RFC7858] Hu, Z., Zhu, L., Heidemann, J., Mankin, A., Wessels, D., 1283 and P. Hoffman, "Specification for DNS over Transport 1284 Layer Security (TLS)", RFC 7858, DOI 10.17487/RFC7858, May 1285 2016, . 1287 [RFC7871] Contavalli, C., van der Gaast, W., Lawrence, D., and W. 1288 Kumari, "Client Subnet in DNS Queries", RFC 7871, 1289 DOI 10.17487/RFC7871, May 2016, . 1292 [RFC7873] Eastlake 3rd, D. and M. Andrews, "Domain Name System (DNS) 1293 Cookies", RFC 7873, DOI 10.17487/RFC7873, May 2016, 1294 . 1296 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1297 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1298 May 2017, . 1300 [RFC8310] Dickinson, S., Gillmor, D., and T. Reddy, "Usage Profiles 1301 for DNS over TLS and DNS over DTLS", RFC 8310, 1302 DOI 10.17487/RFC8310, March 2018, . 1305 [RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of 1306 Pervasive Encryption on Operators", RFC 8404, 1307 DOI 10.17487/RFC8404, July 2018, . 1310 [RFC8467] Mayrhofer, A., "Padding Policies for Extension Mechanisms 1311 for DNS (EDNS(0))", RFC 8467, DOI 10.17487/RFC8467, 1312 October 2018, . 1314 [RFC8484] Hoffman, P. and P. McManus, "DNS Queries over HTTPS 1315 (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018, 1316 . 1318 12.2. Informative References 1320 [Bloom-filter] 1321 van Rijswijk-Deij, R., Rijnders, G., Bomhoff, M., and L. 1322 Allodi, "Privacy-Conscious Threat Intelligence Using 1323 DNSBLOOM", 2019, 1324 . 1326 [Brenker-and-Arnes] 1327 Brekne, T. and A. Arnes, "CIRCUMVENTING IP-ADDRESS 1328 PSEUDONYMIZATION", 2005, . 1331 [Crypto-PAn] 1332 CESNET, "Crypto-PAn", 2015, 1333 . 1336 [DNS-Privacy-not-so-private] 1337 Silby, S., Juarez, M., Vallina-Rodriguez, N., and C. 
1338 Troncosol, "DNS Privacy not so private: the traffic 1339 analysis perspective.", 2019, 1340 . 1342 [dnsdist] PowerDNS, "dnsdist Overview", 2019, . 1344 [dnstap] dnstap.info, "DNSTAP", 2019, . 1346 [DoH-resolver-policy] 1347 Mozilla, "Security/DOH-resolver-policy", 2019, 1348 . 1350 [Geolocation-Impact-Assessement] 1351 Conversion Works, "Anonymize IP Geolocation Accuracy 1352 Impact Assessment", 2017, 1353 . 1356 [haproxy] haproxy.org, "HAPROXY", 2019, . 1358 [Harvan] Harvan, M., "Prefix- and Lexicographical-order-preserving 1359 IP Address Anonymization", 2006, 1360 . 1362 [I-D.bellis-dnsop-xpf] 1363 Bellis, R., Dijk, P., and R. Gacogne, "DNS X-Proxied-For", 1364 draft-bellis-dnsop-xpf-04 (work in progress), March 2018. 1366 [I-D.ietf-dnsop-dns-tcp-requirements] 1367 Kristoff, J. and D. Wessels, "DNS Transport over TCP - 1368 Operational Requirements", draft-ietf-dnsop-dns-tcp- 1369 requirements-05 (work in progress), November 2019. 1371 [I-D.ietf-httpbis-bcp56bis] 1372 Nottingham, M., "Building Protocols with HTTP", draft- 1373 ietf-httpbis-bcp56bis-09 (work in progress), November 1374 2019. 1376 [Internet.nl] 1377 Internet.nl, "Internet.nl Is Your Internet Up To Date?", 1378 2019, . 1380 [IP-Anonymization-in-Analytics] 1381 Google, "IP Anonymization in Analytics", 2019, 1382 . 1385 [ipcipher1] 1386 Hubert, B., "On IP address encryption: security analysis 1387 with respect for privacy", 2017, 1388 . 1391 [ipcipher2] 1392 PowerDNS, "ipcipher", 2017, . 1395 [ipcrypt] veorq, "ipcrypt: IP-format-preserving encryption", 2015, 1396 . 1398 [ipcrypt-analysis] 1399 Aumasson, J., "Analysis of ipcrypt?", 2018, 1400 . 1403 [ISC-Knowledge-database-on-cache-snooping] 1404 ISC Knowledge Database, "DNS Cache snooping - should I be 1405 concerned?", 2018, . 1407 [nginx] nginx.org, "NGINX", 2019, . 1409 [Passive-Observations-of-a-Large-DNS] 1410 de Vries, W., van Rijswijk-Deij, R., de Boer, P., and A. 1411 Pras, "Passive Observations of a Large DNS Service: 2.5 1412 Years in the Life of Google", 2018, 1413 . 1416 [pcap] tcpdump.org, "PCAP", 2016, . 1418 [Pitfalls-of-DNS-Encryption] 1419 Shulman, H., "Pretty Bad Privacy: Pitfalls of DNS 1420 Encryption", 2014, . 1423 [policy-comparison] 1424 dnsprivacy.org, "Comparison of policy and privacy 1425 statements 2019", 2019, 1426 . 1429 [Ramaswamy-and-Wolf] 1430 Ramaswamy, R. and T. Wolf, "High-Speed Prefix-Preserving 1431 IP Address Anonymization for Passive Measurement Systems", 1432 2007, 1433 . 1435 [RFC2560] Myers, M., Ankney, R., Malpani, A., Galperin, S., and C. 1436 Adams, "X.509 Internet Public Key Infrastructure Online 1437 Certificate Status Protocol - OCSP", RFC 2560, 1438 DOI 10.17487/RFC2560, June 1999, . 1441 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1442 Rose, "DNS Security Introduction and Requirements", 1443 RFC 4033, DOI 10.17487/RFC4033, March 2005, 1444 . 1446 [RFC5077] Salowey, J., Zhou, H., Eronen, P., and H. Tschofenig, 1447 "Transport Layer Security (TLS) Session Resumption without 1448 Server-Side State", RFC 5077, DOI 10.17487/RFC5077, 1449 January 2008, . 1451 [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., 1452 Housley, R., and W. Polk, "Internet X.509 Public Key 1453 Infrastructure Certificate and Certificate Revocation List 1454 (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, 1455 . 1457 [RFC6235] Boschi, E. and B. Trammell, "IP Flow Anonymization 1458 Support", RFC 6235, DOI 10.17487/RFC6235, May 2011, 1459 . 1461 [RFC7457] Sheffer, Y., Holz, R., and P. 
Saint-Andre, "Summarizing 1462 Known Attacks on Transport Layer Security (TLS) and 1463 Datagram TLS (DTLS)", RFC 7457, DOI 10.17487/RFC7457, 1464 February 2015, . 1466 [RFC7706] Kumari, W. and P. Hoffman, "Decreasing Access Time to Root 1467 Servers by Running One on Loopback", RFC 7706, 1468 DOI 10.17487/RFC7706, November 2015, . 1471 [RFC8020] Bortzmeyer, S. and S. Huque, "NXDOMAIN: There Really Is 1472 Nothing Underneath", RFC 8020, DOI 10.17487/RFC8020, 1473 November 2016, . 1475 [RFC8027] Hardaker, W., Gudmundsson, O., and S. Krishnaswamy, 1476 "DNSSEC Roadblock Avoidance", BCP 207, RFC 8027, 1477 DOI 10.17487/RFC8027, November 2016, . 1480 [RFC8094] Reddy, T., Wing, D., and P. Patil, "DNS over Datagram 1481 Transport Layer Security (DTLS)", RFC 8094, 1482 DOI 10.17487/RFC8094, February 2017, . 1485 [RFC8198] Fujiwara, K., Kato, A., and W. Kumari, "Aggressive Use of 1486 DNSSEC-Validated Cache", RFC 8198, DOI 10.17487/RFC8198, 1487 July 2017, . 1489 [RFC8446] Rescorla, E., "The Transport Layer Security (TLS) Protocol 1490 Version 1.3", RFC 8446, DOI 10.17487/RFC8446, August 2018, 1491 . 1493 [RFC8490] Bellis, R., Cheshire, S., Dickinson, J., Dickinson, S., 1494 Lemon, T., and T. Pusateri, "DNS Stateful Operations", 1495 RFC 8490, DOI 10.17487/RFC8490, March 2019, 1496 . 1498 [RFC8499] Hoffman, P., Sullivan, A., and K. Fujiwara, "DNS 1499 Terminology", BCP 219, RFC 8499, DOI 10.17487/RFC8499, 1500 January 2019, . 1502 [RFC8555] Barnes, R., Hoffman-Andrews, J., McCarney, D., and J. 1503 Kasten, "Automatic Certificate Management Environment 1504 (ACME)", RFC 8555, DOI 10.17487/RFC8555, March 2019, 1505 . 1507 [RFC8618] Dickinson, J., Hague, J., Dickinson, S., Manderson, T., 1508 and J. Bond, "Compacted-DNS (C-DNS): A Format for DNS 1509 Packet Capture", RFC 8618, DOI 10.17487/RFC8618, September 1510 2019, . 1512 [SSL-Labs] 1513 SSL Labs, "SSL Server Test", 2019, 1514 . 1516 [stunnel] ISC Knowledge Database, "DNS-over-TLS", 2018, 1517 . 1519 [SURFnet-policy] 1520 SURFnet, "SURFnet Data Sharing Policy", 2016, 1521 . 1523 [TCPdpriv] 1524 Ipsilon Networks, Inc., "TCPdpriv", 2005, 1525 . 1527 [van-Dijkhuizen-et-al.] 1528 Van Dijkhuizen , N. and J. Van Der Ham, "A Survey of 1529 Network Traffic Anonymisation Techniques and 1530 Implementations", 2018, . 1532 [Xu-et-al.] 1533 Fan, J., Xu, J., Ammar, M., and S. Moon, "Prefix- 1534 preserving IP address anonymization: measurement-based 1535 security evaluation and a new cryptography-based scheme", 1536 2004, . 1539 Appendix A. Documents 1541 This section provides an overview of some DNS privacy related 1542 documents, however, this is neither an exhaustive list nor a 1543 definitive statement on the characteristic of the document. 1545 A.1. Potential increases in DNS privacy 1547 These documents are limited in scope to communications between stub 1548 clients and recursive resolvers: 1550 o 'Specification for DNS over Transport Layer Security (TLS)' 1551 [RFC7858], referred to here as 'DNS-over-TLS'. 1553 o 'DNS over Datagram Transport Layer Security (DTLS)' [RFC8094], 1554 referred to here as 'DNS-over-DTLS'. Note that this document has 1555 the Category of Experimental. 1557 o 'DNS Queries over HTTPS (DoH)' [RFC8484] referred to here as DoH. 1559 o 'Usage Profiles for DNS over TLS and DNS over DTLS' [RFC8310]. 1561 o 'The EDNS(0) Padding Option' [RFC7830] and 'Padding Policy for 1562 EDNS(0)' [RFC8467]. 
These documents apply to recursive and authoritative DNS but are
relevant when considering the operation of a recursive server:

o  'DNS Query Name Minimisation to Improve Privacy' [RFC7816],
   referred to here as 'QNAME minimization'.

A.2.  Potential decreases in DNS privacy

These documents relate to functionality that could provide increased
tracking of user activity as a side effect:

o  'Client Subnet in DNS Queries' [RFC7871].

o  'Domain Name System (DNS) Cookies' [RFC7873].

o  'Transport Layer Security (TLS) Session Resumption without
   Server-Side State' [RFC5077], referred to here as simply TLS
   session resumption.

o  [RFC8446] Appendix C.4, which describes Client Tracking
   Prevention in TLS 1.3.

o  'Compacted-DNS (C-DNS): A Format for DNS Packet Capture'
   [RFC8618].

o  Passive DNS [RFC8499].

Section 8 of [RFC8484] outlines the privacy considerations of DoH.
Note that, depending on the specifics of a DoH implementation, there
may be increased identification and tracking compared to other DNS
transports.

A.3.  Related operational documents

o  'DNS Transport over TCP - Implementation Requirements' [RFC7766].

o  'Operational requirements for DNS-over-TCP'
   [I-D.ietf-dnsop-dns-tcp-requirements].

o  'The edns-tcp-keepalive EDNS0 Option' [RFC7828].

o  'DNS Stateful Operations' [RFC8490].

Appendix B.  IP address techniques

Data minimization methods may be categorized by the processing used
and the properties of their outputs.  The following builds on the
categorization employed in [RFC6235]:

o  Format-preserving.  Normally when encrypting, the original data
   length and patterns in the data should be hidden from an
   attacker.  Some applications of de-identification, such as
   network capture de-identification, require that the de-identified
   data is of the same form as the original data, to allow the data
   to be parsed in the same way as the original.

o  Prefix preservation.  Values such as IP addresses and MAC
   addresses contain prefix information that can be valuable in
   analysis, e.g. manufacturer ID in MAC addresses, subnet in IP
   addresses.  Prefix preservation ensures that prefixes are
   de-identified consistently; e.g. if two IP addresses are from the
   same subnet, a prefix-preserving de-identification will ensure
   that their de-identified counterparts will also share a subnet.
   Prefix preservation may be fixed (i.e. based on a user-selected
   prefix length identified in advance to be preserved) or general.

o  Replacement.  A one-to-one replacement of a field to a new value
   of the same type, for example using a regular expression.

o  Filtering.  Removing (and thus truncating) or replacing data in a
   field.  Field data can be overwritten, often with zeros, either
   partially (grey marking) or completely (black marking).

o  Generalization.  Data is replaced by more general data with
   reduced specificity.  One example would be to replace all TCP/UDP
   port numbers with one of two fixed values indicating whether the
   original port was ephemeral (>=1024) or non-ephemeral (<1024).
   Another example, precision degradation, reduces the accuracy of
   e.g. a numeric value or a timestamp.  A short sketch illustrating
   this category appears after this list.

o  Enumeration.  With data from a well-ordered set, replace the
   first data item using a random initial value and then allocate
   ordered values for subsequent data items.  When used with
   timestamp data, this preserves ordering but loses precision and
   distance.

o  Reordering/shuffling.  Preserving the original data, but
   rearranging its order, often in a random manner.

o  Random substitution.  As replacement, but using randomly
   generated replacement values.

o  Cryptographic permutation.  Using a permutation function, such as
   a hash function or cryptographic block cipher, to generate a
   replacement de-identified value.
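As a concrete, purely illustrative example of the generalization
category above, the following Python sketch buckets port numbers
into the two fixed values described in the list and degrades
timestamp precision.  The helper names and the one-hour interval are
invented for this sketch and are not drawn from any particular tool:

      # Illustrative sketch of "generalization": port bucketing and
      # timestamp precision degradation.  Names and parameters are
      # hypothetical.
      from datetime import datetime, timezone

      def generalize_port(port: int) -> str:
          # Replace a TCP/UDP port with a coarse category.
          return "ephemeral" if port >= 1024 else "non-ephemeral"

      def degrade_timestamp(ts: datetime,
                            interval_s: int = 3600) -> datetime:
          # Precision degradation: round the timestamp down to the
          # start of the enclosing interval (one hour by default).
          epoch = ts.timestamp()
          return datetime.fromtimestamp(epoch - (epoch % interval_s),
                                        tz=timezone.utc)

      print(generalize_port(53))       # "non-ephemeral"
      print(generalize_port(49152))    # "ephemeral"
      print(degrade_timestamp(datetime.now(timezone.utc)))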
B.1.  Google Analytics non-prefix filtering

Since May 2010, Google Analytics has provided a facility
[IP-Anonymization-in-Analytics] that allows website owners to
request that all their users' IP addresses are anonymized within
Google Analytics processing.  This very basic anonymization simply
sets to zero the least significant 8 bits of IPv4 addresses, and the
least significant 80 bits of IPv6 addresses.  The level of
anonymization this produces is perhaps questionable.  Some analysis
results [Geolocation-Impact-Assessement] suggest that the impact of
this on reducing the accuracy of determining the user's location
from their IP address is less than might be hoped: for UK users, the
average discrepancy in identification of the user's city is no more
than 17%.

Anonymization: Format-preserving, Filtering (grey marking).

B.2.  dnswasher

Since 2006, PowerDNS have included a de-identification tool,
dnswasher, with their PowerDNS product.  This is a PCAP filter that
performs a one-to-one mapping of end user IP addresses with an
anonymized address.  A table of user IP addresses and their
de-identified counterparts is kept; the first IPv4 user address is
translated to 0.0.0.1, the second to 0.0.0.2 and so on.  The
de-identified address therefore depends on the order in which
addresses arrive in the input, and when running over a large amount
of data the address translation tables can grow to a significant
size.

Anonymization: Format-preserving, Enumeration.

B.3.  Prefix-preserving map

Used in [TCPdpriv], this algorithm stores a set of original and
anonymized IP address pairs.  When a new IP address arrives, it is
compared with previous addresses to determine the longest prefix
match.  The new address is anonymized by using the same prefix, with
the remainder of the address anonymized with a random value.  The
use of a random value means that TCPdpriv is not deterministic;
different anonymized values will be generated on each run.  The need
to store previous addresses means that TCPdpriv has significant and
unbounded memory requirements, and because anonymized addresses must
be allocated sequentially it cannot be used in parallel processing.
A simplified sketch of the general approach appears at the end of
this section.

Anonymization: Format-preserving, prefix preservation (general).
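The following Python sketch is a much simplified illustration of the
prefix-preserving map just described.  It is not the TCPdpriv
implementation; the class and function names are invented for this
example, and collision handling and IPv6 support are omitted:

      # Simplified, illustrative prefix-preserving map (IPv4 only).
      # This is NOT TCPdpriv; names are hypothetical and collision
      # handling is omitted for brevity.
      import random
      import ipaddress

      def common_prefix_len(a: str, b: str) -> int:
          n = 0
          while n < len(a) and a[n] == b[n]:
              n += 1
          return n

      class PrefixPreservingMap:
          def __init__(self):
              # original bit-string -> anonymized bit-string
              self.table = {}

          def anonymize(self, addr: str) -> str:
              orig = format(int(ipaddress.IPv4Address(addr)), "032b")
              if orig not in self.table:
                  # Find the longest prefix shared with any previously
                  # seen address, reuse its anonymized prefix, and fill
                  # the remaining bits randomly.
                  best_len, best_anon = 0, ""
                  for seen, anon in self.table.items():
                      l = common_prefix_len(orig, seen)
                      if l > best_len:
                          best_len, best_anon = l, anon
                  suffix = "".join(random.choice("01")
                                   for _ in range(32 - best_len))
                  self.table[orig] = best_anon[:best_len] + suffix
              return str(ipaddress.IPv4Address(int(self.table[orig], 2)))

      pmap = PrefixPreservingMap()
      print(pmap.anonymize("192.0.2.1"), pmap.anonymize("192.0.2.200"))

Note that, as described above, the table grows without bound and the
output depends on the order in which addresses arrive.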
B.4.  Cryptographic Prefix-Preserving Pseudonymization

Cryptographic prefix-preserving pseudonymization was originally
proposed as an improvement to the prefix-preserving map implemented
in TCPdpriv, described in [Xu-et-al.] and implemented in the
[Crypto-PAn] tool.  Crypto-PAn is now frequently used as an acronym
for the algorithm.  Initially it was described for IPv4 addresses
only; extension for IPv6 addresses was proposed in [Harvan].  This
approach uses a cryptographic function rather than a random value,
so the pseudonymization is determined uniquely by the encryption key
and is deterministic.  It requires a separate AES encryption for
each output bit, so it has a non-trivial computation overhead.  This
can be mitigated to some extent (for IPv4, at least) by
pre-calculating results for some number of prefix bits.

Pseudonymization: Format-preserving, prefix preservation (general).

B.5.  Top-hash Subtree-replicated Anonymization

Proposed in [Ramaswamy-and-Wolf], Top-hash Subtree-replicated
Anonymization (TSA) originated in response to the requirement for
faster processing than Crypto-PAn.  It uses hashing for the most
significant byte of an IPv4 address, and a pre-calculated binary
tree structure for the remainder of the address.  To save memory
space, replication is used within the tree structure, reducing the
size of the pre-calculated structures to a few megabytes for IPv4
addresses.  Address pseudonymization is done via hash and table
lookup, and so requires minimal computation.  However, due to the
much larger address space of IPv6, TSA is not memory efficient for
IPv6.

Pseudonymization: Format-preserving, prefix preservation (general).

B.6.  ipcipher

A recently released proposal from PowerDNS, ipcipher [ipcipher1]
[ipcipher2] is a simple pseudonymization technique for IPv4 and IPv6
addresses.  IPv6 addresses are encrypted directly with AES-128 using
a key (which may be derived from a passphrase).  IPv4 addresses are
similarly encrypted, but using a recently proposed encryption
[ipcrypt] suitable for 32-bit block lengths.  However, the author of
ipcrypt has since indicated [ipcrypt-analysis] that it has low
security, and further analysis has revealed that it is vulnerable to
attack.

Pseudonymization: Format-preserving, cryptographic permutation.

B.7.  Bloom filters

van Rijswijk-Deij et al. have recently described work using Bloom
filters [Bloom-filter] to categorize query traffic and record the
traffic as the state of multiple filters.  The goal of this work is
to allow operators to identify so-called Indicators of Compromise
(IOCs) originating from specific subnets without storing information
about, or being able to monitor, the DNS queries of an individual
user.  By using a Bloom filter, it is possible to determine with a
high probability if, for example, a particular query was made, but
the set of queries made cannot be recovered from the filter.
Similarly, by mixing queries from a sufficient number of users in a
single filter, it becomes practically impossible to determine
whether a particular user performed a particular query.  Large
numbers of queries can be tracked in a memory-efficient way.
Because only the filter state is stored, this approach cannot be
used to regenerate traffic, and so cannot be used with tools that
process live traffic.  A minimal sketch of the underlying filter
property appears below.

Anonymization: Generalization.
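The following Python sketch illustrates the Bloom filter property
relied on above: items can be tested for probable membership, but
the set of stored items cannot be read back out of the filter.  It
is illustrative only and does not reflect the DNSBLOOM design; the
hash construction, filter size, and example key are invented for
this sketch:

      # Minimal Bloom filter sketch: probabilistic membership tests,
      # no way to enumerate the stored items.  Illustrative only.
      import hashlib

      class BloomFilter:
          def __init__(self, size_bits: int = 2 ** 20,
                       num_hashes: int = 4):
              self.size = size_bits
              self.k = num_hashes
              self.bits = bytearray(size_bits // 8)

          def _positions(self, item: str):
              # Derive k bit positions from salted SHA-256 digests.
              for i in range(self.k):
                  h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.size

          def add(self, item: str) -> None:
              for pos in self._positions(item):
                  self.bits[pos // 8] |= 1 << (pos % 8)

          def probably_contains(self, item: str) -> bool:
              return all(self.bits[pos // 8] & (1 << (pos % 8))
                         for pos in self._positions(item))

      f = BloomFilter()
      f.add("192.0.2.0/24|malicious.example")  # illustrative key
      print(f.probably_contains("192.0.2.0/24|malicious.example"))
      print(f.probably_contains("192.0.2.0/24|benign.example"))

In the work described above, it is the mixing of queries from many
users into a shared filter that makes it impractical to attribute a
particular query to a particular user.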
Appendix C.  Example DROP statement

The following example DROP statement is very loosely based on some
elements of published privacy statements for some public resolvers,
with additional fields populated to illustrate what the full
contents of a DROP statement might look like.  This should not be
interpreted as:

o  having been reviewed or approved by any operator in any way

o  having any legal standing or validity at all

o  being complete or exhaustive

This is a purely hypothetical example of a DROP statement to outline
example contents, in this case for a public resolver operator
providing a basic DNS Privacy service via one IP address and one DoH
URI with security-based filtering.  It does, however, aim to meet
minimal compliance as specified in Section 5.

C.1.  Policy

1.  Treatment of IP addresses.  Many nations classify IP addresses
    as Personally-Identifiable Information (PII), and we take a
    conservative approach in treating IP addresses as PII in all
    jurisdictions in which our systems reside.

2.  Data collection and sharing.

    1.  IP addresses.  Our normal course of data management does not
        have any IP address information or other PII logged to disk
        or transmitted out of the location in which the query was
        received.  We may aggregate certain counters to larger
        network block levels for statistical collection purposes,
        but those counters do not maintain specific IP address data,
        nor is the format or model of data stored capable of being
        reverse-engineered to ascertain what specific IP addresses
        made what queries.

    2.  Data collected in logs.  We do keep some generalized
        location information (at the city/metropolitan area level)
        so that we can conduct debugging and analyze abuse
        phenomena.  We also use the collected information for the
        creation and sharing of telemetry (timestamp, geolocation,
        number of hits, first seen, last seen) for contributors, and
        for the public publication of general statistics of system
        use (protections, threat types, counts, etc.).  When you use
        our DNS Services, here is the full list of items that are
        included in our logs:

        +  Request domain name, e.g. example.net

        +  Record type of requested domain, e.g. A, AAAA, NS, MX,
           TXT, etc.

        +  Transport protocol on which the request arrived, i.e.
           UDP, TCP, DoT, DoH

        +  Origin IP general geolocation information: i.e. geocode,
           region ID, city ID, and metro code

        +  IP protocol version - IPv4 or IPv6

        +  Response code sent, e.g. SUCCESS, SERVFAIL, NXDOMAIN,
           etc.

        +  Absolute arrival time

        +  Name of the specific instance that processed this request

        +  IP address of the specific instance to which this request
           was addressed (no relation to the requestor's IP address)

        We may keep the following data as summary information,
        including all the above EXCEPT for data about the DNS record
        requested:

        +  Currently-advertised BGP-summarized IP prefix/netmask of
           apparent client origin

        +  Autonomous system number (BGP ASN) of apparent client
           origin

        All the above data may be kept in full or partial form in
        permanent archives.

    3.  Sharing of data.  Except as described in this document, we
        do not intentionally share, sell, or rent individual
        personal information associated with the requestor (i.e.
        source IP address or any other information that can
        positively identify the client using our infrastructure)
        with anyone without your consent.
        We generate and share high-level anonymized aggregate
        statistics, including threat metrics on threat type,
        geolocation, and (if available) sector, as well as other
        vertical metrics including performance metrics on our DNS
        Services (i.e. number of threats blocked, infrastructure
        uptime), with our threat intelligence (TI) partners,
        academic researchers, or the public.  Our DNS Services share
        anonymized data on specific domains queried (records such as
        domain, timestamp, geolocation, number of hits, first seen,
        last seen) with our threat intelligence partners.  Our DNS
        Services also build, store, and may share certain DNS data
        streams which store high-level information about the domains
        resolved, query types, result codes, and timestamps.  These
        streams do not contain the IP address information of the
        requestor and cannot be correlated to an IP address or other
        PII.  We do not and never will share any of this data with
        marketers, nor will we use this data for demographic
        analysis.

3.  Exceptions.  There are exceptions to this storage model: in the
    event of actions or observed behaviors that we deem malicious or
    anomalous, we may utilize more detailed logging to collect more
    specific IP address data in the process of normal network
    defence and mitigation.  This collection and transmission
    off-site will be limited to IP addresses that we determine are
    involved in the event.

4.  Associated entities.  Details of our Threat Intelligence
    partners can be found at our website page (insert link).

5.  Correlation of Data.  We do not correlate or combine information
    from our logs with any personal information that you have
    provided us for other services, or with your specific IP
    address.

6.  Result filtering.

    1.  Filtering.  We utilise cyber threat intelligence about
        malicious domains from a variety of public and private
        sources and block access to those malicious domains when
        your system attempts to contact them.  An NXDOMAIN is
        returned for blocked sites.

        1.  Censorship.  We will not provide a censoring component
            and will limit our actions solely to the blocking of
            malicious domains, specifically phishing, malware, and
            exploit kit domains.

        2.  Accidental blocking.  We implement whitelisting
            algorithms to make sure legitimate domains are not
            blocked by accident.  However, in the rare case of
            blocking a legitimate domain, we work with the users to
            quickly whitelist that domain.  Please use our support
            form (insert link) if you believe we are blocking a
            domain in error.

C.2.  Practice

1.  Deviations from Policy.  None currently in place.

2.  Client facing capabilities.

    1.  We offer UDP and TCP DNS on port 53 on (insert IP address).

    2.  We offer DNS-over-TLS as specified in RFC7858 on (insert IP
        address).  It is available on port 853 and port 443.  We
        also implement RFC7766.

        1.  The DoT authentication domain name used is (insert
            domain name).

        2.  We do not publish SPKI pin sets.

    3.  We offer DNS-over-HTTPS as specified in RFC8484 on (insert
        URI template).  Both POST and GET are supported.

    4.  Both services offer TLS 1.2 and TLS 1.3.

    5.  Both services pad DNS responses according to RFC8467.

    6.  Both services provide DNSSEC validation.

3.  Upstream capabilities.

    1.  Our servers implement QNAME minimization.

    2.
Our servers do not send ECS upstream. 1949 4. Support. Support information for this service is available at 1950 (insert link). 1952 5. Jurisdiction. 1954 1. We operate as the legal entity (insert entity) registered in 1955 (insert country) as (insert company identifier e.g Company 1956 Number). Our Headquarters are located at (insert address). 1958 2. As such we operate under (insert country) law. For details 1959 of our company privacy policy see (insert link). For 1960 questions on this policy and enforcement contact our Data 1961 Protection Officer on (insert email address). 1963 3. We operate servers in the following countries (insert list). 1965 4. We have no agreements in place with law enforcement agencies 1966 to give them access to the data. Apart from as stated in the 1967 Policy section of this document with regard to cyber threat 1968 intelligence, we have no agreements in place with other 1969 public and private parties dealing with security and 1970 intelligence, to give them access to the servers and/or to 1971 the data. 1973 Authors' Addresses 1975 Sara Dickinson 1976 Sinodun IT 1977 Magdalen Centre 1978 Oxford Science Park 1979 Oxford OX4 4GA 1980 United Kingdom 1982 Email: sara@sinodun.com 1984 Benno J. Overeinder 1985 NLnet Labs 1986 Science Park 400 1987 Amsterdam 1098 XH 1988 The Netherlands 1990 Email: benno@nlnetLabs.nl 1991 Roland M. van Rijswijk-Deij 1992 NLnet Labs 1993 Science Park 400 1994 Amsterdam 1098 XH 1995 The Netherlands 1997 Email: roland@nlnetLabs.nl 1999 Allison Mankin 2000 Salesforce 2002 Email: allison.mankin@gmail.com