2  dprive                                                      S. Dickinson
3  Internet-Draft                                                Sinodun IT
4  Intended status: Best Current Practice                     B. Overeinder
5  Expires: April 6, 2020                               R. van Rijswijk-Deij
6                                                                NLnet Labs
7                                                                 A. Mankin
8                                                                Salesforce
9                                                           October 4, 2019

11           Recommendations for DNS Privacy Service Operators
12                       draft-ietf-dprive-bcp-op-04

14 Abstract

16 This document presents operational, policy and security 17 considerations for DNS recursive resolver operators who choose to 18 offer DNS Privacy services. With these recommendations, the operator 19 can make deliberate decisions regarding which services to provide, 20 and how the decisions and alternatives impact the privacy of users.
22 This document also presents a framework to assist writers of a DNS 23 Recursive Operator Privacy Statement (analogous to DNS Security 24 Extensions (DNSSEC) Policies and DNSSEC Practice Statements described 25 in RFC6841). 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at http://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on April 6, 2020. 44 Copyright Notice 46 Copyright (c) 2019 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (http://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License.

59 Table of Contents

61 1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   3
62 2.  Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . .   5
63 3.  Privacy related documents . . . . . . . . . . . . . . . . . .   5
64 4.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   6
65 5.  Recommendations for DNS privacy services  . . . . . . . . . .   6
66   5.1.  On the wire between client and server . . . . . . . . . .   7
67     5.1.1.  Transport recommendations . . . . . . . . . . . . . .   7
68     5.1.2.  Authentication of DNS privacy services  . . . . . . .   8
69     5.1.3.  Protocol recommendations  . . . . . . . . . . . . . .   9
70     5.1.4.  DNSSEC  . . . . . . . . . . . . . . . . . . . . . . .  11
71     5.1.5.  Availability  . . . . . . . . . . . . . . . . . . . .  12
72     5.1.6.  Service options . . . . . . . . . . . . . . . . . . .  12
73     5.1.7.  Impact on DNS Privacy Service Operators . . . . . . .  12
74     5.1.8.  Limitations of using a pure TLS proxy . . . . . . . .  13
75   5.2.  Data at rest on the server  . . . . . . . . . . . . . . .  14
76     5.2.1.  Data handling . . . . . . . . . . . . . . . . . . . .  14
77     5.2.2.  Data minimization of network traffic  . . . . . . . .  15
78     5.2.3.  IP address pseudonymization and anonymization methods  16
79     5.2.4.  Pseudonymization, anonymization or discarding of
80             other correlation data  . . . . . . . . . . . . . . .  17
81     5.2.5.  Cache snooping  . . . . . . . . . . . . . . . . . . .  18
82   5.3.  Data sent onwards from the server . . . . . . . . . . . .  18
83     5.3.1.  Protocol recommendations  . . . . . . . . . . . . . .  18
84     5.3.2.  Client query obfuscation  . . . . . . . . . . . . . .  19
85     5.3.3.  Data sharing  . . . . . . . . . . . . . . . . . . . .  20
86 6.  DNS Recursive Operator Privacy (DROP) statement . . . . . . .  21
87   6.1.  Recommended contents of a DROP statement  . . . . . . . .  21
88     6.1.1.  Policy  . . . . . . . . . . . . . . . . . . . . . . .  21
89     6.1.2.  Practice  . . . . . . . . . . . . . . . . . . . . . .  22
90   6.2.  Current policy and privacy statements . . . . . . . . . .  23
91   6.3.  Enforcement/accountability  . . . . . . . . . . . . . . .  24
92 7.  IANA considerations . . . . . . . . . . . . . . . . . . . . .  24
93 8.  Security considerations  . . . . . . . . . . . . . . . . . .  24
94 9.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . .  24
95 10. Contributors  . . . . . . . . . . . . . . . . . . . . . . . .  25
96 11. Changelog . . . . . . . . . . . . . . . . . . . . . . . . . .  25
97 12. References  . . . . . . . . . . . . . . . . . . . . . . . . .  27
98   12.1.  Normative References . . . . . . . . . . . . . . . . . .  27
99   12.2.  Informative References . . . . . . . . . . . . . . . . .  28
100  12.3.  URIs . . . . . . . . . . . . . . . . . . . . . . . . . .  30
101 Appendix A.  Documents . . . . . . . . . . . . . . . . . . . . .  31
102   A.1.  Potential increases in DNS privacy  . . . . . . . . . .  32
103   A.2.  Potential decreases in DNS privacy  . . . . . . . . . .  32
104   A.3.  Related operational documents . . . . . . . . . . . . .  33
105 Appendix B.  IP address techniques  . . . . . . . . . . . . . .  33
106   B.1.  Google Analytics non-prefix filtering . . . . . . . . .  34
107   B.2.  dnswasher . . . . . . . . . . . . . . . . . . . . . . .  34
108   B.3.  Prefix-preserving map . . . . . . . . . . . . . . . . .  35
109   B.4.  Cryptographic Prefix-Preserving Pseudonymisation  . . .  35
110   B.5.  Top-hash Subtree-replicated Anonymisation . . . . . . .  35
111   B.6.  ipcipher  . . . . . . . . . . . . . . . . . . . . . . .  36
112   B.7.  Bloom filters . . . . . . . . . . . . . . . . . . . . .  36
113 Appendix C.  Example DROP statement . . . . . . . . . . . . . .  36
114   C.1.  Policy  . . . . . . . . . . . . . . . . . . . . . . . .  37
115   C.2.  Practice  . . . . . . . . . . . . . . . . . . . . . . .  39
116 Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . .  41

118 1. Introduction 120 The Domain Name System (DNS) is at the core of the Internet; almost 121 every activity on the Internet starts with a DNS query (and often 122 several). However, the DNS was not originally designed with strong 123 security or privacy mechanisms. A number of developments have taken 124 place in recent years which aim to increase the privacy of the DNS 125 system, and these are now seeing some deployment.
This latest 126 evolution of the DNS presents new challenges to operators, and this 127 document attempts to provide an overview of considerations for 128 privacy focused DNS services. 130 In recent years there has also been an increase in the availability 131 of "public resolvers" [RFC8499] which users may prefer to use instead 132 of the default network resolver because they offer a specific feature 133 (e.g. good reachability, encrypted transport, strong privacy policy, 134 filtering (or lack thereof), etc.). These open resolvers have tended to 135 be at the forefront of adoption of privacy related enhancements, but 136 it is anticipated that operators of other resolver services will 137 follow. 139 Whilst protocols that encrypt DNS messages on the wire provide 140 protection against certain attacks, the resolver operator still has 141 (in principle) full visibility of the query data and transport 142 identifiers for each user. Therefore, a trust relationship exists. 143 The ability of the operator to provide a transparent, well 144 documented, and secure privacy service will likely serve as a major 145 differentiating factor for privacy conscious users if they make an 146 active selection of which resolver to use. 148 It should also be noted that the choice of a user to configure a 149 single resolver (or a fixed set of resolvers) and an encrypted 150 transport to use in all network environments has both advantages and 151 disadvantages. For example, the user has a clear expectation of which 152 resolvers have visibility of their query data; however, this resolver/ 153 transport selection may provide an added mechanism to track them as 154 they move across network environments. Commitments from operators to 155 minimize such tracking are also likely to play a role in user 156 selection of resolvers. 158 More recently, the global legislative landscape with regard to 159 personal data collection, retention, and pseudonymization has seen 160 significant activity.
It is an untested area whether simply using a DNS 161 resolution service constitutes consent from the user for the operator 162 to process their query data. The impact of recent legislative 163 changes on data pertaining to the users of both Internet Service 164 Providers and public DNS resolvers is not fully understood at the 165 time of writing. 167 This document has two main goals: 169 o To provide operational and policy guidance related to DNS over 170 encrypted transports and to outline recommendations for data 171 handling for operators of DNS privacy services. 173 o To introduce the DNS Recursive Operator Privacy (DROP) statement 174 and present a framework to assist writers of such a statement. A 175 DROP statement is a document that an operator can publish 176 outlining their operational practices and commitments with regard 177 to privacy, thereby providing a means for clients to evaluate the 178 privacy properties of a given DNS privacy service. In particular, 179 the framework identifies the elements that should be considered in 180 formulating a DROP statement. This document does not, however, 181 define a particular Privacy statement, nor does it seek to provide 182 legal advice or recommendations as to the contents. 184 A desired operational impact is that all operators (both those 185 providing resolvers within networks and those operating large anycast 186 services) can demonstrate their commitment to user privacy, thereby 187 driving all DNS resolution services to a more equitable footing. 188 Choices for users would (in this ideal world) be driven by other 189 factors, e.g. differing security policies or minor differences in 190 operator policy, rather than gross disparities in privacy concerns. 192 Community insight about operational practices can 193 change quickly, and experience shows that a Best Current Practice 194 (BCP) document about privacy and security is a point-in-time 195 statement.
Readers are advised to seek out any errata or updates 196 that apply to this document. 198 2. Scope 200 "DNS Privacy Considerations" [I-D.bortzmeyer-dprive-rfc7626-bis] 201 describes the general privacy issues and threats associated with the 202 use of the DNS by Internet users, and much of the threat analysis here 203 is lifted from that document and from [RFC6973]. However, this 204 document is limited in scope to best practice considerations for the 205 provision of DNS privacy services by servers (recursive resolvers) to 206 clients (stub resolvers or forwarders). Privacy considerations 207 specifically from the perspective of an end user, or those for 208 operators of authoritative nameservers, are out of scope. 210 This document includes (but is not limited to) considerations in the 211 following areas (taken from [I-D.bortzmeyer-dprive-rfc7626-bis]): 213 1. Data "on the wire" between a client and a server 215 2. Data "at rest" on a server (e.g. in logs) 217 3. Data "sent onwards" from the server (either on the wire or shared 218 with a third party) 220 Whilst the issues raised here are targeted at those operators who 221 choose to offer a DNS privacy service, considerations for areas 2 and 222 3 could equally apply to operators who only offer DNS over 223 unencrypted transports but who would like to align with privacy best 224 practice. 226 3. Privacy related documents 228 There are various documents that describe protocol changes that have 229 the potential to either increase or decrease the privacy of the DNS. 230 Note that this does not imply that some documents are good or bad, better 231 or worse, just that (for example) some features may bring functional 232 benefits at the price of a reduction in privacy, and conversely some 233 features increase privacy with an accompanying increase in 234 complexity. A selection of the most relevant documents is listed in 235 Appendix A for reference. 237 4.
Terminology 239 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 240 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 241 "OPTIONAL" in this document are to be interpreted as described in BCP 242 14 [RFC2119] and [RFC8174] when, and only when, they appear in all 243 capitals, as shown here. 245 DNS terminology is as described in [RFC8499] with one modification: 246 we restate the clause in the original definition of Privacy-enabling 247 DNS server in [RFC8310] to include the requirement that a DNS over 248 (D)TLS server should also offer at least one of the credentials 249 described in Section 8 and implement the (D)TLS profile described in 250 Section 9 of [RFC8310]. 252 Other Terms: 254 o DROP: DNS Recursive Operator Privacy statement, see Section 6. 256 o DNS privacy service: The service that is offered via a privacy- 257 enabling DNS server and is documented either in an informal 258 statement of policy and practice with regard to users' privacy or a 259 formal DROP statement. 261 5. Recommendations for DNS privacy services 263 We describe two classes of threats: 265 o 'Privacy Considerations for Internet Protocols' [RFC6973] Threats 267 * Privacy terminology, threats to privacy and mitigations as 268 described in Sections 3, 5 and 6 of [RFC6973]. 270 o DNS Privacy Threats 272 * These are threats to the users and operators of DNS privacy 273 services that are not directly covered by [RFC6973]. These may 274 be more operational in nature, such as certificate management or 275 service availability issues. 277 We describe three classes of actions that operators of DNS privacy 278 services can take: 280 o Threat mitigation for well understood and documented privacy 281 threats to the users of the service and, in some cases, to the 282 operators of the service.
284 o Optimization of privacy services from an operational or management 285 perspective 287 o Additional options that could further enhance the privacy and 288 usability of the service 290 This document does not specify policy, only best practice; however, for 291 DNS privacy services to be considered compliant with these best 292 practice guidelines they SHOULD implement (where appropriate) all: 294 o Threat mitigations to be minimally compliant 296 o Optimizations to be moderately compliant 298 o Additional options to be maximally compliant 300 5.1. On the wire between client and server 302 In this section we consider both data on the wire and the service 303 provided to the client. 305 5.1.1. Transport recommendations 307 [RFC6973] Threats: 309 o Surveillance: 311 * Passive surveillance of traffic on the wire 312 [I-D.bortzmeyer-dprive-rfc7626-bis] Section 2.4.2. 314 DNS Privacy Threats: 316 o Active injection of spurious data or traffic 318 Mitigations: 320 A DNS privacy service can mitigate these threats by providing service 321 over one or more of the following transports: 323 o DNS-over-TLS [RFC7858] and [RFC8310] 325 o DoH [RFC8484] 327 It is noted that a DNS privacy service can also be provided over DNS- 328 over-DTLS [RFC8094]; however, this is an Experimental specification 329 and there are no known implementations at the time of writing. 331 It is also noted that a DNS privacy service might be provided over 332 IPsec, DNSCrypt or VPNs. However, use of these transports for DNS 333 is not standardized, and any discussion of best practice for 334 providing such a service is out of scope for this document. 336 Whilst encryption of DNS traffic can protect against active injection, 337 this does not diminish the need for DNSSEC; see Section 5.1.4. 339 5.1.2. Authentication of DNS privacy services 341 [RFC6973] Threats: 343 o Surveillance: 345 * Active attacks that can redirect traffic to rogue servers 346 [I-D.bortzmeyer-dprive-rfc7626-bis] Section 2.5.3.
348 Mitigations: 350 DNS privacy services should ensure clients can authenticate the 351 server. Note that this, in effect, commits the DNS privacy service 352 to a public identity users will trust. 354 When using DNS-over-TLS, clients that select a 'Strict Privacy' usage 355 profile [RFC8310] (to mitigate the threat of active attack on the 356 client) require the ability to authenticate the DNS server. To 357 enable this, DNS privacy services that offer DNS-over-TLS should 358 provide credentials in the form of either X.509 certificates 359 [RFC5280] or SPKI pin sets [RFC8310]. 361 When offering DoH [RFC8484], HTTPS requires authentication of the 362 server as part of the protocol. 364 Optimizations: 366 DNS privacy services can also consider the following capabilities/ 367 options: 369 o As recommended in [RFC8310], providing DANE TLSA records for the 370 nameserver 372 * In particular, the service could provide TLSA records such that 373 authenticating solely via the PKIX infrastructure can be 374 avoided. 376 5.1.2.1. Certificate management 378 Anecdotal evidence to date highlights the management of certificates 379 as one of the more challenging aspects for operators of traditional 380 DNS resolvers that choose to additionally provide a DNS privacy 381 service, as management of such credentials is new to those DNS 382 operators. 384 It is noted that SPKI pin set management is described in [RFC7858] 385 but that key pinning mechanisms in general have fallen out of favor 386 operationally for various reasons, such as the logistical overhead of 387 rolling keys. 389 DNS Privacy Threats: 391 o Invalid certificates, resulting in an unavailable service. 393 o Mis-identification of a server by a client e.g.
typos in URLs or 394 authentication domain names 396 Mitigations: 398 It is recommended that operators: 400 o Follow the guidance in Section 6.5 of [RFC7525] with regard to 401 certificate revocation 403 o Choose a short, memorable authentication name for the service 405 o Automate the generation and publication of certificates 407 o Monitor certificates to prevent accidental expiration of 408 certificates 410 5.1.3. Protocol recommendations 412 5.1.3.1. DNS-over-TLS 414 DNS Privacy Threats: 416 o Known attacks on TLS such as those described in [RFC7457] 418 o Traffic analysis, for example: [Pitfalls-of-DNS-Encryption] 420 o Potential for client tracking via transport identifiers 422 o Blocking of well-known ports (e.g. 853 for DNS-over-TLS) 423 Mitigations: 425 In the case of DNS-over-TLS, TLS profiles from Section 9 and the 426 Countermeasures to DNS Traffic Analysis from Section 11.1 of 427 [RFC8310] provide strong mitigations. These include, but are not 428 limited to: 430 o Adhering to [RFC7525] 432 o Implementing only (D)TLS 1.2 or later as specified in [RFC8310] 434 o Implementing EDNS(0) Padding [RFC7830] using the guidelines in 435 [RFC8467] or a successor specification. 437 o Clients should not be required to use TLS session resumption 438 [RFC5077] with TLS 1.2 or Domain Name System (DNS) Cookies 439 [RFC7873]. 441 o A DNS-over-TLS privacy service on both port 853 and port 443. This 442 practice may not be possible if e.g. the operator deploys DoH on 443 the same IP address. 445 Optimizations: 447 o Concurrent processing of pipelined queries, returning responses as 448 soon as available, potentially out of order as specified in 449 [RFC7766]. This is often called 'OOOR' - out-of-order responses.
450 (Providing processing performance similar to HTTP multiplexing) 452 o Management of TLS connections to optimize performance for clients 453 using either 455 * [RFC7766] and EDNS(0) Keepalive [RFC7828] and/or 457 * DNS Stateful Operations [RFC8490] 459 Additional options that providers may consider: 461 o Offer a .onion [RFC7686] service endpoint 463 5.1.3.2. DoH 465 DNS Privacy Threats: 467 o Known attacks on TLS such as those described in [RFC7457] 469 o Traffic analysis, for example: "DNS Privacy not so private: the 470 traffic analysis perspective" [1] 472 o Potential for client tracking via transport identifiers 474 Mitigations: 476 o Clients must be able to forgo the use of HTTP Cookies [RFC6265] 477 and still use the service 479 o Clients should not be required to include any headers beyond the 480 absolute minimum to obtain service from a DoH server. (See 481 Section 6.1 of [I-D.ietf-httpbis-bcp56bis].) 483 5.1.4. DNSSEC 485 DNS Privacy Threats: 487 o Users may be directed to bogus IP addresses for, e.g., websites 488 where they might reveal personal information to attackers. 490 Mitigations: 492 o All DNS privacy services must perform DNSSEC validation. In 493 addition, they must be able to 494 provide the DNSSEC RRs to the client so that it can perform its 495 own validation. 497 The addition of encryption to DNS does not remove the need for DNSSEC 498 [RFC4033] - they are independent and fully compatible protocols, each 499 solving different problems. The use of one does not diminish the 500 need nor the usefulness of the other. 502 While the use of an authenticated and encrypted transport protects 503 origin authentication and data integrity between a client and a DNS 504 privacy service, it provides no proof (for a non-validating client) 505 that the data provided by the DNS privacy service was actually DNSSEC
As with cleartext DNS the user is still solely 507 trusting the AD bit (if present) set by the resolver. 509 It should also be noted that the use of an encrypted transport for 510 DNS actually solves many of the practical issues encountered by DNS 511 validating clients e.g. interference by middleboxes with cleartext 512 DNS payloads is completely avoided. In this sense a validating 513 client that uses a DNS privacy service which supports DNSSEC has a 514 far simpler task in terms of DNS Roadblock avoidance. 516 5.1.5. Availability 518 DNS Privacy Threats: 520 o A failed DNS privacy service could force the user to switch 521 providers, fallback to cleartext or accept no DNS service for the 522 outage. 524 Mitigations: 526 A DNS privacy service must be engineered for high availability. 527 Particular care should to be taken to protect DNS privacy services 528 against denial-of-service attacks, as experience has shown that 529 unavailability of DNS resolving because of attacks is a significant 530 motivation for users to switch services. See, for example 531 Section IV-C of Passive Observations of a Large DNS Service: 2.5 532 Years in the Life of Google [2]. 534 Techniques such as those described in Section 10 of [RFC7766] can be 535 of use to operators to defend against such attacks. 537 5.1.6. Service options 539 DNS Privacy Threats: 541 o Unfairly disadvantaging users of the privacy service with respect 542 to the services available. This could force the user to switch 543 providers, fallback to cleartext or accept no DNS service for the 544 outage. 546 Mitigations: 548 A DNS privacy service should deliver the same level of service as 549 offered on un-encrypted channels in terms of such options as 550 filtering (or lack thereof), DNSSEC validation, etc. 552 5.1.7. 
Impact on DNS Privacy Service Operators 554 DNS Privacy Threats: 556 o Increased use of encryption impacts operator ability to manage 557 their network [RFC8404] 559 Many monitoring solutions for DNS traffic rely on the plain text 560 nature of this traffic and work by intercepting traffic on the wire, 561 either using a separate view on the connection between clients and 562 the resolver, or as a separate process on the resolver system that 563 inspects network traffic. Such solutions will no longer function 564 when traffic between clients and resolvers is encrypted. There are, 565 however, legitimate reasons for operators to inspect DNS traffic, 566 e.g. to monitor for network security threats. Operators may 567 therefore need to invest in alternative means of monitoring that 568 rely on either the resolver software directly, or on exporting DNS 569 traffic from the resolver using e.g. dnstap [3]. 571 Optimization: 573 When implementing alternative means for traffic monitoring, operators 574 of a DNS privacy service should consider using privacy conscious 575 means to do so (see, for example, the discussion on the use of Bloom 576 Filters in Appendix B of this document). 578 5.1.8. Limitations of using a pure TLS proxy 580 DNS Privacy Threats: 582 o Limited ability to manage or monitor incoming connections using 583 DNS specific techniques 585 o Misconfiguration of the target server could lead to data leakage 586 if the proxy to target server path is not encrypted. 588 Optimization: 590 Some operators may choose to implement DNS-over-TLS using a TLS proxy 591 (e.g. nginx [4], haproxy [5] or stunnel [6]) in front of a DNS 592 nameserver because of proven robustness and capacity when handling 593 large numbers of client connections, load balancing capabilities and 594 good tooling.
Currently, however, because such proxies typically 595 have no specific handling of DNS as a protocol over TLS or DTLS, using 596 them can restrict traffic management at the proxy layer and at the 597 DNS server. For example, all traffic received by a nameserver behind 598 such a proxy will appear to originate from the proxy, and DNS 599 techniques such as ACLs, RRL or DNS64 will be hard or impossible to 600 implement in the nameserver. 602 Operators may choose to use a DNS aware proxy such as dnsdist [7], 603 which offers custom options (similar to that proposed in 604 [I-D.bellis-dnsop-xpf]) to add source information to packets to 605 address this shortcoming. It should be noted that such options 606 potentially significantly increase the leaked information in the 607 event of a misconfiguration. 609 5.2. Data at rest on the server 611 5.2.1. Data handling 613 [RFC6973] Threats: 615 o Surveillance 617 o Stored data compromise 619 o Correlation 621 o Identification 623 o Secondary use 625 o Disclosure 627 Other Threats: 629 o Contravention of legal requirements not to process user data 631 Mitigations: 633 The following are common activities for DNS service operators and, in 634 all cases, should be minimized or completely avoided if possible for 635 DNS privacy services. If data is retained, it should be encrypted and 636 either aggregated, pseudonymized or anonymized whenever possible. In 637 general, the principle of data minimization described in [RFC6973] 638 should be applied. 640 o Transient data (e.g. that is used for real time monitoring and 641 threat analysis, which might be held only in memory) should be 642 retained for the shortest possible period deemed operationally 643 feasible. 645 o The retention period of DNS traffic logs should be only as long as 646 required to sustain operation of the service and, to the extent 647 that such exists, to meet regulatory requirements.
649 o DNS privacy services should not track users except for the 650 particular purpose of detecting and remedying technically 651 malicious (e.g. DoS) or anomalous use of the service. 653 o Data access should be minimized to only those personnel who 654 require access to perform operational duties. It should also be 655 limited to anonymized or pseudonymized data where operationally 656 feasible, with access to full logs (if any are held) only 657 permitted when necessary. 659 Optimizations: 661 o Consider use of full disk encryption for logs and data capture 662 storage. 664 5.2.2. Data minimization of network traffic 666 Data minimization refers to collecting, using, disclosing, and 667 storing the minimal data necessary to perform a task, and this can be 668 achieved by removing or obfuscating privacy-sensitive information in 669 network traffic logs. This is typically personal data, or data that 670 can be used to link a record to an individual, but may also include 671 other confidential information, for example on the 672 structure of an internal corporate network. 674 The problem of effectively ensuring that DNS traffic logs contain no 675 or minimal privacy-sensitive information is not one that currently 676 has a generally agreed solution or any Standards to inform this 677 discussion. This section presents an overview of current techniques, 678 simply to provide a reference on the current status of this work. 680 Research into data minimization techniques (and particularly IP 681 address pseudonymization/anonymization) was sparked in the late 682 1990s/early 2000s, partly driven by the desire to share significant 683 corpora of traffic captures for research purposes. Several 684 techniques reflecting different requirements in this area and 685 different performance/resource tradeoffs emerged over the course of 686 the decade.
Developments over the last decade have been both a blessing and a curse; the large increase in size between an IPv4 and an IPv6 address, for example, renders some techniques impractical, but it also makes available a much larger amount of input entropy, the better to resist brute-force re-identification attacks that have grown in practicality over the period.

Techniques employed may be broadly categorized as either anonymization or pseudonymization. The following discussion uses the definitions from [RFC6973] Section 3, with additional observations from van Dijkhuizen et al. [8].

o Anonymization. To enable anonymity of an individual, there must exist a set of individuals that appear to have the same attribute(s) as the individual. To the attacker or the observer, these individuals must appear indistinguishable from each other.

o Pseudonymization. The true identity is deterministically replaced with an alternate identity (a pseudonym). When the pseudonymization schema is known, the process can be reversed, so the original identity becomes known again.

In practice there is a fine line between the two; consider, for example, how to categorize a deterministic algorithm for data minimization of IP addresses that produces a group of pseudonyms for a single given address.

5.2.3. IP address pseudonymization and anonymization methods

As [I-D.bortzmeyer-dprive-rfc7626-bis] makes clear, the big privacy risk in DNS is connecting DNS queries to an individual, and the major vector for this in DNS traffic is the client IP address.

There is active discussion in the space of effective pseudonymization of IP addresses in DNS traffic logs; however, there seems to be no single solution that is widely recognized as suitable for all or most use cases. There are also as yet no standards for this that are unencumbered by patents.
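The broad distinction between the two categories, as applied to client IP addresses, can be illustrated with a short sketch (illustrative only; the function names and key are ours, and neither scheme is one this document recommends). Truncation generalizes an address to its enclosing subnet, so all clients in that subnet become indistinguishable; a keyed HMAC yields a stable per-client pseudonym, which cannot be directly inverted even with the key, though the key holder can still re-compute the mapping for any candidate address - an instance of the fine line noted above.

```python
import hashlib
import hmac
import ipaddress

def anonymize(addr: str, prefix_len: int = 24) -> str:
    """Anonymization by generalization: truncate the address to its
    enclosing subnet, so all clients in that subnet look identical."""
    net = ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False)
    return str(net.network_address)

def pseudonymize(addr: str, key: bytes) -> str:
    """Pseudonymization: deterministically map the address to a stable
    pseudonym via a keyed HMAC (not format-preserving, not directly
    reversible even by the key holder)."""
    digest = hmac.new(key, addr.encode(), hashlib.sha256).hexdigest()
    return digest[:16]

# All hosts in 192.0.2.0/24 anonymize to the same value ...
assert anonymize("192.0.2.1") == anonymize("192.0.2.77") == "192.0.2.0"
# ... while pseudonyms are distinct but stable per client.
key = b"log-pseudonymization-key"
assert pseudonymize("192.0.2.1", key) == pseudonymize("192.0.2.1", key)
assert pseudonymize("192.0.2.1", key) != pseudonymize("192.0.2.77", key)
```

The truncation here parallels the Google Analytics approach surveyed in Appendix B, while the keyed mapping is a simple (non-format-preserving) cousin of cryptographic schemes such as Crypto-PAn or ipcipher.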
The following table presents a high-level comparison of various techniques employed or under development today and classifies them according to the category of technique used and other properties. Appendix B provides a more detailed survey of these techniques and definitions for the categories and properties listed below. The list of techniques includes the main techniques in current use, but does not claim to be comprehensive.

+---------------------------+----+---+----+---+----+---+---+
| Categorisation/Property   | GA | d | TC | C | TS | i | B |
+---------------------------+----+---+----+---+----+---+---+
| Anonymisation             | X  | X | X  |   |    |   | X |
| Pseudonymisation          |    |   |    | X | X  | X |   |
| Format preserving         | X  | X | X  | X | X  | X |   |
| Prefix preserving         |    |   | X  | X | X  |   |   |
| Replacement               |    |   | X  |   |    |   |   |
| Filtering                 | X  |   |    |   |    |   |   |
| Generalisation            |    |   |    |   |    |   | X |
| Enumeration               |    | X |    |   |    |   |   |
| Reordering/Shuffling      |    |   | X  |   |    |   |   |
| Random substitution       |    |   | X  |   |    |   |   |
| Cryptographic permutation |    |   |    | X | X  | X |   |
| IPv6 issues               |    |   |    |   | X  |   |   |
| CPU intensive             |    |   |    | X |    |   |   |
| Memory intensive          |    |   | X  |   |    |   |   |
| Security concerns         |    |   |    |   |    | X |   |
+---------------------------+----+---+----+---+----+---+---+

Table 1: Classification of techniques

GA = Google Analytics, d = dnswasher, TC = TCPdpriv, C = Crypto-PAn, TS = TSA, i = ipcipher, B = Bloom filter

The choice of which method to use for a particular application will depend on the requirements of that application and consideration of the threat analysis of the particular situation.

For example, a common goal is that distributed packet captures must be in an existing data format such as PCAP [pcap] or C-DNS [RFC8618] that can be used as input to existing analysis tools. In that case, use of a format-preserving technique is essential.
This, though, is not cost-free. Several authors (e.g. Brekne & Arnes [9]) have observed that, as the entropy in an IPv4 address is limited, if an attacker can both obtain a de-identified log from a target and ensure that forged traffic with arbitrary source and destination addresses is captured by that target, then any format-preserving pseudonymization is vulnerable to an attack along the lines of a cryptographic chosen plaintext attack.

5.2.4. Pseudonymization, anonymization or discarding of other correlation data

DNS Privacy Threats:

o IP TTL/Hoplimit values can be used to fingerprint the client OS

o TLS version/Cipher suite combinations can be used to fingerprint the client application or TLS library

o Tracking of TCP sessions

o Tracking of TLS sessions and session resumption mechanisms

o Resolvers _might_ receive client identifiers, e.g. MAC addresses in EDNS(0) options - some CPE devices are known to add them.

o HTTP headers

Mitigations:

o Data minimization or discarding of such correlation data.

5.2.5. Cache snooping

[RFC6973] Threats:

o Surveillance:

* Profiling of client queries by malicious third parties

Mitigations:

o See the ISC Knowledge database article on cache snooping [10] for an example discussion on defending against cache snooping

5.3. Data sent onwards from the server

In this section we consider both data sent on the wire in upstream queries and data shared with third parties.

5.3.1. Protocol recommendations

[RFC6973] Threats:

o Surveillance:

* Transmission of identifying data upstream.
Mitigations:

As specified in [RFC8310] for DNS-over-TLS, but applicable to any DNS privacy service, the server should:

o Implement QNAME minimization [RFC7816]

o Honor a SOURCE PREFIX-LENGTH set to 0 in a query containing the EDNS(0) Client Subnet (ECS) option and not send an ECS option in upstream queries.

Optimizations:

o The server should either

* not use the ECS option in upstream queries at all, or

* offer alternative services, one that sends ECS and one that does not.

If operators do offer a service that sends the ECS option upstream, they should use the shortest prefix that is operationally feasible and ideally use a policy of whitelisting the upstream servers to which ECS is sent, in order to minimize data leakage. Operators should make clear in any policy statement what prefix length they actually send and the specific policy used.

Whitelisting has the benefit that the operator not only knows which upstream servers can use ECS but can also decide which upstream servers apply privacy policies that the operator is happy with. However, some operators consider whitelisting to incur significant operational overhead compared to dynamic detection of ECS support on authoritative servers.

Additional options:

o Aggressive Use of DNSSEC-Validated Cache [RFC8198] to reduce the number of queries to authoritative servers and thereby increase privacy.

o Run a copy of the root zone on loopback [RFC7706] to avoid making queries to the root servers that might leak information.

5.3.2. Client query obfuscation

Additional options:

Since queries from recursive resolvers to authoritative servers are performed using cleartext (at the time of writing), resolver services need to consider the extent to which they may be directly leaking information about their client community via these upstream queries and what they can do to mitigate this further. Note that even when all the relevant techniques described above are employed, there may still be attacks possible, e.g. [Pitfalls-of-DNS-Encryption]. For example, a resolver with a very small community of users risks exposing data in this way and ought to obfuscate this traffic by mixing it with 'generated' traffic to make client characterization harder. The resolver could also employ aggressive pre-fetch techniques as a further measure to counter traffic analysis.

At the time of writing there are no standardized or widely recognized techniques to perform such obfuscation or bulk pre-fetches.

Another technique that particularly small operators may consider is forwarding local traffic to a larger resolver (with a privacy policy that aligns with their own practices) over an encrypted protocol, so that the upstream queries are obfuscated among those of the large resolver.

5.3.3. Data sharing

[RFC6973] Threats:

o Surveillance

o Stored data compromise

o Correlation

o Identification

o Secondary use

o Disclosure

DNS Privacy Threats:

o Contravention of legal requirements not to process user data

Mitigations:

Operators should not provide identifiable data to third-parties without explicit consent from clients (we take the stance here that simply using the resolution service itself does not constitute consent).

Even when consent is granted, operators should employ data minimization techniques such as those described in Section 5.2.1 if data is shared with third-parties.
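As one minimal illustration of such minimization before sharing (a sketch under assumed log fields, not a format prescribed by this document), client IP addresses can be discarded entirely and the log reduced to aggregate per-name counts before any data leaves the operator:

```python
from collections import Counter

# Hypothetical raw log entries: (client_ip, qname, qtype).
raw_log = [
    ("192.0.2.1", "example.com.", "A"),
    ("192.0.2.2", "example.com.", "A"),
    ("192.0.2.1", "example.org.", "AAAA"),
]

def aggregate_for_sharing(entries):
    """Data minimization: drop client IPs and reduce the log to
    per-(qname, qtype) counts before sharing with third parties."""
    return Counter((qname, qtype) for _ip, qname, qtype in entries)

shared = aggregate_for_sharing(raw_log)
assert shared[("example.com.", "A")] == 2
# No client IP addresses survive in the shared data.
assert not any("192.0.2" in qname for qname, _qtype in shared)
```

Whether counts alone, or coarser aggregation (e.g. by time bucket), are appropriate depends on the threat analysis for the data being shared.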
Operators should consider including specific guidelines for the collection of aggregated and/or anonymized data for research purposes, within or outside of their own organization. This can benefit not only the operator (through inclusion in novel research) but also the wider Internet community. See SURFnet's policy [11] on data sharing for research as an example.

6. DNS Recursive Operator Privacy (DROP) statement

The following section outlines the recommended contents of a DROP statement that an operator might choose to publish. An example statement for a specific scenario is provided for guidance only in Appendix C.

6.1. Recommended contents of a DROP statement

6.1.1. Policy

1. Treatment of IP addresses. Make an explicit statement that IP addresses are treated as PII.

2. Data collection and sharing. Specify clearly what data (including IP addresses) is:

* Collected and retained by the operator, and for what period it is retained

* Shared with partners

* Shared, sold or rented to third-parties

and in each case whether it is aggregated, pseudonymized or anonymized, and the conditions of data transfer.

3. Exceptions. Specify any exceptions to the above, for example for technically malicious or anomalous behavior.

4. Associated entities. Declare any partners, third-party affiliations or sources of funding.

5. Correlation. State whether user DNS data is correlated or combined with any other personal information held by the operator.

6. Result filtering. This section should explain whether the operator filters, edits or alters in any way the replies that it receives from the authoritative servers for each DNS zone, before forwarding them to the clients. For each category listed below, the operator should also specify how the filtering lists are created and managed, whether it employs any third-party sources for such lists, and which ones.
* Specify if any replies are being filtered out or altered for network and computer security reasons (e.g. preventing connections to malware-spreading websites or botnet control servers)

* Specify if any replies are being filtered out or altered for mandatory legal reasons, due to applicable legislation or binding orders by courts and other public authorities

* Specify if any replies are being filtered out or altered for voluntary legal reasons, due to an internal policy by the operator aiming at reducing potential legal risks

* Specify if any replies are being filtered out or altered for any other reason, including commercial ones

6.1.2. Practice

This section should explain the current operational practices of the service.

1. Deviations. Specify any temporary or permanent deviations from the policy for operational reasons.

2. Client facing capabilities. With reference to Section 5, provide specific details of which capabilities are provided on which client-facing addresses and ports:

1. For DoT, specify the authentication name to be used (if any) and whether TLSA records are published (including the options used in the TLSA records)

2. For DoT, specify the SPKI pin sets to be used (if any) and the policy for rolling keys

3. Upstream capabilities. With reference to Section 5.3, provide specific details of which capabilities are provided upstream for data sent to authoritative servers.

4. Support. Provide contact/support information for the service.

5. Jurisdiction. This section should communicate the applicable jurisdictions and law enforcement regimes under which the service is being provided.

1. Specify the operator entity or entities that will control the data and be responsible for their treatment, and their legal place of business.

2. Specify, either directly or by pointing to the applicable privacy policy, the relevant privacy laws that apply to the treatment of the data, the rights that users enjoy in regard to their own personal information that is treated by the service, and how they can contact the operator to enforce them.

3. Additionally, specify the countries in which the servers handling the DNS requests and the data are located (if the operator applies a geolocation policy so that requests from certain countries are only served by certain servers, this should be specified as well).

4. Specify whether the operator has any agreement in place with law enforcement agencies, or other public and private parties dealing with security and intelligence, to give them access to the servers and/or to the data.

6. Consent. For any activity which is documented in this statement as 'requiring consent' before being performed, describe the full process of what you as an operator consider 'obtaining consent', distinguishing clearly between any implicit and explicit consent models. Additionally, state whether these processes are considered by you, the operator, to conform to any relevant legislation (this may prove relevant in the context of e.g. the GDPR as it relates to consent).

6.2. Current policy and privacy statements

A tabular comparison of existing policy and privacy statements from various DNS Privacy service operators, based loosely on the proposed DROP structure, can be found on dnsprivacy.org [12].

We note that the existing set of policies varies widely in style, content and detail, and it is not uncommon for the full text for a given operator to run to more than 10 pages of A4 text in a moderately sized font. It is a non-trivial task today for a user to extract a meaningful overview of the different services on offer.
It is also noted that Mozilla have published a Security/DoH-resolver policy [13], which describes the minimum set of policy requirements that a party must satisfy to be considered as a potential partner for Mozilla's Trusted Recursive Resolver (TRR) program.

6.3. Enforcement/accountability

Transparency reports may help with building user trust that operators adhere to their policies and practices.

Where possible, independent monitoring or analysis could be performed of:

o ECS, QNAME minimization, EDNS(0) padding, etc.

o Filtering

o Uptime

This is by analogy with the several TLS or website analysis tools that are currently available, e.g. SSL Labs [14] or Internet.nl [15].

Additionally, operators could choose to engage the services of a third-party auditor to verify their compliance with their published DROP statement.

7. IANA considerations

None

8. Security considerations

Security considerations for DNS-over-TCP are given in [RFC7766], many of which are generally applicable to session-based DNS.

9. Acknowledgements

Many thanks to Amelia Andersdotter for a very thorough review of the first draft of this document and to Stephen Farrell for a thorough review at WGLC and for suggesting the inclusion of an example DROP statement. Thanks to John Todd for discussions on this topic, and to Stephane Bortzmeyer, Puneet Sood and Vittorio Bertola for review. Thanks to Daniel Kahn Gillmor, Barry Green, Paul Hoffman, Dan York, John Reed and Lorenzo Colitti for comments at the mic. Thanks to Loganaden Velvindron for useful updates to the text.

Sara Dickinson thanks the Open Technology Fund for a grant to support the work on this document.

10. Contributors

The below individuals contributed significantly to the document:

John Dickinson
Sinodun Internet Technologies
Magdalen Centre
Oxford Science Park
Oxford OX4 4GA
United Kingdom

Jim Hague
Sinodun Internet Technologies
Magdalen Centre
Oxford Science Park
Oxford OX4 4GA
United Kingdom

11. Changelog

draft-ietf-dprive-bcp-op-04

o Change DPPPS to DROP (DNS Recursive Operator Privacy) statement

o Update structure of DROP slightly

o Add example DROP statement

o Add text about restricting access to full logs

o Move table in section 5.2.3 from SVG to inline table

o Fix many editorial and reference nits

draft-ietf-dprive-bcp-op-03

o Add paragraph about operational impact

o Move DNSSEC requirement out of the Appendix into main text as a privacy threat that should be mitigated

o Add TLS version/Cipher suite as tracking threat

o Add reference to Mozilla TRR policy

o Remove several TODOs and QUESTIONS.

draft-ietf-dprive-bcp-op-02

o Change 'open resolver' to 'public resolver'

o Minor editorial changes

o Remove recommendation to run a separate TLS 1.3 service

o Move TLSA to purely an optimisation in Section 5.2.1

o Update reference on minimal DoH headers.

o Add reference on user switching provider after service issues in Section 5.1.4

o Add text in Section 5.1.6 on impact on operators.

o Add text on additional threat to TLS proxy use (Section 5.1.7)

o Add reference in Section 5.3.1 on example policies.

draft-ietf-dprive-bcp-op-01

o Many minor editorial fixes

o Update DoH reference to RFC8484 and add more text on DoH

o Split threat descriptions into ones directly referencing RFC6973 and other DNS Privacy threats

o Improve threat descriptions throughout

o Remove reference to the DNSSEC TLS Chain Extension draft until a new version is submitted.
1188 o Clarify use of whitelisting for ECS 1190 o Re-structure the DPPPS, add Result filtering section. 1192 o Remove the direct inclusion of privacy policy comparison, now just 1193 reference dnsprivacy.org and an example of such work. 1195 o Add an appendix briefly discussing DNSSEC 1197 o Update affiliation of 1 author 1199 draft-ietf-dprive-bcp-op-00 1201 o Initial commit of re-named document after adoption to replace 1202 draft-dickinson-dprive-bcp-op-01 1204 12. References 1206 12.1. Normative References 1208 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1209 Requirement Levels", BCP 14, RFC 2119, 1210 DOI 10.17487/RFC2119, March 1997, . 1213 [RFC6265] Barth, A., "HTTP State Management Mechanism", RFC 6265, 1214 DOI 10.17487/RFC6265, April 2011, . 1217 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J., 1218 Morris, J., Hansen, M., and R. Smith, "Privacy 1219 Considerations for Internet Protocols", RFC 6973, 1220 DOI 10.17487/RFC6973, July 2013, . 1223 [RFC7525] Sheffer, Y., Holz, R., and P. Saint-Andre, 1224 "Recommendations for Secure Use of Transport Layer 1225 Security (TLS) and Datagram Transport Layer Security 1226 (DTLS)", BCP 195, RFC 7525, DOI 10.17487/RFC7525, May 1227 2015, . 1229 [RFC7766] Dickinson, J., Dickinson, S., Bellis, R., Mankin, A., and 1230 D. Wessels, "DNS Transport over TCP - Implementation 1231 Requirements", RFC 7766, DOI 10.17487/RFC7766, March 2016, 1232 . 1234 [RFC7816] Bortzmeyer, S., "DNS Query Name Minimisation to Improve 1235 Privacy", RFC 7816, DOI 10.17487/RFC7816, March 2016, 1236 . 1238 [RFC7828] Wouters, P., Abley, J., Dickinson, S., and R. Bellis, "The 1239 edns-tcp-keepalive EDNS0 Option", RFC 7828, 1240 DOI 10.17487/RFC7828, April 2016, . 1243 [RFC7830] Mayrhofer, A., "The EDNS(0) Padding Option", RFC 7830, 1244 DOI 10.17487/RFC7830, May 2016, . 1247 [RFC7858] Hu, Z., Zhu, L., Heidemann, J., Mankin, A., Wessels, D., 1248 and P. 
Hoffman, "Specification for DNS over Transport 1249 Layer Security (TLS)", RFC 7858, DOI 10.17487/RFC7858, May 1250 2016, . 1252 [RFC7871] Contavalli, C., van der Gaast, W., Lawrence, D., and W. 1253 Kumari, "Client Subnet in DNS Queries", RFC 7871, 1254 DOI 10.17487/RFC7871, May 2016, . 1257 [RFC7873] Eastlake 3rd, D. and M. Andrews, "Domain Name System (DNS) 1258 Cookies", RFC 7873, DOI 10.17487/RFC7873, May 2016, 1259 . 1261 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 1262 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 1263 May 2017, . 1265 [RFC8310] Dickinson, S., Gillmor, D., and T. Reddy, "Usage Profiles 1266 for DNS over TLS and DNS over DTLS", RFC 8310, 1267 DOI 10.17487/RFC8310, March 2018, . 1270 [RFC8404] Moriarty, K., Ed. and A. Morton, Ed., "Effects of 1271 Pervasive Encryption on Operators", RFC 8404, 1272 DOI 10.17487/RFC8404, July 2018, . 1275 [RFC8467] Mayrhofer, A., "Padding Policies for Extension Mechanisms 1276 for DNS (EDNS(0))", RFC 8467, DOI 10.17487/RFC8467, 1277 October 2018, . 1279 [RFC8484] Hoffman, P. and P. McManus, "DNS Queries over HTTPS 1280 (DoH)", RFC 8484, DOI 10.17487/RFC8484, October 2018, 1281 . 1283 12.2. Informative References 1285 [I-D.bellis-dnsop-xpf] 1286 Bellis, R., Dijk, P., and R. Gacogne, "DNS X-Proxied-For", 1287 draft-bellis-dnsop-xpf-04 (work in progress), March 2018. 1289 [I-D.bortzmeyer-dprive-rfc7626-bis] 1290 Bortzmeyer, S. and S. Dickinson, "DNS Privacy 1291 Considerations", draft-bortzmeyer-dprive-rfc7626-bis-02 1292 (work in progress), January 2019. 1294 [I-D.ietf-dnsop-dns-tcp-requirements] 1295 Kristoff, J. and D. Wessels, "DNS Transport over TCP - 1296 Operational Requirements", draft-ietf-dnsop-dns-tcp- 1297 requirements-04 (work in progress), June 2019. 1299 [I-D.ietf-httpbis-bcp56bis] 1300 Nottingham, M., "Building Protocols with HTTP", draft- 1301 ietf-httpbis-bcp56bis-08 (work in progress), November 1302 2018. 1304 [pcap] tcpdump.org, "PCAP", 2016, . 
1306 [Pitfalls-of-DNS-Encryption] 1307 Shulman, H., "Pretty Bad Privacy: Pitfalls of DNS 1308 Encryption", 2014, . 1311 [RFC4033] Arends, R., Austein, R., Larson, M., Massey, D., and S. 1312 Rose, "DNS Security Introduction and Requirements", 1313 RFC 4033, DOI 10.17487/RFC4033, March 2005, 1314 . 1316 [RFC5077] Salowey, J., Zhou, H., Eronen, P., and H. Tschofenig, 1317 "Transport Layer Security (TLS) Session Resumption without 1318 Server-Side State", RFC 5077, DOI 10.17487/RFC5077, 1319 January 2008, . 1321 [RFC5280] Cooper, D., Santesson, S., Farrell, S., Boeyen, S., 1322 Housley, R., and W. Polk, "Internet X.509 Public Key 1323 Infrastructure Certificate and Certificate Revocation List 1324 (CRL) Profile", RFC 5280, DOI 10.17487/RFC5280, May 2008, 1325 . 1327 [RFC6235] Boschi, E. and B. Trammell, "IP Flow Anonymization 1328 Support", RFC 6235, DOI 10.17487/RFC6235, May 2011, 1329 . 1331 [RFC7457] Sheffer, Y., Holz, R., and P. Saint-Andre, "Summarizing 1332 Known Attacks on Transport Layer Security (TLS) and 1333 Datagram TLS (DTLS)", RFC 7457, DOI 10.17487/RFC7457, 1334 February 2015, . 1336 [RFC7686] Appelbaum, J. and A. Muffett, "The ".onion" Special-Use 1337 Domain Name", RFC 7686, DOI 10.17487/RFC7686, October 1338 2015, . 1340 [RFC7706] Kumari, W. and P. Hoffman, "Decreasing Access Time to Root 1341 Servers by Running One on Loopback", RFC 7706, 1342 DOI 10.17487/RFC7706, November 2015, . 1345 [RFC8094] Reddy, T., Wing, D., and P. Patil, "DNS over Datagram 1346 Transport Layer Security (DTLS)", RFC 8094, 1347 DOI 10.17487/RFC8094, February 2017, . 1350 [RFC8198] Fujiwara, K., Kato, A., and W. Kumari, "Aggressive Use of 1351 DNSSEC-Validated Cache", RFC 8198, DOI 10.17487/RFC8198, 1352 July 2017, . 1354 [RFC8490] Bellis, R., Cheshire, S., Dickinson, J., Dickinson, S., 1355 Lemon, T., and T. Pusateri, "DNS Stateful Operations", 1356 RFC 8490, DOI 10.17487/RFC8490, March 2019, 1357 . 1359 [RFC8499] Hoffman, P., Sullivan, A., and K. 
Fujiwara, "DNS 1360 Terminology", BCP 219, RFC 8499, DOI 10.17487/RFC8499, 1361 January 2019, . 1363 [RFC8618] Dickinson, J., Hague, J., Dickinson, S., Manderson, T., 1364 and J. Bond, "Compacted-DNS (C-DNS): A Format for DNS 1365 Packet Capture", RFC 8618, DOI 10.17487/RFC8618, September 1366 2019, . 1368 12.3. URIs 1370 [1] https://petsymposium.org/2018/files/hotpets/4-siby.pdf 1372 [2] http://tma.ifip.org/2018/wp-content/uploads/sites/3/2018/06/ 1373 tma2018_paper30.pdf 1375 [3] http://dnstap.info 1377 [4] https://nginx.org/ 1379 [5] https://www.haproxy.org/ 1381 [6] https://kb.isc.org/article/AA-01386/0/DNS-over-TLS.html 1383 [7] https://dnsdist.org 1385 [8] https://doi.org/10.1145/3182660 1387 [9] https://pdfs.semanticscholar.org/7b34/12c951cebe71cd2cddac5fda164 1388 fb2138a44.pdf 1390 [10] https://kb.isc.org/docs/aa-00482 1392 [11] https://surf.nl/datasharing 1394 [12] https://dnsprivacy.org/wiki/display/DP/ 1395 Comparison+of+policy+and+privacy+statements 1397 [13] https://wiki.mozilla.org/Security/DOH-resolver-policy 1399 [14] https://www.ssllabs.com/ssltest/ 1401 [15] https://internet.nl 1403 [16] https://support.google.com/analytics/answer/2763052?hl=en 1405 [17] https://www.conversionworks.co.uk/blog/2017/05/19/anonymize-ip- 1406 geo-impact-test/ 1408 [18] https://github.com/edmonds/pdns/blob/master/pdns/dnswasher.cc 1410 [19] http://ita.ee.lbl.gov/html/contrib/tcpdpriv.html 1412 [20] http://an.kaist.ac.kr/~sbmoon/paper/intl-journal/2004-cn- 1413 anon.pdf 1415 [21] https://www.cc.gatech.edu/computing/Telecomm/projects/cryptopan/ 1417 [22] http://mharvan.net/talks/noms-ip_anon.pdf 1419 [23] http://www.ecs.umass.edu/ece/wolf/pubs/ton2007.pdf 1421 [24] https://medium.com/@bert.hubert/on-ip-address-encryption- 1422 security-analysis-with-respect-for-privacy-dabe1201b476 1424 [25] https://github.com/PowerDNS/ipcipher 1426 [26] https://github.com/veorq/ipcrypt 1428 [27] https://www.ietf.org/mail-archive/web/cfrg/current/msg09494.html 1430 [28] 
http://dl.ifip.org/db/conf/im/im2019/189282.pdf

Appendix A. Documents

This section provides an overview of some DNS privacy related documents; however, this is neither an exhaustive list nor a definitive statement on the characteristics of any document.

A.1. Potential increases in DNS privacy

These documents are limited in scope to communications between stub clients and recursive resolvers:

o 'Specification for DNS over Transport Layer Security (TLS)' [RFC7858], referred to here as 'DNS-over-TLS'.

o 'DNS over Datagram Transport Layer Security (DTLS)' [RFC8094], referred to here as 'DNS-over-DTLS'. Note that this document has the Category of Experimental.

o 'DNS Queries over HTTPS (DoH)' [RFC8484], referred to here as DoH.

o 'Usage Profiles for DNS over TLS and DNS over DTLS' [RFC8310]

o 'The EDNS(0) Padding Option' [RFC7830] and 'Padding Policies for Extension Mechanisms for DNS (EDNS(0))' [RFC8467]

These documents apply to recursive-to-authoritative DNS but are relevant when considering the operation of a recursive server:

o 'DNS Query Name Minimisation to Improve Privacy' [RFC7816], referred to here as 'QNAME minimization'

A.2. Potential decreases in DNS privacy

These documents relate to functionality that could provide increased tracking of user activity as a side effect:

o 'Client Subnet in DNS Queries' [RFC7871]

o 'Domain Name System (DNS) Cookies' [RFC7873]

o 'Transport Layer Security (TLS) Session Resumption without Server-Side State' [RFC5077], referred to here as simply TLS session resumption.

o 'Compacted-DNS (C-DNS): A Format for DNS Packet Capture' [RFC8618]

o Passive DNS [RFC8499]

Note that depending on the specifics of the implementation, [RFC8484] may also provide increased tracking.

A.3. Related operational documents

o 'DNS Transport over TCP - Implementation Requirements' [RFC7766]

o 'DNS Transport over TCP - Operational Requirements' [I-D.ietf-dnsop-dns-tcp-requirements]

o 'The edns-tcp-keepalive EDNS0 Option' [RFC7828]

o 'DNS Stateful Operations' [RFC8490]

Appendix B. IP address techniques

Data minimization methods may be categorized by the processing used and the properties of their outputs. The following builds on the categorization employed in [RFC6235]:

o Format-preserving. Normally when encrypting, the original data length and patterns in the data should be hidden from an attacker. Some applications of de-identification, such as network capture de-identification, require that the de-identified data is of the same form as the original data, to allow the data to be parsed in the same way as the original.

o Prefix preservation. Values such as IP addresses and MAC addresses contain prefix information that can be valuable in analysis, e.g. manufacturer ID in MAC addresses, subnet in IP addresses. Prefix preservation ensures that prefixes are de-identified consistently; e.g. if two IP addresses are from the same subnet, a prefix preserving de-identification will ensure that their de-identified counterparts will also share a subnet. Prefix preservation may be fixed (i.e. based on a user-selected prefix length identified in advance to be preserved) or general.

o Replacement. A one-to-one replacement of a field to a new value of the same type, for example using a regular expression.

o Filtering. Removing (and thus truncating) or replacing data in a field. Field data can be overwritten, often with zeros, either partially (grey marking) or completely (black marking).

o Generalization. Data is replaced by more general data with reduced specificity.
One example would be to replace all TCP/UDP port numbers with one of two fixed values indicating whether the original port was ephemeral (>=1024) or non-ephemeral (<1024). Another example, precision degradation, reduces the accuracy of e.g. a numeric value or a timestamp.

o Enumeration. With data from a well-ordered set, replace the first data item using a random initial value and then allocate ordered values for subsequent data items. When used with timestamp data, this preserves ordering but loses precision and distance.

o Reordering/shuffling. Preserving the original data, but rearranging its order, often in a random manner.

o Random substitution. As replacement, but using randomly generated replacement values.

o Cryptographic permutation. Using a permutation function, such as a hash function or cryptographic block cipher, to generate a replacement de-identified value.

B.1. Google Analytics non-prefix filtering

Since May 2010, Google Analytics has provided a facility [16] that allows website owners to request that all their users' IP addresses are anonymized within Google Analytics processing. This very basic anonymization simply sets to zero the least significant 8 bits of IPv4 addresses, and the least significant 80 bits of IPv6 addresses. The level of anonymization this produces is perhaps questionable. There are some analysis results [17] which suggest that the impact of this on reducing the accuracy of determining the user's location from their IP address is less than might be hoped; the average discrepancy in identification of the user city for UK users is no more than 17%.

Anonymization: Format-preserving, Filtering (grey marking).

B.2. dnswasher

Since 2006, PowerDNS have included a de-identification tool, dnswasher [18], with their PowerDNS product.
   This is a PCAP filter that
   performs a one-to-one mapping of end user IP addresses with an
   anonymized address.  A table of user IP addresses and their
   de-identified counterparts is kept; the first IPv4 user address is
   translated to 0.0.0.1, the second to 0.0.0.2 and so on.  The
   de-identified address therefore depends on the order that addresses
   arrive in the input, and running over a large amount of data the
   address translation tables can grow to a significant size.

   Anonymization: Format-preserving, Enumeration.

B.3.  Prefix-preserving map

   Used in TCPdpriv [19], this algorithm stores a set of original and
   anonymised IP address pairs.  When a new IP address arrives, it is
   compared with previous addresses to determine the longest prefix
   match.  The new address is anonymized by using the same prefix, with
   the remainder of the address anonymized with a random value.  The use
   of a random value means that TCPdpriv is not deterministic; different
   anonymized values will be generated on each run.  The need to store
   previous addresses means that TCPdpriv has significant and unbounded
   memory requirements, and because anonymized addresses must be
   allocated sequentially it cannot be used in parallel processing.

   Anonymization: Format-preserving, prefix preservation (general).

B.4.  Cryptographic Prefix-Preserving Pseudonymisation

   Cryptographic prefix-preserving pseudonymisation was originally
   proposed as an improvement to the prefix-preserving map implemented
   in TCPdpriv, described in Xu et al. [20] and implemented in the
   Crypto-PAn tool [21].  Crypto-PAn is now frequently used as an
   acronym for the algorithm.  Initially it was described for IPv4
   addresses only; extension for IPv6 addresses was proposed in Harvan &
   Schoenwaelder [22] and implemented in snmpdump.
   This uses a
   cryptographic algorithm rather than a random value, and thus
   pseudonymity is determined uniquely by the encryption key, and is
   deterministic.  It requires a separate AES encryption for each output
   bit, so has a non-trivial calculation overhead.  This can be
   mitigated to some extent (for IPv4, at least) by pre-calculating
   results for some number of prefix bits.

   Pseudonymization: Format-preserving, prefix preservation (general).

B.5.  Top-hash Subtree-replicated Anonymisation

   Proposed in Ramaswamy & Wolf [23], Top-hash Subtree-replicated
   Anonymisation (TSA) originated in response to the requirement for
   faster processing than Crypto-PAn.  It uses hashing for the most
   significant byte of an IPv4 address, and a pre-calculated binary tree
   structure for the remainder of the address.  To save memory space,
   replication is used within the tree structure, reducing the size of
   the pre-calculated structures to a few MB for IPv4 addresses.
   Address pseudonymization is done via hash and table lookup, and so
   requires minimal computation.  However, due to the much increased
   address space for IPv6, TSA is not memory efficient for IPv6.

   Pseudonymization: Format-preserving, prefix preservation (general).

B.6.  ipcipher

   A recently-released proposal from PowerDNS [24], ipcipher [25] is a
   simple pseudonymization technique for IPv4 and IPv6 addresses.  IPv6
   addresses are encrypted directly with AES-128 using a key (which may
   be derived from a passphrase).  IPv4 addresses are similarly
   encrypted, but using a recently proposed cipher, ipcrypt [26], which
   is suitable for 32-bit block lengths.  However, the author of ipcrypt
   has since indicated [27] that it has low security, and further
   analysis has revealed it is vulnerable to attack.

   Pseudonymization: Format-preserving, cryptographic permutation.

B.7.  Bloom filters

   van Rijswijk-Deij et al.
   [28] have recently described work using
   Bloom filters to categorize query traffic and record the traffic as
   the state of multiple filters.  The goal of this work is to allow
   operators to identify so-called Indicators of Compromise (IOCs)
   originating from specific subnets without storing information about,
   or being able to monitor, the DNS queries of an individual user.  By
   using a Bloom filter, it is possible to determine with a high
   probability if, for example, a particular query was made, but the set
   of queries made cannot be recovered from the filter.  Similarly, by
   mixing queries from a sufficient number of users in a single filter,
   it becomes practically impossible to determine if a particular user
   performed a particular query.  Large numbers of queries can be
   tracked in a memory-efficient way.  As only the filter state is
   stored, this approach cannot be used to regenerate traffic, and so
   cannot be used with tools that process live traffic.

   Anonymization: Generalization.

Appendix C.  Example DROP statement

   The following example DROP statement is very loosely based on some
   elements of published privacy statements for some public resolvers,
   with additional fields populated to illustrate what the full contents
   of a DROP statement might look like.  This should not be interpreted
   as

   o  having been reviewed or approved by any operator in any way

   o  having any legal standing or validity at all

   o  being complete or exhaustive

   This is a purely hypothetical example of a DROP statement to outline
   example contents - in this case for a public resolver operator
   providing a basic DNS Privacy service via one IP address and one DoH
   URI with security-based filtering.  It does aim to meet minimal
   compliance as specified in Section 5.

C.1.  Policy

   1.  Treatment of IP addresses.
       Many nations classify IP addresses as
       Personally-Identifiable Information (PII), and we take a
       conservative approach in treating IP addresses as PII in all
       jurisdictions in which our systems reside.

   2.  Data collection and sharing.

       1.  IP addresses.  Our normal course of data management does not
           have any IP address information or other PII logged to disk
           or transmitted out of the location in which the query was
           received.  We may aggregate certain counters to larger
           network block levels for statistical collection purposes, but
           those counters do not maintain specific IP address data, nor
           is the format or model of data stored capable of being
           reverse-engineered to ascertain what specific IP addresses
           made what queries.

       2.  Data collected in logs.  We do keep some generalized location
           information (at the city/metropolitan area level) so that we
           can conduct debugging and analyze abuse phenomena.  We also
           use the collected information for the creation and sharing of
           telemetry (timestamp, geolocation, number of hits, first
           seen, last seen) for contributors, and for public publication
           of general statistics on use of the system (protections,
           threat types, counts, etc.).  When you use our DNS Services,
           here is the full list of items that are included in our logs:

           +  Request domain name, e.g. example.net

           +  Record type of requested domain, e.g. A, AAAA, NS, MX,
              TXT, etc.

           +  Transport protocol on which the request arrived, i.e. UDP,
              TCP, DoT, DoH

           +  Origin IP general geolocation information: i.e. geocode,
              region ID, city ID, and metro code

           +  IP protocol version - IPv4 or IPv6

           +  Response code sent, e.g. SUCCESS, SERVFAIL, NXDOMAIN, etc.
           +  Absolute arrival time

           +  Name of the specific instance that processed this request

           +  IP address of the specific instance to which this request
              was addressed (no relation to the requestor's IP address)

           We may keep the following data as summary information,
           including all the above EXCEPT for data about the DNS record
           requested:

           +  Currently-advertised BGP-summarized IP prefix/netmask of
              apparent client origin

           +  Autonomous system number (BGP ASN) of apparent client
              origin

           All the above data may be kept in full or partial form in
           permanent archives.

       3.  Sharing of data.  Except as described in this document, we do
           not intentionally share, sell, or rent individual personal
           information associated with the requestor (i.e. source IP
           address or any other information that can positively identify
           the client using our infrastructure) with anyone without your
           consent.  We generate and share high-level anonymized
           aggregate statistics, including threat metrics on threat
           type, geolocation, and, if available, sector, as well as
           other vertical metrics including performance metrics on our
           DNS Services (i.e. number of threats blocked, infrastructure
           uptime), when available, with our threat intelligence (TI)
           partners, academic researchers, or the public.  Our DNS
           Services share anonymized data on specific domains queried
           (records such as domain, timestamp, geolocation, number of
           hits, first seen, last seen) with their threat intelligence
           partners.  Our DNS Services also build, store, and may share
           certain DNS data streams which store high-level information
           about domains resolved, query types, result codes, and
           timestamps.  These streams do not contain IP address
           information of the requestor and cannot be correlated to IP
           address or other PII.
           We do not and never will share any of
           this data with marketers, nor will we use this data for
           demographic analysis.

   3.  Exceptions.  There are exceptions to this storage model: in the
       event of observed behaviors which we deem malicious or anomalous,
       we may utilize more detailed logging to collect more specific IP
       address data in the process of normal network defence and
       mitigation.  This collection and transmission off-site will be
       limited to IP addresses that we determine are involved in the
       event.

   4.  Associated entities.  Details of our Threat Intelligence partners
       can be found at our website page (insert link).

   5.  Correlation of Data.  We do not correlate or combine information
       from our logs with any personal information that you have
       provided us for other services, or with your specific IP address.

   6.  Result filtering.

       1.  Filtering.  We utilise cyber threat intelligence about
           malicious domains from a variety of public and private
           sources and block access to those malicious domains when
           your system attempts to contact them.  An NXDOMAIN is
           returned for blocked sites.

           1.  Censorship.  We will not provide a censoring component
               and will limit our actions solely to the blocking of
               malicious domains around phishing, malware, and exploit
               kit domains.

           2.  Accidental blocking.  We implement whitelisting
               algorithms to make sure legitimate domains are not
               blocked by accident.  However, in the rare case of
               blocking a legitimate domain, we work with the users to
               quickly whitelist that domain.  Please use our support
               form (insert link) if you believe we are blocking a
               domain in error.

C.2.  Practice

   1.  Deviations from Policy.  None currently in place.

   2.  Client facing capabilities.

       1.  We offer UDP and TCP DNS on port 53 on (insert IP address)

       2.
           We offer DNS-over-TLS as specified in RFC7858 on (insert IP
           address).  It is available on port 853 and port 443.  We also
           implement RFC7766.

           1.  The DoT authentication name used is (insert domain name).
               No TLSA records are available for this domain name.

           2.  We do not publish SPKI pin sets.

       3.  We offer DNS-over-HTTPS as specified in RFC8484 on (insert
           URI template).  Both POST and GET are supported.

       4.  Both services offer TLS 1.2 and TLS 1.3.

       5.  Both services pad DNS responses according to RFC8467.

       6.  Both services provide DNSSEC validation.

   3.  Upstream capabilities.

       1.  Our servers implement QNAME minimisation.

       2.  Our servers do not send ECS upstream.

   4.  Support.  Support information for this service is available at
       (insert link).

   5.  Jurisdiction.

       1.  We operate as the legal entity (insert entity) registered in
           (insert country) as (insert company identifier, e.g. Company
           Number).  Our Headquarters are located at (insert address).

       2.  As such we operate under (insert country) law.  For details
           of our company privacy policy see (insert link).  For
           questions on this policy and enforcement contact our Data
           Protection Officer on (insert email address).

       3.  We operate servers in the following countries (insert list).

       4.  We have no agreements in place with law enforcement agencies
           to give them access to the data.  Apart from as stated in the
           Policy section of this document with regard to cyber threat
           intelligence, we have no agreements in place with other
           public and private parties dealing with security and
           intelligence, to give them access to the servers and/or to
           the data.

   6.  Consent.  As described, we do not intentionally share, sell, or
       rent individual personal information associated with the
       requestor with anyone without your consent.
       In order to provide
       consent you must have a user account for our service - this can
       be set up via our support page (insert link).  We may contact
       existing users with accounts to enquire if you would be willing
       to provide consent for specific situations.  Users can then
       provide explicit consent by choosing to enable certain account
       options which are disabled by default.

Authors' Addresses

   Sara Dickinson
   Sinodun IT
   Magdalen Centre
   Oxford Science Park
   Oxford OX4 4GA
   United Kingdom

   Email: sara@sinodun.com

   Benno J. Overeinder
   NLnet Labs
   Science Park 400
   Amsterdam 1098 XH
   The Netherlands

   Email: benno@nlnetLabs.nl

   Roland M. van Rijswijk-Deij
   NLnet Labs
   Science Park 400
   Amsterdam 1098 XH
   The Netherlands

   Email: roland@nlnetLabs.nl

   Allison Mankin
   Salesforce

   Email: allison.mankin@gmail.com