Network Working Group                                         C. Huitema
Internet-Draft                                      Private Octopus Inc.
Intended status: Informational                                 D. Kaiser
Expires: January 26, 2020                       University of Luxembourg
                                                           July 25, 2019

               DNS-SD Privacy and Security Requirements
                      draft-ietf-dnssd-prireq-02

Abstract

DNS-SD (DNS Service Discovery) normally discloses information about devices offering and requesting services. This information includes host names, network parameters, and possibly a further description of the corresponding service instance. Especially when mobile devices engage in DNS Service Discovery over Multicast DNS at a public hotspot, serious privacy problems arise.
We analyze the requirements of a privacy respecting discovery service.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on January 26, 2020.

Copyright Notice

Copyright (c) 2019 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Requirements
   2. Service Discovery Scenarios
      2.1. Private Client and Public Server
      2.2. Private Client and Private Server
      2.3. Wearable Client and Server
   3. DNS-SD Privacy Considerations
      3.1.
Privacy Implication of Publishing Service Instance Names
      3.2. Privacy Implication of Publishing Node Names
      3.3. Privacy Implication of Publishing Service Attributes
      3.4. Device Fingerprinting
      3.5. Privacy Implication of Discovering Services
   4. Security Considerations
      4.1. Authenticity, Integrity & Freshness
      4.2. Confidentiality
      4.3. Resistance to Dictionary Attacks
      4.4. Resistance to Denial-of-Service Attacks
      4.5. Resistance to Sender Impersonation
      4.6. Sender Deniability
   5. Operational Considerations
      5.1. Power Management
      5.2. Protocol Efficiency
      5.3. Secure Initialization and Trust Models
      5.4. External Dependencies
   6. Requirements for a DNS-SD Privacy Extension
      6.1. Private Client Requirements
      6.2. Private Server Requirements
      6.3. Security and Operation
   7. IANA Considerations
   8. Acknowledgments
   9. Informative References
   Authors' Addresses

1. Introduction

DNS-SD [RFC6763] over mDNS [RFC6762] enables zero-configuration service discovery in local networks.
It is very convenient for users, but it requires the public exposure of the offering and requesting identities along with information about the offered and requested services. Parts of the published information can seriously breach the user's privacy. These privacy issues and potential solutions are discussed in [KW14a], [KW14b] and [K17].

There are cases when nodes connected to a network want to provide or consume services without exposing their identity to the other parties connected to the same network. Consider for example a traveler wanting to upload pictures from a phone to a laptop when connected to the Wi-Fi network of an Internet cafe, or two travelers who want to share files between their laptops when waiting for their plane in an airport lounge.

We expect that these exchanges will start with a discovery procedure using DNS-SD [RFC6763] over mDNS [RFC6762]. One of the devices will publish the availability of a service, such as a picture library or a file store in our examples. The user of the other device will discover this service, and then connect to it.

When analyzing these scenarios in Section 3, we find that the DNS-SD messages leak identifying information such as the service instance name, the host name, or service properties.

1.1. Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

2. Service Discovery Scenarios

In this section, we review common service discovery scenarios and discuss their privacy requirements.

2.1. Private Client and Public Server

Perhaps the simplest private discovery scenario involves a single client connecting to a public server through a public network.
A common example would be a traveler using a publicly available printer in a business center, in a hotel, or at an airport.

   [Figure: an adversary watches the discovery exchange between a
   traveler's device and a printer, taking notes: "David is printing
   a document"]

In that scenario, the server is public and wants to be discovered, but the client is private. The adversary will be listening to the network traffic, trying to identify the visitors' devices and their activity. Identifying devices leads to identifying people, either just for tracking people or as a preliminary to targeted attacks.

The requirement in that scenario is that the discovery activity should not disclose the identity of the client.

2.2. Private Client and Private Server

The second private discovery scenario involves a private client connecting to a private server. A common example would be two people engaging in a collaborative application in a public place, such as an airport lounge.

   [Figure: an adversary watches the discovery exchange between two
   travelers' laptops, taking notes: "David is meeting with Stuart"]

In that scenario, the collaborative application on one of the devices will act as a server, and the application on the other device will act as a client. The server wants to be discovered by the client, but has no desire to be discovered by anyone else.
The adversary will be listening to network traffic, attempting to discover the identity of devices as in the first scenario, and also attempting to discover the patterns of traffic, as these patterns reveal the business and social interactions between the owners of the devices.

The requirement in that scenario is that the discovery activity should not disclose the identity of either the client or the server.

2.3. Wearable Client and Server

The third private discovery scenario involves wearable devices. A typical example would be the watch on someone's wrist connecting to the phone in their pocket.

   [Figure: an adversary watches the discovery exchange between a
   watch and a phone, taking notes: "David is here. His watch is
   talking to his phone"]

This third scenario is in many ways similar to the second scenario. It involves two devices, one acting as server and the other acting as client, and it leads to the same requirement that the discovery traffic not disclose the identity of either the client or the server. The main difference is that the devices are managed by a single owner, which can lead to different methods for establishing secure relations between the devices. There is also an added emphasis on hiding the type of devices that the person wears.

In addition to tracking the identity of the owner of the devices, the adversary is interested in the characteristics of the devices, such as type, brand, and model. Identifying the type of device can lead to further attacks, from theft to device-specific hacking. The combination of devices worn by the same person will also provide a "fingerprint" of the person, allowing identification.

3.
DNS-SD Privacy Considerations

The discovery scenarios in Section 2 illustrate three separate abstract privacy requirements that vary based on the use case:

1. Client identity privacy: Client identities are not leaked during service discovery or use.

2. Multi-owner, mutual client and server identity privacy: Neither client nor server identities are leaked during service discovery or use.

3. Single-owner, mutual client and server identity privacy: Identities of clients and servers owned and managed by the same application, device, or user are not leaked during service discovery or use.

In this section, we describe aspects of DNS-SD that make these requirements difficult to achieve in practice.

DNS-Based Service Discovery (DNS-SD) is defined in [RFC6763]. It allows nodes to publish the availability of an instance of a service by inserting specific records in the DNS ([RFC1033], [RFC1034], [RFC1035]) or by publishing these records locally using multicast DNS (mDNS) [RFC6762]. Available services are described using three types of records:

PTR Record: Associates a service type in the domain with an "instance" name of this service type.

SRV Record: Provides the node name, port number, priority and weight associated with the service instance, in conformance with [RFC2782].

TXT Record: Provides a set of attribute-value pairs describing specific properties of the service instance.

3.1. Privacy Implication of Publishing Service Instance Names

In the first phase of discovery, clients obtain all PTR records associated with a service type in a given naming domain. Each PTR record contains a Service Instance Name defined in Section 4 of [RFC6763]:

   Service Instance Name = <Instance> . <Service> . <Domain>
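As an illustration of how these three record types fit together, the following Python sketch models them as plain data structures. This is not a real DNS or mDNS library; the instance name, host name, port, and TXT attributes are invented for the example:

```python
# Illustrative sketch only: plain Python data, not a real DNS/mDNS
# library. The instance name, host name, port, and TXT attributes
# below are invented.

def service_instance_name(instance, service, domain):
    """Compose a Service Instance Name per RFC 6763, Section 4."""
    return f"{instance}.{service}.{domain}"

name = service_instance_name("Alice's Images", "_imageStore._tcp", "local")

records = {
    # PTR: maps the browsed service type to the instance name
    "PTR": {"_imageStore._tcp.local": name},
    # SRV: maps the instance name to host name, port, priority, weight
    "SRV": {name: {"target": "alices-notebook.local", "port": 8080,
                   "priority": 0, "weight": 0}},
    # TXT: attribute-value pairs describing the instance
    "TXT": {name: {"path": "/images", "version": "1.0"}},
}

print(name)  # Alice's Images._imageStore._tcp.local
```

Note that every field in this sketch, the human-readable instance name, the host name in the SRV record, and the TXT attributes, is transmitted in the clear, which is the root of the privacy issues discussed in the following subsections.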
The <Instance> portion of the Service Instance Name is meant to convey enough information for users of discovery clients to easily select the desired service instance. Nodes that use DNS-SD over mDNS [RFC6762] in a mobile environment will rely on the specificity of the instance name to identify the desired service instance. In our example of users wanting to upload pictures to a laptop in an Internet Cafe, the list of available service instances may look like:

   Alice's Images . _imageStore._tcp . local
   Alice's Mobile Phone . _presence._tcp . local
   Alice's Notebook . _presence._tcp . local
   Bob's Notebook . _presence._tcp . local
   Carol's Notebook . _presence._tcp . local

Alice will see the list on her phone and understand intuitively that she should pick the first item. The discovery will "just work".

However, DNS-SD/mDNS will reveal to anybody that Alice is currently visiting the Internet Cafe. It further discloses the fact that she uses two devices, shares an image store, and uses a chat application supporting the _presence protocol on both of her devices. She might currently chat with Bob or Carol, as they are also using a _presence supporting chat application. This information is not just available to devices actively browsing for and offering services, but to anybody passively listening to the network traffic.

3.2. Privacy Implication of Publishing Node Names

The SRV records contain the DNS name of the node publishing the service. Typical implementations construct this DNS name by concatenating the "host name" of the node with the name of the local domain. The privacy implications of this practice are reviewed in [RFC8117]. Depending on naming practices, the host name is either a strong identifier of the device or, at a minimum, a partial identifier. It enables tracking of both the device and, by extension, the device's owner.

3.3.
Privacy Implication of Publishing Service Attributes

The TXT record's attribute-value pairs contain information on the characteristics of the corresponding service instance. This in turn reveals information about the devices that publish services. The amount of information varies widely with the particular service and its implementation:

o Some attributes, like the paper size available in a printer, are the same on many devices, and thus only provide limited information to a tracker.

o Attributes that have freeform values, such as the name of a directory, may reveal much more information.

Combinations of attributes have more information power than individual attributes, and can potentially be used for "fingerprinting" a specific device.

Information contained in TXT records not only breaches privacy by making devices trackable, but might directly contain private information about the user. For instance, the _presence service reveals the "chat status" to everyone in the same network. Users might not be aware of that.

Further, TXT records often contain version information about services, allowing potential attackers to identify devices running exploit-prone versions of a certain service.

3.4. Device Fingerprinting

The combination of information published in DNS-SD has the potential to provide a "fingerprint" of a specific device. Such information includes:

o List of services published by the device, which can be retrieved because the SRV records will point to the same host name.

o Specific attributes describing these services.

o Port numbers used by the services.

o Priority and weight attributes in the SRV records.

This combination of services and attributes will often be sufficient to identify the version of the software running on a device.
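To make the fingerprinting risk concrete, here is a hedged Python sketch of what a passive observer could do. The service types, ports, and TXT values are invented; the point is only that the combination of advertised data reduces to a stable per-device identifier:

```python
import hashlib

# Hypothetical observations for one host: (service type, port, TXT
# attributes), collected passively from multicast traffic. All values
# are invented for illustration.
observed = [
    ("_presence._tcp", 5298, {"status": "avail", "ver": "2.10.6"}),
    ("_ipp._tcp", 631, {"pdl": "application/pdf", "ty": "ACME 9000"}),
]

def fingerprint(services):
    """Reduce the combination of advertised services and attributes
    to a short, stable identifier for the advertising device."""
    canon = repr(sorted((t, p, sorted(txt.items()))
                        for t, p, txt in services))
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

# The fingerprint does not depend on announcement order, so the same
# device is recognizable whenever it advertises the same services,
# even across different networks.
assert fingerprint(observed) == fingerprint(list(reversed(observed)))
```

The sorting step makes the identifier independent of the order in which announcements are captured, which is precisely what makes such a fingerprint useful for tracking.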
If a device publishes many services with rich sets of attributes, the combination may be sufficient to identify the specific device.

A sometimes-heard argument is that devices providing services can be identified by observing the local traffic, and that trying to hide the presence of the service is futile. This argument, however, does not carry much weight, because:

1. Providing privacy at the discovery layer is of the essence for enabling automatically configured privacy-preserving network applications. Application layer protocols are not forced to leverage the offered privacy, but if device tracking is not prevented at the deeper layers, including the service discovery layer, obfuscating a certain service's protocol at the application layer is futile.

2. Further, even if the application layer does not protect privacy, it is harder to record and analyse unicast traffic (which most applications will generate) than to simply listen to the multicast messages sent by DNS-SD/mDNS.

The same argument can be extended to say that the pattern of services offered by a device allows for fingerprinting the device. This may or may not be true, since we can expect that services will be designed or updated to avoid leaking fingerprints. In any case, the design of the discovery service should avoid making a bad situation worse, and should as much as possible avoid providing new fingerprinting information.

3.5. Privacy Implication of Discovering Services

The consumers of services engage in discovery, and in doing so reveal some information such as the list of services they are interested in and the domains in which they are looking for the services. When the clients select specific instances of services, they reveal their preference for these instances.
This can be benign if the service type is very common, but it could be more problematic for sensitive services, such as certain private messaging services.

One way to protect clients would be to somehow encrypt the requested service types. Of course, just as we noted in Section 3.4, traffic analysis can often reveal the service.

4. Security Considerations

For each of the operations described above, we must also consider the security threats we are concerned about.

4.1. Authenticity, Integrity & Freshness

Can we trust the information we receive? Has it been modified in flight by an adversary? Do we trust the source of the information? Is the source of information fresh, i.e., not replayed? Freshness may or may not be required depending on whether the discovery process is meant to be online. In some cases, publishing discovery information to a shared directory or registry, rather than to each online recipient through a broadcast channel, may suffice.

4.2. Confidentiality

Confidentiality is about restricting information access to only authorized individuals. Ideally this should only be the appropriate trusted parties, though it can be challenging to define who "the appropriate trusted parties" are. In some use cases, this may mean that only mutually authenticated and trusting clients and servers can read messages sent for one another. The "Discover" operation in particular is often used to discover new entities that the device did not previously know about. It may be tricky to work out how a device can have an established trust relationship with a new entity it has never previously communicated with.

4.3. Resistance to Dictionary Attacks

It can be tempting to use (publicly computable) hash functions to obscure sensitive identifiers.
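As an illustration of why an unkeyed public hash does not protect a guessable identifier, consider this Python sketch (the email addresses are invented): an attacker simply hashes every candidate from a dictionary and compares against the published value, entirely offline.

```python
import hashlib

def obscure(identifier):
    """Naive 'scrambling': an unkeyed, publicly computable hash."""
    return hashlib.sha256(identifier.encode()).hexdigest()

# What a naive protocol might broadcast instead of the raw identifier.
published = obscure("alice@example.com")

# The attacker hashes each candidate and compares; no secret is
# involved, so the dictionary attack runs entirely offline.
candidates = ["bob@example.com", "alice@example.com", "carol@example.com"]
recovered = next((c for c in candidates if obscure(c) == published), None)
print(recovered)  # alice@example.com
```

A keyed construction, such as an HMAC under a key shared only with authorized peers, would defeat this offline comparison; the point of the sketch is only that an unkeyed hash of a low-entropy identifier provides no real protection.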
Such a hash transforms a sensitive unique identifier, such as an email address, into a "scrambled" (but still unique) identifier. Unfortunately, simple solutions may be vulnerable to offline dictionary attacks.

4.4. Resistance to Denial-of-Service Attacks

In any protocol where the receiver of messages has to perform cryptographic operations on those messages, there is a risk of a brute-force flooding attack causing the receiver to expend excessive amounts of CPU time (and battery power) just processing and discarding those messages.

4.5. Resistance to Sender Impersonation

Sender impersonation is an attack wherein messages such as service offers are forged by entities who do not possess the corresponding secret key material. These attacks may be used to learn the identity of a communicating party, actively or passively.

4.6. Sender Deniability

Deniability of sender activity, e.g., of broadcasting a discovery request, may be desirable or necessary in some use cases. This property ensures that eavesdroppers cannot prove senders issued a specific message destined for one or more peers.

5. Operational Considerations

5.1. Power Management

Many modern devices, especially battery-powered devices, use power management techniques to conserve energy. One such technique is for a device to transfer information about itself to a proxy, which will act on behalf of the device for some functions, while the device itself goes to sleep to reduce power consumption. When the proxy determines that some action is required which only the device itself can perform, the proxy may have some way (such as an Ethernet "Magic Packet") to wake the device.

In many cases, the device may not trust the network proxy sufficiently to share all its confidential key material with the proxy.
This poses challenges for combining private discovery that relies on per-query cryptographic operations with energy-saving techniques that rely on having (somewhat untrusted) network proxies answer queries on behalf of sleeping devices.

5.2. Protocol Efficiency

Creating a discovery protocol that has the desired security properties may result in a design that is not efficient. To perform the necessary operations, the protocol may need to send and receive a large number of network packets. This may consume an unreasonable amount of network capacity (particularly problematic on shared wireless spectrum), cause an unnecessary level of power consumption (particularly problematic on battery-powered devices), and may result in the discovery process being slow.

It is a difficult challenge to design a discovery protocol that obscures the details of what it is doing from unauthorized observers, while also managing to do so efficiently.

5.3. Secure Initialization and Trust Models

One of the challenges implicit in the preceding discussions is that whenever we discuss "trusted entities" versus "untrusted entities", there needs to be some way that trust is initially established, to convert an "untrusted entity" into a "trusted entity".

One way to establish trust between two entities is to trust a third party to make that determination for us. For example, the X.509 certificates used by TLS and HTTPS web browsing are based on the model of trusting a third party to tell us whom to trust. There are some difficulties in using this model for establishing trust for service discovery uses. If we want to print our tax returns or medical documents on "our" printer, then we need to know which printer on the network we can trust to be "our" printer.
All of the printers we discover on the network may be legitimate printers made by legitimate printer manufacturers, but not all of them are "our" printer. A third-party certificate authority cannot tell us which one of the printers is ours.

Another common way to establish a trust relationship is Trust On First Use (TOFU), as used by ssh. The first usage is a Leap Of Faith, but after that public keys are exchanged and at least we can confirm that subsequent communications are with the same entity. In today's world, where there may be attackers present even at that first use, it would be preferable to be able to establish a trust relationship without requiring an initial Leap Of Faith.

Techniques now exist for securely establishing a trust relationship without requiring an initial Leap Of Faith. Trust can be established securely using a short passphrase or PIN with cryptographic algorithms such as Secure Remote Password (SRP) [RFC5054] or a Password Authenticated Key Exchange like J-PAKE [RFC8236] using a Schnorr Non-interactive Zero-Knowledge Proof [RFC8235].

Such techniques require a user to enter the correct passphrase or PIN in order for the cryptographic algorithms to establish working communication. This avoids the human tendency to simply press the "OK" button when asked if they want to do something on their electronic device. It removes the human fallibility element from the equation, and avoids human users inadvertently sabotaging their own security.

Using these techniques, if a user tries to print their tax return on a printer they've never used before (even though the name looks right), they'll be prompted to enter a pairing PIN, and the user *cannot* ignore that warning. They can't just press an "OK" button. They have to walk to the printer, read the displayed PIN, and enter it.
And if the intended printer is not displaying a pairing PIN, or is displaying a different pairing PIN, the user may be being spoofed; the connection will not succeed, and the failure will not reveal any secret information to the attacker. As much as the human desires to "just give me an OK button to make it print" (and the attacker desires them to click that OK button too), the cryptographic algorithms do not give the user the ability to opt out of the security, and consequently do not give the attacker any way to persuade the user to opt out of the security protections.

5.4. External Dependencies

Trust establishment may depend on external, and optionally online, parties. Systems which have such a dependency may be attacked by interfering with communication to external dependencies. Where possible, such dependencies should be minimized. Local trust models are best for secure initialization in the presence of active attackers.

6. Requirements for a DNS-SD Privacy Extension

Given the considerations discussed in the previous sections, we state requirements for privacy-preserving DNS-SD in the following subsections.

Defining a solution according to these requirements will lead to a solution that does not transmit privacy-violating DNS-SD messages and, further, does not open pathways to new attacks against the operation of DNS-SD. However, while this document gives advice on which privacy-protecting mechanisms should be used in lower-layer network protocols and on how to actually connect to services in a privacy-preserving way, stating corresponding requirements is out of the scope of this document.

6.1. Private Client Requirements

For all three scenarios described in Section 2, client privacy is a requirement. Client privacy, as a requirement, can be subdivided into:

1.
DNS-SD messages transmitted by clients MUST NOT disclose the client's identity, either directly or via inference, to nodes other than select servers.

2. DNS-SD messages transmitted by clients MUST NOT disclose the client's interest in specific service instances or service types to nodes other than select servers.

3. DNS-SD messages transmitted by clients MUST NOT contain linkable identifiers that allow tracing client devices.

DNS-SD, without privacy protection, discloses both the service instance names and the service types of the service instances a client is interested in. Further, clients using DNS-SD disclose their host name and network parameters.

6.2. Private Server Requirements

Servers like the "printer" discussed in scenario 1 are public, but the servers discussed in scenarios 2 and 3 are by essence private. Private servers have server privacy as a requirement, which can be subdivided into:

1. Servers MUST NOT publish static identifiers such as host names or service names. When those fields are required by the protocol, servers should publish randomized values. (See [RFC8117] for a discussion of host names.)

2. Servers MUST use privacy options available at lower layers, and for example avoid publishing static IPv4 or IPv6 addresses, or static IEEE 802 MAC addresses.

3. Servers MUST NOT disclose service instance names of offered services to unauthorized clients.

4. Servers MUST NOT disclose information about the services they offer to unauthorized clients.

When offering services via DNS-SD, servers typically disclose their host names (SRV, A/AAAA), the instance names of offered services (PTR, SRV), and information about the services (TXT). Heeding these server privacy requirements makes servers immune to fingerprinting attacks at the DNS-SD level.

6.3.
Security and Operation

In order to be secure and feasible, a DNS-SD privacy extension must also heed the following security and operational requirements.

All scenarios require:

1. DoS resistance: The privacy-protecting measures added to DNS-SD MUST neither add significant CPU overhead on nodes nor cause significantly higher network load. Further, amplification attacks MUST NOT be allowed.

7. IANA Considerations

This draft does not require any IANA action.

8. Acknowledgments

This draft incorporates many contributions from Stuart Cheshire and Chris Wood.

9. Informative References

[K17] Kaiser, D., "Efficient Privacy-Preserving Configurationless Service Discovery Supporting Multi-Link Networks", 2017.

[KW14a] Kaiser, D. and M. Waldvogel, "Adding Privacy to Multicast DNS Service Discovery", DOI 10.1109/TrustCom.2014.107, 2014.

[KW14b] Kaiser, D. and M. Waldvogel, "Efficient Privacy Preserving Multicast DNS Service Discovery", DOI 10.1109/HPCC.2014.141, 2014.

[RFC1033] Lottor, M., "Domain Administrators Operations Guide", RFC 1033, DOI 10.17487/RFC1033, November 1987.

[RFC1034] Mockapetris, P., "Domain names - concepts and facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987.

[RFC1035] Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, November 1987.

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.

[RFC2782] Gulbrandsen, A., Vixie, P., and L. Esibov, "A DNS RR for specifying the location of services (DNS SRV)", RFC 2782, DOI 10.17487/RFC2782, February 2000.

[RFC5054] Taylor, D., Wu, T., Mavrogiannopoulos, N., and T. Perrin, "Using the Secure Remote Password (SRP) Protocol for TLS Authentication", RFC 5054, DOI 10.17487/RFC5054, November 2007.

[RFC6762] Cheshire, S. and M. Krochmal, "Multicast DNS", RFC 6762, DOI 10.17487/RFC6762, February 2013.

[RFC6763] Cheshire, S. and M. Krochmal, "DNS-Based Service Discovery", RFC 6763, DOI 10.17487/RFC6763, February 2013.

[RFC8117] Huitema, C., Thaler, D., and R. Winter, "Current Hostname Practice Considered Harmful", RFC 8117, DOI 10.17487/RFC8117, March 2017.

[RFC8235] Hao, F., Ed., "Schnorr Non-interactive Zero-Knowledge Proof", RFC 8235, DOI 10.17487/RFC8235, September 2017.

[RFC8236] Hao, F., Ed., "J-PAKE: Password-Authenticated Key Exchange by Juggling", RFC 8236, DOI 10.17487/RFC8236, September 2017.

Authors' Addresses

Christian Huitema
Private Octopus Inc.
Friday Harbor, WA 98250
U.S.A.

Email: huitema@huitema.net
URI: http://privateoctopus.com/

Daniel Kaiser
University of Luxembourg
6, avenue de la Fonte
Esch-sur-Alzette 4364
Luxembourg

Email: daniel.kaiser@uni.lu
URI: https://secan-lab.uni.lu/