2 Network Working Group C. Huitema 3 Internet-Draft Private Octopus Inc. 4 Intended status: Informational September 30, 2018 5 Expires: April 3, 2019 7 DNS-SD Privacy and Security Requirements 8 draft-ietf-dnssd-prireq-00 10 Abstract 12 DNS-SD (DNS Service Discovery) normally discloses information about 13 devices offering and requesting services. This information includes 14 host names, network parameters, and possibly a further description of 15 the corresponding service instance.
Especially when mobile devices 16 engage in DNS Service Discovery over Multicast DNS at a public 17 hotspot, serious privacy problems arise. We analyze the requirements 18 of a privacy respecting discovery service. 20 Status of This Memo 22 This Internet-Draft is submitted in full conformance with the 23 provisions of BCP 78 and BCP 79. 25 Internet-Drafts are working documents of the Internet Engineering 26 Task Force (IETF). Note that other groups may also distribute 27 working documents as Internet-Drafts. The list of current Internet- 28 Drafts is at https://datatracker.ietf.org/drafts/current/. 30 Internet-Drafts are draft documents valid for a maximum of six months 31 and may be updated, replaced, or obsoleted by other documents at any 32 time. It is inappropriate to use Internet-Drafts as reference 33 material or to cite them other than as "work in progress." 35 This Internet-Draft will expire on April 3, 2019. 37 Copyright Notice 39 Copyright (c) 2018 IETF Trust and the persons identified as the 40 document authors. All rights reserved. 42 This document is subject to BCP 78 and the IETF Trust's Legal 43 Provisions Relating to IETF Documents 44 (https://trustee.ietf.org/license-info) in effect on the date of 45 publication of this document. Please review these documents 46 carefully, as they describe your rights and restrictions with respect 47 to this document. Code Components extracted from this document must 48 include Simplified BSD License text as described in Section 4.e of 49 the Trust Legal Provisions and are provided without warranty as 50 described in the Simplified BSD License. 52 Table of Contents 54 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 55 1.1. Requirements . . . . . . . . . . . . . . . . . . . . . . 3 56 2. DNS-SD Discovery Scenarios . . . . . . . . . . . . . . . . . 3 57 2.1. Private client and public server . . . . . . . . . . . . 3 58 2.2. Private client and private server . . . . . . . . . . . . 4 59 2.3. 
Wearable client and server . . . . . . . . . . . . . . . 5 60 3. Privacy Considerations . . . . . . . . . . . . . . . . . . . 6 61 3.1. Privacy Implication of Publishing Service Instance Names 7 62 3.2. Privacy Implication of Publishing Node Names . . . . . . 7 63 3.3. Privacy Implication of Publishing Service Attributes . . 8 64 3.4. Device Fingerprinting . . . . . . . . . . . . . . . . . . 8 65 3.5. Privacy Implication of Discovering Services . . . . . . . 9 66 4. Security Considerations . . . . . . . . . . . . . . . . . . . 10 67 4.1. Authenticity, Integrity & Freshness . . . . . . . . . . . 10 68 4.2. Confidentiality . . . . . . . . . . . . . . . . . . . . . 10 69 4.3. Resistance to Dictionary Attacks . . . . . . . . . . . . 10 70 4.4. Resistance to Denial-of-Service Attack . . . . . . . . . 10 71 4.5. Resistance to Sender Impersonation . . . . . . . . . . . 11 72 4.6. Sender Deniability . . . . . . . . . . . . . . . . . . . 11 73 5. Operational Considerations . . . . . . . . . . . . . . . . . 11 74 5.1. Power Management . . . . . . . . . . . . . . . . . . . . 11 75 5.2. Protocol Efficiency . . . . . . . . . . . . . . . . . . . 11 76 5.3. Secure Initialization and Trust Models . . . . . . . . . 12 77 5.4. External Dependencies . . . . . . . . . . . . . . . . . . 13 78 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 79 7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 13 80 8. Informative References . . . . . . . . . . . . . . . . . . . 13 81 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 15 83 1. Introduction 85 DNS-SD [RFC6763] over mDNS [RFC6762] enables zero-configuration 86 service discovery in local networks. It is very convenient for 87 users, but it requires the public exposure of the offering and 88 requesting identities along with information about the offered and 89 requested services. Parts of the published information can seriously 90 breach the user's privacy. 
These privacy issues and potential 91 solutions are discussed in [KW14a], [KW14b] and [K17]. 93 There are cases when nodes connected to a network want to provide or 94 consume services without exposing their identity to the other parties 95 connected to the same network. Consider for example a traveler 96 wanting to upload pictures from a phone to a laptop when connected to 97 the Wi-Fi network of an Internet cafe, or two travelers who want to 98 share files between their laptops when waiting for their plane in an 99 airport lounge. 101 We expect that these exchanges will start with a discovery procedure 102 using DNS-SD [RFC6763] over mDNS [RFC6762]. One of the devices will 103 publish the availability of a service, such as a picture library or a 104 file store in our examples. The user of the other device will 105 discover this service, and then connect to it. 107 When analyzing these scenarios in Section 3, we find that the DNS-SD 108 messages leak identifying information such as the instance name, the 109 host name or service properties. 111 1.1. Requirements 113 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 114 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 115 document are to be interpreted as described in [RFC2119]. 117 2. DNS-SD Discovery Scenarios 119 DNS-Based Service Discovery (DNS-SD) is defined in [RFC6763]. It 120 allows nodes to publish the availability of an instance of a service 121 by inserting specific records in the DNS ([RFC1033], [RFC1034], 122 [RFC1035]) or by publishing these records locally using multicast DNS 123 (mDNS) [RFC6762]. Available services are described using three types 124 of records: 126 PTR Record: Associates a service type in the domain with an 127 "instance" name of this service type. 129 SRV Record: Provides the node name, port number, priority and weight 130 associated with the service instance, in conformance with 131 [RFC2782]. 
133 TXT Record: Provides a set of attribute-value pairs describing 134 specific properties of the service instance. 136 In the remaining sections, we review common discovery scenarios 137 provided by DNS-SD and discuss their privacy requirements. 139 2.1. Private client and public server 141 Perhaps the simplest private discovery scenario involves a single 142 client connecting to a public server through a public network. A 143 common example would be a traveler using a publicly available printer 144 in a business center, in a hotel, or at an airport. 146 ( Taking notes: 147 ( David is printing 148 ( a document 149 ~~~~~~~~~~~ 150 o 151 ___ o ___ 152 / \ _|___|_ 153 | | |* *| 154 \_/ __ \_/ 155 | / / Discovery +----------+ | 156 /|\ /_/ <-----------> | +----+ | /|\ 157 / | \__/ +--| |--+ / | \ 158 / | |____/ / | \ 159 / | / | \ 160 / \ / \ 161 / \ / \ 162 / \ / \ 163 / \ / \ 164 / \ / \ 166 In that scenario, the server is public and wants to be discovered, 167 but the client is private. The adversary will be listening to the 168 network traffic, trying to identify the visitors' devices and their 169 activity. Identifying devices leads to identifying people, either 170 just for tracking people or as a preliminary to targeted attacks. 172 The requirement in that scenario is that the discovery activity 173 should not disclose the identity of the client. 175 2.2. Private client and private server 177 The second private discovery scenario involves a private client 178 connecting to a private server. A common example would be two people 179 engaging in a collaborative application in a public place, such as 180 an airport lounge.
182 ( Taking notes: 183 ( David is meeting 184 ( with Stuart 185 ~~~~~~~~~~~ 186 o 187 ___ ___ o ___ 188 / \ / \ _|___|_ 189 | | | | |* *| 190 \_/ __ __ \_/ \_/ 191 | / / Discovery \ \ | | 192 /|\ /_/ <-----------> \_\ /|\ /|\ 193 / | \__/ \__/ | \ / | \ 194 / | | \ / | \ 195 / | | \ / | \ 196 / \ / \ / \ 197 / \ / \ / \ 198 / \ / \ / \ 199 / \ / \ / \ 200 / \ / \ / \ 202 In that scenario, the collaborative application on one of the devices 203 will act as server, and the application on the other device will act 204 as client. The server wants to be discovered by the client, but has 205 no desire to be discovered by anyone else. The adversary will be 206 listening to network traffic, attempting to discover the identity of 207 devices as in the first scenario, and also attempting to discover the 208 patterns of traffic, as these patterns reveal the business and social 209 interactions between the owners of the devices. 211 The requirement in that scenario is that the discovery activity 212 should not disclose the identity of either the client or the server. 214 2.3. Wearable client and server 216 The third private discovery scenario involves wearable devices. A 217 typical example would be the watch on someone's wrist connecting to 218 the phone in their pocket. 220 ( Taking notes: 221 ( David is here. His watch is 222 ( talking to his phone 223 ~~~~~~~~~~~ 224 o 225 ___ o ___ 226 / \ _|___|_ 227 | | |* *| 228 \_/ \_/ 229 | _/ | 230 /|\ // /|\ 231 / | \__/ ^ / | \ 232 / |__ | Discovery / | \ 233 / |\ \ v / | \ 234 / \\_\ / \ 235 / \ / \ 236 / \ / \ 237 / \ / \ 238 / \ / \ 240 This third scenario is in many ways similar to the second scenario. 241 It involves two devices, one acting as server and the other acting as 242 client, and it leads to the same requirement that the discovery 243 traffic not disclose the identity of either the client or the server.
244 The main difference is that the devices are managed by a single 245 owner, which can lead to different methods for establishing secure 246 relations between the devices. There is also an added emphasis on 247 hiding the type of devices that the person wears. 249 In addition to tracking the identity of the owner of the devices, the 250 adversary is interested in the characteristics of the devices, such 251 as type, brand, and model. Identifying the type of device can lead 252 to further attacks, from theft to device-specific hacking. The 253 combination of devices worn by the same person will also provide a 254 "fingerprint" of the person, allowing identification. 256 3. Privacy Considerations 258 The discovery scenarios in Section 2 illustrate three 259 separate privacy requirements that vary based on use case: 261 1. Client identity privacy: Client identities are not leaked during 262 service discovery or use. 264 2. Multi-owner, mutual client and server identity privacy: Neither 265 client nor server identities are leaked during service discovery 266 or use. 268 3. Single-owner, mutual client and server identity privacy: 269 Identities of clients and servers owned and managed by the same 270 application, device, or user are not leaked during service 271 discovery or use. 273 In the remaining subsections, we describe aspects of DNS-SD that make 274 these requirements difficult to achieve in practice. 276 3.1. Privacy Implication of Publishing Service Instance Names 278 In the first phase of discovery, clients obtain all PTR records 279 associated with a service type in a given naming domain. Each PTR 280 record contains a Service Instance Name defined in Section 4 of 281 [RFC6763]: 283 Service Instance Name = <Instance> . <Service> . <Domain> 285 The <Instance> portion of the Service Instance Name is meant to 286 convey enough information for users of discovery clients to easily select the desired service instance.
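To make the structure concrete, the following sketch (illustrative only, not normative) splits a Service Instance Name into its three portions. It assumes a single-label <Domain> such as "local" and ignores the dot-escaping rules that RFC 6763 defines for the <Instance> label:

```python
def split_service_instance_name(name):
    """Split "<Instance> . <Service> . <Domain>" into its portions.

    Simplified sketch: assumes a single-label domain (e.g. "local")
    and no escaped dots inside the instance label; a real DNS-SD
    implementation must handle escaping per RFC 6763.
    """
    labels = name.split(".")
    domain = labels[-1]                 # e.g. "local"
    service = ".".join(labels[-3:-1])   # e.g. "_imageStore._tcp"
    instance = ".".join(labels[:-3])    # e.g. "Alice's Images"
    return instance, service, domain

# Example with a hypothetical instance name from a local network.
print(split_service_instance_name("Alice's Images._imageStore._tcp.local"))
```

Note that real instance labels may themselves contain dots, which is why robust parsers work from the right and honor the escaping rules.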
Nodes that use DNS-SD over mDNS 288 [RFC6762] in a mobile environment will rely on the specificity of the 289 instance name to identify the desired service instance. In our 290 example of users wanting to upload pictures to a laptop in an 291 Internet Cafe, the list of available service instances may look like: 293 Alice's Images . _imageStore._tcp . local 294 Alice's Mobile Phone . _presence._tcp . local 295 Alice's Notebook . _presence._tcp . local 296 Bob's Notebook . _presence._tcp . local 297 Carol's Notebook . _presence._tcp . local 299 Alice will see the list on her phone and understand intuitively that 300 she should pick the first item. The discovery will "just work". 302 However, DNS-SD/mDNS will reveal to anybody that Alice is currently 303 visiting the Internet Cafe. It further discloses the fact that she 304 uses two devices, shares an image store, and uses a chat application 305 supporting the _presence protocol on both of her devices. She might 306 currently chat with Bob or Carol, as they are also using a _presence 307 supporting chat application. This information is not just available 308 to devices actively browsing for and offering services, but to 309 anybody passively listening to the network traffic. 311 3.2. Privacy Implication of Publishing Node Names 313 The SRV records contain the DNS name of the node publishing the 314 service. Typical implementations construct this DNS name by 315 concatenating the "host name" of the node with the name of the local 316 domain. The privacy implications of this practice are reviewed in 317 [RFC8117]. Depending on naming practices, the host name is either a 318 strong identifier of the device, or at a minimum a partial 319 identifier. It enables tracking of both the device, and, by 320 extension, the device's owner. 322 3.3. Privacy Implication of Publishing Service Attributes 324 The TXT record's attribute-value pairs contain information on the 325 characteristics of the corresponding service instance. 
This in turn 326 reveals information about the devices that publish services. The 327 amount of information varies widely with the particular service and 328 its implementation: 330 o Some attributes, like the paper size available in a printer, are 331 the same on many devices, and thus only provide limited 332 information to a tracker. 334 o Attributes that have freeform values, such as the name of a 335 directory, may reveal much more information. 337 Combinations of attributes carry more identifying power than individual 338 attributes, and can potentially be used for "fingerprinting" a 339 specific device. 341 Information contained in TXT records not only breaches privacy by 342 making devices trackable, but might also directly contain private 343 information about the user. For instance, the _presence service 344 reveals the "chat status" to everyone in the same network. Users 345 might not be aware of that. 347 Further, TXT records often contain version information about services, 348 allowing potential attackers to identify devices running exploit-prone 349 versions of a certain service. 351 3.4. Device Fingerprinting 353 The combination of information published in DNS-SD has the potential 354 to provide a "fingerprint" of a specific device. Such information 355 includes: 357 o List of services published by the device, which can be retrieved 358 because the SRV records will point to the same host name. 360 o Specific attributes describing these services. 362 o Port numbers used by the services. 364 o Priority and weight attributes in the SRV records. 366 This combination of services and attributes will often be sufficient 367 to identify the version of the software running on a device. If a 368 device publishes many services with rich sets of attributes, the 369 combination may be sufficient to identify the specific device.
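To illustrate the point, the sketch below (using entirely hypothetical record values) shows how a passive observer might condense these items into a stable device fingerprint that survives a change of the friendly instance name:

```python
import hashlib
import json

def device_fingerprint(observed):
    """Hash a canonical encoding of a device's published DNS-SD data.

    `observed` is a hypothetical summary of what a passive observer
    can collect: service types, TXT attribute-value pairs, ports,
    and SRV priority/weight values.
    """
    canonical = json.dumps(observed, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Two sightings of the same (hypothetical) printer match, regardless
# of what the user named its service instances.
printer = {"services": ["_ipp._tcp", "_http._tcp"],
           "txt": {"pdl": "application/pdf", "ty": "Example Printer"},
           "ports": [631, 80],
           "srv": [[0, 0], [0, 0]]}

# A device differing in any one attribute yields a different value,
# so the fingerprint separates devices as well as tracking them.
other = dict(printer, ports=[631, 8080])
```

A few lines of code thus suffice to turn the published record set into a linkable identifier.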
371 A sometimes-heard argument is that devices providing services can be 372 identified by observing the local traffic, and that trying to hide 373 the presence of the service is futile. This argument, however, does 374 not carry much weight because: 376 1. Providing privacy at the discovery layer is essential for 377 enabling automatically configured privacy-preserving network 378 applications. Application layer protocols are not forced to 379 leverage the offered privacy, but if device tracking is not 380 prevented at the lower layers, including the service discovery 381 layer, obfuscating a certain service's protocol at the 382 application layer is futile. 384 2. Further, even if the application layer does not protect privacy, 385 it is much harder to record and analyse the unicast traffic (which most 386 applications will generate) than simply to listen to the 387 multicast messages sent by DNS-SD/mDNS. 389 The same argument can be extended to say that the pattern of services 390 offered by a device allows for fingerprinting the device. This may 391 or may not be true, since we can expect that services will be 392 designed or updated to avoid leaking fingerprints. In any case, the 393 design of the discovery service should avoid making a bad situation 394 worse, and should as much as possible avoid providing new 395 fingerprinting information. 397 3.5. Privacy Implication of Discovering Services 399 The consumers of services engage in discovery, and in doing so reveal 400 some information such as the list of services they are interested in 401 and the domains in which they are looking for the services. When the 402 clients select specific instances of services, they reveal their 403 preference for these instances. This can be benign if the service 404 type is very common, but it could be more problematic for sensitive 405 services, such as certain private messaging services.
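One mitigation that is sometimes suggested is to publish or query only a (publicly computable) hash of the sensitive name. The sketch below, with hypothetical values, shows why a plain hash offers little protection when the identifier is drawn from a small, well-known set such as the registered service types:

```python
import hashlib

def scramble(identifier):
    # Hypothetical "scrambled" identifier: a plain, publicly
    # computable SHA-256 hash of the sensitive name.
    return hashlib.sha256(identifier.encode()).hexdigest()

# Service types come from a small public registry, so a passive
# observer can simply hash every candidate until one matches.
published = scramble("_presence._tcp")
candidates = ["_ipp._tcp", "_http._tcp", "_presence._tcp", "_ssh._tcp"]
recovered = next(c for c in candidates if scramble(c) == published)
```

This is exactly the offline dictionary attack discussed under the security considerations: the "scrambled" identifier stays unique, so it remains both linkable and trivially invertible.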
407 One way to protect clients would be to somehow encrypt the requested 408 service types. Of course, just as we noted in Section 3.4, traffic 409 analysis can often reveal the service. 411 4. Security Considerations 413 For each of the operations described above, we must also consider the 414 security threats we are concerned about. 416 4.1. Authenticity, Integrity & Freshness 418 Can we trust the information we receive? Has it been modified in 419 flight by an adversary? Do we trust the source of the information? 420 Is the source of information fresh, i.e., not replayed? Freshness 421 may or may not be required depending on whether the discovery process 422 is meant to be online. In some cases, publishing discovery 423 information to a shared directory or registry, rather than to each 424 online recipient through a broadcast channel, may suffice. 426 4.2. Confidentiality 428 Confidentiality is about restricting information access to only 429 authorized individuals. Ideally this should only be the appropriate 430 trusted parties, though it can be challenging to define who the 431 "appropriate trusted parties" are. In some use cases, this may mean that 432 only mutually authenticated and trusting clients and servers can read 433 messages sent for one another. The "Discover" operation in 434 particular is often used to discover new entities that the device did 435 not previously know about. It may be tricky to work out how a device 436 can have an established trust relationship with a new entity it has 437 never previously communicated with. 439 4.3. Resistance to Dictionary Attacks 441 It can be tempting to use (publicly computable) hash functions to 442 obscure sensitive identifiers. This transforms a sensitive unique 443 identifier such as an email address into a "scrambled" (but still 444 unique) identifier. Unfortunately, simple solutions may be vulnerable 445 to offline dictionary attacks. 447 4.4.
Resistance to Denial-of-Service Attack 449 In any protocol where the receiver of messages has to perform 450 cryptographic operations on those messages, there is a risk of a 451 brute-force flooding attack causing the receiver to expend excessive 452 amounts of CPU time (and battery power) just processing and 453 discarding those messages. 455 4.5. Resistance to Sender Impersonation 457 Sender impersonation is an attack wherein messages such as service 458 offers are forged by entities who do not possess the corresponding 459 secret key material. These attacks may be used to learn the identity 460 of a communicating party, actively or passively. 462 4.6. Sender Deniability 464 Deniability of sender activity, e.g., of broadcasting a discovery 465 request, may be desirable or necessary in some use cases. This 466 property ensures that eavesdroppers cannot prove senders issued a 467 specific message destined for one or more peers. 469 5. Operational Considerations 471 5.1. Power Management 473 Many modern devices, especially battery-powered devices, use power 474 management techniques to conserve energy. One such technique is for 475 a device to transfer information about itself to a proxy, which will 476 act on behalf of the device for some functions, while the device 477 itself goes to sleep to reduce power consumption. When the proxy 478 determines that some action is required which only the device itself 479 can perform, the proxy may have some way (such as Ethernet "Magic 480 Packet") to wake the device. 482 In many cases, the device may not trust the network proxy 483 sufficiently to share all its confidential key material with the 484 proxy. This poses challenges for combining private discovery that 485 relies on per-query cryptographic operations, with energy-saving 486 techniques that rely on having (somewhat untrusted) network proxies 487 answer queries on behalf of sleeping devices. 489 5.2. 
Protocol Efficiency 491 Creating a discovery protocol that has the desired security 492 properties may result in a design that is not efficient. To perform 493 the necessary operations, the protocol may need to send and receive a 494 large number of network packets. This may consume an unreasonable 495 amount of network capacity (particularly problematic when it is shared 496 wireless spectrum), cause an unnecessary level of power consumption 497 (particularly problematic on battery devices), and may result in the 498 discovery process being slow. 500 It is a difficult challenge to design a discovery protocol that has 501 the property of obscuring the details of what it is doing from 502 unauthorized observers, while also managing to do that quickly and 503 efficiently. 505 5.3. Secure Initialization and Trust Models 507 One of the challenges implicit in the preceding discussions is that 508 whenever we discuss "trusted entities" versus "untrusted entities", 509 there needs to be some way that trust is initially established, to 510 convert an "untrusted entity" into a "trusted entity". 512 One way to establish trust between two entities is to trust a third 513 party to make that determination for us. For example, the X.509 514 certificates used by TLS and HTTPS web browsing are based on the 515 model of trusting a third party to tell us who to trust. There are 516 some difficulties in using this model for establishing trust for 517 service discovery uses. If we want to print our tax returns or 518 medical documents on "our" printer, then we need to know which 519 printer on the network we can trust to be "our" printer. All of the 520 printers we discover on the network may be legitimate printers made 521 by legitimate printer manufacturers, but not all of them are "our" 522 printer. A third-party certificate authority cannot tell us which 523 one of the printers is ours.
525 Another common way to establish a trust relationship is Trust On 526 First Use (TOFU), as used by ssh. The first usage is a Leap Of 527 Faith, but after that public keys are exchanged and at least we can 528 confirm that subsequent communications are with the same entity. In 529 today's world, where there may be attackers present even at that 530 first use, it would be preferable to be able to establish a trust 531 relationship without requiring an initial Leap Of Faith. 533 Techniques now exist for securely establishing a trust relationship 534 without requiring an initial Leap Of Faith. Trust can be established 535 securely using a short passphrase or PIN with cryptographic 536 algorithms such as Secure Remote Password (SRP) [RFC5054] or a 537 Password Authenticated Key Exchange like J-PAKE [RFC8236] using a 538 Schnorr Non-interactive Zero-Knowledge Proof [RFC8235]. 540 Such techniques require a user to enter the correct passphrase or PIN 541 in order for the cryptographic algorithms to establish working 542 communication. This avoids the human tendency to simply press the 543 "OK" button when asked if they want to do something on their 544 electronic device. It removes the human fallibility element from the 545 equation, and avoids the human users inadvertently sabotaging their 546 own security. 548 Using these techniques, if a user tries to print their tax return on 549 a printer they've never used before (even though the name looks 550 right) they'll be prompted to enter a pairing PIN, and the user 551 *cannot* ignore that warning. They can't just press an "OK" button. 552 They have to walk to the printer and read the displayed PIN and enter 553 it. And if the intended printer is not displaying a pairing PIN, or 554 is displaying a different pairing PIN, that means the user may be 555 being spoofed, and the connection will not succeed, and the failure 556 will not reveal any secret information to the attacker. 
As much as 557 the human desires to "just give me an OK button to make it print" 558 (and the attacker desires them to click that OK button too) the 559 cryptographic algorithms do not give the user the ability to opt out 560 of the security, and consequently do not give the attacker any way to 561 persuade the user to opt out of the security protections. 563 5.4. External Dependencies 565 Trust establishment may depend on external, and optionally online, 566 parties. Systems which have such a dependency may be attacked by 567 interfering with communication to external dependencies. Where 568 possible, such dependencies should be minimized. Local trust models 569 are best for secure initialization in the presence of active 570 attackers. 572 6. IANA Considerations 574 This draft does not require any IANA action. 576 7. Acknowledgments 578 This draft incorporates many contributions from Stuart Cheshire and 579 Chris Wood. 581 8. Informative References 583 [K17] Kaiser, D., "Efficient Privacy-Preserving 584 Configurationless Service Discovery Supporting Multi-Link 585 Networks", 2017, 586 . 588 [KW14a] Kaiser, D. and M. Waldvogel, "Adding Privacy to Multicast 589 DNS Service Discovery", DOI 10.1109/TrustCom.2014.107, 590 2014, . 593 [KW14b] Kaiser, D. and M. Waldvogel, "Efficient Privacy Preserving 594 Multicast DNS Service Discovery", 595 DOI 10.1109/HPCC.2014.141, 2014, 596 . 599 [RFC1033] Lottor, M., "Domain Administrators Operations Guide", 600 RFC 1033, DOI 10.17487/RFC1033, November 1987, 601 . 603 [RFC1034] Mockapetris, P., "Domain names - concepts and facilities", 604 STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987, 605 . 607 [RFC1035] Mockapetris, P., "Domain names - implementation and 608 specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, 609 November 1987, . 611 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 612 Requirement Levels", BCP 14, RFC 2119, 613 DOI 10.17487/RFC2119, March 1997, 614 . 
616 [RFC2782] Gulbrandsen, A., Vixie, P., and L. Esibov, "A DNS RR for 617 specifying the location of services (DNS SRV)", RFC 2782, 618 DOI 10.17487/RFC2782, February 2000, 619 . 621 [RFC5054] Taylor, D., Wu, T., Mavrogiannopoulos, N., and T. Perrin, 622 "Using the Secure Remote Password (SRP) Protocol for TLS 623 Authentication", RFC 5054, DOI 10.17487/RFC5054, November 624 2007, . 626 [RFC6762] Cheshire, S. and M. Krochmal, "Multicast DNS", RFC 6762, 627 DOI 10.17487/RFC6762, February 2013, 628 . 630 [RFC6763] Cheshire, S. and M. Krochmal, "DNS-Based Service 631 Discovery", RFC 6763, DOI 10.17487/RFC6763, February 2013, 632 . 634 [RFC8117] Huitema, C., Thaler, D., and R. Winter, "Current Hostname 635 Practice Considered Harmful", RFC 8117, 636 DOI 10.17487/RFC8117, March 2017, 637 . 639 [RFC8235] Hao, F., Ed., "Schnorr Non-interactive Zero-Knowledge 640 Proof", RFC 8235, DOI 10.17487/RFC8235, September 2017, 641 . 643 [RFC8236] Hao, F., Ed., "J-PAKE: Password-Authenticated Key Exchange 644 by Juggling", RFC 8236, DOI 10.17487/RFC8236, September 645 2017, . 647 Author's Address 649 Christian Huitema 650 Private Octopus Inc. 651 Friday Harbor, WA 98250 652 U.S.A. 654 Email: huitema@huitema.net 655 URI: http://privateoctopus.com/