Internet Engineering Task Force                                  L. Song
Internet-Draft                                                   S. Kerr
Intended status: Informational                                    D. Liu
Expires: May 18, 2017                         Beijing Internet Institute
                                                                P. Vixie
                                                                    TISF
                                                                    Kato
                                            Keio University/WIDE Project
                                                       November 14, 2016


         Experiences from Root Testbed in the Yeti DNS Project
                draft-song-yeti-testbed-experience-03

Abstract

   This document reports on and discusses issues in DNS root services,
   based on experiences from the experiments in the Yeti DNS Project.
   These issues include IPv6-only operation, the root DNS server naming
   scheme, DNSSEC KSK rollover, root server renumbering, multiple root
   zone signers, and so on.  The project was founded in May 2015 and
   has since built a live root DNS server system testbed with volunteer
   root server and resolver operators.

   REMOVE BEFORE PUBLICATION: Although this document is submitted as an
   independent submission, comments are welcome on the IETF DNSOP (DNS
   Operations) working group mailing list.  The source of the document
   is currently kept at GitHub [xml-file].
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 18, 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Problem Statement
   3.  Yeti Testbed and Experiment Setup
     3.1.  Distribution Master
       3.1.1.  Yeti root zone SOA SERIAL
       3.1.2.  Timing of Root Zone Fetch
       3.1.3.  Information Synchronization
     3.2.  Yeti Root Servers
     3.3.  Yeti Resolvers and Experimental Traffic
   4.  Experiments in Yeti Testbed
     4.1.  Root Naming Scheme
     4.2.  Multiple-Signers with Multi-ZSK
       4.2.1.  MZSK lab experiment
       4.2.2.  MZSK Yeti experiment
     4.3.  Root Renumbering Issue and Hint File Update
     4.4.  DNS Fragments
     4.5.  The KSK Rollover Experiment in Yeti
     4.6.  Bigger ZSK for Yeti
   5.  Other Technical findings and bugs
     5.1.  IPv6 fragments issue
     5.2.  Root name compression issue
     5.3.  SOA update delay issue
   6.  IANA Considerations
   7.  Acknowledgments
   8.  References
   Appendix A.  The Yeti root server in hint file
   Authors' Addresses

1.  Introduction

   [RFC1034] describes the domain name space as a tree structure.  The
   top level of the tree for the unique identifier system is served by
   the DNS root system, which has been operational for more than 25
   years and is pivotal to making the current Internet useful.  For
   stability reasons it is considered somewhat ossified: it is hard to
   test and deploy new ideas that would help it evolve to meet
   challenges such as IPv6-only operation, DNSSEC key/algorithm
   rollover [RFC4986], scaling issues, and so on.
   To make such tests more practical, it is also necessary to involve
   users' environments, which are highly diversified, in order to study
   the effects of the changes in question.

   To benefit Internet development as a whole, the Yeti DNS Project was
   proposed to build a parallel, experimental, live IPv6 DNS root
   system to discover the limits of DNS root name service and deliver
   useful technical output.  Possible research agenda items to be
   explored on this testbed include (but are not limited to):

   o  IPv6-only operation

   o  DNSSEC key rollover

   o  Renumbering issues

   o  Scalability issues

   o  Multiple zone file signers

   Starting in May 2015, three coordinators began to build this live
   experimental environment and called for participants.  At the time
   of writing, there are 25 Yeti root servers run by 16 operators, and
   experimental traffic from volunteers, universities, DNS vendors,
   mirrored traffic, non-Yeti traffic, and RIPE Atlas probes.

   Note that the Yeti DNS Project has complete fealty to IANA as the
   DNS name space manager.  All IANA top-level domain names are
   precisely expressed in the Yeti DNS system, including all TLD data
   and meta-data [Root-Zone-Database].  Therefore, the Yeti DNS Project
   is never an "alternative root" in the usual sense of that term.  It
   is expected to inform the IANA community, through peer-reviewed
   science, of future possibilities to consider for the IANA root DNS
   system.

   To make the technical activities in the Yeti DNS Project known, this
   document reports and discusses issues in root DNS services, based on
   experiences so far from the testbed construction and experiments in
   the Yeti DNS Project.

2.  Problem Statement

   Some problems and policy concerns over the DNS Root Server system
   stem from centralization from the point of view of DNS content
   consumers.  These include external dependencies and surveillance
   threats.

   o  External dependency: Currently, there are 12 DNS Root Server
      operators for the 13 Root Server letters, with more than 500
      instances deployed globally.  Yet compared to the number of
      connected devices, AS networks, and recursive DNS servers, the
      number of root instances may not be sufficient.  Connectivity
      loss between one autonomous network and all of the IANA root name
      servers usually results in the loss of not only global service
      but also local service within the local network, even when
      internal connectivity is perfect.

   o  Surveillance risk: Even when one or more root name server anycast
      instances are deployed locally or in a nearby network, the
      queries sent to the root servers carry DNS lookup information
      which enables root operators or other parties to analyze the DNS
      query traffic.  This is a kind of information leakage [RFC7626]
      which is, to some extent, not acceptable to some policy makers.

   People are often told that the current root system with 13 root
   servers cannot be extended to alleviate the above concerns, because
   it is limited to 13 by the current DNS protocol [ROOT-FAQ].  This
   restriction may be relaxed when EDNS can be considered completely
   deployed and/or when the root system no longer has to support IPv4.

   There are also technical issues in the areas of IPv6 and DNSSEC,
   both of which were introduced to the DNS root server system after it
   was created.  Renumbering DNS root servers creates technical issues
   as well.

   o  IPv6-only capability: Currently some DNS servers which support
      both A and AAAA (IPv4 and IPv6) records still do not respond to
      IPv6 queries.  IPv6 also introduces a larger minimum MTU (1280
      bytes) and a different fragmentation model [Fragmenting-IPv6].
      It is not clear whether DNS can survive without IPv4 (in an
      IPv6-only environment), or what impact an IPv6-only environment
      has on current DNS operations, especially in the DNS root server
      system.

   o  KSK rollover: Currently, IANA rolls the ZSK every six weeks, but
      the KSK has never been rolled as of this writing.  Is RFC 5011
      [RFC5011] widely supported by resolvers?  What about a larger
      key size or a different encryption algorithm?  Should the DNS
      packet size limitation (512 or 1280 bytes) still be respected
      during a KSK rollover today?  Some issues remain unknown.

   o  Renumbering issue: It is likely that root operators will change
      the IP addresses of their root servers from time to time.
      Currently a resolver can use the priming exchange
      [I-D.ietf-dnsop-resolver-priming] to update its memory in real
      time, or it can use out-of-band means to periodically obtain the
      current list of root NS servers.  However, root renumbering is
      still observed to be a concern which needs human coordination
      and intervention, and which therefore deserves exploration for
      automation.

3.  Yeti Testbed and Experiment Setup

   To use the Yeti testbed operationally, the information that is
   required for correct root name service is a matching set of the
   following:

   o  a root "hints file"

   o  the root zone apex NS record set

   o  the root zone's signing key

   o  the root zone trust anchor

   Although the Yeti DNS Project publishes strictly IANA information
   for TLD data and meta-data, it is necessary to use a special hints
   file that replaces the apex NS RRset with the Yeti authority name
   servers, which enables resolvers to find and stick to the Yeti root
   system.
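   Concretely, a Yeti hints file follows the same master-file format
   as the IANA named.root file, but lists the Yeti servers.  The
   fragment below is illustrative only: the server names are taken
   from later sections of this document, while the IPv6 addresses are
   placeholders from the 2001:db8::/32 documentation prefix, not the
   real ones (the actual entries are in Appendix A):

```
; Illustrative Yeti hints fragment; addresses are documentation
; placeholders, not the operational ones.
.                      3600000   NS    bii.dns-lab.net.
bii.dns-lab.net.       3600000   AAAA  2001:db8::1
.                      3600000   NS    yeti-ns.wide.ad.jp.
yeti-ns.wide.ad.jp.    3600000   AAAA  2001:db8::2
.                      3600000   NS    yeti-ns.tisf.net.
yeti-ns.tisf.net.      3600000   AAAA  2001:db8::3
```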
   The figure below demonstrates the topology of Yeti and the basic
   data flow, which consists of the Yeti distribution masters, Yeti
   root servers, and Yeti resolvers:

                          +-----------------------+
                       +--+    IANA Root Zone     +--+
                       |  +-----------+-----------+  |
   +-----------+       |              |              |   IANA root zone
   |   Yeti    |       |              |              |
   |  Traffic  |    +--v---+       +--v---+       +--v---+
   | Collection|    | BII  |       | WIDE |       | TISF |
   |           |    |  DM  |       |  DM  |       |  DM  |
   +---+----+--+    +--+---+       +--+---+       +--+---+
       ^    ^          |              |              |
       |    |          |              |              |   Yeti root zone
       |    |          v              v              v
       |    |       +------+       +------+       +------+
       |    +-------+ Yeti |       | Yeti | . . . | Yeti |
       |            | Root |       | Root |       | Root |
       |            +--+---+       +--+---+       +--+---+
       |               |              |              |
       |  pcap         ^              ^              ^   DNS lookup
       |  upload       |              |              |
       |            +--+--------------+--------------+--+
       +------------+           Yeti Resolvers          |
                    |          (with Yeti Hint)         |
                    +-----------------------------------+

             Figure 1.  The topology of the Yeti testbed

3.1.  Distribution Master

   As shown in Figure 1, the Yeti root system takes the IANA root zone
   and makes the minimal changes needed to serve the zone from the
   Yeti root servers instead of the IANA root servers.  In Yeti, this
   modified root zone is generated by the Yeti Distribution Masters
   (DMs), which provide it to the Yeti root servers.

   The zone generation process is:

   o  a DM downloads the latest IANA root zone at a scheduled time

   o  the DM modifies the zone, replacing the IANA root servers with
      the Yeti ones

   o  the DM signs the new Yeti root zone with the Yeti keys

   o  the DM publishes the new Yeti root zone to the Yeti root servers

   While in principle this could be done by a single DM, Yeti uses a
   set of three DMs to avoid any sense that the Yeti DNS Project is
   run by a single organization.  Each of the three DMs independently
   fetches the root zone from IANA, signs it, and publishes the latest
   zone data to the Yeti root servers.
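   The steps above, together with the serial-gating rule described in
   Section 3.1.1, can be sketched as follows.  This is a minimal
   illustration, not the DM software: the fetch, signing, and
   notification steps are stubbed out, and the record structures are
   simplified.

```python
# Sketch of one DM cycle: rewrite the apex NS RRset and publish only
# when a new IANA SOA SERIAL is observed.  Signing/notifying are stubs.
YETI_SERVERS = ["bii.dns-lab.net.", "yeti-ns.wide.ad.jp.", "yeti-ns.tisf.net."]

def rewrite_apex(records):
    """Replace the IANA apex NS RRset (and root-servers.net glue)
    with the Yeti authority name servers."""
    kept = [r for r in records
            if not (r["name"] == "." and r["type"] == "NS")
            and not r["name"].endswith("root-servers.net.")]
    return kept + [{"name": ".", "type": "NS", "rdata": ns}
                   for ns in YETI_SERVERS]

def dm_cycle(iana_serial, last_serial, records):
    """Generate a Yeti zone only if the IANA zone changed; the Yeti
    SERIAL is the IANA SERIAL, copied directly (Section 3.1.1)."""
    if iana_serial <= last_serial:
        return None                     # no new IANA zone: do nothing
    yeti_zone = rewrite_apex(records)
    # sign_zone(yeti_zone); notify_yeti_root_servers()  -- stubbed out
    return iana_serial

iana = [{"name": ".", "type": "NS", "rdata": "a.root-servers.net."},
        {"name": "a.root-servers.net.", "type": "AAAA",
         "rdata": "2001:503:ba3e::2:30"}]
print(dm_cycle(2015111801, 2015111800, iana))   # -> 2015111801
print(dm_cycle(2015111801, 2015111801, iana))   # -> None
```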
   At the same time, these DMs coordinate their work so that the
   resulting Yeti root zone is always consistent.  There are two
   aspects of coordination between the three DMs: timing and
   information synchronization.

3.1.1.  Yeti root zone SOA SERIAL

   Consistency with the IANA root zone, apart from the apex records,
   is one of the most important points for the project.  As part of
   the Yeti DM design, the Yeti SOA SERIAL, which reflects changes to
   the Yeti root zone, is one factor to be considered.

   Currently the IANA SOA SERIAL for the root zone is in the form
   YYYYMMDDNN, like 2015111801.  In the Yeti root system, the IANA SOA
   SERIAL is copied directly into the Yeti SOA SERIAL.  So once the
   IANA root zone has changed with a new SOA SERIAL, a new version of
   the Yeti root zone is generated with the same SOA SERIAL.

   There is a case in Yeti DM operation where, when a new Yeti root
   server is added, DM operators change the Yeti root zone without
   changing the SOA SERIAL, which introduces inconsistency into the
   Yeti root system.  To avoid such inconsistency, the DMs publish
   changes only when a new IANA SOA SERIAL is observed.

   An analysis of IANA convention shows that the IANA SOA SERIAL
   changes twice a day (NN=00, 01).  Since October 2007 the maximum
   value of NN has been 03, while NN=02 has been observed 13 times.

3.1.2.  Timing of Root Zone Fetch

   Yeti root system operators do not receive NOTIFY messages when the
   IANA root zone is updated, so each Yeti DM checks the root zone
   serial periodically.  At the time of writing, each Yeti DM checks
   hourly to see if the IANA root zone has changed, on the following
   schedule:

                       +-------------+---------+
                       | DM Operator | Time    |
                       +-------------+---------+
                       | BII         | hour+00 |
                       | WIDE        | hour+20 |
                       | TISF        | hour+40 |
                       +-------------+---------+

   Note that the Yeti DMs could check the IANA root zone more
   frequently (every minute, for example).
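   The staggered schedule in the table above can be sketched as
   follows; the schedule values come from the table, and the helper
   function is purely illustrative:

```python
import datetime

# Scheduled check minute for each DM, taken from the table above.
DM_CHECK_MINUTE = {"BII": 0, "WIDE": 20, "TISF": 40}

def next_check(dm, now):
    """Return the next time the given DM will check the IANA serial."""
    candidate = now.replace(minute=DM_CHECK_MINUTE[dm],
                            second=0, microsecond=0)
    if candidate <= now:                 # this hour's slot already passed
        candidate += datetime.timedelta(hours=1)
    return candidate

now = datetime.datetime(2016, 11, 14, 10, 25)
print(next_check("TISF", now))   # -> 2016-11-14 10:40:00
print(next_check("WIDE", now))   # -> 2016-11-14 11:20:00
```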
   A test done by a Yeti participant shows that the delay between an
   IANA root zone update reaching the first IANA root server and the
   last one is around 20 minutes.  Once a Yeti DM fetches the new root
   zone, it notifies all the Yeti root servers with the new SOA serial
   number.  So normally a Yeti root server is notified less than 20
   minutes after a new IANA root zone is generated.  Ideally, if an
   IANA DM notified the Yeti DMs, the Yeti root zone would be updated
   in a more timely manner.

3.1.3.  Information Synchronization

   Given the three DMs operating in the Yeti root system, it is
   necessary to prevent any inconsistency caused by human mistakes in
   operation.  The straightforward method is to share the same
   parameters used to produce the Yeti root zone.  These parameters
   include the following set of files:

   o  the list of Yeti root servers, including:

      *  public IPv6 address and host name

      *  IPv6 addresses originating zone transfer

      *  IPv6 addresses to send DNS NOTIFY to

   o  the ZSKs used to sign the root

   o  the KSK used to sign the root

   o  the SERIAL when this information becomes active

   The operation is simple: each DM operator synchronizes the files
   containing the information needed to produce the Yeti root zone.
   When a change is desired (such as adding a new server or rolling
   the ZSK), a DM operator updates the local files and pushes them to
   the other DMs.  A SOA SERIAL in the future is chosen for when the
   changes become active.

3.2.  Yeti Root Servers

   In the Yeti root system, authoritative servers donated and operated
   by Yeti volunteers are configured as slaves to the Yeti DMs.  At
   the time of writing, there are 25 Yeti root servers distributed
   around the world, one of which uses an IDN as its name (see the
   Yeti hint file in Appendix A).  As one of the operational research
   goals, all authoritative servers are required to work in an
   IPv6-only environment.
   In addition, different from the IANA root, the Yeti root servers
   serve only the Yeti root zone; neither the root-servers.org zone
   nor the .arpa zone is served.

   Since Yeti is a scientific research project, it needs to capture
   DNS traffic sent to the Yeti root servers for later analysis.
   Today some servers use dnscap, a DNS-specific tool that produces
   pcap files.  There are several versions of dnscap floating around;
   some people use the VeriSign one.  Since dnscap loses packets in
   some cases (tested on a Linux kernel), some people use pcapdump
   instead, which requires the patch attached to the bug report
   [pcapdump-bug-report].

   System diversity is also a requirement, and it is observed among
   the current 25 Yeti root servers.  Here are the results of a survey
   regarding the machines, operating systems, and DNS software:

   o  Machine: 20 of the 25 root server operators use a VPS to provide
      service.

   o  OS: 6 operators use Linux (including Ubuntu, Debian, CentOS, and
      ArchLinux), 5 operators use FreeBSD, and 1 uses NetBSD.  The
      other servers are unknown.

   o  DNS software: 18 of the 25 root servers use BIND (varying from
      9.9.7 to 9.10.3), 4 use NSD (4.10 and 4.15), 2 use Knot (2.0.1
      and 2.1.0), and one uses Bundy (1.2.0).

3.3.  Yeti Resolvers and Experimental Traffic

   On the client side of the Yeti DNS Project, there are DNS resolvers
   with IPv6 support, updated with the Yeti "hints" file to use the
   Yeti root servers instead of the IANA root servers, and using the
   Yeti KSK as a trust anchor.  Because the Yeti KSK rollover
   experiment is expected to change the key often (typically every
   three months), resolver operators are required to configure their
   resolvers to comply with RFC 5011 for automatic updates.
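   On an Unbound resolver, for example, such a configuration might
   look as follows.  This is an illustrative sketch, not an official
   configuration; the file paths are placeholders, and the hints and
   key files are assumed to hold the Yeti data:

```
server:
    do-ip6: yes
    # Yeti hints replace the IANA root hints
    root-hints: "/etc/unbound/yeti-hints"
    # RFC 5011 tracking of the Yeti KSK; the file must be readable
    # and writable by the unbound user
    auto-trust-anchor-file: "/var/lib/unbound/yeti-root.key"
```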
   For Yeti resolvers, it is also interesting to try mechanisms by
   which end-system resolvers signal their DNSSEC key status to a
   server, as described in [I-D.wessels-edns-key-tag] and
   [I-D.wkumari-dnsop-trust-management].

   Participants and volunteers are expected from among individual
   researchers, university labs, companies and institutes, and vendors
   (for example, DNS software implementers), developers of CPE and IoT
   devices, and middlebox developers who can test their products and
   connect their own testbeds into the Yeti testbed.  Resolvers
   donated by Yeti volunteers are required to be configured with the
   Yeti hint file and the Yeti DNSSEC KSK.  A Yeti resolver must be
   able to speak both IPv4 and IPv6, given that not all authoritative
   servers on the Internet are IPv6 capable.

   At the time of writing, several universities and labs have joined
   and contributed a certain amount of traffic to the Yeti testbed.
   To introduce the desired volume of experimental traffic, the Yeti
   Project adopts two alternative ways to increase the experimental
   traffic in the Yeti testbed and check the functionality of the Yeti
   root system.

   One approach is to mirror real DNS queries to the IANA root system
   by an off-path method and replay them into the Yeti testbed; this
   is implemented by some Yeti root server operators.  Another
   approach is to use traffic-generating tools such as RIPE Atlas
   probes to generate specific queries against the Yeti servers.

4.  Experiments in Yeti Testbed

   The main goal of the Yeti DNS Project is to act as an experimental
   network.  Experiments will be conducted on this network.  In order
   to make the findings that result from these experiments more
   rigorous, an experiment protocol is proposed.

   A Yeti experiment goes through four phases:

   o  Proposal.  The first step is to make a proposal.  It is
      discussed, and if it is accepted by the Yeti participants, the
      experiment can proceed to the next phase.
   o  Lab Test.  The next phase is to run a version of the experiment
      in a controlled environment.  The goal is to check for problems
      such as software crashes or protocol errors that may cause
      failures on the Yeti network, before putting the experiment onto
      the experimental network.

   o  Yeti Test.  The next phase is actually running the experiment on
      the Yeti network.  The details of this will depend on the
      experiment.  It must be coordinated with the Yeti participants.

   o  Report of Findings.  When the experiment is completed, a report
      of its findings should be made.  It need not be an extensive
      document.

   In this section, we introduce some experiments implemented and
   planned in the Yeti DNS Project.

4.1.  Root Naming Scheme

   In root server history, the naming scheme for individual root
   servers was not fixed.  The current IANA root adopts the
   [a-m].root-servers.net naming scheme to represent the 13 servers,
   which are labeled with the letters A to M.  Authoritativeness is
   achieved by hosting the "root-servers.net" zone on every root
   server.  One reason behind this naming scheme is that DNS label
   compression can be used to produce a smaller DNS response that fits
   within 512 bytes.  In the Yeti testbed, however, there is a chance
   to design and test alternative naming schemes that address some
   issues with the current one:

   o  Currently root-servers.net is not signed.  Kaminsky-like attacks
      on this important root server information are still possible.

   o  The dependency on a single name (i.e., .net) makes the root
      system fragile in the extreme case where all .net servers are
      down or unreachable while the root servers themselves are still
      alive.

   Currently, there are two naming schemes proposed in the Yeti
   Project.  One is to use separate, normal domains for the root
   servers (Appendix A).  One consideration is to get rid of the
   dependency on a single name.
   Another consideration is to intentionally produce larger priming
   responses, because of the lower name compression efficiency.  Note
   that the Yeti root has a priming response of 1031 bytes as of this
   writing.

   There is also an issue with this naming scheme in that the priming
   response may not contain all the glue records for the Yeti root
   servers.  It is documented as a technical finding
   [Yeti-glue-issue].  There are two approaches to solve the issue:
   one is to patch BIND 9 to include the glue records in the
   additional section.  The other is to add a zone file for each root
   server and answer for all of them at each Yeti server.  That means
   each Yeti root server would have a small zone file for
   "bii.dns-lab.net", "yeti-ns.wide.ad.jp", "yeti-ns.tisf.net", and so
   on.

   Another naming scheme under Yeti lab test is to use a special
   non-delegated TLD, like .yeti-dns, for the root server operated by
   BII.  The benefits of a non-delegated TLD naming scheme are in four
   aspects: 1) the response to a priming query is protected by DNSSEC;
   2) it meets the political consideration that the zone authoritative
   for the root server names is not delegated to, and does not belong
   to, any particular company or organization other than IANA; 3) it
   reduces the dependency of root server names on other DNS services;
   and 4) it mitigates some kinds of cache poisoning activities.

   The obvious concern with this naming scheme is the size of the
   signed response, with an RRSIG for each root server and optionally
   the DNSKEY RRs.  Below is a lab test result showing the different
   sizes of the priming response, in octets: 1) with no additional
   data; 2) with RRSIGs in the additional section; and 3) with
   DNSKEY+RRSIG in the additional section (7 keys, as in the MZSK
   experiment described in Section 4.2):

          +--------------------+--------+----------------+
          | No additional data | RRSIG  | RRSIG + DNSKEY |
          +--------------------+--------+----------------+
          | 753                | 3296   | 4004           |
          +--------------------+--------+----------------+

   We found that modifying the IANA root zone by adding a new TLD is
   quite controversial, even for a scientific purpose.  There have
   been non-trivial discussions of this issue on the Yeti discuss
   mailing list, regarding the proposals of .yeti-dns for root server
   naming and .local for a new AS112 [I-D.bortzmeyer-dname-root].  It
   is argued that this kind of experiment should be based on community
   consensus from technical bodies like the IETF, and in some cases be
   operated for a limited duration only.

   Note that a document named "Technical Analysis of the Naming Scheme
   used for Individual Root Servers" is being developed in the RSSAC
   Caucus and will be published soon.

4.2.  Multiple-Signers with Multi-ZSK

   According to the problem statement of the Yeti DNS Project, more
   independent participants and operators of the root system are
   desirable.  As the name implies, multi-ZSK (MZSK) mode introduces
   different ZSKs sharing a single unique KSK, as opposed to the IANA
   root system (which uses a single ZSK to sign the root zone).  On
   the condition of good availability and consistency of the root
   system, the multi-ZSK proposal is designed to give each DM operator
   enough room to manage their own ZSK, by choosing a different ZSK,
   key length, duration, and so on; even the encryption algorithm may
   vary (although this may cause some problems with older versions of
   the Unbound resolver).

4.2.1.  MZSK lab experiment

   In the lab test phase, we simply set up two root servers (A and B)
   and a resolver switching between them (BIND only).  Root A and Root
   B used their own ZSKs to sign the zone.  It was proved that
   multi-ZSK works by adding multiple ZSKs to the root zone.
   As a result, the resolver caches the whole key set, rather than a
   single ZSK, and can validate the data regardless of whether it was
   signed by Root A or Root B.  We also tested Unbound, and the test
   concluded successfully with more than 10 DMs and 10 ZSKs.

   Although more DMs and ZSKs can be added to the test, adding more
   ZSKs to the root zone enlarges the DNS response size for DNSKEY
   queries, which may be a concern given the limitation on DNS packet
   size.  The current IANA root server operators are inclined to keep
   packets as small as possible, so the numbers of DMs and ZSKs will
   be parameters decided based on operational experience.  In the
   current Yeti root testbed, there are 3 DMs, each with a separate
   ZSK.

4.2.2.  MZSK Yeti experiment

   After the lab test, the MZSK experiment was conducted on the Yeti
   platform in two phases:

   o  Phase 1.  In the first phase, we confirmed that using multiple
      ZSKs works in the wild, and ensured that using the maximum
      number of ZSKs continues to work on the resolver side.  Here one
      of the DMs (BII) created and added 5 ZSKs using the existing
      synchronization mechanism.  (If all 3 DM ZSKs are rolling, there
      are 6 keys in total; to reach this number, we added 5.)

   o  Phase 2.  In the second phase, we delegated the management of
      the ZSKs so that each DM creates and publishes a separate ZSK.
      For this phase, a modified zone generation protocol and software
      were used [Yeti-DM-Sync-MZSK], which allows a DM to sign the
      zone without access to the private parts of the ZSKs generated
      by the other DMs.  In this phase we roll all three ZSKs
      separately.

   The MZSK experiment was finished by the end of April 2016.  Almost
   everything appears to be working.
   However, there have been some findings [Experiment-MZSK-notes],
   including the discovery that IPv6 fragments are not forwarded on an
   Ethernet bridge with the netfilter ip6_tables module loaded on one
   authority server, and an issue with IXFR falling back to AXFR
   because of the multiple signers, which is described as a problem
   statement in [I-D.song-dnsop-ixfr-fallback].

4.3.  Root Renumbering Issue and Hint File Update

   With the recent renumbering of the H root server's IP address,
   there has been a discussion of ways that resolvers can update their
   hint files.  Traditional ways include fetching the file over the
   FTP protocol (e.g., with a wget) and using dig to double-check the
   servers' addresses manually.  Either way depends on manual
   operation, and as a result there are many old machines that have
   never updated their hint files.  As evidence, thirteen years after
   the completion of its renumbering, the "Old J-Root" address can
   still be observed receiving DNS query traffic
   [Renumbering-J-Root].

   This experiment proposal aims to find an automatic way to update
   the hint file.  The already-completed work is a shell script tool
   that automatically updates the hint file on the file system, with
   DNSSEC and trust anchor validation [Hintfile-Auto-Update].

   The methodology is straightforward.  The tool first queries the NS
   list for the "." domain and queries the A and AAAA records for
   every name on the NS list.  It requires DNSSEC validation for both
   the NS list and the A and AAAA answers.  After getting all the
   answers, the tool compares the new hint file with the old one.  If
   there is a difference, it renames the old one with a time-stamp and
   replaces the old one with the new one.  Otherwise the tool deletes
   the new hint file and nothing is changed.

   Note that in the current IANA root system the server names in the
   root NS record are not signed, so the tool cannot fully work in the
   production network.
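   The update logic described above can be sketched as follows.  The
   real tool is a shell script; this Python sketch uses a hypothetical
   injected query function, assumed to return only DNSSEC-validated
   answers:

```python
import os
import time

def build_hints(query):
    """Build hint-file text from a query function that returns
    DNSSEC-validated answers (a stand-in for dig with validation)."""
    lines = []
    for ns in sorted(query(".", "NS")):
        lines.append(f".\t3600000\tNS\t{ns}")
        for rrtype in ("A", "AAAA"):
            for addr in query(ns, rrtype):
                lines.append(f"{ns}\t3600000\t{rrtype}\t{addr}")
    return "\n".join(lines) + "\n"

def update_hint_file(path, new_text):
    """Replace the hint file only if its content changed; keep the
    old copy under a time-stamped name, as the tool does."""
    old_text = open(path).read() if os.path.exists(path) else ""
    if new_text == old_text:
        return False                    # no difference: discard new data
    if old_text:
        os.rename(path, f"{path}.{int(time.time())}")
    with open(path, "w") as f:
        f.write(new_text)
    return True
```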
   In the Yeti root system, some of the names listed in the NS record
   are signed, which provides a test environment for such a proposal.

4.4.  DNS Fragments

   When considering new DNS protocol features and operations, there is
   always a hard limit on the DNS packet size.  Take Yeti as an
   example: adding more root servers, using the Yeti naming scheme,
   rolling the KSK, and multi-ZSK all increase the packet size.  The
   fear of large DNS packets mainly stems from two aspects: one is IP
   fragmentation and the other is frequent fallback to TCP.

   Fragmentation may cause serious issues: if a single fragment is
   lost, the entire packet is lost and the query times out.  If a
   fragment is dropped by a middlebox, the query always fails, and
   name resolution fails unless the resolver falls back to TCP.  It is
   known at this moment that only a limited number of security
   middlebox implementations support IPv6 fragments.

   A possible solution is to split a single DNS message across
   multiple UDP datagrams.  This DNS fragments mechanism is documented
   in [I-D.muks-dns-message-fragments] as an experimental IETF draft.

4.5.  The KSK Rollover Experiment in Yeti

   The Yeti DNS Project provides a good basis for conducting a
   real-world experiment of a KSK rollover in the root zone.  It is
   not a perfect analogy to the IANA root, because all of the
   resolvers in the Yeti experiment are "opt-in" and are presumably
   run by administrators who are interested in the DNS and
   knowledgeable about it.  Still, it can inform the IANA root KSK
   roll.

   The IANA root KSK has not been rolled as of this writing.  ICANN
   put together a design team to analyze the problem and make
   recommendations, and the design team put together a plan
   [ICANN-ROOT-ROLL].  The Yeti DNS Project may evaluate this scenario
   for an experimental KSK roll.
The experiment may not be identical, since the timelines laid out in the current IANA plan are very long, while the Yeti DNS Project would like to conduct the experiment in a shorter time, which may be considerably more difficult.

The Yeti KSK has been rolled twice in the Yeti testbed as of this writing. In the first trial, the old KSK was made inactive and the new key active one week after the new key was created, and the old key was deleted another week later, entirely ignoring the hold-down timer specified in RFC 5011. Because the hold-down timer was not honored on the server side, some clients (such as Unbound) received SERVFAIL answers (for example, for dig without +cd), because the new key was still in the AddPend state when the old key became inactive. The lesson from the first trial is that both server and client must comply with RFC 5011.

For the second KSK rollover, the servers waited 30 days after the new KSK was published in the root zone. Unlike the ICANN rollover plan, the old key was revoked as soon as the new key became active; since we did not want to wait too long, we shortened the publish and delete periods on the server side. As of this writing, only one bug [KROLL-ISSUE] was spotted during the second Yeti KSK rollover, on one Yeti resolver running BIND 9.10.4-P2. The resolver had been configured with multiple views before the KSK rollover, and DNSSEC failures were reported once a new view was added for new users after the key had been rolled. The BIND 9.10.4-P2 manual states that, unlike trusted-keys, managed-keys may only be set at the top level of named.conf, not within a view, which suggests that managed-keys cannot be configured per view. However, right after the managed-keys were set for the new view, DNSSEC validation worked for that view. Our conclusion for this issue is that BIND multiple-view operation currently needs extra guidance for RFC 5011.
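The timing rule that the first trial violated can be illustrated with a small sketch of the RFC 5011 add hold-down check. The function names are ours, and a real validator also tracks revocation and the remove hold-down; this only shows why deleting the old key one week after publishing the new one strands RFC 5011 clients.

```python
from datetime import datetime, timedelta

# RFC 5011 add hold-down time: a validator must observe a new key for
# at least 30 days before promoting it from AddPend to Valid.
ADD_HOLD_DOWN = timedelta(days=30)

def new_key_trusted(first_seen, now):
    """True once an RFC 5011 validator would have moved the new key
    from the AddPend state to the Valid state."""
    return now - first_seen >= ADD_HOLD_DOWN

def safe_to_retire_old_ksk(new_key_first_seen, now):
    """Retiring the old KSK before the new key is trusted leaves
    validators with no usable trust anchor, producing SERVFAILs as in
    the first Yeti rollover trial."""
    return new_key_trusted(new_key_first_seen, now)
```

With the first trial's one-week schedule this check fails; the second trial's 30-day wait satisfies it.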
The managed-keys should be set carefully for each view when it is created during a KSK rollover.

Another open question for a KSK rollover is how an authority server can know that resolvers are ready for RFC 5011. Two Internet-Drafts, [I-D.wessels-edns-key-tag] and [I-D.wkumari-dnsop-trust-management], try to address this problem. In addition, a compliant resolver implementation may fail without any complaint if it is not correctly configured; in the case of Unbound 1.5.8, the trust anchor file must be writable, not merely readable, by the DNS user for automatic updates to work [auto-trust-anchor-file].

4.6.  Bigger ZSK for Yeti

The IANA root system currently uses a 1024-bit ZSK, a key size that is no longer recommended cryptographic practice. Verisign announced at the 24th DNS-OARC workshop that the IANA root zone ZSK will be increased from 1024 bits to 2048 bits in 2016. However, this has not been fully tested in a real environment.

A bigger key tends to produce larger responses, which require IP fragmentation and are commonly considered harmful to the DNS. In the Yeti DNS Project, it is desirable to test bigger responses in many respects. The Big ZSK experiment is designed to test operating the Yeti root with a 2048-bit ZSK. Traffic is monitored before and after the ZSK is lengthened to see whether there are any changes, such as a drop-off in packets or an increase in retries. The experiment is currently in the data-analysis phase.

5.  Other Technical Findings and Bugs

Besides the experiments with specific goals and procedures, some unexpected bugs have been reported. It is worthwhile to record them as technical findings from the Yeti DNS Project.

5.1.  IPv6 Fragments Issue

Two cases were reported in the Yeti testbed in which Yeti root servers failed to pull the zone from a Distribution Master via AXFR/IXFR. Troubleshooting revealed one fact on the client side and another on the server side.
On the client side, some operating systems cannot handle IPv6 fragments correctly, so AXFR/IXFR over TCP fails. The bug covers several OSs and one VM platform (listed below).

+-----------------------+-----------------+
| OS                    | VM              |
+-----------------------+-----------------+
| NetBSD 6.1 and 7.0RC1 | VMware ESXi 5.5 |
| FreeBSD 10.0          |                 |
| Debian 3.2            |                 |
+-----------------------+-----------------+

On the server side, one TCP segment of the AXFR/IXFR was fragmented at the IP layer, resulting in two fragmented packets. This odd behavior is documented in [I-D.andrews-tcp-and-ipv6-use-minmtu], which reports that some TCP implementations running over IPv6 neglect to check the IPV6_USE_MIN_MTU value when performing MSS negotiation and when constructing a TCP segment. As a result, the TCP MSS option is set to 1440 bytes, while the IP layer limits each packet to 1280 bytes and fragments the segment into two packets.

Although the server-side behavior is not a protocol error in itself, it triggers the client-side failure described above and therefore deserves attention in IPv6 operations.

5.2.  Root Name Compression Issue

[RFC1035] specifies a DNS message compression scheme that allows a domain name in a message to be represented as: 1) a sequence of labels ending in a zero octet, 2) a pointer, or 3) a sequence of labels ending with a pointer. It is designed to save room in DNS packets.

In the Yeti testbed, however, it was found that the Knot 2.0 server compresses even the root name: in a DNS message, the name of the root (a single zero octet) is replaced by a 2-octet pointer. This is legal, but it breaks some tools (the Go DNS library, in this bug report) that do not expect name compression for the root. Both Knot and the Go DNS library have since fixed the bug.

5.3.  SOA Update Delay Issue

One server on the Yeti testbed was observed to have a bug in SOA updates, with delays of more than 10 hours. It runs Bundy 1.2.0 on FreeBSD 10.2-RELEASE. A workaround is to check the DM's SOA status on a regular basis, but further work is needed to find the bug in the code path and improve the software.

6.  IANA Considerations

This document requires no action from the IANA.

7.  Acknowledgments

The editors fully acknowledge that this memo is based on the joint work and discussions of many people on the mailing list of the Yeti DNS Project [Yeti-DNS-Project]. Some of them are in effect co-authors of this memo but could not be listed owing to the limit on the number of co-authors in the headline. The people who helped construct the Yeti testbed and contributed to this document deserve the credit, and their effort is acknowledged here by name:

Tomohiro Ishihara, Antonio Prado, Stephane Bortzmeyer, Mickael Jouanne, Pierre Beyssac, Joao Damas, Pavel Khramtsov, Ma Yan, Otmar Lendl, Praveen Misra, Carsten Strotmann, Edwin Gomez, Remi Gacogne, Guillaume de Lafond, Yves Bovard, Hugo Salgado-Hernandez, Andreas Schulze, Li Zhen, Daobiao Gong, Runxia Wan.

Thanks also to all anonymous Yeti participants and volunteers who contribute Yeti resolvers to make the experimental testbed functional and workable.

8.  References

[auto-trust-anchor-file]
           "Unbound should test that auto-* files are writable", 2016, .

[Experiment-MZSK-notes]
           "MZSK Experiment Notes", 2016, .

[Fragmenting-IPv6]
           Huston, G., "Fragmenting-IPv6", May 2016, .

[Hintfile-Auto-Update]
           "Hintfile Auto Update", 2015, .

[I-D.andrews-tcp-and-ipv6-use-minmtu]
           Andrews, M., "TCP Fails To Respect IPV6_USE_MIN_MTU", draft-andrews-tcp-and-ipv6-use-minmtu-04 (work in progress), October 2015.
[I-D.bortzmeyer-dname-root]
           Bortzmeyer, S., "Using DNAME in the root for the delegation of special-use TLDs", draft-bortzmeyer-dname-root-00 (work in progress), April 2016.

[I-D.ietf-dnsop-resolver-priming]
           Koch, P., Larson, M., and P. Hoffman, "Initializing a DNS Resolver with Priming Queries", draft-ietf-dnsop-resolver-priming-07 (work in progress), March 2016.

[I-D.muks-dns-message-fragments]
           Sivaraman, M., Kerr, S., and D. Song, "DNS message fragments", draft-muks-dns-message-fragments-00 (work in progress), July 2015.

[I-D.song-dnsop-ixfr-fallback]
           Song, L., "An IXFR Fallback to AXFR Case", draft-song-dnsop-ixfr-fallback-01 (work in progress), May 2016.

[I-D.wessels-edns-key-tag]
           Wessels, D., "The EDNS Key Tag Option", draft-wessels-edns-key-tag-00 (work in progress), July 2015.

[I-D.wkumari-dnsop-trust-management]
           Kumari, W., Huston, G., Hunt, E., and R. Arends, "Signalling of DNS Security (DNSSEC) Trust Anchors", draft-wkumari-dnsop-trust-management-01 (work in progress), October 2015.

[ICANN-ROOT-ROLL]
           "Root Zone KSK Rollover Plan", 2016, .

[KROLL-ISSUE]
           "A DNSSEC issue during Yeti KSK rollover", 2016, .

[pcapdump-bug-report]
           Bortzmeyer, S., "pcaputils: IWBN to have an option to run a program after file rotation in pcapdump", 2009, .

[Renumbering-J-Root]
           Wessels, D., "Thirteen Years of "Old J-Root"", 2015, .

[RFC1034]  Mockapetris, P., "Domain names - concepts and facilities", STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987, .

[RFC1035]  Mockapetris, P., "Domain names - implementation and specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, November 1987, .

[RFC4986]  Eland, H., Mundy, R., Crocker, S., and S. Krishnaswamy, "Requirements Related to DNS Security (DNSSEC) Trust Anchor Rollover", RFC 4986, DOI 10.17487/RFC4986, August 2007, .
[RFC5011]  StJohns, M., "Automated Updates of DNS Security (DNSSEC) Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011, September 2007, .

[RFC7626]  Bortzmeyer, S., "DNS Privacy Considerations", RFC 7626, DOI 10.17487/RFC7626, August 2015, .

[ROOT-FAQ]
           Karrenberg, D., "DNS Root Name Server FAQ", 2007, .

[Root-Zone-Database]
           "Root Zone Database", .

[xml-file] "XML source file of Yeti experience draft", 2016, .

[Yeti-DM-Sync-MZSK]
           "Yeti DM Synchronization for MZSK", 2016, .

[Yeti-DNS-Project]
           "Website of Yeti DNS Project", .

[Yeti-glue-issue]
           "Yeti Glue Issue", 2015, .

Appendix A.  The Yeti Root Servers in the Hint File

REMOVE BEFORE PUBLICATION: Currently in the Yeti testbed there are cases in which multiple servers are run by a single operator, just as Verisign runs the A and J root servers in the IANA system. This is allowed because more servers are needed to satisfy the Yeti experiment requirements. The names of those servers share a common domain, such as yeti.eu.org, dns-lab.net, or yeti-dns.net. We intentionally picked two random labels (the first 30 characters of SHA256([a-e])) to offset the effect of name compression. According to Yeti policy, those servers will be reclaimed if qualified volunteers apply to host a Yeti server.

.   3600000 IN NS bii.dns-lab.net
bii.dns-lab.net 3600000 IN AAAA 240c:f:1:22::6
.   3600000 IN NS yeti-ns.tisf.net
yeti-ns.tisf.net 3600000 IN AAAA 2001:559:8000::6
.   3600000 IN NS yeti-ns.wide.ad.jp
yeti-ns.wide.ad.jp 3600000 IN AAAA 2001:200:1d9::35
.   3600000 IN NS yeti-ns.as59715.net
yeti-ns.as59715.net 3600000 IN AAAA 2a02:cdc5:9715:0:185:5:203:53
.   3600000 IN NS dahu1.yeti.eu.org
dahu1.yeti.eu.org 3600000 IN AAAA 2001:4b98:dc2:45:216:3eff:fe4b:8c5b
.   3600000 IN NS ns-yeti.bondis.org
ns-yeti.bondis.org 3600000 IN AAAA 2a02:2810:0:405::250
.
3600000 IN NS yeti-ns.ix.ru
yeti-ns.ix.ru 3600000 IN AAAA 2001:6d0:6d06::53
.   3600000 IN NS yeti.bofh.priv.at
yeti.bofh.priv.at 3600000 IN AAAA 2a01:4f8:161:6106:1::10
.   3600000 IN NS yeti.ipv6.ernet.in
yeti.ipv6.ernet.in 3600000 IN AAAA 2001:e30:1c1e:1::333
.   3600000 IN NS yeti-dns01.dnsworkshop.org
yeti-dns01.dnsworkshop.org 3600000 IN AAAA 2001:1608:10:167:32e::53
.   3600000 IN NS yeti-ns.conit.co
yeti-ns.conit.co 3600000 IN AAAA 2604:6600:2000:11::4854:a010
.   3600000 IN NS dahu2.yeti.eu.org
dahu2.yeti.eu.org 3600000 IN AAAA 2001:67c:217c:6::2
.   3600000 IN NS yeti.aquaray.com
yeti.aquaray.com 3600000 IN AAAA 2a02:ec0:200::1
.   3600000 IN NS yeti-ns.switch.ch
yeti-ns.switch.ch 3600000 IN AAAA 2001:620:0:ff::29
.   3600000 IN NS yeti-ns.lab.nic.cl
yeti-ns.lab.nic.cl 3600000 IN AAAA 2001:1398:1:21::8001
.   3600000 IN NS yeti-ns1.dns-lab.net
yeti-ns1.dns-lab.net 3600000 IN AAAA 2001:da8:a3:a027::6
.   3600000 IN NS yeti-ns2.dns-lab.net
yeti-ns2.dns-lab.net 3600000 IN AAAA 2001:da8:268:4200::6
.   3600000 IN NS yeti-ns3.dns-lab.net
yeti-ns3.dns-lab.net 3600000 IN AAAA 2400:a980:30ff::6
.   3600000 IN NS ca978112ca1bbdcafac231b39a23dc.yeti-dns.net
ca978112ca1bbdcafac231b39a23dc.yeti-dns.net 3600000 IN AAAA 2c0f:f530::6
.   3600000 IN NS 3f79bb7b435b05321651daefd374cd.yeti-dns.net
3f79bb7b435b05321651daefd374cd.yeti-dns.net 3600000 IN AAAA 2401:c900:1401:3b:c::6
.   3600000 IN NS xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c
xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c 3600000 IN AAAA 2001:e30:1c1e:10::333
.   3600000 IN NS yeti1.ipv6.ernet.in
yeti1.ipv6.ernet.in 3600000 IN AAAA 2001:e30:187d::333
.   3600000 IN NS yeti-dns02.dnsworkshop.org
yeti-dns02.dnsworkshop.org 3600000 IN AAAA 2001:19f0:0:1133::53
.   3600000 IN NS yeti.mind-dns.nl
yeti.mind-dns.nl 3600000 IN AAAA 2a02:990:100:b01::53:0
.
3600000 IN NS yeti-ns.datev.net
yeti-ns.datev.net 3600000 IN AAAA 2a00:e50:f15c:1000::1:53

Authors' Addresses

Linjian Song
Beijing Internet Institute
2508 Room, 25th Floor, Tower A, Time Fortune
Beijing  100028
P. R. China

Email: songlinjian@gmail.com
URI:   http://www.biigroup.com/

Shane Kerr
Beijing Internet Institute
2/F, Building 5, No.58 Jinghai Road, BDA
Beijing  100176
CN

Email: shane@biigroup.cn
URI:   http://www.biigroup.com/

Dong Liu
Beijing Internet Institute
2508 Room, 25th Floor, Tower A, Time Fortune
Beijing  100028
P. R. China

Email: dliu@biigroup.com
URI:   http://www.biigroup.com/

Paul Vixie
TISF
11400 La Honda Road
Woodside, California  94062
US

Email: vixie@tisf.net
URI:   http://www.redbarn.org/

Akira Kato
Keio University/WIDE Project
Graduate School of Media Design, 4-1-1 Hiyoshi, Kohoku
Yokohama  223-8526
JAPAN

Email: kato@wide.ad.jp
URI:   http://www.kmd.keio.ac.jp/