2 Internet Engineering Task Force (IETF) L. Song, Ed. 3 Internet-Draft D. Liu 4 Intended status: Informational Beijing Internet Institute 5 Expires: January 20, 2019 P. Vixie 6 TISF 7 Akira. Kato 8 Keio University/WIDE Project 9 S. Kerr 10 July 19, 2018 12 Yeti DNS Testbed 13 draft-song-yeti-testbed-experience-10 15 Abstract 17 The Internet's Domain Name System is designed and built on a single 18 root, known as the Root Server System. 20 Yeti DNS is an experimental, non-production root server testbed that 21 provides an environment where technical and operational experiments 22 can safely be performed without risk to production root server 23 infrastructure. Yeti DNS is an independently-coordinated project and 24 is not affiliated with the IETF, ICANN, IANA, or any Root Server 25 Operator.
The objectives of the Yeti Project were set by the 26 participants in the project based on experiments that they considered 27 would provide valuable information, and with the aim of developing a 28 non-production testbed that would be open for use by anyone from the 29 technical community to propose or run experiments. 31 The Yeti DNS testbed implementation includes various novel and 32 experimental components. These differences from the Root Server 33 System have operational consequences; by deploying such a system 34 globally but outside the production DNS system, the Yeti DNS project 35 provides an opportunity to gain insight into those consequences 36 without threatening the stability of the DNS. 38 This document neither addresses the relevant policies under which the 39 Root Server System is operated nor makes any proposal for changing 40 any aspect of its implementation or operation. This document aims 41 solely to document the technical and operational experience of 42 deploying a system which is similar to but different from the Root 43 Server System. 45 Status of This Memo 47 This Internet-Draft is submitted in full conformance with the 48 provisions of BCP 78 and BCP 79. 50 Internet-Drafts are working documents of the Internet Engineering 51 Task Force (IETF). Note that other groups may also distribute 52 working documents as Internet-Drafts. The list of current Internet- 53 Drafts is at https://datatracker.ietf.org/drafts/current/. 55 Internet-Drafts are draft documents valid for a maximum of six months 56 and may be updated, replaced, or obsoleted by other documents at any 57 time. It is inappropriate to use Internet-Drafts as reference 58 material or to cite them other than as "work in progress." 60 This Internet-Draft will expire on January 20, 2019. 62 Copyright Notice 64 Copyright (c) 2018 IETF Trust and the persons identified as the 65 document authors. All rights reserved. 67 This document is subject to BCP 78 and the IETF Trust's Legal 68 Provisions Relating to IETF Documents 69 (https://trustee.ietf.org/license-info) in effect on the date of 70 publication of this document. Please review these documents 71 carefully, as they describe your rights and restrictions with respect 72 to this document. Code Components extracted from this document must 73 include Simplified BSD License text as described in Section 4.e of 74 the Trust Legal Provisions and are provided without warranty as 75 described in the Simplified BSD License. 77 Table of Contents 79 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 4 80 2. Requirements Notation and Conventions . . . . . . . . . . . . 5 81 3. Areas of Study . . . . . . . . . . . . . . . . . . . . . . . 5 82 3.1. Implementation of a Root Server System-like Testbed . . . 6 83 3.2. Yeti-Root Zone Distribution . . . . . . . . . . . . . . . 6 84 3.3. Yeti-Root Server Names and Addressing . . . . . . . . . . 6 85 3.4. IPv6-Only Yeti-Root Servers . . . . . . . . . . . . . . . 6 86 3.5. DNSSEC in the Yeti-Root Zone . . . . . . . . . . . . . . 6 87 4. Yeti DNS Testbed Infrastructure . . . . . . . . . . . . . . . 7 88 4.1. Root Zone Retrieval . . . . . . . . . . . . . . . . . . . 9 89 4.2. Transformation of Root Zone to Yeti-Root Zone . . . . . . 9 90 4.2.1. ZSK and KSK Key Sets Shared Between DMs . . . . . . . 10 91 4.2.2. Unique ZSK per DM; No Shared KSK . . . . . . . . . . 11 92 4.2.3. Preserving Root Zone NSEC Chain and ZSK RRSIGs . . . 12 94 4.3. Yeti-Root Zone Distribution . . . . . . . . . . . . . . . 12 95 4.4. 
Synchronization of Service Metadata . . . . . . . . . . . 12 96 4.5. Yeti-Root Server Naming Scheme . . . . . . . . . . . . . 13 97 4.6. Yeti-Root Servers . . . . . . . . . . . . . . . . . . . . 14 98 4.7. Experimental Traffic . . . . . . . . . . . . . . . . . . 16 99 4.8. Traffic Capture and Analysis . . . . . . . . . . . . . . 16 100 5. Operational Experience with the Yeti DNS Testbed . . . . . . 17 101 5.1. Viability of IPv6-Only Operation . . . . . . . . . . . . 17 102 5.1.1. IPv6 Fragmentation . . . . . . . . . . . . . . . . . 17 103 5.1.2. Serving IPv4-Only End-Users . . . . . . . . . . . . . 18 104 5.2. Zone Distribution . . . . . . . . . . . . . . . . . . . . 19 105 5.2.1. Zone Transfers . . . . . . . . . . . . . . . . . . . 19 106 5.2.2. Delays in Yeti-Root Zone Distribution . . . . . . . . 20 107 5.2.3. Mixed RRSIGs from different DM ZSKs . . . . . . . . . 20 108 5.3. DNSSEC KSK Rollover . . . . . . . . . . . . . . . . . . . 21 109 5.3.1. Failure-Case KSK Rollover . . . . . . . . . . . . . . 21 110 5.3.2. KSK Rollover vs. BIND9 Views . . . . . . . . . . . . 22 111 5.3.3. Large Responses during KSK Rollover . . . . . . . . . 22 112 5.4. Capture of Large DNS Response . . . . . . . . . . . . . . 23 113 5.5. Automated Hints File Maintenance . . . . . . . . . . . . 24 114 5.6. Root Label Compression in Knot DNS Server . . . . . . . . 25 115 6. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 25 116 7. Security Considerations . . . . . . . . . . . . . . . . . . . 27 117 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28 118 9. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 28 119 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 29 120 10.1. Normative References . . . . . . . . . . . . . . . . . . 29 121 10.2. Informative References . . . . . . . . . . . . . . . . . 29 122 10.3. URIs . . . . . . . . . . . . . . . . . . . . . . . . . . 32 123 Appendix A. Yeti-Root Hints File . . . . . . . . . . . . . . . . 33 124 Appendix B. Yeti-Root Server Priming Response . . . . . . . . . 34 125 Appendix C. Active IPv6 Prefixes in Yeti DNS testbed . . . . . . 36 126 Appendix D. Tools developed for Yeti DNS testbed . . . . . . . . 36 127 Appendix E. Controversy . . . . . . . . . . . . . . . . . . . . 37 128 Appendix F. About This Document . . . . . . . . . . . . . . . . 38 129 F.1. Venue . . . . . . . . . . . . . . . . . . . . . . . . . . 38 130 F.2. Revision History . . . . . . . . . . . . . . . . . . . . 38 131 F.2.1. draft-song-yeti-testbed-experience-00 through -03 . . 38 132 F.2.2. draft-song-yeti-testbed-experience-04 . . . . . . . . 38 133 F.2.3. draft-song-yeti-testbed-experience-05 . . . . . . . . 39 134 F.2.4. draft-song-yeti-testbed-experience-06 . . . . . . . . 39 135 F.2.5. draft-song-yeti-testbed-experience-07 . . . . . . . . 39 136 F.2.6. draft-song-yeti-testbed-experience-08 . . . . . . . . 39 137 F.2.7. draft-song-yeti-testbed-experience-09 . . . . . . . . 39 138 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 39 140 1. Introduction 142 The Domain Name System (DNS), as originally specified in [RFC1034] 143 and [RFC1035], has proved to be an enduring and important platform 144 upon which almost every end-user of the Internet relies. Despite its 145 longevity, extensions to the protocol, new implementations and 146 refinements to DNS operations continue to emerge both inside and 147 outside the IETF. 
149 The Root Server System in particular has seen technical innovation 150 and development, for example in the form of wide-scale anycast 151 deployment, the mitigation of unwanted traffic on a global scale, the 152 widespread deployment of Response Rate Limiting [RRL], the 153 introduction of IPv6 transport, the deployment of DNSSEC, changes in 154 DNSSEC key sizes, and preparations to roll the root zone's Key 155 Signing Key (KSK) and corresponding trust anchor. These projects 156 created tremendous qualitative operational change, and required 157 impressive caution and study prior to implementation. They took 158 place in parallel with the quantitative expansion or delegations for 159 new TLDs [1]. 161 Aspects of the operational structure of the Root Server System have 162 been described in such documents as [TNO2009], [ISC-TN-2003-1], 163 [RSSAC001] and [RFC7720]. Such references, considered together, 164 provide sufficient insight into the operations of the system as a 165 whole that it is straightforward to imagine structural changes to the 166 root server system's infrastructure and to wonder what the 167 operational implications of such changes might be. 169 The Yeti DNS Project was conceived in May 2015 with the aim of 170 providing a non-production testbed that would be open for use by 171 anyone from the technical community to propose or run experiments 172 designed to answer these kinds of questions. Coordination for the 173 project was provided by BII, TISF and the WIDE Project. Thus, Yeti 174 DNS is an independently-coordinated project and is not affiliated 175 with the IETF, ICANN, IANA, or any Root Server Operator. The 176 objectives of the Yeti Project were set by the participants in the 177 project based on experiments that they considered would provide 178 valuable information. 180 Many volunteers collaborated to build a distributed testbed that at 181 the time of writing includes 25 Yeti root servers with 16 operators 182 and handles experimental traffic from individual volunteers, 183 universities, DNS vendors, and distributed measurement networks. 185 By design, the Yeti testbed system serves the root zone published by 186 the IANA with only those structural modifications necessary to ensure 187 that it is able to function usefully in the Yeti testbed system 188 instead of the production Root Server system. In particular, no 189 delegation for any top-level zone is changed, added or removed from 190 the IANA-published root zone to construct the root zone served by the 191 Yeti testbed system, and changes in the root zone are reflected in 192 the testbed in near real-time. In this document, for clarity, we 193 refer to the zone derived from the IANA-published root zone as the 194 Yeti-Root zone. 196 The Yeti DNS testbed serves a similar function to the Root Server 197 System in the sense that they both serve similar zones: the Yeti-Root 198 zone and the IANA-published root zone. However, the Yeti DNS testbed 199 only serves clients that are explicitly configured to participate in 200 the experiment, whereas the Root Server System serves the whole 201 Internet. Since the dependent end-users and systems of the Yeti DNS 202 testbed are known and their operations well-coordinated with those of 203 the Yeti project, it has been possible to deploy structural changes 204 in the Yeti DNS testbed with effective measurement and analysis, 205 something that is difficult or simply impractical in the production 206 Root Server System. 
208 This document describes the motivation for the Yeti project, 209 describes the Yeti testbed infrastructure, and provides the technical 210 and operational experiences of some users of the Yeti testbed. 212 2. Requirements Notation and Conventions 214 Through the document, any mention to "Root" with an uppercase R and 215 without other prefix, refers to the "IANA Root" systems used in the 216 production Internet. Proper mentions to the Yeti infrastructure will 217 be prefixed with "Yeti", like "Yeti-Root Zone", "Yeti-DNS", and so 218 on. 220 3. Areas of Study 222 This section provides some examples of the topics that the developers 223 of the Yeti DNS Testbed considered important to address. As noted in 224 Section 1, the Yeti DNS is an independently-coordinated project and 225 is not affiliated with the IETF, ICANN, IANA, or any Root Server 226 Operator. Thus, the topics and areas for study were selected by (and 227 for) the proponents of the Yeti project to address their own concerns 228 and in the hope that the information and tools provided would be of 229 wider interest. 231 Each example included below is illustrated with indicative questions. 233 3.1. Implementation of a Root Server System-like Testbed 235 o How can a testbed be constructed and deployed on the Internet, 236 allowing useful public participation without any risk of 237 disruption of the Root Server System? 239 o How can representative traffic be introduced into such a testbed 240 such that insights into the impact of specific differences between 241 the testbed and the Root Server System can be observed? 243 3.2. Yeti-Root Zone Distribution 245 o What are the scaling properties of Yeti-Root zone distribution as 246 the number of Yeti-Root servers, Yeti-Root server instances or 247 intermediate distribution points increase? 249 3.3. Yeti-Root Server Names and Addressing 251 o What naming schemes other than those closely analogous to the use 252 of ROOT-SERVERS.NET in the production root zone are practical, and 253 what are their respective advantages and disadvantages? 255 o What are the risks and benefits of signing the zone that contains 256 the names of the Yeti-Root servers? 258 o What automatic mechanisms might be useful to improve the rate at 259 which clients of Yeti-Root servers are able to react to a Yeti- 260 Root server renumbering event? 262 3.4. IPv6-Only Yeti-Root Servers 264 o Are there negative operational effects in the use of IPv6-only 265 Yeti-Root servers, compared to the use of servers that are dual- 266 stack? 268 o What effect does the IPv6 fragmentation model have on the 269 operation of Yeti-Root servers, compared with that of IPv4? 271 3.5. DNSSEC in the Yeti-Root Zone 273 o Is it practical to sign the Yeti-Root zone using multiple, 274 independently-operated DNSSEC signers and multiple corresponding 275 Zone Signing Key(ZSK)? 277 o To what extent is [RFC5011]: "Automated Updates of DNS Security 278 (DNSSEC) Trust Anchors" supported by resolvers? 280 o Does the KSK Rollover plan designed and in the process of being 281 implemented by ICANN work as expected on the Yeti testbed? 283 o What is the operational impact of using much larger RSA key sizes 284 in the ZSKs used in a root? 286 o What are the operational consequences of choosing DNSSEC 287 algorithms other than RSA to sign a root? 289 4. 
Yeti DNS Testbed Infrastructure 291 The purpose of the testbed is to allow DNS queries from stub 292 resolvers, mediated by recursive resolvers, to be delivered to Yeti- 293 Root servers, and for corresponding responses generated on the Yeti- 294 Root servers to be returned, as illustrated in Figure 1. 296 ,----------. ,-----------. ,------------. 297 | stub +------> | recursive +------> | Yeti-Root | 298 | resolver | <------+ resolver | <------+ nameserver | 299 `----------' `-----------' `------------' 300 ^ ^ ^ 301 | appropriate | Yeti-Root hints; | Yeti-Root zone 302 `- resolver `- Yeti-Root trust `- with DNSKEY RRSet 303 configured anchor signed by 304 Yeti-Root KSK 306 Figure 1: High-Level Testbed Components 308 To use the Yeti DNS testbed, a recursive resolver must be configured 309 to use the Yeti-Root servers. That configuration consists of a list 310 of names and addresses for the Yeti-Root servers (often referred to 311 as a "hints file") that replaces the corresponding hints used for the 312 production Root Server System (Appendix A). If resolvers are 313 configured to validate DNSSEC, then they also need to be configured 314 with a DNSSEC trust anchor that corresponds to a KSK used in the Yeti 315 DNS Project, in place of the normal trust anchor set used for the 316 Root Zone. 318 Since the Yeti root(s) are signed with Yeti keys, rather than those 319 used by the IANA root, corresponding changes are needed in the 320 resolver trust anchors. Corresponding changes are required in the 321 Yeti-Root hints file Appendix A. Those changes would be properly 322 rejected by any validator using the production Root Server System's 323 root zone trust anchor set as bogus. 325 Stub resolvers become part of the Yeti DNS Testbed by their use of 326 recursive resolvers that are configured as described above. 328 The data flow from IANA to stub resolvers through the Yeti testbed is 329 illustrated in Figure 2 and are described in more detail in the 330 sections that follow. 332 ,----------------. 333 ,-- / IANA Root Zone / ---. 334 | `----------------' | 335 | | | 336 | | | Root Zone 337 ,--------------. ,---V---. ,---V---. ,---V---. 338 | Yeti Traffic | | BII | | WIDE | | TISF | 339 | Collection | | DM | | DM | | DM | 340 `----+----+----' `---+---' `---+---' `---+---' 341 | | ,-----' ,-------' `----. 342 | | | | | Yeti-Root 343 ^ ^ | | | Zone 344 | | ,---V---. ,---V---. ,---V---. 345 | `---+ Yeti | | Yeti | . . . . . . . | Yeti | 346 | | Root | | Root | | Root | 347 | `---+---' `---+---' `---+---' 348 | | | | DNS 349 | | | | Response 350 | ,--V----------V-------------------------V--. 351 `---------+ Yeti Resolvers | 352 `--------------------+---------------------' 353 | DNS 354 | Response 355 ,--------------------V---------------------. 356 | Yeti Stub Resolvers | 357 `------------------------------------------' 359 The three coordinators of Yeti DNS testbed : 360 BII : Beijing Internet Institute 361 WIDE: Widely Integrated Distributed Environment Project 362 TISF: A collaborative engineering and security project by Paul Vixie 364 Figure 2: Testbed Data Flow 366 Note that the roots are not bound to Distribution Masters(DM). DMs 367 update their zone in a time schedule describe in Section 4.1 Each of 368 DMs who update the latest zone can send notify to all roots. So the 369 zone transfer can happened between any DM and any root. 371 4.1. 
Root Zone Retrieval 373 The Yeti-Root Zone is distributed within the Yeti DNS testbed through 374 a set of internal master servers that are referred to as Distribution 375 Masters (DMs). These server elements distribute the Yeti-Root zone 376 to all Yeti-Root servers. The means by which the Yeti DMs construct 377 the Yeti-Root zone for distribution is described below. 379 Since Yeti DNS DMs do not receive DNS NOTIFY [RFC1996] messages from 380 the Root Server System, a polling approach is used to determine when 381 new revisions of the root zone are available from the production Root 382 Server System. Each Yeti DM requests the Root Zone Start of 383 Authority(SOA) record from a Root server that permits unauthenticated 384 zone transfers of the root zone, and performs a zone transfer from 385 that server if the retrieved value of SOA.SERIAL is greater than that 386 of the last retrieved zone. 388 At the time of writing, unauthenticated zone transfers of the Root 389 Zone are available directly from B-Root, C-Root, F-Root, G-Root and 390 K-Root, and from L-Root via the two servers XFR.CJR.DNS.ICANN.ORG and 391 XFR.LAX.DNS.ICANN.ORG, as well as via FTP from sites maintained by 392 the Root Zone Maintainer and the IANA Functions Operator. The Yeti 393 DNS Testbed retrieves the Root Zone using zone transfers from F-Root. 394 The schedule on which F-Root is polled by each Yeti DM is as follows: 396 +-------------+-----------------------+ 397 | DM Operator | Time | 398 +-------------+-----------------------+ 399 | BII | UTC hour + 00 minutes | 400 | WIDE | UTC hour + 20 minutes | 401 | TISF | UTC hour + 40 minutes | 402 +-------------+-----------------------+ 404 The Yeti DNS testbed uses multiple DMs, each of which acts 405 autonomously and equivalently to its siblings. Any single DM can act 406 to distribute new revisions of the Yeti-Root zone, and is also 407 responsible for signing the RRSets that are changed as part of the 408 transformation of the Root Zone into the Yeti-Root zone described in 409 Section 4.2. This multiple DM model intend to provide a basic 410 structure to implement idea of shared zone control proposed in 411 [ITI2014]. 413 4.2. Transformation of Root Zone to Yeti-Root Zone 415 Two distinct approaches have been deployed in the Yeti-DNS Testbed, 416 separately, to transform the Root Zone into the Yeti-Root Zone. At a 417 high level both approaches are equivalent in the sense that they 418 replace a minimal set of information in the root zone with 419 corresponding data for the Yeti DNS Testbed; the mechanisms by which 420 the transforms are executed are different, however. Each is 421 discussed in turn in Section 4.2.1 and Section 4.2.2, respectively. 423 A third approach has also been proposed, but not yet implemented. 424 The motivations and changes implied by that approach are described in 425 Section 4.2.3. 427 4.2.1. ZSK and KSK Key Sets Shared Between DMs 429 The approach described here was the first to be implemented. It 430 features entirely autonomous operation of each DM, but also requires 431 secret key material (the private key in each of the Yeti-Root KSK and 432 ZSK key-pairs) to be distributed and maintained on each DM in a 433 coordinated way. 435 The Root Zone is transformed as follows to produce the Yeti-Root 436 Zone. This transformation is carried out autonomously on each Yeti 437 DNS Project DM. Each DM carries an authentic copy of the current set 438 of Yeti KSK and ZSK key pairs, synchronized between all DMs (see 439 Section 4.4). 441 1. 
SOA.MNAME is set to www.yeti-dns.org. 443 2. SOA.RNAME is set to .yeti-dns.org. where is currently one of "wide", "bii" or "tisf". 446 3. All DNSKEY, RRSIG and NSEC records are removed. 448 4. The apex Name Server(NS) RRSet is removed, with the corresponding 449 root server glue (A and AAAA) RRSets. 451 5. A Yeti DNSKEY RRSet is added to the apex, comprising the public 452 parts of all Yeti KSK and ZSKs. 454 6. A Yeti NS RRSet is added to the apex that includes all Yeti-Root 455 servers. 457 7. Glue records (AAAA only, since Yeti-Root servers are v6-only) for 458 all Yeti-Root servers are added. 460 8. The Yeti-Root Zone is signed: the NSEC chain is regenerated; the 461 Yeti KSK is used to sign the DNSKEY RRSet, and the shared ZSK is 462 used to sign every other RRSet. 464 Note that the SOA.SERIAL value published in the Yeti-Root Zone is 465 identical to that found in the root zone. 467 4.2.2. Unique ZSK per DM; No Shared KSK 469 The approach described here was the second to be implemented and 470 maintained as stable state. Each DM is provisioned with its own, 471 dedicated ZSK key pairs that are not shared with other DMs. A Yeti- 472 Root DNSKEY RRSet is constructed and signed upstream of all DMs as 473 the union of the set of active Yeti-Root KSKs and the set of active 474 ZSKs for every individual DM. Each DM now only requires the secret 475 part of its own dedicated ZSK key pairs to be available locally, and 476 no other secret key material is shared. The high-level approach is 477 illustrated in Figure 3. 479 ,----------. ,-----------. 480 .--------> BII ZSK +---------> Yeti-Root | 481 | signs `----------' signs `-----------' 482 | 483 ,-----------. | ,----------. ,-----------. 484 | Yeti KSK +-+--------> TISF ZSK +---------> Yeti-Root | 485 `-----------' | signs `----------' signs `-----------' 486 | 487 | ,----------. ,-----------. 488 `--------> WIDE ZSK +---------> Yeti-Root | 489 signs `----------' signs `-----------' 491 Figure 3: Unique ZSK per DM 493 The process of retrieving the Root Zone from the Root Server System 494 and replacing and signing the apex DNSKEY RRSet no longer takes place 495 on the DMs, and instead takes place on a central Hidden Master. The 496 production of signed DNSKEY RRSets is analogous to the use of Signed 497 Key Responses (SKR) produced during ICANN KSK key ceremonies 498 [ICANN2010]. 500 Each DM now retrieves source data (with pre-modified and Yeti-signed 501 DNSKEY RRset, but otherwise unchanged) from the Yeti DNS Hidden 502 Master instead of from the Root Server System. 504 Each DM carries out a similar transformation to that described in 505 Section 4.2.1, except that DMs no longer need to modify or sign the 506 DNSKEY RRSet, and the DM's unique local ZSK is used to sign every 507 other RRset. 509 4.2.3. Preserving Root Zone NSEC Chain and ZSK RRSIGs 511 A change to the transformation described in Section 4.2.2 has been 512 proposed as a Yeti experiment called PINZ [2]which would preserve the 513 NSEC chain from the Root Zone and all RRSIG RRs generated using the 514 Root Zone's ZSKs. The DNSKEY RRSet would continue to be modified to 515 replace the Root Zone KSKs, but Root Zone ZSKs will be kept intact, 516 and the Yeti KSK would be used to generate replacement signatures 517 over the apex DNSKEY and NS RRSets. Source data would continue to 518 flow from the Root Server System through the Hidden Master to the set 519 of DMs, but no DNSSEC operations would be required on the DMs and the 520 source NSEC and most RRSIGs would remain intact. 
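All of the transformation approaches described above leave the TLD delegations in the IANA root zone untouched. The following is a minimal sketch, not part of the Yeti toolset, of how that property could be checked automatically; it assumes locally available copies of both zones (the file names are placeholders) and uses the dnspython library:

   import dns.rdatatype
   import dns.zone

   # Load local copies of the IANA root zone and the Yeti-Root zone.
   iana = dns.zone.from_file("root.zone", origin=".", relativize=False)
   yeti = dns.zone.from_file("yeti-root.zone", origin=".", relativize=False)

   def delegations(zone):
       # Collect {owner name: sorted NS targets} for all non-apex NS RRSets.
       result = {}
       for name, rdataset in zone.iterate_rdatasets(dns.rdatatype.NS):
           if name != zone.origin:
               result[name] = sorted(str(ns.target) for ns in rdataset)
       return result

   if delegations(iana) == delegations(yeti):
       print("all TLD delegations are identical in both zones")
   else:
       print("delegation mismatch detected")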
This third approach has been suggested in order to keep changes from the IANA root zone to a minimum and to provide cryptographically-verifiable confidence that no owner name in the root zone had been changed in the process of producing the Yeti-Root zone from the Root Zone, addressing one of the concerns described in Appendix E in a way that can be verified automatically.

4.3. Yeti-Root Zone Distribution

Each Yeti DM is configured with a full list of Yeti-Root Server addresses to send NOTIFY [RFC1996] messages to, which also forms the basis for an address-based access-control list for zone transfers. Authentication by address could be replaced with more rigorous mechanisms (e.g. using Transaction Signatures (TSIG) [RFC2845]); this has not been done at the time of writing since the use of address-based controls avoids the need for the distribution of shared secrets amongst the Yeti-Root Server Operators.

Individual Yeti-Root Servers are configured with a full set of Yeti DM addresses to which SOA and Authoritative Transfer (AXFR) queries may be sent in the conventional manner.

4.4. Synchronization of Service Metadata

Changes in the Yeti-DNS Testbed infrastructure such as the addition or removal of Yeti-Root servers, the renumbering of Yeti-Root Servers or DNSSEC key rollovers require coordinated changes to take place on all DMs. The Yeti-DNS Testbed is subject to more frequent changes than are observed in the Root Server System and includes substantially more Yeti-Root Servers than there are IANA Root Servers, and hence a manual change process in the Yeti Testbed would be more likely to suffer from human error. An automated, cooperative process was consequently implemented.

The process works as follows. Each DM operator runs a Git repository locally, containing all service metadata involved in the operation of each DM. When a change is desired and approved among all Yeti coordinators, one DM operator (usually BII) updates the local Git repository. A serial number in the near future (typically two days ahead) is chosen to mark when the changes become active. The DM operator then pushes the changes to the Git repositories of the other two DM operators, who have the opportunity to review and amend them. When the serial number of the root zone passes the chosen value, the changes are pulled automatically to individual DMs and promoted to production.

The three Git repositories are synchronized by configuring them as remote servers. For example, at BII, pushes are made to all three DMs' repositories as follows:

   $ git remote -v
   origin  yeticonf@yeti-conf.dns-lab.net:dm (fetch)
   origin  yeticonf@yeti-conf.dns-lab.net:dm (push)
   origin  yeticonf@yeti-dns.tisf.net:dm (push)
   origin  yeticonf@yeti-repository.wide.ad.jp:dm (push)

                              Figure 4

More detailed information about DM synchronization can be found in the Yeti-DM-Sync-MZSK.md [3] document in Yeti's GitHub repository.

4.5. Yeti-Root Server Naming Scheme

The current naming scheme for Root Servers was normalized to use single-character host names (A through M) under the domain ROOT-SERVERS.NET, as described in [RSSAC023]. The principal benefit of this naming scheme was that DNS label compression could be used to produce a priming response that would fit within 512 bytes at the time it was introduced, 512 bytes being the maximum DNS message size using UDP transport without EDNS(0) [RFC6891].
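The effect of a naming scheme on the size and composition of a priming response is straightforward to observe directly. The following is a small sketch using the dnspython library; the server address is a placeholder to be replaced with an address from the hints file in Appendix A:

   import dns.message
   import dns.query
   import dns.rdatatype

   server = "2001:db8::53"   # placeholder; use a real Yeti-Root address

   # Priming query: ./IN/NS with EDNS(0) advertising a 4096-octet buffer.
   query = dns.message.make_query(".", dns.rdatatype.NS,
                                   use_edns=0, payload=4096)
   response = dns.query.udp(query, server, timeout=5)

   print("priming response size: %d octets" % len(response.to_wire()))
   print("answer RRs:            %d" % sum(len(r) for r in response.answer))
   print("additional RRs (glue): %d" % sum(len(r) for r in response.additional))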
592 Yeti-Root Servers do not use this optimization, but rather use free- 593 form nameserver names chosen by their respective operators -- in 594 other words, no attempt is made to minimize the size of the priming 595 response through the use of label compression. This approach aims to 596 challenge the need for a minimally-sized priming response in a modern 597 DNS ecosystem where EDNS(0) is prevalent. 599 Priming responses from Yeti-Root Servers do not always include server 600 addresses in the additional section, as is the case with priming 601 responses from Root Servers. In particular, Yeti-Root Servers 602 running BIND9 return an empty additional section if the configuration 603 parameter minimum-responses is set, forcing resolvers to complete the 604 priming process with a set of conventional recursive lookups in order 605 to resolve addresses for each Yeti-Root server. The Yeti-Root 606 Servers running NSD were observed to return a fully-populated 607 additional section (depending of course of the EDNS buffer size in 608 use). 610 Various approaches to normalize the composition of the priming 611 response were considered, including: 613 o Require use of DNS implementations that exhibit a desired 614 behaviour in the priming response; 616 o Modify nameserver software or configuration as used by Yeti-Root 617 Servers; 619 o Isolate the names of Yeti-Root Servers in one or more zones that 620 could be slaved on each Yeti-Root Server, renaming servers as 621 necessary, giving each a source of authoritative data with which 622 the authority section of a priming response could be fully 623 populated. This is the approach used in the Root Server System 624 with the ROOT-SERVERS.NET zone. 626 The potential mitigation of renaming all Yeti-Root Servers using a 627 scheme that would allow their names to exist directly in the root 628 zone was not considered, since that approach implies the invention of 629 new top-level labels not present in the Root Zone. 631 Given the relative infrequency of priming queries by individual 632 resolvers and the additional complexity or other compromises implied 633 by each of those mitigations, the decision was made to make no effort 634 to ensure that the composition of priming responses was identical 635 across servers. Even the empty additional sections generated by 636 Yeti-Root Servers running BIND9 seem to be sufficient for all 637 resolver software tested; resolvers simply perform a new recursive 638 lookup for each authoritative server name they need to resolve. 640 4.6. Yeti-Root Servers 642 Various volunteers have donated authoritative servers to act as Yeti- 643 Root servers. At the time of writing there are 25 Yeti-Root servers 644 distributed globally, one of which is named using an IDNA2008 645 [RFC5890] label, shown in the following list in punycode. 
647 +-------------------------------------+---------------+-------------+ 648 | Name | Operator | Location | 649 +-------------------------------------+---------------+-------------+ 650 | bii.dns-lab.net | BII | CHINA | 651 | yeti-ns.tsif.net | TSIF | USA | 652 | yeti-ns.wide.ad.jp | WIDE Project | Japan | 653 | yeti-ns.as59715.net | as59715 | Italy | 654 | dahu1.yeti.eu.org | Dahu Group | France | 655 | ns-yeti.bondis.org | Bond Internet | Spain | 656 | | Systems | | 657 | yeti-ns.ix.ru | Russia | MSK-IX | 658 | yeti.bofh.priv.at | CERT Austria | Austria | 659 | yeti.ipv6.ernet.in | ERNET India | India | 660 | yeti-dns01.dnsworkshop.org | dnsworkshop | Germany | 661 | | /informnis | | 662 | dahu2.yeti.eu.org | Dahu Group | France | 663 | yeti.aquaray.com | Aqua Ray SAS | France | 664 | yeti-ns.switch.ch | SWITCH | Switzerland | 665 | yeti-ns.lab.nic.cl | NIC Chile | Chile | 666 | yeti-ns1.dns-lab.net | BII | China | 667 | yeti-ns2.dns-lab.net | BII | China | 668 | yeti-ns3.dns-lab.net | BII | China | 669 | ca...a23dc.yeti-dns.net | Yeti-ZA | South | 670 | | | Africa | 671 | 3f...374cd.yeti-dns.net | Yeti-AU | Australia | 672 | yeti1.ipv6.ernet.in | ERNET India | India | 673 | xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c | ERNET India | India | 674 | yeti-dns02.dnsworkshop.org | dnsworkshop | USA | 675 | | /informnis | | 676 | yeti.mind-dns.nl | Monshouwer | Netherlands | 677 | | Internet | | 678 | | Diensten | | 679 | yeti-ns.datev.net | DATEV | Germany | 680 | yeti.jhcloos.net. | jhcloos | USA | 681 +-------------------------------------+---------------+-------------+ 683 The current list of Yeti-Root server is made available to a 684 participating resolver first using a substitute hints file Appendix A 685 and subsequently by the usual resolver priming process [RFC8109]. 686 All Yeti-Root servers are IPv6-only, foreshadowing a future IPv6-only 687 Internet, and hence the Yeti-Root hints file contains no IPv4 688 addresses and the Yeti-Root zone contains no IPv4 glues. Note that 689 the rationale of an IPv6-only testbed is to test whether IPv6-only 690 root can survive any problem or impact when IPv4 is turned off, much 691 like the context of IETF sunset4 WG [4]. 693 At the time of writing, all root servers within the Root Server 694 System serve the ROOT-SERVERS.NET zone in addition to the root zone, 695 and all but one also serve the ARPA zone. Yeti-Root servers serve 696 the Yeti-Root zone only. 698 Significant software diversity exists across the set of Yeti-Root 699 servers, as reported by their volunteer operators at the time of 700 writing: 702 o Platform: 18 of 25 Yeti-Root servers are implemented on a Virtual 703 Private Server(VPS) rather than bare metal. 705 o Operating System: 15 Yeti-Root servers run on Linux (Ubuntu, 706 Debian, CentOS, Red Hat and ArchLinux); 4 run on FreeBSD, 1 on 707 NetBSD and 1 in Windows server 2016. 709 o DNS software: 16 of 25 Yeti-Root servers use BIND9 (versions 710 varying between 9.9.7 and 9.10.3); 4 use NSD (4.10 and 4.15); 2 711 use Knot (2.0.1 and 2.1.0), 1 uses Bundy (1.2.0), 1 uses PowerDNS 712 (4.1.3) and 1 uses MS DNS (10.0.14300.1000). 714 4.7. Experimental Traffic 716 For the Yeti DNS Testbed to be useful as a platform for 717 experimentation, it needs to carry statistically representative 718 traffic. Several approaches have been taken to load the system with 719 traffic, including both real-world traffic triggered by end-users and 720 synthetic traffic. 
Resolvers that have been explicitly configured to participate in the testbed, as described in Section 4, are a source of real-world, end-user traffic. Due to efficient caching by resolvers, the mean query rate in the Yeti testbed is less than 100 qps, but a variety of sources have been observed to be active over the past year, as summarized in Appendix C.

Synthetic traffic has been introduced to the system from time to time in order to increase traffic loads. Approaches include the use of distributed measurement platforms such as RIPE Atlas to send DNS queries to Yeti-Root servers, and the capture of traffic sent from non-Yeti resolvers to the Root Server System, which was subsequently modified and replayed towards Yeti-Root servers.

4.8. Traffic Capture and Analysis

Query and response traffic capture is available in the testbed in both Yeti resolvers and Yeti-Root servers in anticipation of experiments that require packet-level visibility into DNS traffic.

Traffic capture is performed on Yeti-Root servers using either dnscap [5] or pcapdump (part of the pcaputils Debian package [6], with a patch to facilitate triggered file upload [7]). PCAP-format files containing packet captures are uploaded using rsync to central storage.

5. Operational Experience with the Yeti DNS Testbed

The following sections provide commentary on the operation and impact analyses of the Yeti-DNS Testbed described in Section 4. More detailed descriptions of observed phenomena are available in the Yeti DNS mailing list archives [8] and on the Yeti DNS blog [9].

5.1. Viability of IPv6-Only Operation

All Yeti-Root servers were deployed with IPv6 connectivity, and no IPv4 addresses for any Yeti-Root server were made available (e.g. in the Yeti hints file, or in the DNS itself). This implementation decision constrained the Yeti-Root system to be v6-only.

DNS implementations are generally adept at using both IPv4 and IPv6 when both are available. Servers that cannot be reliably reached over one protocol might be better queried over the other, to the benefit of end-users in the common case where DNS resolution is on the critical path for end-users' perception of performance. However, this optimisation also means that systemic problems with one protocol can be masked by the other. By forcing all traffic to be carried over IPv6, the Yeti DNS testbed aimed to expose any such problems and make them easier to identify and understand. Several examples of IPv6-specific phenomena observed during the operation of the testbed are described in the sections that follow.

Although the Yeti-Root servers themselves were only reachable using IPv6, real-world end-users often have no IPv6 connectivity. The testbed was also able to explore the degree to which IPv6-only Yeti-Root servers were able to serve single-stack, IPv4-only end-user populations through the use of dual-stack Yeti resolvers.

5.1.1. IPv6 Fragmentation

In the Root Server System, structural changes with the potential to increase response sizes (and hence fragmentation, fallback to TCP transport or both) have been exercised with great care, since the impact on clients has been difficult to predict or measure. The Yeti DNS Testbed is experimental and has the luxury of a known client base, making it far easier to make such changes and measure their impact.
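A simple probe of the kind used for such measurements can be reproduced with the dnspython library, as in the following sketch (the server address is a placeholder): it requests the apex DNSKEY RRSet with DO=1 and a large EDNS(0) buffer and reports whether the response would exceed the IPv6 minimum MTU over UDP.

   import dns.flags
   import dns.message
   import dns.query
   import dns.rdatatype

   server = "2001:db8::53"   # placeholder Yeti-Root server address

   query = dns.message.make_query(".", dns.rdatatype.DNSKEY,
                                   use_edns=0, payload=4096,
                                   want_dnssec=True)
   response = dns.query.udp(query, server, timeout=5)
   size = len(response.to_wire())

   print("DNSKEY response: %d octets" % size)
   if size > 1280:
       print("exceeds the IPv6 minimum MTU; may arrive fragmented")
   if response.flags & dns.flags.TC:
       print("response truncated; a resolver would retry over TCP")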
Many of the experimental design choices described in this document were expected to trigger larger responses. For example, the choice of naming scheme for Yeti-Root Servers described in Section 4.5 defeats label compression, producing a large priming response (up to 1754 octets with 25 NS records and their glue); the Yeti-Root zone transformation approach described in Section 4.2.2 greatly enlarges the apex DNSKEY RRSet, especially during a KSK rollover (up to 1975 octets with 3 ZSKs and 2 KSKs). An increased incidence of fragmentation was therefore expected.

The Yeti-DNS Testbed provides service over IPv6 only. However, middleboxes such as firewalls and some routers do not handle IPv6 fragments well, and notable packet drop rates due to the mistreatment of IPv6 fragments by middleboxes have been reported [I-D.taylor-v6ops-fragdrop] [RFC7872]. One APNIC study [IPv6-frag-DNS] reported that 37% of endpoints using IPv6-capable DNS resolvers could not receive a fragmented IPv6 response over UDP.

To study the impact, RIPE Atlas probes were used. For each Yeti-Root server, an Atlas measurement was set up using 100 IPv6-enabled probes from five regions, sending a DNS query for ./IN/DNSKEY using UDP transport with DO=1. This measurement, when carried out concurrently with a Yeti KSK rollover (which further exacerbated the potential for fragmentation), identified a 7% failure rate compared with a non-fragmented control. A failure rate of 2% was observed with response sizes of 1414 octets, which was surprising given the expected prevalence of 1500-octet (Ethernet-framed) MTUs.

The consequences of fragmentation were not limited to failures in delivering DNS responses over UDP transport. There were two cases where a Yeti-Root server failed to transfer the Yeti-Root zone from a DM using TCP. DM log files revealed "socket is not connected" errors corresponding to zone transfer requests. Further experimentation revealed that combinations of NetBSD 6.1, NetBSD 7.0RC1, FreeBSD 10.0, Debian 3.2 and VMWare ESXI 5.5 resulted in a high TCP MSS value of 1440 octets being negotiated between client and server despite the presence of the IPV6_USE_MIN_MTU socket option, as described in [I-D.andrews-tcp-and-ipv6-use-minmtu]. The mismatch appears to cause outbound segments greater in size than 1280 octets to be dropped before sending. Setting the local TCP MSS to 1220 octets (chosen as 1280-60, the size of the IPv6 and TCP headers with no other extension headers) was observed to be a pragmatic mitigation.

5.1.2. Serving IPv4-Only End-Users

Yeti resolvers have been successfully used by real-world end-users for general name resolution within a number of participant organisations, including resolution of names to IPv4 addresses and resolution by IPv4-only end-user devices.

Some participants, recognising the operational importance of reliability in resolver infrastructure and concerned about the stability of their IPv6 connectivity, chose to deploy Yeti resolvers in parallel to conventional resolvers, making both available to end-users. While the viability of this approach provides a useful data point, end-users using Yeti resolvers exclusively provided a better opportunity to identify and understand any failures in the Yeti DNS testbed infrastructure.
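A socket-level sketch of the TCP MSS mitigation described in Section 5.1.1 is shown below. The option names are platform-dependent (IPV6_USE_MIN_MTU is a BSD option and may be absent elsewhere), so both it and TCP_MAXSEG are attempted; the address is a placeholder.

   import socket

   sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)

   # Ask the stack to keep packets within the IPv6 minimum MTU, where
   # the option exists (BSD-derived systems).
   use_min_mtu = getattr(socket, "IPV6_USE_MIN_MTU", None)
   if use_min_mtu is not None:
       sock.setsockopt(socket.IPPROTO_IPV6, use_min_mtu, 1)

   # Clamp the TCP MSS to 1220 octets (1280 - 60) where TCP_MAXSEG is
   # available.
   if hasattr(socket, "TCP_MAXSEG"):
       sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1220)

   sock.connect(("2001:db8::1", 53))   # placeholder DM address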
848 Resolvers deployed in IPv4-only environments were able to join the 849 Yeti DNS testbed by way of upstream, dual-stack Yeti resolvers, or in 850 one case, in CERNET2, by assigning IPv4 addresses to Yeti-Root 851 servers and mapping them in dual-stack IVI translation devices 852 [RFC6219]. 854 5.2. Zone Distribution 856 The Yeti DNS testbed makes use of multiple DMs to distribute the 857 Yeti-Root zone, an approach that would allow the number of Yeti-Root 858 servers to scale to a higher number than could be supported by a 859 single distribution source and which provided redundancy. The use of 860 multiple DMs introduced some operational challenges, however, which 861 are described in the following sections. 863 5.2.1. Zone Transfers 865 Yeti-Root Servers were configured to serve the Yeti-Root zone as 866 slaves. Each slave had all DMs configured as masters, providing 867 redundancy in zone synchronisation. 869 Each DM in the Yeti testbed served a Yeti-Root zone which is 870 functionally equivalent but not congruent to that served by every 871 other DM (see Section 4.3). The differences included variations in 872 the SOA.MNAME field and, more critically, in the RRSIGs for 873 everything other than the apex DNSKEY RRSet, since signatures for all 874 other RRSets are generated using a private key that is only available 875 to the DM serving its particular variant of the zone (see 876 Section 4.2, Section 4.2.2). 878 Incremental Zone Transfer (IXFR), as described in [RFC1995], is a 879 viable mechanism to use for zone synchronization between any Yeti- 880 Root server and a consistent, single DM. However, if that Yeti-Root 881 server ever selected a different DM, IXFR would no longer be a safe 882 mechanism; structural changes between the incongruent zones on 883 different DMs would not be included in any transferred delta and the 884 result would be a zone that was not internally self-consistent. For 885 this reason the first transfer after a change of DM would require 886 AXFR, not IXFR. 888 None of the DNS software in use on Yeti-Root Servers supports this 889 mixture of IXFR/AXFR according to the master server in use. This is 890 unsurprising, given that the environment described above in the Yeti- 891 Root system is idiosyncratic; conventional zone transfer graphs 892 involve zones that are congruent between all nodes. For this reason, 893 all Yeti-Root servers are configured to use AXFR at all times, and 894 never IXFR, to ensure that zones being served are internally self- 895 consistent. 897 5.2.2. Delays in Yeti-Root Zone Distribution 899 Each Yeti DM polled the Root Server System for a new revision of the 900 root zone on an interleaved schedule, as described in Section 4.1. 901 Consequently, different DMs were expected to retrieve each revision 902 of the root zone, and make a corresponding revision of the Yeti-Root 903 zone available, at different times. The availability of a new 904 revision of the Yeti-Root zone on the first DM would typically 905 precede that of the last by 40 minutes. 907 It might be expected given this distribution mechanism that the 908 maximum latency between the publication of a new revision of the root 909 zone and the availability of the corresponding Yeti-Root zone on any 910 Yeti-Root server would be 20 minutes, since in normal operation at 911 least one DM should serve that Yeti-Zone within 20 minutes of root 912 zone publication. In practice, this was not observed. 
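Distribution lag of this kind can be observed by comparing SOA serial numbers across servers. The following is a minimal monitoring sketch using the dnspython library; the DM and Yeti-Root server addresses are placeholders.

   import dns.message
   import dns.query
   import dns.rdatatype

   dm = "2001:db8::1"                             # placeholder DM address
   yeti_roots = ["2001:db8::53", "2001:db8::54"]  # placeholder servers

   def soa_serial(server):
       # Fetch the root SOA serial from one server over UDP.
       query = dns.message.make_query(".", dns.rdatatype.SOA)
       response = dns.query.udp(query, server, timeout=5)
       return response.answer[0][0].serial

   reference = soa_serial(dm)
   for server in yeti_roots:
       print("%s lags the DM by %d serial(s)"
             % (server, reference - soa_serial(server)))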
In one case a Yeti-Root server running Bundy 1.2.0 on FreeBSD 10.2-RELEASE was found to lag root zone publication by as much as ten hours, which upon investigation was due to software defects that were subsequently corrected.

More generally, Yeti-Root servers were observed routinely to lag root zone publication by more than 20 minutes, and relatively often by more than 40 minutes. Whilst in some cases this might be assumed to be a result of connectivity problems, perhaps suppressing the delivery of NOTIFY messages, it was also observed that Yeti-Root servers receiving a NOTIFY from one DM would often send SOA queries and AXFR requests to a different DM. If that DM was not yet serving the new revision of the Yeti-Root zone, a delay in updating the Yeti-Root server would naturally result.

5.2.3. Mixed RRSIGs from different DM ZSKs

The second approach to transforming the Root Zone into the Yeti-Root Zone (Section 4.2.2) introduces a situation in which RRSIGs generated by different DM ZSKs are cached together in a single resolver.

The Yeti-Root Zone served by any particular Yeti-Root Server includes signatures generated using the ZSK of the DM that served the Yeti-Root Zone to that Yeti-Root Server. Signatures cached at resolvers might be retrieved from any Yeti-Root Server, and hence are expected to be a mixture of signatures generated by different ZSKs. Since all ZSKs can be trusted through the signature by the Yeti KSK over the DNSKEY RRSet, which includes all ZSKs, the mixture of signatures was predicted not to be a threat to reliable validation.

The configuration was first tested in BII's lab environment as a proof of concept; the resolver's DNSSEC log showed that rdataset verification reported "success" using a key (identified by keyid) present in the DNSKEY RRSet. It was later deployed on all three DMs in a carefully-coordinated change that was announced to all Yeti resolver operators and participants on the Yeti mailing list. At least 45 Yeti resolvers (deployed by Yeti operators) were monitored, with reporting triggers set to fire if anything went wrong; in addition, the Yeti mailing list remained open for error reports from other participants. The Yeti testbed has now operated in this configuration (with multiple ZSKs) for two years, demonstrating that the configuration is workable and reliable, even when individual ZSKs are rolled on different schedules.

Another consequence of this approach is that the apex DNSKEY RRSet in the Yeti-Root zone is much larger than the corresponding DNSKEY RRSet in the Root Zone. This requires more space and produces larger responses to queries for the DNSKEY RRSet, especially during KSK rollovers.

5.3. DNSSEC KSK Rollover

At the time of writing, the Root Zone KSK is expected to undergo a carefully-orchestrated rollover as described in [ICANN2016]. ICANN has commissioned various tests and has published an external test plan [ICANN2017].

Three related DNSSEC KSK rollover exercises were carried out on the Yeti DNS testbed, somewhat concurrent with the planning and execution of the rollover in the root zone. Brief descriptions of these exercises are included below.

5.3.1. Failure-Case KSK Rollover

The first KSK rollover that was executed on the Yeti DNS testbed deliberately ignored the 30-day hold-down timer specified in [RFC5011] before retiring the outgoing KSK.
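The progress of a rollover of this kind can be watched by inspecting the apex DNSKEY RRSet. The sketch below, using the dnspython library and a placeholder server address, classifies each key by its flags (257 indicates a KSK/SEP key, 256 a ZSK, and the 0x0080 bit marks a key revoked per RFC 5011) and prints its key tag.

   import dns.dnssec
   import dns.message
   import dns.query
   import dns.rdatatype

   server = "2001:db8::53"   # placeholder Yeti-Root server address

   query = dns.message.make_query(".", dns.rdatatype.DNSKEY,
                                   use_edns=0, payload=4096)
   response = dns.query.tcp(query, server, timeout=5)

   for rrset in response.answer:
       if rrset.rdtype != dns.rdatatype.DNSKEY:
           continue
       for key in rrset:
           role = "KSK" if key.flags & 0x0001 else "ZSK"
           revoked = " (revoked)" if key.flags & 0x0080 else ""
           print("keytag=%d flags=%d %s%s"
                 % (dns.dnssec.key_id(key), key.flags, role, revoked))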
982 It was confirmed that clients of some (but not all) validating Yeti 983 resolvers experienced resolution failures (received SERVFAIL 984 responses) following this change. Those resolvers required 985 administrator intervention to install a functional trust anchor 986 before resolution was restored. 988 5.3.2. KSK Rollover vs. BIND9 Views 990 The second Yeti KSK rollover was designed with similar phases to the 991 ICANN's KSK rollover roll, although with modified timings to reduce 992 the time required to complete the process. The "slot" used in this 993 rollover was ten days long, as follows: 995 +--------------+----------+----------+ 996 | | 19444 | New Key | 997 +--------------+----------+----------+ 998 | slot 1 | pub+sign | | 999 | slot 2,3,4,5 | pub+sign | pub | 1000 | slot 6,7 | pub | pub+sign | 1001 | slot 8 | revoke | pub+sign | 1002 | slot 9 | | pub+sign | 1003 +--------------+----------+----------+ 1005 During this rollover exercise, a problem was observed on one Yeti 1006 resolver that was running BIND 9.10.4-p2 [KROLL-ISSUE]. That 1007 resolver was configured with multiple views serving clients in 1008 different subnets at the time that the KSK rollover began. DNSSEC 1009 validation failures were observed following the completion of the KSK 1010 rollover, triggered by the addition of a new view, intended to serve 1011 clients from a new subnet. 1013 BIND 9.10 requires "managed-keys" configuration to be specified in 1014 every view, a detail that was apparently not obvious to the operator 1015 in this case and which was subsequently highlighted by ISC in their 1016 general advice relating to KSK rollover in the root zone to users of 1017 BIND 9 [10]. When the "managed-keys" configuration is present in 1018 every view that is configured to perform validation, trust anchors 1019 for all views are updated during a KSK rollover. 1021 5.3.3. Large Responses during KSK Rollover 1023 Since a KSK rollover necessarily involves the publication of outgoing 1024 and incoming public keys simultaneously, an increase in the size of 1025 DNSKEY responses is expected. The third KSK rollover carried out on 1026 the Yeti DNS testbed was accompanied by a concerted effort to observe 1027 response sizes and their impact on end-users. 1029 As described in Section 4.2.2, in the Yeti DNS testbed each DM can 1030 maintain control of its own set of ZSKs, which can undergo rollover 1031 independently. During a KSK rollover where concurrent ZSK rollovers 1032 are executed by each of three DMs the maximum number of apex DNSKEY 1033 RRs present is eight (incoming and outcoming KSK, plus incoming and 1034 outgoing of each of three ZSKs). In practice, however, such 1035 concurrency did not occur; only the BII ZSK was rolled during the KSK 1036 rollover, and hence only three DNSKEY RRSet configurations were 1037 observed: 1039 o 3 ZSK and 2 KSK, DNSKEY response of 1975 octets; 1041 o 3 ZSK and 1 KSK, DNSKEY response of 1414 octets; and 1043 o 2 ZSK and 1 KSK, DNSKEY response of 1139 octets. 1045 RIPE Atlas probes were used as described in Section 5.1.1 to send 1046 DNSKEY queries directly to Yeti-Root servers. The numbers of queries 1047 and failures were recorded and categorized according to the response 1048 sizes at the time the queries were sent. 
A summary of the results ([YetiLR]) is as follows:

   +---------------+----------+---------------+--------------+
   | Response Size | Failures | Total Queries | Failure Rate |
   +---------------+----------+---------------+--------------+
   | 1139          | 274      | 64252         | 0.0042       |
   | 1414          | 3141     | 126951        | 0.0247       |
   | 1975          | 2920     | 42529         | 0.0687       |
   +---------------+----------+---------------+--------------+

The general approach illustrated briefly here provides a useful example of how the design of the Yeti DNS testbed, separate from the Root Server System but constructed as a live testbed on the Internet, facilitates the use of general-purpose active measurement facilities such as RIPE Atlas probes as well as internal passive measurement such as packet capture.

5.4. Capture of Large DNS Response

Packet capture is a common approach in production DNS systems whose operators require fine-grained insight into their traffic. For authoritative servers, capture of inbound query traffic is often sufficient, since responses can be synthesized with knowledge of the zones being served at the time the query was received. Queries are generally small enough not to be fragmented, and even with TCP transport are generally packed within a single segment.

The Yeti DNS testbed has different requirements; in particular there is a desire to compare responses obtained from the Yeti infrastructure with those received from the Root Server System in response to a single query stream (e.g. using YmmV as described in Appendix D). Some Yeti-Root servers were capable of recovering complete DNS messages from within nameservers, e.g. using dnstap; however, not all servers provided that functionality and a consistent approach was desirable.

The requirement for passive capture of responses from the wire, together with experiments that were expected (and in some cases designed) to trigger fragmentation and use of TCP transport, led to the development of a new tool, PcapParser, to perform fragment and TCP stream reassembly from raw packet capture data. A brief description of PcapParser is included in Appendix D.

5.5. Automated Hints File Maintenance

Renumbering events in the Root Server System are relatively rare. Although each such event is accompanied by the publication of an updated hints file in standard locations, the task of updating local copies of that file used by DNS resolvers is manual, and the process has an observably long tail: for example, in 2015 J-Root was still receiving traffic at its old address some thirteen years after renumbering [Wessels2015].

The observed impact of these old, deployed hints files is minimal, likely due to the very low frequency of such renumbering events. Even the oldest hints files would still contain some accurate root server addresses from which priming responses could be obtained.

By contrast, due to the experimental nature of the system and the fact that it is operated mainly by volunteers, Yeti-Root Servers are added, removed and renumbered with much greater frequency. A tool to facilitate automatic maintenance of hints files was therefore created, [hintUpdate].

The automated procedure followed by the hintUpdate tool is as follows.

1.
Use the local resolver to obtain a response to the query ./IN/NS; 1119 2. Use the local resolver to obtain a set of IPv4 and IPv6 addresses 1120 for each name server; 1122 3. Validate all signatures obtained from the local resolver, and 1123 confirm that all data is signed; 1125 4. Compare the data obtained to that contained within the currently-active 1126 hints file; if there are differences, rotate the old one 1127 away and replace it with a new one. 1129 This tool would not function unmodified when used in the Root Server 1130 System, since the names of individual Root Servers (e.g. 1131 A.ROOT-SERVERS.NET) are not DNSSEC signed. All Yeti-Root Server names are 1132 DNSSEC signed, however, and hence this tool functions as expected in 1133 that environment. 1135 5.6. Root Label Compression in Knot DNS Server 1137 [RFC1035] specifies that domain names can be compressed when encoded 1138 in DNS messages, being represented as one of: 1140 1. a sequence of labels ending in a zero octet; 1142 2. a pointer; or 1144 3. a sequence of labels ending with a pointer. 1146 The purpose of this flexibility is to reduce the size of domain names 1147 encoded in DNS messages. 1149 It was observed that Yeti-Root Servers running Knot 2.0 would 1150 compress the zero-length label (the root domain, often represented as 1151 ".") using a pointer to an earlier occurrence of the same label. Although legal, this 1152 encoding increases the encoded size of the root label from one octet 1153 to two; it was also found to break some client software, in 1154 particular the Go DNS library. Bug reports were filed against both 1155 Knot and the Go DNS library, and both were resolved in subsequent 1156 releases. 1158 6. Conclusions 1160 Yeti DNS was designed and implemented as a live DNS root system 1161 testbed. It serves a root zone ("Yeti-Root" in this document) 1162 derived from the root zone published by the IANA with only 1163 those structural modifications necessary to ensure its function in 1164 the testbed system. The Yeti DNS testbed has proven to be a useful 1165 platform to address many questions that would be challenging to 1166 answer using the production Root Server System, such as those 1167 included in Section 3. 1169 Indicative findings following from the construction and operation of 1170 the Yeti DNS testbed include: 1172 o Operation in a pure IPv6-only environment; confirmation of a 1173 significant failure rate in the transmission of large responses 1174 (~7%), but no other persistent failures observed. Two cases were observed in 1175 which Yeti-Root servers failed to retrieve the Yeti-Root zone due 1176 to fragmentation of TCP segments; these were mitigated by setting a TCP MSS 1177 of 1220 octets (see Section 5.1.1). 1179 o Successful operation with three autonomous Yeti-Root zone signers 1180 and 25 Yeti-Root servers, and confirmation that IXFR is not an 1181 appropriate transfer mechanism for zones that are structurally 1182 incongruent across different transfer paths (see Section 5.2). 1184 o ZSK size increased to 2048 bits and multiple KSK rollovers 1185 executed to exercise RFC 5011 support in validating resolvers; 1186 identification of pitfalls relating to views in BIND9 when 1187 configured with "managed-keys" (see Section 5.3).
1189 o Use of natural (non-normalized) names for Yeti-Root servers 1190 exposed some differences between implementations in the inclusion 1191 of additional-section glue in responses to priming queries; 1192 however, despite this inefficiency, Yeti resolvers were observed 1193 to function adequately (see Section 4.5). 1195 o It was observed that Knot 2.0 performed label compression on the 1196 root (empty) label. This results in an increased encoding size 1197 for references to the root label, since a pointer is encoded as 1198 two octets whilst the root label itself only requires one (see 1199 Section 5.6). 1201 o Some tools were developed in response to the operational 1202 experience of running and using the Yeti DNS testbed: DNS fragment 1203 and DNS Additional Truncated Response (ATR) for large DNS 1204 responses, a BIND9 patch for additional section glue, YmmV and 1205 IPv6 defrag for capturing and mirroring traffic. In addition, a 1206 tool to facilitate automatic maintenance of hints files was 1207 created (see Appendix D). 1209 The Yeti DNS testbed was used only by end-users whose local 1210 infrastructure providers had made the conscious decision to do so, as 1211 is appropriate for an experimental, non-production system. No 1212 serious user complaints reached the Yeti mailing list during normal 1213 operation. Although adding more instances to the Yeti root system might 1214 further improve the quality of service, the observed performance is generally 1215 accepted as good enough to serve the 1216 purpose of a DNS root testbed. 1218 The experience gained during the operation of the Yeti DNS testbed 1219 suggested several topics worthy of further study: 1221 o Priming Truncation and TCP-only Yeti-Root servers: observe and 1222 measure the worst-possible case for priming truncation by 1223 responding with TC=1 to all priming queries received over UDP 1224 transport, forcing clients to retry using TCP. This should also 1225 give some insight into the usefulness of TCP-only DNS in general. 1227 o KSK ECDSA Rollover: one possible way to reduce DNSKEY response 1228 sizes is to change to an elliptic curve signing algorithm. While 1229 in principle this can be done separately for the KSK and the ZSK, 1230 the RIPE NCC has recently carried out research showing that some 1231 resolvers require that both KSK and ZSK use the same algorithm. 1232 This means that an algorithm roll also involves a KSK roll. 1233 Performing an algorithm roll at the root would be an interesting 1234 challenge. 1236 o Sticky Notify for zone transfer: the non-applicability of IXFR as 1237 a zone transfer mechanism in the Yeti DNS testbed could be 1238 mitigated by the implementation of a sticky preference for the master 1239 server used by each slave, such that an initial AXFR response could be 1240 followed up with IXFR requests without compromising zone integrity 1241 in the case (as with Yeti) that equivalent but incongruent 1242 versions of a zone are served by different masters. 1244 o Key distribution for zone transfer credentials: the use of a 1245 shared secret between slave and master requires key distribution 1246 and management whose scaling properties are not ideally suited to 1247 systems with large numbers of transfer clients. Other approaches 1248 for key distribution and authentication could be considered. 1250 o DNS is a tree-structured, hierarchical database: mathematically, it has 1251 a root node, and child nodes depend on their parent nodes.
Any 1252 failure or instability of a parent node (the root, in Yeti's case), whether 1253 caused by human error, malicious 1254 attack or even a natural disaster such as an earthquake, may therefore affect its child nodes. It has been proposed to define technologies 1255 and practices that allow any organization, from the smallest company 1256 to a nation, to be self-sufficient in its DNS. 1258 o In Section 3.12 of [RFC8324], a "Centrally Controlled Root" is 1259 viewed as an issue for the DNS. A future study could 1260 test technical tools such as blockchains [11] either to 1261 remove the technical requirement for a central authority over the 1262 root or to enhance the security and stability of the existing root. 1264 7. Security Considerations 1266 As described in Section 4.4, service metadata is synchronized among the three 1267 DMs using Git. Any security issue affecting Git may therefore affect Yeti DM 1268 operation. For example, an attacker who compromised one DM's Git 1269 repository could push unwanted changes to the Yeti DM system, which could 1270 introduce a rogue root server or a bad key for a period of time. 1272 A Yeti resolver needs bootstrapping files, such as the 1273 hints file and the Yeti trust anchor, in order to join the testbed. All required information is 1274 published on yeti-dns.org and GitHub. If an attacker tampered with those 1275 websites, for example by substituting a fake page, a new resolver could be misled and 1276 configured with a bad root. 1278 DNSSEC is an important area of study in the Yeti DNS testbed. To reduce 1279 the centralization inherent in DNSSEC signing of the root zone, the Yeti-Root 1280 zone is signed using multiple, independently operated DNSSEC signers and 1281 multiple corresponding ZSKs (see Section 4.2). To exercise the mechanisms used in ICANN's KSK 1282 rollover, the Yeti KSK was rolled three times according to [RFC5011], and 1283 several observations resulted (see Section 5.3). In addition, larger RSA 1284 key sizes were tested in the testbed before a 2048-bit key was used in Verisign's ZSK 1285 signing process. 1287 8. IANA Considerations 1289 This document requests no action of the IANA. 1291 9. Acknowledgments 1293 First, the authors would like to acknowledge the contributions of 1294 the people who were involved in the implementation and operation of 1295 the Yeti DNS testbed, donating their time and resources. They are: 1297 Tomohiro Ishihara, Antonio Prado, Stephane Bortzmeyer, Mickael 1298 Jouanne, Pierre Beyssac, Joao Damas, Pavel Khramtsov, Dmitry 1299 Burkov, Dima Burkov, Kovalenko Dmitry, Otmar Lendl, Praveen Misra, 1300 Carsten Strotmann, Edwin Gomez, Daniel Stirnimann, Andreas Schulze, 1301 Remi Gacogne, Guillaume de Lafond, Yves Bovard, Hugo Salgado, Kees 1302 Monshouwer, Li Zhen, Daobiao Gong, Andreas Schulze, James Cloos, 1303 Runxia Wan 1305 Thanks are also given to all the people who offered valuable advice and comments 1306 on Yeti, either in face-to-face meetings or remotely via calls and the 1307 mailing list.
Some of them are acknowledged below: 1309 Wu Hequan, Zhou Hongren, Cheng Yunqing, Xia Chongfeng, Tang 1310 Xiongyan, Li Yuxiao, Feng Ming, Zhang Tongxu, Duan Xiaodong, Wang 1311 Yang, Wang JiYe, Wang Lei, Zhao Zhifeng, Chen Wei, Wang Wei, Wang 1312 Jilong, Du Yuejing, Tan XiaoSheng, Chen Shangyi, Huang Chenqing, 1313 Ma Yan, Li Xing, Cui Yong, Bi Jun, Duan Haixing, Marc Blanchet, 1314 Andrew Sullivan, Suzanne Wolf, Terry Manderson, Geoff Huston, Jaap 1315 Akkerhuis, Kaveh Ranjbar, Jun Murai, Paul Wilson, Kilnam Chonm 1317 The editors also acknowledge the assistance of the Independent 1318 Submissions Editorial Board, and of the following reviewers whose 1319 opinions helped improve the clarity of this document: 1321 Joe Abley, Paul Mockapetris, Subramanian Moonesamy 1323 10. References 1325 10.1. Normative References 1327 [RFC1034] Mockapetris, P., "Domain names - concepts and facilities", 1328 STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987, 1329 . 1331 [RFC1035] Mockapetris, P., "Domain names - implementation and 1332 specification", STD 13, RFC 1035, DOI 10.17487/RFC1035, 1333 November 1987, . 1335 [RFC1995] Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995, 1336 DOI 10.17487/RFC1995, August 1996, 1337 . 1339 [RFC1996] Vixie, P., "A Mechanism for Prompt Notification of Zone 1340 Changes (DNS NOTIFY)", RFC 1996, DOI 10.17487/RFC1996, 1341 August 1996, . 1343 [RFC5011] StJohns, M., "Automated Updates of DNS Security (DNSSEC) 1344 Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011, 1345 September 2007, . 1347 [RFC5890] Klensin, J., "Internationalized Domain Names for 1348 Applications (IDNA): Definitions and Document Framework", 1349 RFC 5890, DOI 10.17487/RFC5890, August 2010, 1350 . 1352 10.2. Informative References 1354 [hintUpdate] 1355 "Hintfile Auto Update", 2015, 1356 . 1358 [How_ATR_work] 1359 APNIC, "How well does ATR actually work?", April 2018, 1360 . 1363 [I-D.andrews-tcp-and-ipv6-use-minmtu] 1364 Andrews, M., "TCP Fails To Respect IPV6_USE_MIN_MTU", 1365 draft-andrews-tcp-and-ipv6-use-minmtu-04 (work in 1366 progress), October 2015. 1368 [I-D.muks-dns-message-fragments] 1369 Sivaraman, M., Kerr, S., and D. Song, "DNS message 1370 fragments", draft-muks-dns-message-fragments-00 (work in 1371 progress), July 2015. 1373 [I-D.song-atr-large-resp] 1374 Song, L., "ATR: Additional Truncated Response for Large 1375 DNS Response", draft-song-atr-large-resp-00 (work in 1376 progress), September 2017. 1378 [I-D.taylor-v6ops-fragdrop] 1379 Jaeggli, J., Colitti, L., Kumari, W., Vyncke, E., Kaeo, 1380 M., and T. Taylor, "Why Operators Filter Fragments and 1381 What It Implies", draft-taylor-v6ops-fragdrop-02 (work in 1382 progress), December 2013. 1384 [ICANN2010] 1385 "DNSSEC Key Management Implementation for the Root Zone", 1386 May 2010, . 1390 [ICANN2016] 1391 "Root Zone KSK Rollover Plan", 2016, 1392 . 1395 [ICANN2017] 1396 "2017 KSK Rollover External Test Plan", July 2016, 1397 . 1400 [IPv6-frag-DNS] 1401 APNIC, "Dealing with IPv6 fragmentation in the DNS", 1402 August 2017, . 1405 [ISC-TN-2003-1] 1406 Abley, J., "Hierarchical Anycast for Global Service 1407 Distribution", March 2003, 1408 . 1410 [ITI2014] "Identifier Technology Innovation Report", May 2014, 1411 . 1414 [KROLL-ISSUE] 1415 "A DNSSEC issue during Yeti KSK rollover", 2016, 1416 . 1419 [RFC2826] Internet Architecture Board, "IAB Technical Comment on the 1420 Unique DNS Root", RFC 2826, DOI 10.17487/RFC2826, May 1421 2000, . 1423 [RFC2845] Vixie, P., Gudmundsson, O., Eastlake 3rd, D., and B.
1424 Wellington, "Secret Key Transaction Authentication for DNS 1425 (TSIG)", RFC 2845, DOI 10.17487/RFC2845, May 2000, 1426 . 1428 [RFC6219] Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The 1429 China Education and Research Network (CERNET) IVI 1430 Translation Design and Deployment for the IPv4/IPv6 1431 Coexistence and Transition", RFC 6219, 1432 DOI 10.17487/RFC6219, May 2011, 1433 . 1435 [RFC6891] Damas, J., Graff, M., and P. Vixie, "Extension Mechanisms 1436 for DNS (EDNS(0))", STD 75, RFC 6891, 1437 DOI 10.17487/RFC6891, April 2013, 1438 . 1440 [RFC7720] Blanchet, M. and L-J. Liman, "DNS Root Name Service 1441 Protocol and Deployment Requirements", BCP 40, RFC 7720, 1442 DOI 10.17487/RFC7720, December 2015, 1443 . 1445 [RFC7872] Gont, F., Linkova, J., Chown, T., and W. Liu, 1446 "Observations on the Dropping of Packets with IPv6 1447 Extension Headers in the Real World", RFC 7872, 1448 DOI 10.17487/RFC7872, June 2016, 1449 . 1451 [RFC8109] Koch, P., Larson, M., and P. Hoffman, "Initializing a DNS 1452 Resolver with Priming Queries", BCP 209, RFC 8109, 1453 DOI 10.17487/RFC8109, March 2017, 1454 . 1456 [RFC8324] Klensin, J., "DNS Privacy, Authorization, Special Uses, 1457 Encoding, Characters, Matching, and Root Structure: Time 1458 for Another Look?", RFC 8324, DOI 10.17487/RFC8324, 1459 February 2018, . 1461 [RRL] Vixie, P. and V. Schryver, "Response Rate Limiting (RRL)", 1462 June 2012, . 1464 [RSSAC001] 1465 "Service Expectations of Root Servers", December 2015, 1466 . 1469 [RSSAC023] 1470 "History of the Root Server System", November 2016, 1471 . 1474 [TNO2009] Gijsen, B., Jamakovic, A., and F. Roijers, "Root Scaling 1475 Study: Description of the DNS Root Scaling Model", 1476 September 2009, 1477 . 1480 [Wessels2015] 1481 Wessels, D., "Thirteen Years of Old J-Root", 2015, 1482 . 1486 [YetiLR] "Observation on Large response issue during Yeti KSK 1487 rollover", August 2017, . 1491 10.3. URIs 1493 [1] https://newgtlds.icann.org/ 1495 [2] http://yeti-dns.org/yeti/blog/2018/05/01/Experiment-plan-for- 1496 PINZ.html 1498 [3] https://github.com/BII-Lab/Yeti-Project/blob/master/doc/Yeti-DM- 1499 Sync-MZSK.md 1501 [4] https://datatracker.ietf.org/wg/sunset4/about/ 1503 [5] https://www.dns-oarc.net/tools/dnscap 1505 [6] https://packages.debian.org/sid/pcaputils 1507 [7] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=545985 1509 [8] http://lists.yeti-dns.org/pipermail/discuss/ 1511 [9] https://yeti-dns.org/blog.html 1513 [10] https://www.isc.org/blogs/2017-root-key-rollover-what-does-it- 1514 mean-for-bind-users/ 1516 [11] https://en.wikipedia.org/wiki/Blockchain 1518 Appendix A. Yeti-Root Hints File 1520 The following hints file (complete and accurate at the time of 1521 writing) causes a DNS resolver to use the Yeti DNS testbed in place 1522 of the production Root Server System and hence participate in 1523 experiments running on the testbed. 1525 Note that some lines have been wrapped in the text that follows in 1526 order to fit within the production constraints of this document. 1527 Wrapped lines are indicated with a backslash character ("\"), 1528 following common convention. 1530 . 3600000 IN NS bii.dns-lab.net 1531 bii.dns-lab.net 3600000 IN AAAA 240c:f:1:22::6 1532 . 3600000 IN NS yeti-ns.tisf.net 1533 yeti-ns.tisf.net 3600000 IN AAAA 2001:559:8000::6 1534 . 3600000 IN NS yeti-ns.wide.ad.jp 1535 yeti-ns.wide.ad.jp 3600000 IN AAAA 2001:200:1d9::35 1536 .
3600000 IN NS yeti-ns.as59715.net 1537 yeti-ns.as59715.net 3600000 IN AAAA \ 1538 2a02:cdc5:9715:0:185:5:203:53 1539 . 3600000 IN NS dahu1.yeti.eu.org 1540 dahu1.yeti.eu.org 3600000 IN AAAA \ 1541 2001:4b98:dc2:45:216:3eff:fe4b:8c5b 1542 . 3600000 IN NS ns-yeti.bondis.org 1543 ns-yeti.bondis.org 3600000 IN AAAA 2a02:2810:0:405::250 1544 . 3600000 IN NS yeti-ns.ix.ru 1545 yeti-ns.ix.ru 3600000 IN AAAA 2001:6d0:6d06::53 1546 . 3600000 IN NS yeti.bofh.priv.at 1547 yeti.bofh.priv.at 3600000 IN AAAA 2a01:4f8:161:6106:1::10 1548 . 3600000 IN NS yeti.ipv6.ernet.in 1549 yeti.ipv6.ernet.in 3600000 IN AAAA 2001:e30:1c1e:1::333 1550 . 3600000 IN NS yeti-dns01.dnsworkshop.org 1551 yeti-dns01.dnsworkshop.org \ 1552 3600000 IN AAAA 2001:1608:10:167:32e::53 1553 . 3600000 IN NS yeti-ns.conit.co 1554 yeti-ns.conit.co 3600000 IN AAAA \ 1555 2604:6600:2000:11::4854:a010 1556 . 3600000 IN NS dahu2.yeti.eu.org 1557 dahu2.yeti.eu.org 3600000 IN AAAA 2001:67c:217c:6::2 1558 . 3600000 IN NS yeti.aquaray.com 1559 yeti.aquaray.com 3600000 IN AAAA 2a02:ec0:200::1 1560 . 3600000 IN NS yeti-ns.switch.ch 1561 yeti-ns.switch.ch 3600000 IN AAAA 2001:620:0:ff::29 1562 . 3600000 IN NS yeti-ns.lab.nic.cl 1563 yeti-ns.lab.nic.cl 3600000 IN AAAA 2001:1398:1:21::8001 1564 . 3600000 IN NS yeti-ns1.dns-lab.net 1565 yeti-ns1.dns-lab.net 3600000 IN AAAA 2001:da8:a3:a027::6 1566 . 3600000 IN NS yeti-ns2.dns-lab.net 1567 yeti-ns2.dns-lab.net 3600000 IN AAAA 2001:da8:268:4200::6 1568 . 3600000 IN NS yeti-ns3.dns-lab.net 1569 yeti-ns3.dns-lab.net 3600000 IN AAAA 2400:a980:30ff::6 1570 . 3600000 IN NS \ 1571 ca978112ca1bbdcafac231b39a23dc.yeti-dns.net 1572 ca978112ca1bbdcafac231b39a23dc.yeti-dns.net \ 1573 3600000 IN AAAA 2c0f:f530::6 1574 . 3600000 IN NS \ 1575 3e23e8160039594a33894f6564e1b1.yeti-dns.net 1576 3e23e8160039594a33894f6564e1b1.yeti-dns.net \ 1577 3600000 IN AAAA 2803:80:1004:63::1 1578 . 3600000 IN NS \ 1579 3f79bb7b435b05321651daefd374cd.yeti-dns.net 1580 3f79bb7b435b05321651daefd374cd.yeti-dns.net \ 1581 3600000 IN AAAA 2401:c900:1401:3b:c::6 1582 . 3600000 IN NS \ 1583 xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c 1584 xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c \ 1585 3600000 IN AAAA 2001:e30:1c1e:10::333 1586 . 3600000 IN NS yeti1.ipv6.ernet.in 1587 yeti1.ipv6.ernet.in 3600000 IN AAAA 2001:e30:187d::333 1588 . 3600000 IN NS yeti-dns02.dnsworkshop.org 1589 yeti-dns02.dnsworkshop.org \ 1590 3600000 IN AAAA 2001:19f0:0:1133::53 1591 . 3600000 IN NS yeti.mind-dns.nl 1592 yeti.mind-dns.nl 3600000 IN AAAA 2a02:990:100:b01::53:0 1594 Appendix B. Yeti-Root Server Priming Response 1596 Here is the reply of a Yeti root name server to a priming request. 1597 The authoritative server runs NSD. 1599 ... 1600 ;; Got answer: 1601 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62391 1602 ;; flags: qr aa rd; QUERY: 1, ANSWER: 26, AUTHORITY: 0, ADDITIONAL: 7 1603 ;; WARNING: recursion requested but not available 1605 ;; OPT PSEUDOSECTION: 1606 ; EDNS: version: 0, flags: do; udp: 1460 1607 ;; QUESTION SECTION: 1608 ;. IN NS 1610 ;; ANSWER SECTION: 1611 . 86400 IN NS bii.dns-lab.net. 1612 . 86400 IN NS yeti.bofh.priv.at. 1613 . 86400 IN NS yeti.ipv6.ernet.in. 1614 . 86400 IN NS yeti.aquaray.com. 1615 . 86400 IN NS yeti.jhcloos.net. 1616 . 86400 IN NS yeti.mind-dns.nl. 1617 . 86400 IN NS dahu1.yeti.eu.org. 1618 . 86400 IN NS dahu2.yeti.eu.org. 1619 . 86400 IN NS yeti1.ipv6.ernet.in. 1620 . 86400 IN NS ns-yeti.bondis.org. 1621 . 86400 IN NS yeti-ns.ix.ru. 1622 . 86400 IN NS yeti-ns.lab.nic.cl. 1623 . 86400 IN NS yeti-ns.tisf.net. 1624 . 
86400 IN NS yeti-ns.wide.ad.jp. 1625 . 86400 IN NS yeti-ns.datev.net. 1626 . 86400 IN NS yeti-ns.switch.ch. 1627 . 86400 IN NS yeti-ns.as59715.net. 1628 . 86400 IN NS yeti-ns1.dns-lab.net. 1629 . 86400 IN NS yeti-ns2.dns-lab.net. 1630 . 86400 IN NS yeti-ns3.dns-lab.net. 1631 . 86400 IN NS xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c. 1632 . 86400 IN NS yeti-dns01.dnsworkshop.org. 1633 . 86400 IN NS yeti-dns02.dnsworkshop.org. 1634 . 86400 IN NS 3f79bb7b435b05321651daefd374cd.yeti-dns.net. 1635 . 86400 IN NS ca978112ca1bbdcafac231b39a23dc.yeti-dns.net. 1636 . 86400 IN RRSIG NS 8 0 86400 ( 1637 20171121050105 20171114050105 26253 . 1638 FUvezvZgKtlLzQx2WKyg+D6dw/pITcbuZhzStZfg+LNa 1639 DjLJ9oGIBTU1BuqTujKHdxQn0DcdFh9QE68EPs+93bZr 1640 VlplkmObj8f0B7zTQgGWBkI/K4Tn6bZ1I7QJ0Zwnk1mS 1641 BmEPkWmvo0kkaTQbcID+tMTodL6wPAgW1AdwQUInfy21 1642 p+31GGm3+SU6SJsgeHOzPUQW+dUVWmdj6uvWCnUkzW9p 1643 +5en4+85jBfEOf+qiyvaQwUUe98xZ1TOiSwYvk5s/qiv 1644 AMjG6nY+xndwJUwhcJAXBVmGgrtbiR8GiGZfGqt748VX 1645 4esLNtD8vdypucffem6n0T0eV1c+7j/eIA== ) 1647 ;; ADDITIONAL SECTION: 1648 bii.dns-lab.net. 86400 IN AAAA 240c:f:1:22::6 1649 yeti.bofh.priv.at. 86400 IN AAAA 2a01:4f8:161:6106:1::10 1650 yeti.ipv6.ernet.in. 86400 IN AAAA 2001:e30:1c1e:1::333 1651 yeti.aquaray.com. 86400 IN AAAA 2a02:ec0:200::1 1652 yeti.jhcloos.net. 86400 IN AAAA 2001:19f0:5401:1c3::53 1653 yeti.mind-dns.nl. 86400 IN AAAA 2a02:990:100:b01::53:0 1655 ;; Query time: 163 msec 1656 ;; SERVER: 2001:4b98:dc2:45:216:3eff:fe4b:8c5b#53 1657 ;; WHEN: Tue Nov 14 16:45:37 +08 2017 1658 ;; MSG SIZE rcvd: 1222 1660 Appendix C. Active IPv6 Prefixes in Yeti DNS testbed 1662 +----------------------+---------------------------------+----------+ 1663 | Prefix | Originator | Location | 1664 +----------------------+---------------------------------+----------+ 1665 | 240c::/28 | BII | CN | 1666 | 2001:6d0:6d06::/48 | MSK-IX | RU | 1667 | 2001:1488::/32 | CZ.NIC | CZ | 1668 | 2001:620::/32 | SWITCH | CH | 1669 | 2001:470::/32 | Hurricane Electric, Inc. | US | 1670 | 2001:0DA8:0202::/48 | BUPT6-CERNET2 | CN | 1671 | 2001:19f0:6c00::/38 | Choopa, LLC | US | 1672 | 2001:da8:205::/48 | BJTU6-CERNET2 | CN | 1673 | 2001:62a::/31 | Vienna University Computer | AT | 1674 | | Center | | 1675 | 2001:67c:217c::/48 | AFNIC | FR | 1676 | 2a02:2478::/32 | Profitbricks GmbH | DE | 1677 | 2001:1398:1::/48 | NIC Chile | CL | 1678 | 2001:4490:dc4c::/46 | NIB (National Internet | IN | 1679 | | Backbone) | | 1680 | 2001:4b98::/32 | Gandi | FR | 1681 | 2a02:aa8:0:2000::/52 | T-Systems-Eltec | ES | 1682 | 2a03:b240::/32 | Netskin GmbH | CH | 1683 | 2801:1a0::/42 | Universidad de Ibague | CO | 1684 | 2a00:1cc8::/40 | ICT Valle Umbra s.r.l. | IT | 1685 | 2a02:cdc0::/29 | ORG-CdSB1-RIPE | IT | 1686 +----------------------+---------------------------------+----------+ 1688 Appendix D. Tools developed for Yeti DNS testbed 1690 Various tools were developed to support the Yeti DNS testbed, a 1691 selection of which are described briefly below. 1693 YmmV ("Yeti Many Mirror Verifier") is designed to make it easy and 1694 safe for a DNS administrator to capture traffic sent from a resolver 1695 to the Root Server System and to replay it towards Yeti-Root servers. 1696 Responses from both systems are recorded and compared, and 1697 differences are logged. See . 1699 PcapParser is a module used by YmmV which reassembles fragmented IPv6 1700 datagrams and TCP segments from a PCAP archive and extracts DNS 1701 messages contained within them. See . 
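To illustrate the reassembly step that PcapParser performs, the following minimal Python sketch (not the actual PcapParser code, which also handles PCAP parsing and TCP stream reassembly) shows the core logic of reconstructing a payload from IPv6 fragments once their offsets and "more fragments" flags are known. The input representation is a simplifying assumption, and offsets are expressed directly in octets rather than in the 8-octet units carried in the IPv6 Fragment header:

   # Toy IPv6 fragment reassembly: ignores the Identification field,
   # reassembly timeouts and overlapping fragments.
   # Each fragment is (offset_in_octets, more_fragments_flag, payload).

   def reassemble(fragments):
       """Return the reassembled payload, or None if a piece is missing."""
       data = bytearray()
       expected_offset = 0
       saw_last = False
       for offset, more, payload in sorted(fragments, key=lambda f: f[0]):
           if offset != expected_offset:
               return None              # gap: a fragment was not captured
           data.extend(payload)
           expected_offset += len(payload)
           saw_last = not more
       return bytes(data) if saw_last else None

   # Example: a 2000-octet DNS response split into two fragments.
   first = (0, True, b"\x00" * 1232)
   second = (1232, False, b"\x01" * 768)
   assert reassemble([second, first]) == b"\x00" * 1232 + b"\x01" * 768

A real implementation must additionally group fragments by their (source address, destination address, Identification) tuple before reassembly and discard incomplete groups after a timeout.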
1704 DNS-layer-fragmentation implements DNS proxies that perform 1705 application-level fragmentation of DNS messages, based on 1706 [I-D.muks-dns-message-fragments]. The idea with these proxies is to 1707 explore splitting DNS messages in the protocol itself, so that they will 1708 not be fragmented by the IP layer. See . 1711 DNS_ATR is an implementation of DNS Additional Truncated Response 1712 (ATR), as described in [I-D.song-atr-large-resp] [How_ATR_work]. 1713 DNS_ATR acts as a proxy between resolver and authoritative servers, 1714 forwarding queries and responses as a silent and transparent 1715 listener. Responses that are larger than a nominated threshold (1280 1716 octets by default) trigger additional truncated responses to be sent 1717 immediately following the large response. See 1718 . 1720 Appendix E. Controversy 1722 The Yeti DNS Project, its infrastructure, and the various experiments 1723 that have been carried out using that infrastructure have been 1724 described by people involved in the project in many public meetings 1725 at technical venues since its inception. The mailing lists through 1726 which the operation of the infrastructure has been coordinated are 1727 open to join, and their archives are public. The project as a whole 1728 has been the subject of robust public discussion. 1730 Some commentators have expressed concern that the Yeti DNS Project 1731 is, in effect, operating an alternate root, challenging the IAB's 1732 comments published in [RFC2826]. Other such alternate roots are 1733 considered to have caused end-user confusion and instability in the 1734 namespace of the DNS by the introduction of new top-level labels or 1735 the different use of top-level labels present in the Root Server 1736 System. The coordinators of the Yeti DNS Project do not consider the 1737 Yeti DNS Project to be an alternate root in this sense, since by 1738 design the namespace enabled by the Yeti-Root Zone is identical to 1739 that of the Root Zone. 1741 Some commentators have expressed concern that the Yeti DNS Project 1742 seeks to influence or subvert administrative policy relating to the 1743 Root Server System, in particular in the use of DNSSEC trust anchors 1744 not published by the IANA and the use of Yeti-Root Servers in regions 1745 where governments or other organisations have expressed interest in 1746 operating a Root Server. The coordinators of the Yeti-Root project 1747 observe that their mandate is entirely technical and has no ambition 1748 to influence policy directly; they do hope, however, that technical 1749 findings from the Yeti DNS Project might act as a useful resource for 1750 the wider technical community. 1752 Appendix F. About This Document 1754 This section (and its sub-sections) has been included as an aid to 1755 reviewers of this document, and should be removed prior to 1756 publication. 1758 F.1. Venue 1760 The authors propose that this document proceed as an Independent 1761 Submission, since it documents work that, although relevant to the 1762 IETF, has been carried out externally to any IETF working group. 1763 However, a suitable venue for discussion of this document is the 1764 dnsop working group. 1766 Information about the Yeti DNS project and discussion relating to 1767 particular experiments described in this document can be found at 1768 . 1770 This document is maintained in GitHub at . 1773 F.2. Revision History 1775 F.2.1.
draft-song-yeti-testbed-experience-00 through -03 1777 Change history is available in the public GitHub repository where 1778 this document is maintained: . 1781 F.2.2. draft-song-yeti-testbed-experience-04 1783 Substantial editorial review and rearrangement of text by Joe Abley 1784 at request of BII. 1786 Added what is intended to be a balanced assessment of the controversy 1787 that has arisen around the Yeti DNS Project, at the request of the 1788 Independent Submissions Editorial Board. 1790 Changed the focus of the document from the description of individual 1791 experiments on a Root-like testbed to the construction and 1792 motivations of the testbed itself, since that better describes the 1793 output of the Yeti DNS Project to date. In the considered opinion of 1794 this reviewer, the novel approaches taken in the construction of the 1795 testbed infrastructure and the technical challenges met in doing so 1796 are useful to record, and the RFC series is a reasonable place to 1797 record operational experiences related to core Internet 1798 infrastructure. 1800 Note that due to draft cut-off deadlines some of the technical 1801 details described in this revision of the document may not exactly 1802 match operational reality; however, this revision provides an 1803 indicative level of detail, focus and flow which it is hoped will be 1804 helpful to reviewers. 1806 F.2.3. draft-song-yeti-testbed-experience-05 1808 Added commentary on IPv6-only operation, IPv6 fragmentation, 1809 applicability to and use by IPv4-only end-users and use of multiple 1810 signers. 1812 F.2.4. draft-song-yeti-testbed-experience-06 1814 Conclusion; tools; editorial changes. 1816 F.2.5. draft-song-yeti-testbed-experience-07 1818 Add section for requirements Notation and Conventions, editorial 1819 changes according to reviewers' comments. 1821 F.2.6. draft-song-yeti-testbed-experience-08 1823 Editorial changes after ISE review. 1825 F.2.7. draft-song-yeti-testbed-experience-09 1827 ISE suggested editorial changes responding Terry's comments 1829 Authors' Addresses 1830 Linjian Song (editor) 1831 Beijing Internet Institute 1832 2nd Floor, Building 5, No.58 Jing Hai Wu Lu, BDA 1833 Beijing 100176 1834 P. R. China 1836 Email: songlinjian@gmail.com 1837 URI: http://www.biigroup.com/ 1839 Dong Liu 1840 Beijing Internet Institute 1841 2nd Floor, Building 5, No.58 Jing Hai Wu Lu, BDA 1842 Beijing 100176 1843 P. R. China 1845 Email: dliu@biigroup.com 1846 URI: http://www.biigroup.com/ 1848 Paul Vixie 1849 TISF 1850 11400 La Honda Road 1851 Woodside, California 94062 1852 US 1854 Email: vixie@tisf.net 1855 URI: http://www.redbarn.org/ 1857 Akira Kato 1858 Keio University/WIDE Project 1859 Graduate School of Media Design, 4-1-1 Hiyoshi, Kohoku 1860 Yokohama 223-8526 1861 JAPAN 1863 Email: kato@wide.ad.jp 1864 URI: http://www.kmd.keio.ac.jp/ 1866 Shane Kerr 1867 Antoon Coolenlaan 41 1868 Uithoorn 1422 GN 1869 NL 1871 Email: shane@time-travellers.org