Internet Engineering Task Force (IETF)                      L. Song, Ed.
Internet-Draft                                               D. Liu, Ed.
Intended status: Informational                Beijing Internet Institute
Expires: May 3, 2018                                       P. Vixie, Ed.
                                                                    TISF
                                                            A. Kato, Ed.
                                          Keio University/WIDE Project
                                                                 S. Kerr
                                                        October 30, 2017

                            Yeti DNS Testbed
                 draft-song-yeti-testbed-experience-05

Abstract

The Internet's Domain Name System (DNS) is built upon the foundation
provided by the Root Server System -- that is, the critical
infrastructure that serves the DNS root zone.

Yeti DNS is an experimental, non-production testbed that aims to
provide an environment where technical and operational experiments can
safely be performed without risk to production infrastructure.  This
testbed has been used by a broad community of participants to perform
experiments that aim to inform operations and future development of
the production DNS.  Yeti DNS is an independently-coordinated project
and is not affiliated with ICANN, IANA or any Root Server Operator.

The Yeti DNS testbed implementation includes various novel and
experimental components, including IPv6-only transport; independent,
autonomous Zone Signing Key management; large cryptographic keys; and
a large number of component Yeti-Root servers.
These differences from the Root Server System have operational
consequences, such as large responses to priming queries and the
coordination of a large pool of independent operators; by deploying
such a system globally but outside the production DNS system, the Yeti
DNS project provides an opportunity to gain insight into those
consequences without threatening the stability of the DNS.

This document neither addresses the relevant policies under which the
Root Server System is operated nor makes any proposal for changing any
aspect of its implementation or operation.  This document aims solely
to document technical and operational findings following the
deployment of a system which is similar to, but different from, the
Root Server System.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on May 3, 2018.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of the
Trust Legal Provisions and are provided without warranty as described
in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Areas of Study
     2.1.  Implementation of a Root Server System-like Testbed
     2.2.  Yeti-Root Zone Distribution
     2.3.  Yeti-Root Server Names and Addressing
     2.4.  IPv6-Only Yeti-Root Servers
     2.5.  DNSSEC in the Yeti-Root Zone
   3.  Yeti DNS Testbed Infrastructure
     3.1.  Root Zone Retrieval
     3.2.  Transformation of Root Zone to Yeti-Root Zone
       3.2.1.  ZSK and KSK Key Sets Shared Between DMs
       3.2.2.  Unique ZSK per DM; No Shared KSK
       3.2.3.  Preserving Root Zone NSEC Chain and ZSK RRSIGs
     3.3.  Yeti-Root Zone Distribution
     3.4.  Synchronization of Service Metadata
     3.5.  Yeti-Root Server Naming Scheme
     3.6.  Yeti-Root Servers
     3.7.  Traffic Capture and Analysis
   4.  Operational Experience with Yeti DNS Testbed
     4.1.  Automated Hints File Maintenance
     4.2.  IPv6-Only Root Operation
       4.2.1.  Impact of IPv6 Fragmentation
       4.2.2.  How Does an IPv6-Only Root Serve IPv4 Users?
     4.3.  Experience with Multiple Signers
       4.3.1.  IXFR Fallback to AXFR
       4.3.2.  Latency of Root Zone Update
     4.4.  Root Label Compression in Knot
     4.5.  Increased ZSK Key Size
     4.6.  KSK Rollover
   5.  IANA Considerations
   6.  Acknowledgments
   7.  References
   Appendix A.  Yeti-Root Hints File
   Appendix B.  Controversy
   Appendix C.  About This Document
     C.1.  Venue
     C.2.  Revision History
       C.2.1.  draft-song-yeti-testbed-experience-00 through -03
       C.2.2.  draft-song-yeti-testbed-experience-04
   Authors' Addresses

1.  Introduction

The Domain Name System (DNS), as originally specified in [RFC1034] and
[RFC1035], has proved to be an enduring and important platform upon
which almost every user of Internet services relies.  Despite its
longevity, extensions to the protocol, new implementations and
refinements to DNS operations continue to emerge both inside and
outside the IETF.

The Root Server System in particular has seen technical innovation and
development in recent years, for example in the form of wide-scale
anycast deployment, the mitigation of unwanted traffic on a global
scale, the widespread deployment of Response Rate Limiting [RRL], the
introduction of IPv6 transport, the deployment of DNSSEC, changes in
DNSSEC key sizes and preparations to roll the root zone's trust
anchor.  Even the projects listed in this brief summary imply
tremendous operational change, all the more impressive when one
considers the caution necessary in managing critical Internet
infrastructure, the adjacent administrative changes involved in root
zone management and the (relatively speaking) massive increase in the
number of delegations in the root zone itself.

Aspects of the operational structure of the Root Server System have
been described in such documents as [TNO2009], [ISC-TN-2003-1],
[RSSAC001] and [RFC7720].  Such references, considered together,
provide sufficient insight into the operations of the system as a
whole that it is straightforward to imagine structural changes to the
Root Server System's infrastructure and to wonder what the operational
implications of such changes might be.

The Yeti DNS Project was conceived in May 2015 to provide a captive,
non-production testbed upon which the technical community could
propose and run experiments designed to answer these kinds of
questions.  Coordination for the project was provided by TISF, the
WIDE Project and the Beijing Internet Institute.
Many volunteers collaborated to build a distributed testbed that at
the time of writing includes 25 Yeti-Root servers with 16 operators
and handles experimental traffic from individual volunteers,
universities, DNS vendors and distributed measurement networks.

By design, the Yeti testbed system serves the root zone published by
the IANA with only those structural modifications necessary to ensure
that it is able to function usefully in the Yeti testbed system
instead of the production Root Server System.  In particular, no
delegation for any top-level zone is changed, added or removed from
the IANA-published root zone to construct the root zone served by the
Yeti testbed system.  In this document, for clarity, we refer to the
zone derived from the IANA-published root zone as the Yeti-Root zone.

The Yeti DNS testbed serves a similar function to the Root Server
System in the sense that they both serve similar zones (the Yeti-Root
zone and the Root zone, respectively).  However, the Yeti DNS testbed
serves only clients that are explicitly configured to participate in
the experiment, whereas the Root Server System serves the whole
Internet.  The known set of clients has allowed structural changes to
be deployed in the Yeti DNS testbed whose impact on clients can be
measured and analysed.

2.  Areas of Study

Examples of topics that the Yeti DNS Testbed was built to address are
included below, each illustrated with indicative questions.

2.1.  Implementation of a Root Server System-like Testbed

o  How can a captive testbed be constructed and deployed on the
   Internet, allowing useful public participation without any risk of
   disruption of the Root Server System?

o  How can representative traffic be introduced into such a captive
   testbed such that insights into the impact of specific differences
   between the testbed and the Root Server System can be observed?

2.2.  Yeti-Root Zone Distribution

o  What are the scaling properties of Yeti-Root zone distribution as
   the number of Yeti-Root servers, Yeti-Root server instances or
   intermediate distribution points increases?

2.3.  Yeti-Root Server Names and Addressing

o  What naming schemes other than those closely analogous to the use
   of ROOT-SERVERS.NET in the production root zone are practical, and
   what are their respective advantages and disadvantages?

o  What are the risks and benefits of signing the zone that contains
   the names of the Yeti-Root servers?

o  What automatic mechanisms might be useful to improve the rate at
   which clients of Yeti-Root servers are able to react to a Yeti-Root
   server renumbering event?

2.4.  IPv6-Only Yeti-Root Servers

o  Are there negative operational effects in the use of IPv6-only
   Yeti-Root servers, compared to the use of servers that are
   dual-stack?

o  What effect does the IPv6 fragmentation model have on the operation
   of Yeti-Root servers, compared with that of IPv4?

2.5.  DNSSEC in the Yeti-Root Zone

o  Is it practical to sign the Yeti-Root zone using multiple,
   independently-operated DNSSEC signers and multiple corresponding
   ZSKs?

o  To what extent is [RFC5011] supported by resolvers?

o  Does the KSK rollover plan designed and in the process of being
   implemented by ICANN work as expected on the Yeti testbed?

o  What is the operational impact of using much larger RSA key sizes
   in the ZSKs used in the Yeti-Root zone?
o  What are the operational consequences of choosing DNSSEC algorithms
   other than RSA to sign the Yeti-Root zone?

3.  Yeti DNS Testbed Infrastructure

The purpose of the testbed is to allow DNS queries from stub
resolvers, mediated by recursive resolvers, to be delivered to
Yeti-Root servers, and for corresponding responses generated on the
Yeti-Root servers to be returned, as illustrated in Figure 1.

      ,----------.     ,-----------.     ,------------.
      | stub     +---->| recursive +---->| Yeti-Root  |
      | resolver |<----+ resolver  |<----+ nameserver |
      `----------'     `-----------'     `------------'
           ^                 ^                  ^
           | appropriate     | Yeti-Root hints; | Yeti-Root zone
           `- resolver       `- Yeti-Root trust `- with DNSKEY RRSet,
              configured        anchor             signed by Yeti-KSK

              Figure 1: High-Level Testbed Components

To use the Yeti DNS testbed, a stub resolver must be explicitly
configured to use recursive resolvers that have themselves been
configured to use the Yeti-Root servers.  On the resolvers, that
configuration consists of a list of names and addresses for the
Yeti-Root servers (often referred to as a "hints file") that replaces
the normal Internet DNS hints.  Resolvers also need to be configured
with a DNSSEC trust anchor that corresponds to a KSK used in the Yeti
DNS Project, in place of the normal trust anchor for the root zone.

The need for a Yeti-specific trust anchor in the resolver stems from
the need to make minimal changes to the root zone, as retrieved from
the IANA, to transform it into the Yeti-Root zone that can be used in
the testbed.  Those changes would properly be rejected as bogus by any
validator using an accurate root zone trust anchor.  Corresponding
changes are required in the Yeti-Root hints file (Appendix A).

The data flow from IANA to stub resolvers through the Yeti testbed is
illustrated in Figure 2 and described in more detail in the sections
that follow.

                           ,----------------.
                       ,-- / IANA Root Zone / ---.
                       |   `----------------'    |
                       |            |            |
                       |            |            | Root Zone
     ,--------------.  ,---V---.    ,---V---.    ,---V---.
     | Yeti Traffic |  |  BII  |    | WIDE  |    | TISF  |
     |  Collection  |  |  DM   |    |  DM   |    |  DM   |
     `----+----+----'  `---+---'    `---+---'    `---+---'
          |    |      ,----'     ,------'    `-------.
          |    |      |          |                   | Yeti-Root
          ^    ^      |          |                   | Zone
          |    |  ,---V---.  ,---V---.           ,---V---.
          |    `--+ Yeti  |  | Yeti  | . . . . . | Yeti  |
          |       | Root  |  | Root  |           | Root  |
          |       `---+---'  `---+---'           `---+---'
          |           |          |                   | DNS
          |           |          |                   | Response
          |       ,---V----------V-------------------V---.
          `-------+            Yeti Resolvers             |
                  `-------------------+-------------------'
                                      | DNS
                                      | Response
                  ,-------------------V-------------------.
                  |          Yeti Stub Resolvers           |
                  `---------------------------------------'

                      Figure 2: Testbed Data Flow

3.1.  Root Zone Retrieval

Since Yeti DNS servers cannot receive DNS NOTIFY [RFC1996] messages
from the Root Server System, a polling approach is used.  Each Yeti
Distribution Master (DM) requests the root zone SOA record from a
nameserver that permits unauthenticated zone transfers of the root
zone, and performs a zone transfer from that server if the retrieved
value of SOA.SERIAL is greater than that of the last retrieved zone.

At the time of writing, unauthenticated zone transfers of the root
zone are available directly from B-Root, C-Root, F-Root, G-Root and
K-Root, and from L-Root via the two servers XFR.CJR.DNS.ICANN.ORG and
XFR.LAX.DNS.ICANN.ORG, as well as via FTP from sites maintained by the
Root Zone Maintainer and the IANA Functions Operator.  The Yeti DNS
Testbed retrieves the root zone using zone transfers from F-Root.  The
schedule on which F-Root is polled by each Yeti DM is as follows:

                +-------------+-----------------------+
                | DM Operator | Time                  |
                +-------------+-----------------------+
                | BII         | UTC hour + 00 minutes |
                | WIDE        | UTC hour + 20 minutes |
                | TISF        | UTC hour + 40 minutes |
                +-------------+-----------------------+
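The polling and transfer logic of a DM can be sketched briefly.  The
following Python fragment, using the dnspython library, is a minimal
illustration of the behaviour described above rather than the
testbed's actual implementation; the server address (an RFC 3849
documentation address) and the file names are placeholders.

   # Minimal sketch of a DM poll: fetch SOA.SERIAL from the transfer
   # server and pull the zone only if the serial has increased.
   import dns.message
   import dns.query
   import dns.rdatatype
   import dns.zone

   XFR_SERVER = "2001:db8::53"   # placeholder transfer-server address
   STATE_FILE = "root.serial"    # last SOA.SERIAL successfully fetched

   def local_serial():
       try:
           with open(STATE_FILE) as f:
               return int(f.read().strip())
       except FileNotFoundError:
           return 0

   def remote_serial():
       query = dns.message.make_query(".", dns.rdatatype.SOA)
       response = dns.query.udp(query, XFR_SERVER, timeout=5)
       return response.answer[0][0].serial

   def poll_once():
       serial = remote_serial()
       if serial > local_serial():
           # Transfer and store the new zone, then record its serial.
           zone = dns.zone.from_xfr(dns.query.xfr(XFR_SERVER, "."))
           zone.to_file("root.zone")
           with open(STATE_FILE, "w") as f:
               f.write(str(serial))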
The Yeti DNS testbed uses multiple DMs, each of which acts
autonomously and equivalently to its siblings.  Any single DM can act
to distribute new revisions of the Yeti-Root zone and is also
responsible for signing the RRSets that are changed as part of the
transformation of the Root Zone into the Yeti-Root zone described in
Section 3.2.  This shared control over the processing and distribution
of the Yeti-Root zone approximates some of the ideas around shared
zone control explored in [ITI2014].

3.2.  Transformation of Root Zone to Yeti-Root Zone

Two distinct approaches have been deployed in the Yeti-DNS Testbed,
separately, to transform the Root Zone into the Yeti-Root zone.  At a
high level both approaches are equivalent in the sense that they
replace a minimal set of information in the Root Zone with data
corresponding to the Yeti DNS Testbed; the mechanisms by which the
transforms are executed are different, however.  Each is discussed in
turn in Section 3.2.1 and Section 3.2.2.

A third approach has also been proposed, but not yet implemented.  The
motivations for and changes implied by that approach are described in
Section 3.2.3.

3.2.1.  ZSK and KSK Key Sets Shared Between DMs

The approach described here was the first to be implemented.  It
features entirely autonomous operation of each DM, but also requires
secret key material (the private parts of all Yeti-Root KSK and ZSK
key pairs) to be distributed and maintained on each DM in a
coordinated way.

The Root Zone is transformed as follows to produce the Yeti-Root zone.
This transformation is carried out autonomously on each Yeti DNS
Project DM.  Each DM carries an authentic copy of the current set of
Yeti KSK and ZSK key pairs, synchronised between all DMs (see
Section 3.4).

1.  SOA.MNAME is set to www.yeti-dns.org.

2.  SOA.RNAME is set to <dm>.yeti-dns.org, where <dm> is currently one
    of "wide", "bii" or "tisf".

3.  All DNSKEY, RRSIG and NSEC records are removed.

4.  The apex NS RRSet is removed, along with the corresponding root
    server glue RRSets.

5.  A Yeti DNSKEY RRSet is added to the apex, comprising the public
    parts of all Yeti KSKs and ZSKs.

6.  A Yeti NS RRSet is added to the apex that includes all Yeti-Root
    servers.

7.  Glue records (AAAA only, since Yeti-Root servers are v6-only) for
    all Yeti-Root servers are added.

8.  The Yeti-Root zone is signed: the NSEC chain is regenerated; the
    Yeti KSK is used to sign the DNSKEY RRSet; and the DM's local ZSK
    is used to sign every other RRSet.
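Steps 1 to 7 of this transformation can be sketched with the dnspython
library (version 2 or later), as below.  This is an illustration under
assumed file names, not the DMs' actual tooling: it rewrites the SOA
for one DM ("bii"), strips the DNSSEC material and the apex NS RRSet,
and adds a single Yeti-Root server with its glue; the signing of
step 8 is left to an external signer.

   # Sketch of the Root Zone to Yeti-Root zone transformation.
   # File names, the chosen <dm> and the single server added are
   # illustrative; SOA.SERIAL is deliberately left untouched.
   import dns.name
   import dns.rdata
   import dns.rdataclass
   import dns.rdatatype
   import dns.zone

   DROP = {dns.rdatatype.DNSKEY, dns.rdatatype.RRSIG,
           dns.rdatatype.NSEC}

   zone = dns.zone.from_file("root.zone", origin=".",
                             relativize=False)
   apex = dns.name.root

   # Steps 1 and 2: rewrite SOA.MNAME and SOA.RNAME.
   soa_set = zone.find_rdataset(apex, dns.rdatatype.SOA)
   ttl = soa_set.ttl
   new_soa = soa_set[0].replace(
       mname=dns.name.from_text("www.yeti-dns.org."),
       rname=dns.name.from_text("bii.yeti-dns.org."))
   soa_set.clear()
   soa_set.add(new_soa, ttl)

   # Step 3: remove all DNSKEY, RRSIG and NSEC records.
   for name, node in list(zone.items()):
       node.rdatasets = [s for s in node.rdatasets
                         if s.rdtype not in DROP]

   # Step 4: remove the apex NS RRSet (deletion of the old root
   # server glue nodes is elided here).
   zone.delete_rdataset(apex, dns.rdatatype.NS)

   # Steps 5-7: add the Yeti DNSKEY RRSet (omitted), the Yeti NS
   # RRSet and AAAA glue; one server is shown for brevity.
   ns = zone.find_rdataset(apex, dns.rdatatype.NS, create=True)
   ns.add(dns.rdata.from_text(dns.rdataclass.IN, dns.rdatatype.NS,
                              "bii.dns-lab.net."), 518400)
   glue = zone.find_rdataset("bii.dns-lab.net.",
                             dns.rdatatype.AAAA, create=True)
   glue.add(dns.rdata.from_text(dns.rdataclass.IN,
                                dns.rdatatype.AAAA,
                                "240c:f:1:22::6"), 518400)

   zone.to_file("yeti-root.unsigned")
   # Step 8 is then carried out by a conventional signer, which
   # regenerates the NSEC chain, signs the DNSKEY RRSet with the
   # Yeti KSK and signs every other RRSet with the DM's local ZSK.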
Note that the SOA.SERIAL value published in the Yeti-Root zone is
identical to that found in the Root Zone.

3.2.2.  Unique ZSK per DM; No Shared KSK

The approach described here was the second to be implemented.  Each DM
is provisioned with its own dedicated ZSK key pairs that are not
shared with other DMs.  A Yeti-Root DNSKEY RRSet is constructed and
signed upstream of all DMs as the union of the set of active KSKs and
the set of active ZSKs for every individual DM.  Each DM now requires
only the secret part of its own dedicated ZSK key pairs to be
available locally; no other secret key material is shared.  The
high-level approach is illustrated in Figure 3.

                       ,----------.          ,-----------.
             .--------> BII ZSK   +----------> Yeti-Root |
             |  signs  `----------'   signs  `-----------'
             |
   ,----------.        ,----------.          ,-----------.
   | Yeti KSK +-+-----> TISF ZSK  +----------> Yeti-Root |
   `----------' |signs `----------'   signs  `-----------'
                |
                |      ,----------.          ,-----------.
                `-----> WIDE ZSK  +----------> Yeti-Root |
                 signs `----------'   signs  `-----------'

                    Figure 3: Unique ZSK per DM

The process of retrieving the Root Zone from the Root Server System
and replacing and signing the apex DNSKEY RRSet no longer takes place
on the DMs, but instead takes place on a central Hidden Master.  The
production of signed DNSKEY RRSets is analogous to the use of Signed
Key Responses (SKRs) produced during ICANN KSK key ceremonies.

Each DM now retrieves source data (with the DNSKEY RRSet already
modified and Yeti-signed, but otherwise unchanged) from the Yeti DNS
Hidden Master instead of from the Root Server System.

Each DM carries out a transformation similar to that described in
Section 3.2.1, except that DMs no longer need to modify or sign the
DNSKEY RRSet.

The Yeti-Root zone served by any particular Yeti-Root server will
include signatures generated using the ZSK from the DM that served the
Yeti-Root zone to that Yeti-Root server.  Signatures cached at
resolvers might be retrieved from any Yeti-Root server and hence are
expected to be a mixture of signatures generated by different ZSKs.
Since all ZSKs can be trusted through the signature by the Yeti KSK
over the DNSKEY RRSet, which includes all ZSKs, the mixture of
signatures was predicted not to be a threat to reliable validation.
Deployment and experimentation confirm this to be the case, even when
individual ZSKs are rolled on different schedules.

A consequence of this approach is that the apex DNSKEY RRSet in the
Yeti-Root zone is much larger than the corresponding DNSKEY RRSet in
the Root Zone.

3.2.3.  Preserving Root Zone NSEC Chain and ZSK RRSIGs

A change to the transformation described in Section 3.2.2 has been
proposed that would preserve the NSEC chain from the Root Zone and all
RRSIG RRs generated using the Root Zone's ZSKs.  The DNSKEY RRSet
would continue to be modified to replace the Root Zone KSKs, and the
Yeti KSK would be used to generate replacement signatures over the
apex DNSKEY and NS RRSets.  Source data would continue to flow from
the Root Server System through the Hidden Master to the set of DMs,
but no DNSSEC operations would be required on the DMs, and the source
NSEC chain and most RRSIGs would remain intact.
This approach has been suggested in order to provide
cryptographically-verifiable confidence that no owner name in the root
zone has been changed in the process of producing the Yeti-Root zone
from the Root Zone, addressing one of the concerns described in
Appendix B in a way that can be verified automatically.

3.3.  Yeti-Root Zone Distribution

Each Yeti DM is configured with a full list of Yeti-Root server
addresses to which NOTIFY messages are sent; the same list forms the
basis for an address-based access-control list for zone transfers.
Authentication by address could be replaced with more rigorous
mechanisms (e.g. using Transaction Signatures (TSIG) [RFC2845]); this
has not been done at the time of writing, since the use of
address-based controls avoids the need to distribute shared secrets
amongst the Yeti-Root server operators.

Individual Yeti-Root servers are configured with a full set of Yeti DM
addresses to which SOA and AXFR requests may be sent in the
conventional manner.
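The DM side of this arrangement is simple enough to sketch directly.
The Python fragment below (again using dnspython) sends a NOTIFY for
the root zone to each Yeti-Root server on a configured list; the
addresses shown are documentation placeholders, and a real DM would
also install the same list in its zone-transfer access-control list.

   # Sketch of a DM sending NOTIFY to its Yeti-Root servers.
   import dns.exception
   import dns.flags
   import dns.message
   import dns.opcode
   import dns.query
   import dns.rdatatype

   YETI_ROOT_SERVERS = ["2001:db8::1", "2001:db8::2"]  # placeholders

   def notify_all():
       msg = dns.message.make_query(".", dns.rdatatype.SOA)
       msg.set_opcode(dns.opcode.NOTIFY)
       msg.flags |= dns.flags.AA    # NOTIFY is sent authoritatively
       msg.flags &= ~dns.flags.RD   # and without recursion desired
       for addr in YETI_ROOT_SERVERS:
           try:
               dns.query.udp(msg, addr, timeout=3)
           except dns.exception.DNSException:
               pass  # a server that misses a NOTIFY still polls SOA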
3.4.  Synchronization of Service Metadata

Changes in the Yeti-DNS Testbed infrastructure, such as the addition
or removal of Yeti-Root servers, the renumbering of Yeti-Root servers
or DNSSEC key rollovers, require coordinated changes to take place on
all DMs.  The Yeti-DNS Testbed is subject to more frequent changes
than are observed in the Root Server System and includes substantially
more Yeti-Root servers than there are Root Servers; a manual change
process in the Yeti testbed would hence be more likely to suffer from
human error.  An automated process was consequently implemented.

A repository of all service metadata involved in the operation of each
DM was implemented as a separate git repository hosted at github.com,
since this provided a simple, transparent and familiar mechanism for
participants to review.  Requests to change the service metadata for a
DM are submitted as pull requests from a fork of the corresponding
repository; each DM operator reviews pull requests and merges them to
indicate approval.  Once merged, changes are pulled automatically to
individual DMs and promoted to production.

3.5.  Yeti-Root Server Naming Scheme

The current naming scheme for Root Servers was normalized to use
single-character host names (A through M) under the domain
ROOT-SERVERS.NET, as described in [RSSAC023].  The principal benefit
of this naming scheme is that DNS label compression can be used to
produce a priming response that will fit within 512 bytes, the maximum
DNS message size using UDP transport without EDNS(0) [RFC6891].

Yeti-Root servers do not use this optimisation, but rather use
free-form nameserver names chosen by their respective operators -- in
other words, no attempt is made to minimise the size of the priming
response through the use of label compression.  This approach aims to
challenge the need for a minimally-sized priming response in a modern
DNS ecosystem where EDNS(0) is prevalent.

Priming responses from Yeti-Root servers do not always include server
addresses in the additional section, as is the case with priming
responses from Root Servers.  In particular, Yeti-Root servers running
BIND9 return an empty additional section, forcing resolvers to
complete the priming process with a set of conventional recursive
lookups in order to resolve addresses for each Yeti-Root server.
Yeti-Root servers running NSD appeared to return a fully-populated
additional section.

Various approaches to normalise the composition of the priming
response were considered, including:

o  Requiring use of DNS implementations that exhibit the desired
   behaviour in the priming response (e.g. NSD) in preference to
   BIND9;

o  Modifying BIND9 (and any other server with similar behaviour) for
   use by Yeti-Root servers;

o  Isolating the names of Yeti-Root servers in one or more zones that
   could be slaved on each Yeti-Root server, renaming servers as
   necessary, giving each a source of authoritative data with which
   the additional section of a priming response could be fully
   populated.  This is the approach used in the Root Server System.

The potential mitigation of renaming all Yeti-Root servers using a
scheme that would allow their names to exist in the bailiwick of the
root zone was not considered, since that approach implies the
invention of new top-level labels not present in the Root Zone.

Given the relative infrequency of priming queries by individual
resolvers and the additional complexity or other compromises implied
by each of those mitigations, the decision was made to make no effort
to ensure that the composition of priming responses was identical
across servers.  Even the empty additional sections generated by
Yeti-Root servers running BIND9 seem to be sufficient for all resolver
software tested; resolvers simply perform a new recursive lookup for
each authoritative server name they need to resolve.

3.6.  Yeti-Root Servers

Various volunteers have donated authoritative servers to act as
Yeti-Root servers.  At the time of writing there are 25 Yeti-Root
servers distributed globally, one of which is named using an IDNA2008
[RFC5890] label, shown in punycode in the following table.
 +-------------------------------------+---------------+-------------+
 | Name                                | Operator      | Location    |
 +-------------------------------------+---------------+-------------+
 | bii.dns-lab.net                     | BII           | China       |
 | yeti-ns.tisf.net                    | TISF          | USA         |
 | yeti-ns.wide.ad.jp                  | WIDE Project  | Japan       |
 | yeti-ns.as59715.net                 | as59715       | Italy       |
 | dahu1.yeti.eu.org                   | Dahu Group    | France      |
 | ns-yeti.bondis.org                  | Bond Internet | Spain       |
 |                                     | Systems       |             |
 | yeti-ns.ix.ru                       | MSK-IX        | Russia      |
 | yeti.bofh.priv.at                   | CERT Austria  | Austria     |
 | yeti.ipv6.ernet.in                  | ERNET India   | India       |
 | yeti-dns01.dnsworkshop.org          | dnsworkshop   | Germany     |
 |                                     | /informnis    |             |
 | yeti-ns.conit.co                    | CONIT S.A.S   | Colombia    |
 | dahu2.yeti.eu.org                   | Dahu Group    | France      |
 | yeti.aquaray.com                    | Aqua Ray SAS  | France      |
 | yeti-ns.switch.ch                   | SWITCH        | Switzerland |
 | yeti-ns.lab.nic.cl                  | CHILE NIC     | Chile       |
 | yeti-ns1.dns-lab.net                | BII           | China       |
 | yeti-ns2.dns-lab.net                | BII           | China       |
 | yeti-ns3.dns-lab.net                | BII           | China       |
 | ca...a23dc.yeti-dns.net             | Yeti-ZA       | South       |
 |                                     |               | Africa      |
 | 3f...374cd.yeti-dns.net             | Yeti-AU       | Australia   |
 | yeti1.ipv6.ernet.in                 | ERNET India   | India       |
 | xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c | ERNET India   | India       |
 | yeti-dns02.dnsworkshop.org          | dnsworkshop   | USA         |
 |                                     | /informnis    |             |
 | yeti.mind-dns.nl                    | Monshouwer    | Netherlands |
 |                                     | Internet      |             |
 |                                     | Diensten      |             |
 | yeti-ns.datev.net                   | DATEV         | Germany     |
 +-------------------------------------+---------------+-------------+

The current list of Yeti-Root servers is made available to
participating resolvers first using a substitute hints file
(Appendix A) and subsequently by the usual resolver priming process
[I-D.ietf-dnsop-resolver-priming].  All Yeti-Root servers are
IPv6-only, foreshadowing a future IPv6-only Internet; the Yeti-Root
hints file hence contains no IPv4 addresses, and the Yeti-Root zone
contains no IPv4 glue.

At the time of writing, all root servers within the Root Server System
serve the ROOT-SERVERS.NET zone in addition to the root zone, and all
but one also serve the ARPA zone.  Yeti-Root servers serve the
Yeti-Root zone only.

Significant software diversity exists across the set of Yeti-Root
servers, as reported by their volunteer operators:

o  Platform: 20 of the 25 Yeti-Root servers are implemented on a VPS
   rather than on bare metal.

o  Operating system: 6 Yeti-Root servers run on Linux (Ubuntu, Debian,
   CentOS and ArchLinux); 5 run on FreeBSD and 1 on NetBSD.

o  DNS software: 18 of the 25 Yeti-Root servers use BIND9 (versions
   varying between 9.9.7 and 9.10.3); four use NSD (4.10 and 4.15);
   two use Knot (2.0.1 and 2.1.0); and one uses Bundy (1.2.0).

3.7.  Traffic Capture and Analysis

Query and response traffic capture is available in the testbed on both
Yeti resolvers and Yeti-Root servers, in anticipation of experiments
that require packet-level visibility into DNS traffic.

Traffic capture is performed on Yeti-Root servers using either dnscap
or pcapdump (part of the pcaputils Debian package, with a patch to
facilitate triggered file upload).  PCAP-format files containing
packet captures are uploaded using rsync to central storage.

4.  Operational Experience with Yeti DNS Testbed

The following sections provide commentary on the operation and impact
analyses of the Yeti-DNS Testbed described in Section 3.
More detailed descriptions of observed phenomena are available in the
Yeti DNS mailing list archives and on the Yeti DNS blog.

4.1.  Automated Hints File Maintenance

Renumbering events in the Root Server System are relatively rare.
Although each such event is accompanied by the publication of an
updated hints file in standard locations, the task of updating the
local copies of that file used by DNS resolvers is manual, and the
process has an observably long tail: for example, in 2015 J-Root was
still receiving traffic at its old address some thirteen years after
renumbering [Wessels2015].

The observed impact of these old, deployed hints files is minimal,
likely due to the very low frequency of such renumbering events.  Even
the oldest hints files would still contain some accurate root server
addresses from which priming responses could be obtained.

By contrast, due to the experimental nature of the system and the fact
that it is operated mainly by volunteers, Yeti-Root servers are added,
removed and renumbered with much greater frequency.  A tool to
facilitate automatic maintenance of hints files was therefore created:
the hintUpdate tool [hintUpdate].

The automated procedure followed by the hintUpdate tool is as follows
(a sketch of the procedure appears at the end of this section):

1.  Use the local resolver to obtain a response to the query ./IN/NS;

2.  Use the local resolver to obtain a set of IPv4 and IPv6 addresses
    for each nameserver;

3.  Validate all signatures obtained from the local resolver, and
    confirm that all data is signed;

4.  Compare the data obtained to that contained within the
    currently-active hints file; if there are differences, rotate the
    old file away and replace it with a new one.

This tool would not function unmodified when used in the Root Server
System, since the names of individual Root Servers (e.g.
A.ROOT-SERVERS.NET) are not signed.  All Yeti-Root server names are
signed, however, and hence this tool functions as expected in that
environment.
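The four steps above can be sketched as follows.  This Python fragment
(using dnspython) illustrates the procedure rather than reproducing
the hintUpdate tool itself: the resolver address and file name are
assumptions, and DNSSEC validation (step 3) is delegated to the local
validating resolver by insisting on the AD bit in each response.

   # Illustration of the automated hints-maintenance procedure.
   import os
   import dns.flags
   import dns.message
   import dns.query
   import dns.rdatatype

   RESOLVER = "::1"       # assumed local validating resolver
   HINTS = "yeti-hints"   # path of the currently-active hints file

   def validated(name, rdtype):
       query = dns.message.make_query(name, rdtype, want_dnssec=True)
       response = dns.query.udp(query, RESOLVER, timeout=5)
       if not response.flags & dns.flags.AD:  # step 3
           raise RuntimeError("unvalidated answer for %s" % name)
       return response

   def build_hints():
       lines = []
       ns = validated(".", dns.rdatatype.NS)           # step 1
       servers = sorted(rr.target.to_text()
                        for rrset in ns.answer
                        if rrset.rdtype == dns.rdatatype.NS
                        for rr in rrset)
       for server in servers:
           lines.append(". 3600000 IN NS %s" % server)
           # Step 2; AAAA only, since the testbed is v6-only.
           aaaa = validated(server, dns.rdatatype.AAAA)
           for rrset in aaaa.answer:
               if rrset.rdtype == dns.rdatatype.AAAA:
                   for rr in rrset:
                       lines.append("%s 3600000 IN AAAA %s"
                                    % (server, rr.address))
       return "\n".join(lines) + "\n"

   def refresh():
       new = build_hints()
       old = open(HINTS).read() if os.path.exists(HINTS) else ""
       if new != old:                                  # step 4
           if old:
               os.rename(HINTS, HINTS + ".old")        # rotate away
           with open(HINTS, "w") as f:
               f.write(new)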
4.2.  IPv6-Only Root Operation

The Yeti DNS testbed was designed to explore whether a root service
can survive in a pure IPv6 environment; accordingly, every Yeti-Root
server is required to run with non-EUI64 IPv6 addresses only.  Two
main questions were in the designers' minds when constructing the
testbed: 1) is there any gap between an IPv6-only root and the IPv4
root in providing the full function of a root server?  2) can an
IPv6-only root serve the Internet, even those parts of it that still
speak only IPv4?  The findings below describe the impact that the
IPv6-only property has on a root system.

4.2.1.  Impact of IPv6 Fragmentation

In the Root Server System, structural changes with the potential to
increase response sizes (and hence fragmentation, fallback to TCP
transport or both) have been exercised with great care, since the
impact on clients has been difficult to predict or measure.  The Yeti
DNS Testbed is experimental and has the luxury of a known client base,
making it far easier to make such changes and measure their impact.

Many of the experimental design choices described in this document
were expected to trigger larger responses.  For example, the choice of
naming scheme for Yeti-Root servers described in Section 3.5 defeats
label compression in the priming response, producing a large priming
response (up to 1754 octets with 25 NS records and their glue); the
Yeti-Root zone transformation approach described in Section 3.2.2
greatly enlarges the apex DNSKEY RRSet, especially during a KSK
rollover (up to 1975 octets with 3 ZSKs and 2 KSKs).  An increased
incidence of fragmentation was therefore expected.

The Yeti-DNS Testbed provides service on IPv6 only.  IPv6 has a
fragmentation model that is different from that of IPv4 -- in
particular, fragmentation always takes place on the sending end-host,
and not on an intermediate router.

Fragmentation may cause serious issues: if a single fragment is lost,
the entire datagram of which the fragment was a part is lost, which in
the DNS frequently triggers a timeout.  At this moment, only a limited
number of security middlebox implementations are known to support IPv6
fragments.  Public measurements and reports
[I-D.taylor-v6ops-fragdrop] [RFC7872] show a notable packet drop rate
due to the mistreatment of IPv6 fragments by middleboxes.  One APNIC
study [IPv6-frag-DNS] reported that 37% of endpoints using
IPv6-capable DNS resolvers cannot receive a fragmented IPv6 response
over UDP.

To study the impact, RIPE Atlas probes were used to spot failures such
as timeouts for DNSKEY queries over UDP.  For each Yeti-Root server,
an Atlas measurement was set up asking 100 IPv6-enabled probes from 5
regions to send a DNSKEY query over UDP with the DO bit set every 2
hours.  A monitoring report during the Yeti KSK rollover shows that,
statistically, large packets trigger a higher failure rate (up to 7%)
due to IPv6 fragmentation issues, which accordingly increases the
probability of retries and TCP fallback.  Even below 1500 octets, when
the response size reaches 1414 octets the failure rate reaches around
2%.  Note that the ICANN KSK rollover will produce packets exceeding
1414 octets.

Regarding large DNS responses over UDP, some existing root servers (A,
B, G and J) truncate the response once the IPv6 packet surpasses 1280
octets.  In the Yeti DNS Testbed, two proposals were discussed and
implemented in Yeti experiments.  One proposal, DNS fragments
[I-D.muks-dns-message-fragments], fragments the large response at the
DNS level.  The other, DNS ATR [I-D.song-atr-large-resp], introduces a
simple improvement on the authoritative server whereby an additional
truncated response is sent immediately after the normal large
response.

The consequences of fragmentation were not limited to DNS over UDP
transport.  Two cases were reported in which Yeti-Root servers failed
to transfer the Yeti-Root zone from a DM.  The DM log files showed
that those servers experienced "socket is not connected" errors when
they pulled the zone.  Further experimentation revealed that
combinations of NetBSD 6.1, NetBSD 7.0RC1, FreeBSD 10.0, Debian 3.2
and VMware ESXi 5.5 resulted in a high TCP MSS value of 1440 octets
being negotiated between client and server, despite the presence of
the IPV6_USE_MIN_MTU socket option, as described in
[I-D.andrews-tcp-and-ipv6-use-minmtu].  The mismatch appears to cause
outbound segments greater in size than 1280 octets to be dropped
before sending.

One proposal to handle this issue is to set the local TCP MSS to 1220
octets (1280 octets minus the IPv6 and TCP headers) and to advertise
that value when IPV6_USE_MIN_MTU=1.  The Yeti-Root servers operated by
WIDE and SWITCH set this during a test one year ago; at the time of
writing, 11 of the 25 Yeti-Root servers have changed their MSS setting
accordingly.
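The mitigation can be illustrated at the socket level.  The Python
sketch below clamps the MSS of an IPv6 TCP listening socket to 1220
octets using the TCP_MAXSEG socket option, which is available on Linux
and most BSDs; the port number is illustrative, and a production name
server would apply the equivalent option in its own listener code.

   # Clamp the TCP MSS of an IPv6 listener to 1220 octets so that no
   # segment exceeds the IPv6 minimum MTU of 1280 octets.
   import socket

   def listener_with_clamped_mss(port=5300):
       s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
       s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
       # 1220 = 1280 - 40 (IPv6 header) - 20 (TCP header)
       s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1220)
       s.bind(("::", port))
       s.listen(5)
       return s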
4.2.2.  How Does an IPv6-Only Root Serve IPv4 Users?

Although it is straightforward to set up an IPv6-only root, it was
unknown whether an IPv6-only root could practically serve production
networks that still largely speak only IPv4.  The Yeti DNS Testbed
demonstrates that an IPv6-only root can serve the Internet using an
incremental approach, even for IPv4 networks and users.

It is intuitive to propose updating resolvers to dual-stack and
configuring them with a hints file that includes IPv6 glue; such a
dual-stack resolver connects the IPv6 root with IPv4-only or
dual-stack end users.  However, the partners who agreed to try the
IPv6-only root in experimental networks normally did not want to give
up the IPv4 root, for redundancy reasons stemming from unstable IPv6
network performance.  The approach adopted on campuses was therefore
to configure customers with one IPv4 resolver address (a resolver
using the current IPv4 addresses of the A-M root servers) and one IPv6
resolver address (a resolver using the Yeti root), delivered via
DHCPv4 and DHCPv6 respectively.  End users can choose which DNS they
use (normally IPv6 first, or using Happy Eyeballs).  Ideally, as IPv6
becomes widely deployed, end-user DNS traffic will largely be sent to
the resolver consuming the IPv6-only root.

Resolvers resident in IPv4-only networks can forward their queries to
dual-stack resolvers that they trust.  Alternatively, they can be
configured with a hints file containing a set of IPv4 addresses that
are mapped to the IPv6 addresses of the root servers by IPv4/IPv6
translation devices; queries are then routed to the translation
devices and forwarded over IPv6 to the IPv6-only root.  Such a
deployment has been designed and is going to be implemented in CERNET2
using IVI [RFC6219] technology.

4.3.  Experience with Multiple Signers

Section 3 introduced how the three Distribution Masters (DMs) work and
how they share control over the Yeti-Root zone.  This section
describes some findings and experience from their operation.

4.3.1.  IXFR Fallback to AXFR

In the DNS specifications, an authoritative name server uses full zone
transfer (AXFR) [RFC5936], incremental zone transfer (IXFR) [RFC1995]
and NOTIFY [RFC1996] to achieve coherency of the zone contents.  IXFR
is an optimization for large DNS zone transfers which allows a server
to transfer only the changed portion(s) of a zone to the client.
Fallback to AXFR usually happens on the server side, by simply
returning the entire new zone to the IXFR client when the IXFR server
cannot fulfil the given delta-update request.

One experiment in Yeti was designed to test multiple signers with
Multiple ZSKs (MZSK).  All public ZSKs used by the DMs are included in
the zone as a key set, and a resolver can validate a message by
picking the matching key from that set.  From a DNSSEC point of view
this is technically workable.  However, different signers produce
different RRSIG RRs, which in this case introduces zone inconsistency
from the outset.  In the current Yeti configuration it is possible for
a client to perform an AXFR/IXFR from one server and later ask another
server for an IXFR.

It was observed that when an IXFR client switched from one IXFR server
to another, it received an IXFR response deleting RRSIG records that
did not exist.  One IXFR client running NSD 4.1.7 rejected the IXFR
response, logged an indication of bad data and then asked for a full
zone transfer.  Fortunately, the Yeti-Root zone is relatively small
(691 KB), so the fallback to AXFR does not cause significant
performance degradation; an operator hosting a large zone under the
MZSK model, however, would run into problems with IXFR as currently
specified.

Another observation is that an IXFR client running Knot 2.1.0 in a
similar situation simply accepted the IXFR response, ignored the
differences and generated a merged zone with two RRSIG RRs per RRSet.
This not only produces larger responses but also causes DNSSEC
validation failures once a new zone is generated, given that the old
RRSIG is the signature of the old zone's RRs.

One possible solution is to develop an RRSIG-aware IXFR format in
which RRSIGs are treated specially and are always transferred in full
(as they are in AXFR).  Another is to adopt the behaviour of NSD 4.1.7
as an improvement to the IXFR protocol: an IXFR client should fall
back to AXFR automatically in the event of an IXFR incoherence error.
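That client behaviour can be sketched as follows, again using
dnspython.  The DM addresses are placeholders, and the application of
the IXFR delta itself (where an incoherence would actually be
detected) is elided; the point of the sketch is the automatic retry as
AXFR.

   # Sketch of automatic IXFR-to-AXFR fallback on the client side.
   import dns.exception
   import dns.query
   import dns.rdatatype

   DMS = ["2001:db8::a", "2001:db8::b"]  # placeholder DM addresses

   def transfer(local_serial):
       for dm in DMS:
           try:
               # Ask for a delta relative to our current serial.  The
               # consumer of these messages must reject an incoherent
               # change set, e.g. deletion of an RRSIG never held.
               return list(dns.query.xfr(dm, ".",
                                         rdtype=dns.rdatatype.IXFR,
                                         serial=local_serial))
           except dns.exception.DNSException:
               # Incoherent or failed IXFR: fall back to a full AXFR
               # rather than merging questionable data.
               try:
                   return list(dns.query.xfr(dm, "."))
               except dns.exception.DNSException:
                   continue
       raise RuntimeError("zone transfer failed from all DMs")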
4.3.2.  Latency of Root Zone Update

Each Yeti DM checks the root zone serial hourly (the three DMs are
offset at 20-minute intervals) to see whether the IANA root zone has
changed; a new version of the Yeti-Root zone is generated whenever it
has.  In this model, each Yeti-Root server is expected to pull the
zone from one DM for each new update, because 20 minutes was expected
to be enough time for root zone publication.  Monitoring showed,
however, that this is not true in the Yeti testbed.

It was once reported that one server running Bundy 1.2.0 on FreeBSD
10.2-RELEASE had a bug affecting SOA updates, with delays of more than
10 hours.  Beyond that server, half of the Yeti-Root servers showed
more than 20 minutes of delay, some even with 40 minutes of delay.
One possible reason is that a server failed to pull the zone from one
DM due to a network failure (for example, the IPv6 fragmentation issue
introduced previously) and turned to another DM, thereby introducing
the delay.  It was also observed that, even in the same 20-minute time
frame, not all servers pull from a single DM.  It is possible that
some servers do not use a first-come-first-served strategy to pull the
zone after they receive a NOTIFY; they may instead pull the zone based
on other metrics, such as the RTT, or on manual preference.

4.4.  Root Label Compression in Knot

[RFC1035] specifies that domain names can be compressed when encoded
in DNS messages, being represented as one of:

1.  a sequence of labels ending in a zero octet;

2.  a pointer; or

3.  a sequence of labels ending with a pointer.

The purpose of this flexibility is to reduce the size of domain names
encoded in DNS messages.

It was observed that Yeti-Root servers running Knot 2.0 would compress
the zero-length label (the root domain, often represented as ".")
using a pointer to an earlier occurrence of it in the message.
Although legal, this encoding increases the encoded size of the root
label from one octet to two; it was also found to break some client
software, in particular the Go DNS library.  Bug reports were filed
against both Knot and the Go DNS library, and both issues were
resolved in subsequent releases.
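The cost of the pointer encoding is easy to see at the byte level, as
in the fragment below (plain Python, no DNS library required).  The
pointer shown assumes a target at offset 12, the first octet after the
DNS header, where an earlier occurrence of the root label would
typically sit.

   # The root name in wire format: a bare terminating zero octet,
   # versus a compression pointer to an earlier zero octet.
   ROOT_UNCOMPRESSED = b"\x00"    # 1 octet
   ROOT_POINTER = b"\xc0\x0c"     # 2 octets: flag bits 11 + offset 12

   # The "compressed" form is one octet longer, never shorter, which
   # is why compressing the root label is legal but counterproductive.
   assert len(ROOT_POINTER) > len(ROOT_UNCOMPRESSED)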
4.5.  Increased ZSK Key Size

The ZSK key size used in the Yeti-DNS Testbed was initially 1024 bits,
consistent with the size of the ZSK used in the Root Zone at the time
the Yeti DNS Project was started.  It later became clear that the ZSK
key size in the Root Zone was to be increased.

The ZSK key size in the Yeti-Root zone was subsequently increased in
an attempt to identify any unexpected operational effects of doing so.

XXX Note to reviewers: observations following that change to be
inserted here. XXX

The ZSK key size in the Root Zone was increased from 1024 bits to 2048
bits in October 2016 [Verisign2016].

4.6.  KSK Rollover

The Root Zone KSK is expected to undergo a carefully-orchestrated
rollover as described in [ICANN2016].  ICANN has commissioned various
tests and has published an external test plan [ICANN2017].

The planned approach was also modelled in the Yeti-DNS Testbed.

XXX Note to reviewers: observations about the KSK rollover in the
Yeti-Root zone to be inserted here. XXX

5.  IANA Considerations

This document requests no action of the IANA.

6.  Acknowledgments

The editors would like to acknowledge the contributions of the various
and many subscribers to the Yeti DNS Project mailing lists, including
the following people who were involved in the implementation and
operation of the Yeti DNS testbed itself:

   Tomohiro Ishihara, Antonio Prado, Stephane Bortzmeyer, Mickael
   Jouanne, Pierre Beyssac, Joao Damas, Pavel Khramtsov, Ma Yan, Otmar
   Lendl, Praveen Misra, Carsten Strotmann, Edwin Gomez, Remi Gacogne,
   Guillaume de Lafond, Yves Bovard, Hugo Salgado-Hernandez, Li Zhen,
   Daobiao Gong, Runxia Wan.

The editors also acknowledge the contributions of the Independent
Submissions Editorial Board, and of the following reviewers whose
opinions helped improve the clarity of this document:

   Subramanian Moonesamy, Joe Abley.

7.  References

[hintUpdate]
           "Hintfile Auto Update", 2015.

[I-D.andrews-tcp-and-ipv6-use-minmtu]
           Andrews, M., "TCP Fails To Respect IPV6_USE_MIN_MTU",
           draft-andrews-tcp-and-ipv6-use-minmtu-04 (work in
           progress), October 2015.

[I-D.ietf-dnsop-resolver-priming]
           Koch, P., Larson, M., and P. Hoffman, "Initializing a DNS
           Resolver with Priming Queries",
           draft-ietf-dnsop-resolver-priming-07 (work in progress),
           March 2016.

[I-D.muks-dns-message-fragments]
           Sivaraman, M., Kerr, S., and D. Song, "DNS message
           fragments", draft-muks-dns-message-fragments-00 (work in
           progress), July 2015.

[I-D.song-atr-large-resp]
           Song, L., "ATR: Additional Truncated Response for Large DNS
           Response", draft-song-atr-large-resp-00 (work in progress),
           September 2017.

[I-D.taylor-v6ops-fragdrop]
           Jaeggli, J., Colitti, L., Kumari, W., Vyncke, E., Kaeo, M.,
           and T. Taylor, "Why Operators Filter Fragments and What It
           Implies", draft-taylor-v6ops-fragdrop-02 (work in
           progress), December 2013.

[ICANN2016]
           "Root Zone KSK Rollover Plan", 2016.

[ICANN2017]
           "2017 KSK Rollover External Test Plan", July 2016.
[IPv6-frag-DNS]
           "Dealing with IPv6 fragmentation in the DNS", August 2017.

[ISC-TN-2003-1]
           Abley, J., "Hierarchical Anycast for Global Service
           Distribution", March 2003.

[ITI2014]  "Identifier Technology Innovation Report", May 2014.

[RFC1034]  Mockapetris, P., "Domain names - concepts and facilities",
           STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987,
           <https://www.rfc-editor.org/info/rfc1034>.

[RFC1035]  Mockapetris, P., "Domain names - implementation and
           specification", STD 13, RFC 1035, DOI 10.17487/RFC1035,
           November 1987, <https://www.rfc-editor.org/info/rfc1035>.

[RFC1995]  Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995,
           DOI 10.17487/RFC1995, August 1996,
           <https://www.rfc-editor.org/info/rfc1995>.

[RFC1996]  Vixie, P., "A Mechanism for Prompt Notification of Zone
           Changes (DNS NOTIFY)", RFC 1996, DOI 10.17487/RFC1996,
           August 1996, <https://www.rfc-editor.org/info/rfc1996>.

[RFC2826]  Internet Architecture Board, "IAB Technical Comment on the
           Unique DNS Root", RFC 2826, DOI 10.17487/RFC2826, May 2000,
           <https://www.rfc-editor.org/info/rfc2826>.

[RFC2845]  Vixie, P., Gudmundsson, O., Eastlake 3rd, D., and B.
           Wellington, "Secret Key Transaction Authentication for DNS
           (TSIG)", RFC 2845, DOI 10.17487/RFC2845, May 2000,
           <https://www.rfc-editor.org/info/rfc2845>.

[RFC5011]  StJohns, M., "Automated Updates of DNS Security (DNSSEC)
           Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011,
           September 2007, <https://www.rfc-editor.org/info/rfc5011>.

[RFC5890]  Klensin, J., "Internationalized Domain Names for
           Applications (IDNA): Definitions and Document Framework",
           RFC 5890, DOI 10.17487/RFC5890, August 2010,
           <https://www.rfc-editor.org/info/rfc5890>.

[RFC5936]  Lewis, E. and A. Hoenes, Ed., "DNS Zone Transfer Protocol
           (AXFR)", RFC 5936, DOI 10.17487/RFC5936, June 2010,
           <https://www.rfc-editor.org/info/rfc5936>.

[RFC6219]  Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The China
           Education and Research Network (CERNET) IVI Translation
           Design and Deployment for the IPv4/IPv6 Coexistence and
           Transition", RFC 6219, DOI 10.17487/RFC6219, May 2011,
           <https://www.rfc-editor.org/info/rfc6219>.

[RFC6891]  Damas, J., Graff, M., and P. Vixie, "Extension Mechanisms
           for DNS (EDNS(0))", STD 75, RFC 6891,
           DOI 10.17487/RFC6891, April 2013,
           <https://www.rfc-editor.org/info/rfc6891>.

[RFC7720]  Blanchet, M. and L-J. Liman, "DNS Root Name Service
           Protocol and Deployment Requirements", BCP 40, RFC 7720,
           DOI 10.17487/RFC7720, December 2015,
           <https://www.rfc-editor.org/info/rfc7720>.

[RFC7872]  Gont, F., Linkova, J., Chown, T., and W. Liu, "Observations
           on the Dropping of Packets with IPv6 Extension Headers in
           the Real World", RFC 7872, DOI 10.17487/RFC7872, June 2016,
           <https://www.rfc-editor.org/info/rfc7872>.

[RRL]      Vixie, P. and V. Schryver, "Response Rate Limiting (RRL)",
           June 2012.

[RSSAC001] "Service Expectations of Root Servers", December 2015.

[RSSAC023] "History of the Root Server System", November 2016.

[TNO2009]  Gijsen, B., Jamakovic, A., and F. Roijers, "Root Scaling
           Study: Description of the DNS Root Scaling Model",
           September 2009.

[Verisign2016]
           Wessels, D., "Increasing the Strength of the Zone Signing
           Key for the Root Zone", May 2016.

[Wessels2015]
           Wessels, D., "Thirteen Years of "Old J-Root"", 2015.

Appendix A.  Yeti-Root Hints File

The following hints file (complete and accurate at the time of
writing) causes a DNS resolver to use the Yeti DNS testbed in place of
the production Root Server System and hence to participate in
experiments running on the testbed.

Note that some lines have been wrapped in the text that follows in
order to fit within the production constraints of this document.
Wrapped lines are indicated with a backslash character ("\"),
following common convention.

   .                    3600000  IN  NS   bii.dns-lab.net
   bii.dns-lab.net      3600000  IN  AAAA 240c:f:1:22::6
   .                    3600000  IN  NS   yeti-ns.tisf.net
   yeti-ns.tisf.net     3600000  IN  AAAA 2001:559:8000::6
   .                    3600000  IN  NS   yeti-ns.wide.ad.jp
   yeti-ns.wide.ad.jp   3600000  IN  AAAA 2001:200:1d9::35
   .                    3600000  IN  NS   yeti-ns.as59715.net
   yeti-ns.as59715.net  3600000  IN  AAAA \
       2a02:cdc5:9715:0:185:5:203:53
   .                    3600000  IN  NS   dahu1.yeti.eu.org
   dahu1.yeti.eu.org    3600000  IN  AAAA \
       2001:4b98:dc2:45:216:3eff:fe4b:8c5b
   .                    3600000  IN  NS   ns-yeti.bondis.org
   ns-yeti.bondis.org   3600000  IN  AAAA 2a02:2810:0:405::250
   .                    3600000  IN  NS   yeti-ns.ix.ru
   yeti-ns.ix.ru        3600000  IN  AAAA 2001:6d0:6d06::53
   .                    3600000  IN  NS   yeti.bofh.priv.at
   yeti.bofh.priv.at    3600000  IN  AAAA 2a01:4f8:161:6106:1::10
   .                    3600000  IN  NS   yeti.ipv6.ernet.in
   yeti.ipv6.ernet.in   3600000  IN  AAAA 2001:e30:1c1e:1::333
   .                    3600000  IN  NS   yeti-dns01.dnsworkshop.org
   yeti-dns01.dnsworkshop.org \
                        3600000  IN  AAAA 2001:1608:10:167:32e::53
   .                    3600000  IN  NS   yeti-ns.conit.co
   yeti-ns.conit.co     3600000  IN  AAAA \
       2604:6600:2000:11::4854:a010
   .                    3600000  IN  NS   dahu2.yeti.eu.org
   dahu2.yeti.eu.org    3600000  IN  AAAA 2001:67c:217c:6::2
   .                    3600000  IN  NS   yeti.aquaray.com
   yeti.aquaray.com     3600000  IN  AAAA 2a02:ec0:200::1
   .                    3600000  IN  NS   yeti-ns.switch.ch
   yeti-ns.switch.ch    3600000  IN  AAAA 2001:620:0:ff::29
   .                    3600000  IN  NS   yeti-ns.lab.nic.cl
   yeti-ns.lab.nic.cl   3600000  IN  AAAA 2001:1398:1:21::8001
   .                    3600000  IN  NS   yeti-ns1.dns-lab.net
   yeti-ns1.dns-lab.net 3600000  IN  AAAA 2001:da8:a3:a027::6
   .                    3600000  IN  NS   yeti-ns2.dns-lab.net
   yeti-ns2.dns-lab.net 3600000  IN  AAAA 2001:da8:268:4200::6
   .                    3600000  IN  NS   yeti-ns3.dns-lab.net
   yeti-ns3.dns-lab.net 3600000  IN  AAAA 2400:a980:30ff::6
   .                    3600000  IN  NS   \
       ca978112ca1bbdcafac231b39a23dc.yeti-dns.net
   ca978112ca1bbdcafac231b39a23dc.yeti-dns.net \
                        3600000  IN  AAAA 2c0f:f530::6
   .                    3600000  IN  NS   \
       3e23e8160039594a33894f6564e1b1.yeti-dns.net
   3e23e8160039594a33894f6564e1b1.yeti-dns.net \
                        3600000  IN  AAAA 2803:80:1004:63::1
   .                    3600000  IN  NS   \
       3f79bb7b435b05321651daefd374cd.yeti-dns.net
   3f79bb7b435b05321651daefd374cd.yeti-dns.net \
                        3600000  IN  AAAA 2401:c900:1401:3b:c::6
   .                    3600000  IN  NS   \
       xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c
   xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c \
                        3600000  IN  AAAA 2001:e30:1c1e:10::333
   .                    3600000  IN  NS   yeti1.ipv6.ernet.in
   yeti1.ipv6.ernet.in  3600000  IN  AAAA 2001:e30:187d::333
   .                    3600000  IN  NS   yeti-dns02.dnsworkshop.org
   yeti-dns02.dnsworkshop.org \
                        3600000  IN  AAAA 2001:19f0:0:1133::53
   .                    3600000  IN  NS   yeti.mind-dns.nl
   yeti.mind-dns.nl     3600000  IN  AAAA 2a02:990:100:b01::53:0

Appendix B.  Controversy

The Yeti DNS Project, its infrastructure and the various experiments
that have been carried out using that infrastructure have been
described by people involved in the project in many public meetings at
technical venues since its inception.  The mailing lists through which
the operation of the infrastructure is coordinated are open to join,
and their archives are public.  The project as a whole has been the
subject of robust public discussion.
Some commentators have expressed concern that the Yeti DNS Project is,
in effect, operating an "alternate root," challenging the IAB's
comments published in [RFC2826].  Other such alternate roots are
considered to have caused end-user confusion and instability in the
namespace of the DNS by introducing new top-level labels or by using
top-level labels present in the Root Server System differently.  The
coordinators of the Yeti DNS Project do not consider the Yeti DNS
Project to be an alternate root in this sense, since by design the
namespace enabled by the Yeti-Root zone is identical to that of the
Root Zone.

Some commentators have expressed concern that the Yeti DNS Project
seeks to influence or subvert administrative policy relating to the
Root Server System, in particular through the use of DNSSEC trust
anchors not published by the IANA and the use of Yeti-Root servers in
regions where governments or other organisations have expressed
interest in operating a Root Server.  The coordinators of the Yeti DNS
Project observe that their mandate is entirely technical and that they
have no ambition to influence policy directly; they do hope, however,
that technical findings from the Yeti DNS Project might act as a
useful resource for the wider technical community.

Finally, some concern has been expressed about the possible
application of the Yeti DNS Project by the governments of countries
where access to the Internet is subject to substantial centralised
control, in contrast to most other jurisdictions where such controls
are either lighter or not present.  The coordinators of the Yeti DNS
Project have taken care to steer all discussions and related decisions
about the technical work of the project to public venues in the
interests of full transparency, and they encourage anybody concerned
about the decision-making process to participate in those venues and
review their archives directly.

Appendix C.  About This Document

This section (and its subsections) has been included as an aid to
reviewers of this document, and should be removed prior to
publication.

C.1.  Venue

The authors propose that this document proceed as an Independent
Submission, since it documents work that, although relevant to the
IETF, has been carried out externally to any IETF working group.
However, a suitable venue for discussion of this document is the dnsop
working group.

Information about the Yeti DNS Project and discussion relating to
particular experiments described in this document can be found at
<https://yeti-dns.org/>.

This document is maintained in GitHub.

C.2.  Revision History

C.2.1.  draft-song-yeti-testbed-experience-00 through -03

Change history is available in the public GitHub repository where this
document is maintained.

C.2.2.  draft-song-yeti-testbed-experience-04

Substantial editorial review and rearrangement of text by Joe Abley at
the request of BII.

Added what is intended to be a balanced assessment of the controversy
that has arisen around the Yeti DNS Project, at the request of the
Independent Submissions Editorial Board.

Changed the focus of the document from the description of individual
experiments on a Root-like testbed to the construction and motivations
of the testbed itself, since that better describes the output of the
Yeti DNS Project to date.
In the considered opinion of this reviewer, the novel approaches taken
in the construction of the testbed infrastructure and the technical
challenges met in doing so are useful to record, and the RFC series is
a reasonable place to record operational experiences related to core
Internet infrastructure.

Note that due to draft cut-off deadlines, some of the technical
details described in this revision of the document may not exactly
match operational reality; however, this revision provides an
indicative level of detail, focus and flow which it is hoped will be
helpful to reviewers.

Authors' Addresses

   Linjian Song (editor)
   Beijing Internet Institute
   2508 Room, 25th Floor, Tower A, Time Fortune
   Beijing  100028
   P. R. China

   Email: songlinjian@gmail.com
   URI:   http://www.biigroup.com/

   Dong Liu (editor)
   Beijing Internet Institute
   2508 Room, 25th Floor, Tower A, Time Fortune
   Beijing  100028
   P. R. China

   Email: dliu@biigroup.com
   URI:   http://www.biigroup.com/

   Paul Vixie (editor)
   TISF
   11400 La Honda Road
   Woodside, California  94062
   US

   Email: vixie@tisf.net
   URI:   http://www.redbarn.org/

   Akira Kato (editor)
   Keio University/WIDE Project
   Graduate School of Media Design, 4-1-1 Hiyoshi, Kohoku
   Yokohama  223-8526
   JAPAN

   Email: kato@wide.ad.jp
   URI:   http://www.kmd.keio.ac.jp/

   Shane Kerr
   Antoon Coolenlaan 41
   Uithoorn  1422 GN
   NL

   Email: shane@time-travellers.org