2 Internet Engineering Task Force (IETF) L. Song, Ed.
3 Internet-Draft D. Liu
4 Intended status: Informational Beijing Internet Institute
5 Expires: June 15, 2018 P. Vixie
6 TISF
A. Kato
8 Keio University/WIDE Project
9 S. Kerr
10 December 12, 2017
12 Yeti DNS Testbed
13 draft-song-yeti-testbed-experience-06
15 Abstract
17 The Internet's Domain Name System (DNS) is built upon the foundation
18 provided by the Root Server System -- that is, the critical
19 infrastructure that serves the DNS root zone.
21 Yeti DNS is an experimental, non-production testbed that provides an
22 environment where technical and operational experiments can safely be
23 performed without risk to production infrastructure. This testbed
24 has been used by a broad community of participants to perform
25 experiments that aim to inform operations and future development of
26 the production DNS. Yeti DNS is an independently-coordinated project
27 and is not affiliated with ICANN, IANA or any Root Server Operator.
29 The Yeti DNS testbed implementation includes various novel and
30 experimental components including IPv6-only transport, independent,
31 autonomous Zone Signing Key management, large cryptographic keys and
32 a large number of Yeti-Root Servers. These differences from the Root
33 Server System have operational consequences such as large responses
34 to priming queries and the coordination of a large pool of
35 independent operators; by deploying such a system globally but
36 outside the production DNS system, the Yeti DNS project provides an
37 opportunity to gain insight into those consequences without
38 threatening the stability of the DNS.
40 This document neither addresses the relevant policies under which the
41 Root Server System is operated nor makes any proposal for changing
42 any aspect of its implementation or operation. This document aims
43 solely to document the technical and operational experience of
44 deploying a system which is similar to but different from the Root
45 Server System.
47 Status of This Memo
49 This Internet-Draft is submitted in full conformance with the
50 provisions of BCP 78 and BCP 79.
52 Internet-Drafts are working documents of the Internet Engineering
53 Task Force (IETF). Note that other groups may also distribute
54 working documents as Internet-Drafts. The list of current Internet-
55 Drafts is at https://datatracker.ietf.org/drafts/current/.
57 Internet-Drafts are draft documents valid for a maximum of six months
58 and may be updated, replaced, or obsoleted by other documents at any
59 time. It is inappropriate to use Internet-Drafts as reference
60 material or to cite them other than as "work in progress."
62 This Internet-Draft will expire on June 15, 2018.
64 Copyright Notice
66 Copyright (c) 2017 IETF Trust and the persons identified as the
67 document authors. All rights reserved.
69 This document is subject to BCP 78 and the IETF Trust's Legal
70 Provisions Relating to IETF Documents
71 (https://trustee.ietf.org/license-info) in effect on the date of
72 publication of this document. Please review these documents
73 carefully, as they describe your rights and restrictions with respect
74 to this document. Code Components extracted from this document must
75 include Simplified BSD License text as described in Section 4.e of
76 the Trust Legal Provisions and are provided without warranty as
77 described in the Simplified BSD License.
79 Table of Contents
81 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3
82 2. Areas of Study . . . . . . . . . . . . . . . . . . . . . . . 5
83 2.1. Implementation of a Root Server System-like Testbed . . . 5
84 2.2. Yeti-Root Zone Distribution . . . . . . . . . . . . . . . 5
85 2.3. Yeti-Root Server Names and Addressing . . . . . . . . . . 5
86 2.4. IPv6-Only Yeti-Root Servers . . . . . . . . . . . . . . . 5
87 2.5. DNSSEC in the Yeti-Root Zone . . . . . . . . . . . . . . 6
88 3. Yeti DNS Testbed Infrastructure . . . . . . . . . . . . . . . 6
89 3.1. Root Zone Retrieval . . . . . . . . . . . . . . . . . . . 8
90 3.2. Transformation of Root Zone to Yeti-Root Zone . . . . . . 8
91 3.2.1. ZSK and KSK Key Sets Shared Between DMs . . . . . . . 9
92 3.2.2. Unique ZSK per DM; No Shared KSK . . . . . . . . . . 10
93 3.2.3. Preserving Root Zone NSEC Chain and ZSK RRSIGs . . . 11
94 3.3. Yeti-Root Zone Distribution . . . . . . . . . . . . . . . 11
95 3.4. Synchronisation of Service Metadata . . . . . . . . . . . 11
96 3.5. Yeti-Root Server Naming Scheme . . . . . . . . . . . . . 12
97 3.6. Yeti-Root Servers . . . . . . . . . . . . . . . . . . . . 13
98 3.7. Experimental Traffic . . . . . . . . . . . . . . . . . . 15
99 3.8. Traffic Capture and Analysis . . . . . . . . . . . . . . 15
100 4. Operational Experience with the Yeti DNS Testbed . . . . . . 16
101 4.1. Viability of IPv6-Only Operation . . . . . . . . . . . . 16
102 4.1.1. IPv6 Fragmentation . . . . . . . . . . . . . . . . . 16
103 4.1.2. Serving IPv4-Only End-Users . . . . . . . . . . . . . 18
104 4.2. Zone Distribution . . . . . . . . . . . . . . . . . . . . 18
105 4.2.1. Zone Transfers . . . . . . . . . . . . . . . . . . . 18
106 4.2.2. Delays in Yeti-Root Zone Distribution . . . . . . . . 19
107 4.3. DNSSEC KSK Rollover . . . . . . . . . . . . . . . . . . . 20
108 4.3.1. Failure-Case KSK Rollover . . . . . . . . . . . . . . 20
109 4.3.2. KSK Rollover vs. BIND9 Views . . . . . . . . . . . . 20
110 4.3.3. Large Responses during KSK Rollover . . . . . . . . . 21
111 4.4. Capture of Large DNS Response . . . . . . . . . . . . . . 22
112 4.5. Automated Hints File Maintenance . . . . . . . . . . . . 22
113 4.6. Root Label Compression in Knot . . . . . . . . . . . . . 23
114 5. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 24
115 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 26
116 7. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 26
117 8. References . . . . . . . . . . . . . . . . . . . . . . . . . 26
118 Appendix A. Yeti-Root Hints File . . . . . . . . . . . . . . . . 29
119 Appendix B. Yeti-Root Server Priming Response . . . . . . . . . 31
120 Appendix C. Active IPv6 Prefixes in Yeti DNS testbed . . . . . . 32
121 Appendix D. Tools developed for Yeti DNS testbed . . . . . . . . 33
122 Appendix E. Controversy . . . . . . . . . . . . . . . . . . . . 34
123 Appendix F. About This Document . . . . . . . . . . . . . . . . 34
124 F.1. Venue . . . . . . . . . . . . . . . . . . . . . . . . . . 35
125 F.2. Revision History . . . . . . . . . . . . . . . . . . . . 35
126 F.2.1. draft-song-yeti-testbed-experience-00 through -03 . . 35
127 F.2.2. draft-song-yeti-testbed-experience-04 . . . . . . . . 35
128 F.2.3. draft-song-yeti-testbed-experience-05 . . . . . . . . 36
129 F.2.4. draft-song-yeti-testbed-experience-06 . . . . . . . . 36
130 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 36
132 1. Introduction
134 The Domain Name System (DNS), as originally specified in [RFC1034]
135 and [RFC1035], has proved to be an enduring and important platform
136 upon which almost every end-user of the Internet relies. Despite its
137 longevity, extensions to the protocol, new implementations and
138 refinements to DNS operations continue to emerge both inside and
139 outside the IETF.
141 The Root Server System in particular has seen technical innovation
142 and development, for example in the form of wide-scale anycast
143 deployment, the mitigation of unwanted traffic on a global scale, the
144 widespread deployment of Response Rate Limiting [RRL], the
145 introduction of IPv6 transport, the deployment of DNSSEC, changes in
146 DNSSEC key sizes and preparations to roll the root zone's Key Signing
Key (KSK) and corresponding trust anchor. Together, even the projects
listed in this brief summary imply tremendous operational change, made
all the more impressive by the caution necessary when managing critical
Internet infrastructure, the context of the adjacent administrative
changes involved in root zone management, and the (relatively speaking)
massive increase in the number of delegations in the root zone itself.
155 Aspects of the operational structure of the Root Server System have
156 been described in such documents as [TNO2009], [ISC-TN-2003-1],
157 [RSSAC001] and [RFC7720]. Such references, considered together,
158 provide sufficient insight into the operations of the system as a
159 whole that it is straightforward to imagine structural changes to the
160 root server system's infrastructure and to wonder what the
161 operational implications of such changes might be.
163 The Yeti DNS Project was conceived in May 2015 to provide a non-
164 production testbed upon which the technical community could propose
165 and run experiments designed to answer these kinds of questions.
166 Coordination for the project was provided by TISF, the WIDE Project
167 and the Beijing Internet Institute. Many volunteers collaborated to
168 build a distributed testbed that at the time of writing includes 25
169 Yeti root servers with 16 operators and handles experimental traffic
170 from individual volunteers, universities, DNS vendors and distributed
171 measurement networks.
173 By design, the Yeti testbed system serves the root zone published by
174 the IANA with only those structural modifications necessary to ensure
175 that it is able to function usefully in the Yeti testbed system
176 instead of the production Root Server system. In particular, no
177 delegation for any top-level zone is changed, added or removed from
178 the IANA-published root zone to construct the root zone served by the
179 Yeti testbed system and changes in the root zone are reflected in the
180 testbed in near real-time. In this document, for clarity, we refer
181 to the zone derived from the IANA-published root zone as the Yeti-
182 Root zone.
184 The Yeti DNS testbed serves a similar function to the Root Server
185 System in the sense that they both serve similar zones: the Yeti-Root
186 zone and the IANA-published root zone. However, the Yeti DNS testbed
187 only serves clients that are explicitly configured to participate in
188 the experiment, whereas the Root Server System serves the whole
189 Internet. Since the dependent end-users and systems of the Yeti DNS
190 testbed are known and their operations well-coordinated with those of
191 the Yeti project, it has been possible to deploy structural changes
192 in the Yeti DNS testbed with effective measurement and analysis,
193 something that is difficult or simply impractical in the production
194 Root Server System.
196 2. Areas of Study
198 Examples of topics that the Yeti DNS Testbed was built to address are
199 included below, each illustrated with indicative questions.
201 2.1. Implementation of a Root Server System-like Testbed
203 o How can a testbed be constructed and deployed on the Internet,
204 allowing useful public participation without any risk of
205 disruption of the Root Server System?
207 o How can representative traffic be introduced into such a testbed
208 such that insights into the impact of specific differences between
209 the testbed and the Root Server System can be observed?
211 2.2. Yeti-Root Zone Distribution
213 o What are the scaling properties of Yeti-Root zone distribution as
214 the number of Yeti-Root servers, Yeti-Root server instances or
215 intermediate distribution points increase?
217 2.3. Yeti-Root Server Names and Addressing
219 o What naming schemes other than those closely analogous to the use
220 of ROOT-SERVERS.NET in the production root zone are practical, and
221 what are their respective advantages and disadvantages?
223 o What are the risks and benefits of signing the zone that contains
224 the names of the Yeti-Root servers?
226 o What automatic mechanisms might be useful to improve the rate at
227 which clients of Yeti-Root servers are able to react to a Yeti-
228 Root server renumbering event?
230 2.4. IPv6-Only Yeti-Root Servers
232 o Are there negative operational effects in the use of IPv6-only
233 Yeti-Root servers, compared to the use of servers that are dual-
234 stack?
236 o What effect does the IPv6 fragmentation model have on the
237 operation of Yeti-Root servers, compared with that of IPv4?
239 2.5. DNSSEC in the Yeti-Root Zone
241 o Is it practical to sign the Yeti-Root zone using multiple,
242 independently-operated DNSSEC signers and multiple corresponding
243 ZSKs?
245 o To what extent is [RFC5011] supported by resolvers?
247 o Does the KSK Rollover plan designed and in the process of being
248 implemented by ICANN work as expected on the Yeti testbed?
250 o What is the operational impact of using much larger RSA key sizes
251 in the ZSKs used in the Yeti-Root?
253 o What are the operational consequences of choosing DNSSEC
254 algorithms other than RSA to sign the Yeti-Root zone?
256 3. Yeti DNS Testbed Infrastructure
258 The purpose of the testbed is to allow DNS queries from stub
259 resolvers, mediated by recursive resolvers, to be delivered to Yeti-
260 Root servers, and for corresponding responses generated on the Yeti-
261 Root servers to be returned, as illustrated in Figure 1.
263 ,----------. ,-----------. ,------------.
264 | stub +------> | recursive +------> | Yeti-Root |
265 | resolver | <------+ resolver | <------+ nameserver |
266 `----------' `-----------' `------------'
267 ^ ^ ^
268 | appropriate | Yeti-Root hints; | Yeti-Root zone
269 `- resolver `- Yeti-Root trust `- with DNSKEY RRSet
270 configured anchor signed by Yeti-KSK
272 Figure 1: High-Level Testbed Components
274 To use the Yeti DNS testbed, a recursive resolver must be configured
275 to use the Yeti-Root servers. That configuration consists of a list
276 of names and addresses for the Yeti-Root servers (often referred to
as a "hints file") that replaces the corresponding hints used for the
production Root Server System (see Appendix A). Resolvers also need to be
279 configured with a DNSSEC trust anchor that corresponds to a KSK used
280 in the Yeti DNS Project, in place of the normal trust anchor set used
281 for the root zone.
283 The need for a Yeti-specific trust anchor in the resolver stems from
284 the need to make minimal changes to the root zone, as retrieved from
285 the IANA, to transform it into the Yeti-Root zone that can be used in
the testbed. Corresponding changes are required in the Yeti-Root
hints file (see Appendix A). Those changes would properly be rejected
as bogus by any validator using the production Root Server System's
root zone trust anchor set.
291 Stub resolvers become part of the Yeti DNS Testbed by their use of
292 recursive resolvers that are configured as described above.
294 The data flow from IANA to stub resolvers through the Yeti testbed is
illustrated in Figure 2 and described in more detail in the sections
that follow.
298 ,----------------.
299 ,-- / IANA Root Zone / ---.
300 | `----------------' |
301 | | |
302 | | | Root Zone
303 ,--------------. ,---V---. ,---V---. ,---V---.
304 | Yeti Traffic | | BII | | WIDE | | TISF |
305 | Collection | | DM | | DM | | DM |
306 `----+----+----' `---+---' `---+---' `---+---'
307 | | ,-----' ,-------' `----.
308 | | | | | Yeti-Root
309 ^ ^ | | | Zone
310 | | ,---V---. ,---V---. ,---V---.
311 | `---+ Yeti | | Yeti | . . . . . . . | Yeti |
312 | | Root | | Root | | Root |
313 | `---+---' `---+---' `---+---'
314 | | | | DNS
315 | | | | Response
316 | ,--V----------V-------------------------V--.
317 `---------+ Yeti Resolvers |
318 `--------------------+---------------------'
319 | DNS
320 | Response
321 ,--------------------V---------------------.
322 | Yeti Stub Resolvers |
323 `------------------------------------------'
325 Figure 2: Testbed Data Flow
327 3.1. Root Zone Retrieval
329 The Yeti-Root Zone is distributed within the Yeti DNS testbed through
330 a set of internal master servers that are referred to as Distribution
331 Masters (DMs). These server elements distribute the Yeti-Root zone
332 to all Yeti-Root servers. The means by which the Yeti DMs construct
333 the Yeti-Root zone for distribution is described below.
335 Since Yeti DNS DMs do not receive DNS NOTIFY [RFC1996] messages from
336 the Root Server System, a polling approach is used to determine when
337 new revisions of the root zone are available from the production Root
338 Server System. Each Yeti DM requests the root zone SOA record from a
339 root server that permits unauthenticated zone transfers of the root
340 zone, and performs a zone transfer from that server if the retrieved
341 value of SOA.SERIAL is greater than that of the last retrieved zone.
343 At the time of writing, unauthenticated zone transfers of the root
344 zone are available directly from B-Root, C-Root, F-Root, G-Root and
345 K-Root, and from L-Root via the two servers XFR.CJR.DNS.ICANN.ORG and
346 XFR.LAX.DNS.ICANN.ORG, as well as via FTP from sites maintained by
the Root Zone Maintainer and the IANA Functions Operator. The Yeti
DNS Testbed retrieves the root zone using zone transfers from F-Root.
The schedule on which F-Root is polled by each Yeti DM is as follows:
352 +-------------+-----------------------+
353 | DM Operator | Time |
354 +-------------+-----------------------+
355 | BII | UTC hour + 00 minutes |
356 | WIDE | UTC hour + 20 minutes |
357 | TISF | UTC hour + 40 minutes |
358 +-------------+-----------------------+
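
For illustration, the polling and transfer logic described above can
be sketched as follows, assuming Python and the dnspython library; the
server address and the simple serial comparison are illustrative only,
and a production implementation would use RFC 1982 serial arithmetic
and its operator's preferred transfer source:

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.zone

    XFR_SOURCE = "2001:500:2f::f"   # a root server permitting AXFR
                                    # (address shown for illustration)

    def fetch_root_if_newer(last_serial):
        # Ask the chosen root server for the current SOA of the root zone.
        query = dns.message.make_query(".", dns.rdatatype.SOA)
        response = dns.query.udp(query, XFR_SOURCE, timeout=5)
        serial = response.answer[0][0].serial
        if last_serial is not None and serial <= last_serial:
            return last_serial, None        # no new revision published
        # A higher serial was seen: retrieve the whole zone with AXFR.
        zone = dns.zone.from_xfr(dns.query.xfr(XFR_SOURCE, "."))
        return serial, zone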
360 The Yeti DNS testbed uses multiple DMs, each of which acts
361 autonomously and equivalently to its siblings. Any single DM can act
362 to distribute new revisions of the Yeti-Root zone, and is also
363 responsible for signing the RRSets that are changed as part of the
364 transformation of the Root Zone into the Yeti-Root zone described in
365 Section 3.2. This shared control over the processing and
366 distribution of the Yeti-Root zone approximates some of the ideas
367 around shared zone control explored in [ITI2014].
369 3.2. Transformation of Root Zone to Yeti-Root Zone
371 Two distinct approaches have been deployed in the Yeti-DNS Testbed,
372 separately, to transform the Root Zone into the Yeti-Root Zone. At a
373 high level both approaches are equivalent in the sense that they
374 replace a minimal set of information in the root zone with
375 corresponding data for the Yeti DNS Testbed; the mechanisms by which
376 the transforms are executed are different, however. Each is
377 discussed in turn in Section 3.2.1 and Section 3.2.2, respectively.
379 A third approach has also been proposed, but not yet implemented.
380 The motivations and changes implied by that approach are described in
381 Section 3.2.3.
383 3.2.1. ZSK and KSK Key Sets Shared Between DMs
385 The approach described here was the first to be implemented. It
386 features entirely autonomous operation of each DM, but also requires
387 secret key material (the private key in each of the Yeti-Root KSK and
388 ZSK key-pairs) to be distributed and maintained on each DM in a
389 coordinated way.
391 The Root Zone is transformed as follows to produce the Yeti-Root
392 Zone. This transformation is carried out autonomously on each Yeti
393 DNS Project DM. Each DM carries an authentic copy of the current set
394 of Yeti KSK and ZSK key pairs, synchronized between all DMs (see
395 Section 3.4).
1.  SOA.MNAME is set to www.yeti-dns.org.

2.  SOA.RNAME is set to <dm-operator>.yeti-dns.org, where
    <dm-operator> is currently one of "wide", "bii" or "tisf".
402 3. All DNSKEY, RRSIG and NSEC records are removed.
404 4. The apex NS RRSet is removed, with the corresponding root server
405 glue (A and AAAA) RRSets.
407 5. A Yeti DNSKEY RRSet is added to the apex, comprising the public
408 parts of all Yeti KSK and ZSKs.
410 6. A Yeti NS RRSet is added to the apex that includes all Yeti-Root
411 servers.
413 7. Glue records (AAAA only, since Yeti-Root servers are v6-only) for
414 all Yeti-Root servers are added.
416 8. The Yeti-Root Zone is signed: the NSEC chain is regenerated; the
417 Yeti KSK is used to sign the DNSKEY RRSet, and the DM's local ZSK
418 is used to sign every other RRSet.
420 Note that the SOA.SERIAL value published in the Yeti-Root Zone is
421 identical to that found in the root zone.
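
The removal steps above (steps 3 and 4) can be illustrated with a
short sketch, assuming Python and the dnspython library; re-signing
and the addition of the Yeti apex data (steps 5 through 8) are left to
external tooling and are not shown:

    import dns.rdatatype
    import dns.zone

    def strip_iana_dnssec_and_apex_ns(zone):
        # Step 3: remove every DNSKEY, RRSIG and NSEC record in the zone.
        for _name, node in zone.items():
            for rdataset in list(node.rdatasets):
                if rdataset.rdtype in (dns.rdatatype.DNSKEY,
                                       dns.rdatatype.RRSIG,
                                       dns.rdatatype.NSEC):
                    node.delete_rdataset(zone.rdclass, rdataset.rdtype,
                                         covers=rdataset.covers)
        # Step 4: remove the apex NS RRSet; in the full transformation
        # the corresponding root server glue (A and AAAA) is removed too.
        zone.find_node("@").delete_rdataset(zone.rdclass,
                                            dns.rdatatype.NS)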
423 3.2.2. Unique ZSK per DM; No Shared KSK
425 The approach described here was the second to be implemented. Each
426 DM is provisioned with its own, dedicated ZSK key pairs that are not
427 shared with other DMs. A Yeti-Root DNSKEY RRSet is constructed and
428 signed upstream of all DMs as the union of the set of active KSKs and
429 the set of active ZSKs for every individual DM. Each DM now only
430 requires the secret part of its own dedicated ZSK key pairs to be
431 available locally, and no other secret key material is shared. The
432 high-level approach is illustrated in Figure 3.
434 ,----------. ,-----------.
435 .--------> BII ZSK +---------> Yeti-Root |
436 | signs `----------' signs `-----------'
437 |
438 ,-----------. | ,----------. ,-----------.
439 | Yeti KSK +-+--------> TISF ZSK +---------> Yeti-Root |
440 `-----------' | signs `----------' signs `-----------'
441 |
442 | ,----------. ,-----------.
443 `--------> WIDE ZSK +---------> Yeti-Root |
444 signs `----------' signs `-----------'
446 Figure 3: Unique ZSK per DM
448 The process of retrieving the Root Zone from the Root Server System
449 and replacing and signing the apex DNSKEY RRSet no longer takes place
450 on the DMs, and instead takes place on a central Hidden Master. The
451 production of signed DNSKEY RRSets is analogous to the use of Signed
452 Key Responses (SKR) produced during ICANN KSK key ceremonies
453 [ICANN2010].
Each DM now retrieves source data (with the DNSKEY RRSet already
modified and Yeti-signed, but otherwise unchanged) from the Yeti DNS
Hidden Master instead of from the Root Server System.
459 Each DM carries out a similar transformation to that described in
460 Section 3.2.1, except that DMs no longer need to modify or sign the
461 DNSKEY RRSet.
463 The Yeti-Root Zone served by any particular Yeti-Root Server will
464 include signatures generated using the ZSK from the DM that served
465 the Yeti-Root Zone to that Yeti-Root Server. Signatures cached at
466 resolvers might be retrieved from any Yeti-Root Server, and hence are
467 expected to be a mixture of signatures generated by different ZSKs.
468 Since all ZSKs can be trusted through the signature by the Yeti KSK
469 over the DNSKEY RRSet, which includes all ZSKs, the mixture of
470 signatures was predicted not to be a threat to reliable validation.
471 Deployment and experimentation confirms this to be the case, even
472 when individual ZSKs are rolled on different schedules.
474 A consequence of this approach is that the apex DNSKEY RRSet in the
475 Yeti-Root zone is much larger than the corresponding DNSKEY RRSet in
476 the Root Zone.
478 3.2.3. Preserving Root Zone NSEC Chain and ZSK RRSIGs
480 A change to the transformation described in Section 3.2.2 has been
481 proposed that would preserve the NSEC chain from the Root Zone and
482 all RRSIG RRs generated using the Root Zone's ZSKs. The DNSKEY RRSet
483 would continue to be modified to replace the Root Zone KSKs, and the
484 Yeti KSK would be used to generate replacement signatures over the
485 apex DNSKEY and NS RRSets. Source data would continue to flow from
486 the Root Server System through the Hidden Master to the set of DMs,
487 but no DNSSEC operations would be required on the DMs and the source
488 NSEC and most RRSIGs would remain intact.
490 This approach has been suggested in order to provide
491 cryptographically-verifiable confidence that no owner name in the
492 root zone had been changed in the process of producing the Yeti-Root
493 zone from the Root Zone, addressing one of the concerns described in
494 Appendix E in a way that can be verified automatically.
496 3.3. Yeti-Root Zone Distribution
498 Each Yeti DM is configured with a full list of Yeti-Root Server
499 addresses to send NOTIFY [RFC1996] messages to, which also forms the
500 basis for an address-based access-control list for zone transfers.
Authentication by address could be replaced with more rigorous
502 mechanisms (e.g. using Transaction Signatures (TSIG) [RFC2845]); this
503 has not been done at the time of writing since the use of address-
504 based controls avoids the need for the distribution of shared secrets
505 amongst the Yeti-Root Server Operators.
507 Individual Yeti-Root Servers are configured with a full set of Yeti
508 DM addresses to which SOA and AXFR queries may be sent in the
509 conventional manner.
511 3.4. Synchronisation of Service Metadata
513 Changes in the Yeti-DNS Testbed infrastructure such as the addition
514 or removal of Yeti-Root servers, renumbering Yeti-Root Servers or
515 DNSSEC key rollovers require coordinated changes to take place on all
516 DMs. The Yeti-DNS Testbed is subject to more frequent changes than
517 are observed in the Root Server System and includes substantially
518 more Yeti-Root Servers than there are IANA Root Servers, and hence a
519 manual change process in the Yeti Testbed would be more likely to
520 suffer from human error. An automated process was consequently
521 implemented.
523 A repository of all service metadata involved in the operation of
524 each DM was implemented as a dedicated git repository hosted at
525 github.com, a mechanism chosen since it was simple, transparent and
526 familiar to participants. Requests to change the service metadata
527 for a DM were submitted as pull requests from a fork of the
528 corresponding repository; each DM operator reviewed pull requests and
529 merged them to indicate approval. Once merged, changes were pulled
530 automatically to individual DMs and promoted to production.
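
A minimal sketch of the pull-and-promote step on a DM might look like
the following (Python; the repository path and the local promotion
command are hypothetical):

    import subprocess

    REPO_DIR = "/srv/yeti-dm/metadata"             # hypothetical path
    PROMOTE = ["/usr/local/sbin/yeti-dm-promote"]  # hypothetical command

    def sync_service_metadata():
        # Note the current revision, fast-forward to the reviewed and
        # merged state of the shared repository, and promote to
        # production only if something actually changed.
        before = subprocess.check_output(
            ["git", "-C", REPO_DIR, "rev-parse", "HEAD"])
        subprocess.check_call(["git", "-C", REPO_DIR, "pull", "--ff-only"])
        after = subprocess.check_output(
            ["git", "-C", REPO_DIR, "rev-parse", "HEAD"])
        if after != before:
            subprocess.check_call(PROMOTE)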
532 3.5. Yeti-Root Server Naming Scheme
The current naming scheme for Root Servers was normalized to use
single-character host names (A through M) under the domain ROOT-
SERVERS.NET, as described in [RSSAC023]. The principal benefit of
537 this naming scheme was that DNS label compression could be used to
538 produce a priming response that would fit within 512 bytes at the
539 time it was introduced, 512 bytes being the maximum DNS message size
540 using UDP transport without EDNS(0) [RFC6891].
542 Yeti-Root Servers do not use this optimization, but rather use free-
543 form nameserver names chosen by their respective operators -- in
544 other words, no attempt is made to minimize the size of the priming
545 response through the use of label compression. This approach aims to
546 challenge the need for a minimally-sized priming response in a modern
547 DNS ecosystem where EDNS(0) is prevalent.
549 Priming responses from Yeti-Root Servers do not always include server
addresses in the additional section, unlike priming responses from
the Root Servers. In particular, Yeti-Root Servers
552 running BIND9 return an empty additional section if the configuration
553 parameter minimum-responses is set, forcing resolvers to complete the
554 priming process with a set of conventional recursive lookups in order
555 to resolve addresses for each Yeti-Root server. The Yeti-Root
556 Servers running NSD were observed to return a fully-populated
additional section (depending, of course, on the EDNS buffer size in
use).
560 Various approaches to normalize the composition of the priming
561 response were considered, including:
563 o Require use of DNS implementations that exhibit a desired
564 behaviour in the priming response;
566 o Modify nameserver software or configuration as used by Yeti-Root
567 Servers;
569 o Isolate the names of Yeti-Root Servers in one or more zones that
570 could be slaved on each Yeti-Root Server, renaming servers as
571 necessary, giving each a source of authoritative data with which
572 the authority section of a priming response could be fully
573 populated. This is the approach used in the Root Server System
574 with the ROOT-SERVERS.NET zone.
576 The potential mitigation of renaming all Yeti-Root Servers using a
577 scheme that would allow their names to exist directly in the root
578 zone was not considered, since that approach implies the invention of
579 new top-level labels not present in the Root Zone.
581 Given the relative infrequency of priming queries by individual
582 resolvers and the additional complexity or other compromises implied
583 by each of those mitigations, the decision was made to make no effort
584 to ensure that the composition of priming responses was identical
585 across servers. Even the empty additional sections generated by
586 Yeti-Root Servers running BIND9 seem to be sufficient for all
587 resolver software tested; resolvers simply perform a new recursive
588 lookup for each authoritative server name they need to resolve.
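
The resolver behaviour described above can be approximated with a
short sketch, assuming Python and the dnspython library; the Yeti-Root
server address would be taken from the hints file:

    import dns.message
    import dns.query
    import dns.rdatatype
    import dns.resolver

    def prime(yeti_root_addr):
        # Send a priming query (./IN/NS) with EDNS, per RFC 8109.
        query = dns.message.make_query(".", dns.rdatatype.NS,
                                       use_edns=0, payload=4096)
        response = dns.query.udp(query, yeti_root_addr, timeout=5)
        ns_names = [rr.target for rr in response.answer[0]]
        # Collect whatever AAAA glue the server chose to include.
        glue = {}
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.AAAA:
                glue[rrset.name] = [rr.address for rr in rrset]
        # For any server name missing from the additional section (as
        # with BIND9 and minimal responses), fall back to an ordinary
        # recursive lookup, as deployed resolvers were observed to do.
        for name in ns_names:
            if name not in glue:
                answers = dns.resolver.resolve(name, "AAAA")
                glue[name] = [rr.address for rr in answers]
        return glue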
590 3.6. Yeti-Root Servers
592 Various volunteers have donated authoritative servers to act as Yeti-
593 Root servers. At the time of writing there are 25 Yeti-Root servers
594 distributed globally, one of which is named using an IDNA2008
595 [RFC5890] label, shown in the following list in punycode.
597 +-------------------------------------+---------------+-------------+
598 | Name | Operator | Location |
599 +-------------------------------------+---------------+-------------+
600 | bii.dns-lab.net | BII | CHINA |
| yeti-ns.tisf.net | TISF | USA |
602 | yeti-ns.wide.ad.jp | WIDE Project | Japan |
603 | yeti-ns.as59715.net | as59715 | Italy |
604 | dahu1.yeti.eu.org | Dahu Group | France |
605 | ns-yeti.bondis.org | Bond Internet | Spain |
606 | | Systems | |
| yeti-ns.ix.ru | MSK-IX | Russia |
608 | yeti.bofh.priv.at | CERT Austria | Austria |
609 | yeti.ipv6.ernet.in | ERNET India | India |
610 | yeti-dns01.dnsworkshop.org | dnsworkshop | Germany |
611 | | /informnis | |
612 | dahu2.yeti.eu.org | Dahu Group | France |
613 | yeti.aquaray.com | Aqua Ray SAS | France |
614 | yeti-ns.switch.ch | SWITCH | Switzerland |
615 | yeti-ns.lab.nic.cl | CHILE NIC | Chile |
616 | yeti-ns1.dns-lab.net | BII | China |
617 | yeti-ns2.dns-lab.net | BII | China |
618 | yeti-ns3.dns-lab.net | BII | China |
619 | ca...a23dc.yeti-dns.net | Yeti-ZA | South |
620 | | | Africa |
621 | 3f...374cd.yeti-dns.net | Yeti-AU | Australia |
622 | yeti1.ipv6.ernet.in | ERNET India | India |
623 | xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c | ERNET India | India |
624 | yeti-dns02.dnsworkshop.org | dnsworkshop | USA |
625 | | /informnis | |
626 | yeti.mind-dns.nl | Monshouwer | Netherlands |
627 | | Internet | |
628 | | Diensten | |
629 | yeti-ns.datev.net | DATEV | Germany |
630 | yeti.jhcloos.net. | jhcloos | USA |
631 +-------------------------------------+---------------+-------------+
The current list of Yeti-Root servers is made available to a
participating resolver first using a substitute hints file (Appendix A)
and subsequently by the usual resolver priming process [RFC8109].
636 All Yeti-Root servers are IPv6-only, foreshadowing a future IPv6-only
637 Internet, and hence the Yeti-Root hints file contains no IPv4
638 addresses and the Yeti-Root zone contains no IPv4 glue.
640 At the time of writing, all root servers within the Root Server
641 System serve the ROOT-SERVERS.NET zone in addition to the root zone,
642 and all but one also serve the ARPA zone. Yeti-Root servers serve
643 the Yeti-Root zone only.
645 Significant software diversity exists across the set of Yeti-Root
646 servers, as reported by their volunteer operators at the time of
647 writing:
649 o Platform: 18 of 25 Yeti-Root servers are implemented on a VPS
650 rather than bare metal.
652 o Operating System: 15 Yeti-Root servers run on Linux (Ubuntu,
653 Debian, CentOS, Red Hat and ArchLinux); 4 run on FreeBSD, 1 on
NetBSD and 1 on Windows Server 2016.
656 o DNS software: 18 of 25 Yeti-Root servers use BIND9 (versions
657 varying between 9.9.7 and 9.10.3); 4 use NSD (4.10 and 4.15); 2
658 use Knot (2.0.1 and 2.1.0), 1 uses Bundy (1.2.0) and 1 uses MS DNS
659 (10.0.14300.1000).
661 3.7. Experimental Traffic
663 For the Yeti DNS Testbed to be useful as a platform for
664 experimentation, it needs to carry statistically representative
665 traffic. Several approaches have been taken to load the system with
666 traffic, including both real-world traffic triggered by end-users and
667 synthetic traffic.
669 Resolvers that have been explicitly configured to participate in the
670 testbed, as described in Section 3, are a source of real-world, end-
671 user traffic. Sustained levels of traffic have been observed from a
672 variety of sources, as summarised in Appendix C.
674 Synthetic traffic has been introduced to the system from time to time
675 in order to increase traffic loads. Approaches include the use of
676 distributed measurement platforms such as RIPE ATLAS to send DNS
677 queries to Yeti-Root servers, and the capture of traffic sent from
678 non-Yeti resolvers to the Root Server System which was subsequently
679 modified and replayed towards Yeti-Root servers.
681 3.8. Traffic Capture and Analysis
683 Query and response traffic capture is available in the testbed in
684 both Yeti resolvers and Yeti-Root servers in anticipation of
685 experiments that require packet-level visibility into DNS traffic.
Traffic capture is performed on Yeti-Root servers using either dnscap
or pcapdump (part of the pcaputils Debian package), the latter patched
to facilitate triggered file upload. PCAP-format files containing
packet captures are uploaded using rsync to central storage.
695 4. Operational Experience with the Yeti DNS Testbed
697 The following sections provide commentary on the operation and impact
698 analyses of the Yeti-DNS Testbed described in Section 3. More
detailed descriptions of observed phenomena are available in the Yeti
DNS mailing list archives and on the Yeti DNS blog.
703 4.1. Viability of IPv6-Only Operation
705 All Yeti-Root servers were deployed with IPv6 connectivity, and no
706 IPv4 addresses for any Yeti-Root server were made available (e.g. in
707 the Yeti hints file, or in the DNS itself). This implementation
708 decision constrained the Yeti-Root system to be v6-only.
710 DNS implementations are generally adept at using both IPv4 and IPv6
711 when both are available. Servers that cannot be reliably reached
712 over one protocol might be better queried over the other, to the
713 benefit of end-users in the common case where DNS resolution is on
714 the critical path for end-users' perception of performance. However,
715 this optimisation also means that systemic problems with one protocol
716 can be masked by the other. By forcing all traffic to be carried
717 over IPv6, the Yeti DNS testbed aimed to expose any such problems and
718 make them easier to identify and understand. Several examples of
719 IPv6-specific phenomena observed during the operation of the testbed
720 are described in the sections that follow.
722 Although the Yeti-Root servers themselves were only reachable using
723 IPv6, real-world end-users often have no IPv6 connectivity. The
724 testbed was also able to explore the degree to which IPv6-only Yeti-
725 Root servers were able to serve single-stack, IPv4-only end-user
726 populations through the use of dual-stack Yeti resolvers.
728 4.1.1. IPv6 Fragmentation
730 In the Root Server System, structural changes with the potential to
731 increase response sizes (and hence fragmentation, fallback to TCP
732 transport or both) have been exercised with great care, since the
733 impact on clients has been difficult to predict or measure. The Yeti
734 DNS Testbed is experimental and has the luxury of a known client
735 base, making it far easier to make such changes and measure their
736 impact.
Many of the experimental design choices described in this document
were expected to trigger larger responses. For example, the choice
of naming scheme for Yeti-Root Servers described in Section 3.5
defeats label compression and hence produces a large priming response
(up to 1754 octets with 25 NS records and their glue); the Yeti-Root
zone transformation approach described in Section 3.2.2 greatly
enlarges the apex DNSKEY RRSet, especially during a KSK rollover (up
to 1975 octets with three ZSKs and two KSKs). An increased incidence
of fragmentation was therefore expected.
748 The Yeti-DNS Testbed provides service on IPv6 only. IPv6 has a
749 fragmentation model that is different from IPv4 -- in particular,
750 fragmentation always takes place on the sending host, and not on an
751 intermediate router.
Fragmentation may cause serious issues: if a single fragment is lost,
the entire datagram of which it was a part is lost, which in the DNS
frequently triggers a timeout. It is known that only a limited number
of security middle-box implementations support IPv6 fragments. Public
measurements and reports [I-D.taylor-v6ops-fragdrop] [RFC7872] show a
notable packet drop rate due to the mistreatment of IPv6 fragments by
middle-boxes. One APNIC study [IPv6-frag-DNS] reported that 37% of
endpoints using IPv6-capable DNS resolvers cannot receive a fragmented
IPv6 response over UDP.
To study the impact, RIPE Atlas probes were used. For each Yeti-Root
server, an Atlas measurement was set up using 100 IPv6-enabled probes
from five regions, sending a DNS query for ./IN/DNSKEY over UDP
transport with DO=1. This measurement, carried out concurrently with
a Yeti KSK rollover that further exacerbated the potential for
fragmentation, identified a 7% failure rate compared with a non-
fragmented control. A failure rate of 2% was observed with response
sizes of 1414 octets, which was surprising given the expected
prevalence of 1500-octet (Ethernet-framed) MTUs.
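
For reference, a probe query equivalent to the one used in this
measurement can be produced as follows (Python with the dnspython
library is assumed; the server address is supplied by the caller):

    import dns.message
    import dns.query
    import dns.rdatatype

    def probe_dnskey(server_v6_addr):
        # ./IN/DNSKEY over UDP with DO=1 and a large EDNS buffer, so
        # that the full (and possibly fragmented) response is solicited.
        query = dns.message.make_query(".", dns.rdatatype.DNSKEY,
                                       use_edns=0, payload=4096,
                                       want_dnssec=True)
        response = dns.query.udp(query, server_v6_addr, timeout=10)
        return len(response.to_wire())   # response size in octets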
774 The consequences of fragmentation were not limited to failures in
775 delivering DNS responses over UDP transport. There were two cases
776 where a Yeti-Root server failed to transfer the Yeti-Root zone from a
777 DM. DM log files revealed "socket is not connected" errors
778 corresponding to zone transfer requests. Further experimentation
779 revealed that combinations of NetBSD 6.1, NetBSD 7.0RC1, FreeBSD
780 10.0, Debian 3.2 and VMWare ESXI 5.5 resulted in a high TCP MSS value
781 of 1440 octets being negotiated between client and server despite the
782 presence of the IPV6_USE_MIN_MTU socket option, as described in
783 [I-D.andrews-tcp-and-ipv6-use-minmtu]. The mismatch appears to cause
784 outbound segments greater in size than 1280 octets to be dropped
before sending. Setting the local TCP MSS to 1220 octets (1280 minus
60 octets, the combined size of the IPv6 and TCP headers with no other
extension headers) was observed to be a pragmatic mitigation.
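
As an illustration of that mitigation, an application could clamp the
MSS on its own connections as sketched below (Python; availability of
the TCP_MAXSEG socket option is platform-dependent, and in the testbed
the equivalent setting was applied in name server or operating-system
configuration rather than in application code):

    import socket

    def ipv6_tcp_socket_with_small_mss():
        # 1220 = 1280 (IPv6 minimum MTU) - 40 (IPv6 header) - 20 (TCP
        # header), so no segment exceeds the IPv6 minimum MTU.
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1220)
        return s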
789 4.1.2. Serving IPv4-Only End-Users
791 Yeti resolvers have been successfully used by real-world end-users
792 for general name resolution within a number of participant
793 organisations, including resolution of names to IPv4 addresses and
794 resolution by IPv4-only end-user devices.
796 Some participants, recognising the operational importance of
797 reliability in resolver infrastructure and concerned about the
798 stability of their IPv6 connectivity, chose to deploy Yeti resolvers
799 in parallel to conventional resolvers, making both available to end-
800 users. While the viability of this approach provides a useful data
801 point, end-users using Yeti resolvers exclusively provided a better
802 opportunity to identify and understand any failures in the Yeti DNS
803 testbed infrastructure.
805 Resolvers deployed in IPv4-only environments were able to join the
806 Yeti DNS testbed by way of upstream, dual-stack Yeti resolvers, or in
807 one case, in CERNET2, by assigning IPv4 addresses to Yeti-Root
808 servers and mapping them in dual-stack IVI translation devices
809 [RFC6219].
811 4.2. Zone Distribution
813 The Yeti DNS testbed makes use of multiple DMs to distribute the
814 Yeti-Root zone, an approach that would allow the number of Yeti-Root
815 servers to scale to a higher number than could be supported by a
816 single distribution source and which provided redundancy. The use of
817 multiple DMs introduced some operational challenges, however, which
818 are described in the following sections.
820 4.2.1. Zone Transfers
822 Yeti-Root Servers were configured to serve the Yeti-Root zone as
823 slaves. Each slave had all DMs configured as masters, providing
824 redundancy in zone synchronisation.
826 Each DM in the Yeti testbed served a Yeti-Root zone which is
827 functionally equivalent but not congruent to that served by every
828 other DM (see Section 3.3). The differences included variations in
829 the SOA.MNAME field and, more critically, in the RRSIGs for
830 everything other than the apex DNSKEY RRSet, since signatures for all
831 other RRSets are generated using a private key that is only available
832 to the DM serving its particular variant of the zone (see
833 Section 3.2, Section 3.2.2).
835 Incremental Zone Transfer (IXFR), as described in [RFC1995], is a
836 viable mechanism to use for zone synchronization between any Yeti-
837 Root server and a consistent, single DM. However, if that Yeti-Root
838 server ever selected a different DM, IXFR would no longer be a safe
839 mechanism; structural changes between the incongruent zones on
840 different DMs would not be included in any transferred delta and the
841 result would be a zone that was not internally self-consistent. For
842 this reason the first transfer after a change of DM would require
843 AXFR, not IXFR.
845 None of the DNS software in use on Yeti-Root Servers supports this
846 mixture of IXFR/AXFR according to the master server in use. This is
847 unsurprising, given that the environment described above in the Yeti-
848 Root system is idiosyncratic; conventional zone transfer graphs
849 involve zones that are congruent between all nodes. For this reason,
850 all Yeti-Root servers are configured to use AXFR at all times, and
851 never IXFR, to ensure that zones being served are internally self-
852 consistent.
854 4.2.2. Delays in Yeti-Root Zone Distribution
856 Each Yeti DM polled the Root Server System for a new revision of the
857 root zone on an interleaved schedule, as described in Section 3.1.
858 Consequently, different DMs were expected to retrieve each revision
859 of the root zone, and make a corresponding revision of the Yeti-Root
860 zone available, at different times. The availability of a new
861 revision of the Yeti-Root zone on the first DM would typically
862 precede that of the last by 40 minutes.
864 It might be expected given this distribution mechanism that the
865 maximum latency between the publication of a new revision of the root
866 zone and the availability of the corresponding Yeti-Root zone on any
867 Yeti-Root server would be 20 minutes, since in normal operation at
least one DM should serve that revision of the Yeti-Root zone within
20 minutes of root zone publication. In practice, this was not
observed.
871 In one case a Yeti-Root server running Bundy 1.2.0 on FreeBSD
872 10.2-RELEASE was found to lag root zone publication by as much as ten
873 hours, which upon investigation was due to software defects that were
874 subsequently corrected.
876 More generally, Yeti-Root servers were observed routinely to lag root
877 zone publication by more than 20 minutes, and relatively often by
878 more than 40 minutes. Whilst in some cases this might be assumed to
879 be a result of connectivity problems, perhaps suppressing the
880 delivery of NOTIFY messages, it was also observed that Yeti-Root
881 servers receiving a NOTIFY from one DM would often send SOA queries
882 and AXFR requests to a different DM. If that DM was not yet serving
883 the new revision of the Yeti-Root zone, a delay in updating the Yeti-
884 Root server would naturally result.
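
Distribution lag of this kind can be observed with a simple monitor
that compares the SOA serial served by each Yeti-Root server against a
reference serial, for example as sketched below (Python with the
dnspython library is assumed; all addresses are supplied by the
operator, and RFC 1982 serial arithmetic is ignored for brevity):

    import dns.message
    import dns.query
    import dns.rdatatype

    def soa_serial(server_addr):
        # Fetch the SOA serial of the (Yeti-)Root zone from one server.
        query = dns.message.make_query(".", dns.rdatatype.SOA)
        response = dns.query.udp(query, server_addr, timeout=5)
        return response.answer[0][0].serial

    def report_lag(reference_addr, yeti_root_addrs):
        reference = soa_serial(reference_addr)
        for addr in yeti_root_addrs:
            serial = soa_serial(addr)
            status = "current" if serial >= reference else "lagging"
            print(addr, serial, status)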
886 4.3. DNSSEC KSK Rollover
888 At the time of writing, the Root Zone KSK is expected to undergo a
889 carefully-orchestrated rollover as described in [ICANN2016]. ICANN
890 has commissioned various tests and has published an external test
891 plan [ICANN2017].
893 Three related DNSSEC KSK rollover exercises were carried out on the
894 Yeti DNS testbed, somewhat concurrent with the planning and execution
895 of the rollover in the root zone. Brief descriptions of these
896 exercises are included below.
898 4.3.1. Failure-Case KSK Rollover
900 The first KSK rollover that was executed on the Yeti DNS testbed
901 deliberately ignored the 30-day hold-down timer specified in
902 [RFC5011] before retiring the outgoing KSK.
904 It was confirmed that clients of some (but not all) validating Yeti
905 resolvers experienced resolution failures (received SERVFAIL
906 responses) following this change. Those resolvers required
907 administrator intervention to install a functional trust anchor
908 before resolution was restored.
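
The timer that this exercise deliberately violated can be expressed
simply; the sketch below (Python) shows only the add hold-down check
and omits the other conditions of the full RFC 5011 state machine,
such as repeated observation of the key and validation by an existing
trust anchor:

    from datetime import timedelta

    ADD_HOLD_DOWN = timedelta(days=30)   # RFC 5011 add hold-down time

    def may_trust_new_ksk(first_seen, now):
        # A validator following RFC 5011 will not promote a newly
        # published KSK to a trust anchor until it has remained validly
        # published for the full hold-down period; retiring the old KSK
        # earlier leaves such validators with no usable trust anchor.
        return now - first_seen >= ADD_HOLD_DOWN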
910 4.3.2. KSK Rollover vs. BIND9 Views
The second Yeti KSK rollover was designed with phases similar to
those of ICANN's KSK rollover plan, although with modified timings to
reduce the time required to complete the process. The "slot" used in
this rollover was ten days long, as follows:
917 +--------------+----------+----------+
918 | | 19444 | New Key |
919 +--------------+----------+----------+
920 | slot 1 | pub+sign | |
921 | slot 2,3,4,5 | pub+sign | pub |
922 | slot 6,7 | pub | pub+sign |
923 | slot 8 | revoke | pub+sign |
924 | slot 9 | | pub+sign |
925 +--------------+----------+----------+
927 During this rollover exercise, a problem was observed on one Yeti
928 resolver that was running BIND 9.10.4-p2 [KROLL-ISSUE]. That
929 resolver was configured with multiple views serving clients in
930 different subnets at the time that the KSK rollover began. DNSSEC
931 validation failures were observed following the completion of the KSK
932 rollover, triggered by the addition of a new view, intended to serve
933 clients from a new subnet.
935 BIND 9.10 requires "managed-keys" configuration to be specified in
936 every view, a detail that was apparently not obvious to the operator
937 in this case and which was subsequently highlighted by ISC in their
general advice to users of BIND 9 relating to KSK rollover in the
root zone. When the "managed-keys" configuration is
941 present in every view that is configured to perform validation, trust
942 anchors for all views are updated during a KSK rollover.
944 4.3.3. Large Responses during KSK Rollover
946 Since a KSK rollover necessarily involves the publication of outgoing
947 and incoming public keys simultaneously, an increase in the size of
948 DNSKEY responses is expected. The third KSK rollover carried out on
949 the Yeti DNS testbed was accompanied by a concerted effort to observe
950 response sizes and their impact on end-users.
952 As described in Section 3.2.2, in the Yeti DNS testbed each DM can
953 maintain control of its own set of ZSKs, which can undergo rollover
954 independently. During a KSK rollover where concurrent ZSK rollovers
955 are executed by each of three DMs the maximum number of apex DNSKEY
RRs present is eight (incoming and outgoing KSK, plus incoming and
957 outgoing of each of three ZSKs). In practice, however, such
958 concurrency did not occur; only the BII ZSK was rolled during the KSK
959 rollover, and hence only three DNSKEY RRSet configurations were
960 observed:
962 o 3 ZSK and 2 KSK, DNSKEY response of 1975 octets;
964 o 3 ZSK and 1 KSK, DNSKEY response of 1414 octets; and
966 o 2 ZSK and 1 KSK, DNSKEY response of 1139 octets.
RIPE Atlas probes were used, as described in Section 4.1.1, to send
969 DNSKEY queries directly to Yeti-Root servers. The numbers of queries
970 and failures were recorded and categorised according to the response
971 sizes at the time the queries were sent. A summary of the results is
972 as follows:
974 +---------------+----------+---------------+--------------+
975 | Response Size | Failures | Total Queries | Failure rate |
976 +---------------+----------+---------------+--------------+
977 | 1139 | 274 | 64252 | 0.0042 |
978 | 1414 | 3141 | 126951 | 0.0247 |
979 | 1975 | 2920 | 42529 | 0.0687 |
980 +---------------+----------+---------------+--------------+
982 The general approach illustrated briefly here provides a useful
983 example of how the design of the Yeti DNS testbed, separate from the
984 Root Server System but constructed as a live testbed on the Internet,
985 facilitates the use of general-purpose active measurement facilities
986 such as RIPE Atlas probes as well as internal passive measurement
987 such as packet capture.
989 4.4. Capture of Large DNS Response
991 Packet capture is a common approach in production DNS systems where
992 operators require fine-grained insight into traffic in order to
993 understand production traffic. For authoritative servers, capture of
994 inbound query traffic is often sufficient, since responses can be
995 synthesised with knowledge of the zones being served at the time the
996 query was received. Queries are generally small enough not to be
997 fragmented, and even with TCP transport are generally packed within a
998 single segment.
1000 The Yeti DNS testbed has different requirements; in particular there
1001 is a desire to compare responses obtained from the Yeti
1002 infrastructure with those received from the Root Server System in
1003 response to a single query stream (e.g. using YmmV as described in
1004 Appendix D). Some Yeti-Root servers were capable of recovering
1005 complete DNS messages from within nameservers, e.g. using dnstap;
1006 however, not all servers provided that functionality and a consistent
1007 approach was desirable.
The requirement for passive capture of responses from the wire,
together with experiments that were expected (and in some cases
designed) to trigger fragmentation and the use of TCP transport, led
to the development of a new tool, PcapParser, which performs fragment
and TCP stream reassembly from raw packet capture data. A brief
description of PcapParser is included in Appendix D.
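
To illustrate why a dedicated tool was needed, the naive extraction
below (Python, with the dpkt library assumed) recovers only DNS
responses carried in single, unfragmented UDP datagrams over IPv6; the
IPv6 fragment and TCP stream reassembly that this sketch omits is
precisely what PcapParser provides:

    import dpkt

    def udp_dns_responses(pcap_path):
        with open(pcap_path, "rb") as f:
            for ts, buf in dpkt.pcap.Reader(f):
                eth = dpkt.ethernet.Ethernet(buf)
                ip6 = eth.data
                if not isinstance(ip6, dpkt.ip6.IP6):
                    continue                    # not a whole IPv6 packet
                udp = ip6.data
                if not isinstance(udp, dpkt.udp.UDP) or udp.sport != 53:
                    continue                    # not UDP from port 53
                try:
                    message = dpkt.dns.DNS(udp.data)
                except Exception:
                    continue                    # undecodable payload
                if message.qr == dpkt.dns.DNS_R:
                    yield ts, message           # responses only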
1016 4.5. Automated Hints File Maintenance
1018 Renumbering events in the Root Server System are relatively rare.
1019 Although each such event is accompanied by the publication of an
1020 updated hints file in standard locations, the task of updating local
1021 copies of that file used by DNS resolvers is manual, and the process
1022 has an observably-long tail: for example, in 2015 J-Root was still
1023 receiving traffic at its old address some thirteen years after
1024 renumbering [Wessels2015].
The observed impact of these old, deployed hints files is minimal,
likely due to the very low frequency of such renumbering events.
Even the oldest hints files would still contain some accurate root
server addresses from which priming responses could be obtained.
1031 By contrast, due to the experimental nature of the system and the
1032 fact that it is operated mainly by volunteers, Yeti-Root Servers are
1033 added, removed and renumbered with much greater frequency. A tool to
1034 facilitate automatic maintenance of hints files was therefore
created: [hintUpdate].
1037 The automated procedure followed by the hintUpdate tool is as
1038 follows.
1040 1. Use the local resolver to obtain a response to the query ./IN/NS;
1042 2. Use the local resolver to obtain a set of IPv4 and IPv6 addresses
1043 for each name server;
1045 3. Validate all signatures obtained from the local resolvers, and
1046 confirm that all data is signed;
1048 4. Compare the data obtained to that contained within the currently-
1049 active hints file; if there are differences, rotate the old one
1050 away and replace it with a new one.
1052 This tool would not function unmodified when used in the Root Server
1053 System, since the names of individual Root Servers (e.g. A.ROOT-
1054 SERVERS.NET) are not signed. All Yeti-Root Server names are signed,
1055 however, and hence this tool functions as expected in that
1056 environment.
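
A simplified version of the procedure above might be sketched as
follows (Python with the dnspython library is assumed); a real
implementation must also confirm that every answer was DNSSEC-
validated before rotating the file, as described in step 3:

    import dns.flags
    import dns.resolver

    def build_hints():
        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 4096)  # request DNSSEC data
        lines = []
        for ns in resolver.resolve(".", "NS"):
            name = str(ns.target)
            lines.append(".\t3600000\tNS\t%s" % name)
            # Yeti-Root servers are IPv6-only, so only AAAA is needed.
            for aaaa in resolver.resolve(name, "AAAA"):
                lines.append("%s\t3600000\tAAAA\t%s"
                             % (name, aaaa.address))
        return "\n".join(lines) + "\n"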
1058 4.6. Root Label Compression in Knot
1060 [RFC1035] specifies that domain names can be compressed when encoded
1061 in DNS messages, being represented as one of
1063 1. a sequence of labels ending in a zero octet;
1065 2. a pointer; or
1067 3. a sequence of labels ending with a pointer.
1069 The purpose of this flexibility is to reduce the size of domain names
1070 encoded in DNS messages.
It was observed that Yeti-Root Servers running Knot 2.0 would
compress the zero-length label (the root domain, often represented as
".") using a pointer to an earlier occurrence of the root label.
Although legal, this encoding increases the encoded size of the root
label from one octet to two; it was also found to break some client
software, in particular the Go DNS library. Bug reports were filed
against both Knot and the Go DNS library, and both issues were
resolved in subsequent releases.
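
The size difference is easy to see on the wire, as in the small
illustration below (Python): the root label on its own encodes as a
single zero octet, while a compression pointer always occupies two
octets.

    # Root label encoded directly: one zero octet.
    root_label = b"\x00"
    # Compression pointer (top two bits set, 14-bit offset), here
    # pointing at offset 12: always two octets.
    pointer = bytes([0xC0 | (12 >> 8), 12 & 0xFF])
    assert len(root_label) == 1 and len(pointer) == 2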
1081 5. Conclusions
Yeti DNS was designed and implemented as a live DNS root system
testbed. It serves a root zone ("Yeti-Root" in this document) derived
from the root zone published by the IANA, with only those structural
modifications necessary to ensure its function in the testbed system.
The Yeti DNS testbed has proven to be a useful
1087 the testbed system. The Yeti DNS testbed has proven to be a useful
1088 platform to address many questions that would be challenging to
1089 answer using the production Root Server System, such as those
1090 included in Section 2.
1092 Indicative findings following from the construction and operation of
1093 the Yeti DNS testbed include:
1095 o Operation in a pure IPv6-only environment; confirmation of a
1096 significant failure rate in the transmission of large responses
1097 (~7%), but no other persistent failures observed. Two cases in
1098 which Yeti-Root servers failed to retrieve the Yeti-Root zone due
1099 to fragmentation of TCP segments; mitigated by setting a TCP MSS
1100 of 1220 octets (see Section 4.1.1).
1102 o Successful operation with three autonomous Yeti-Root zone signers
1103 and 25 Yeti-Root servers, and confirmation that IXFR is not an
1104 appropriate transfer mechanism of zones that are structurally
1105 incongruent across different transfer paths (see Section 4.2).
1107 o ZSK size increased to 2048 bits and multiple KSK rollovers
1108 executed to exercise RFC 5011 support in validating resolvers;
1109 identification of pitfalls relating to views in BIND9 when
1110 configured with "managed-keys" (see Section 4.3).
1112 o Use of natural (non-normalised) names for Yeti-Root servers
1113 exposed some differences between implementations in the inclusion
1114 of additional-section glue in responses to priming queries;
1115 however, despite this inefficiency, Yeti resolvers were observed
1116 to function adequately (see Section 3.5).
1118 o It was observed that Knot 2.0 performed label compression on the
1119 root (empty) label. This results in an increased encoding size
1120 for references to the root label, since a pointer is encoded as
1121 two octets whilst the root label itself only requires one (see
1122 Section 4.6).
1124 o Some tools were developed in response to the operational
1125 experience of running and using the Yeti DNS testbed: DNS fragment
1126 and DNS ATR for large DNS responses, a BIND9 patch for additional
1127 section glue, YmmV and IPv6 defrag for capturing and mirroring
traffic.  In addition, a tool to facilitate automatic maintenance
1129 of hints files was created (see Appendix D).
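As an illustration of the MSS mitigation mentioned in the first
finding above, the following sketch clamps the TCP maximum segment
size on a client socket before connecting.  The peer address is a
documentation address, and this is one way the clamping could be
applied rather than the configuration actually deployed on Yeti-Root
servers:

   import socket

   # Clamp the advertised TCP MSS to 1220 octets so that TCP segments
   # fit within the IPv6 minimum MTU of 1280 octets without relying on
   # fragmentation (requires a platform that supports TCP_MAXSEG).
   sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1220)
   sock.connect(("2001:db8::53", 53))   # example address (RFC 3849)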
The Yeti DNS testbed was used only by end-users whose local
infrastructure providers had made a conscious decision to
participate, as is appropriate for an experimental, non-production
system.  Nevertheless, the service quality reported by end-users of
the system was high, in most cases indistinguishable from that of
the production Root Server System.
1138 The experience gained during the operation of the Yeti DNS testbed
1139 suggested several topics worthy of further study:
o  Priming Truncation and TCP-only Yeti-Root Servers: observe and
measure the worst possible case of priming truncation by responding
with TC=1 to all priming queries received over UDP transport,
forcing clients to retry over TCP.  This should also give some
insight into the usefulness of TCP-only DNS in general.
o  KSK ECDSA Rollover: one possible way to reduce DNSKEY response
sizes is to change to an elliptic curve signing algorithm.  While in
principle this could be done separately for the KSK and the ZSK,
recent research by the RIPE NCC found that some resolvers require
the KSK and the ZSK to use the same algorithm, meaning that an
algorithm rollover also implies a KSK rollover.  Performing an
algorithm rollover at the root would be an interesting challenge.
o  Sticky Notify for zone transfer: the non-applicability of IXFR as
a zone transfer mechanism in the Yeti DNS testbed could be mitigated
by implementing a sticky master-server preference on each slave,
such that an initial AXFR response could be followed by IXFR
requests to the same master without compromising zone integrity in
the case (as with Yeti) where equivalent but incongruent versions of
a zone are served by different masters.
o  Key distribution for zone transfer credentials: the use of a
shared secret between each slave and master (illustrated in the
sketch following this list) requires key distribution and management
whose scaling properties are not well suited to systems with large
numbers of transfer clients.  Other approaches to key distribution
and authentication could be considered.
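For illustration of the shared-secret arrangement described in the
last item above, the following sketch performs a TSIG-authenticated
AXFR using dnspython; the key name, secret and server address are
placeholders and do not correspond to credentials used in the
testbed:

   import dns.query
   import dns.tsigkeyring
   import dns.zone

   # Placeholder TSIG key shared between one master and one slave;
   # every such pair needs its own secret generated and distributed.
   keyring = dns.tsigkeyring.from_text(
       {"transfer-key.example.": "c2VjcmV0cGxhY2Vob2xkZXI="})

   xfr = dns.query.xfr("2001:db8::53", ".", keyring=keyring,
                       keyname="transfer-key.example.")
   zone = dns.zone.from_xfr(xfr)
   print(len(zone.nodes), "names transferred")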
1170 6. IANA Considerations
1172 This document requests no action of the IANA.
1174 7. Acknowledgments
1176 The editors would like to acknowledge the contributions of the
1177 various and many subscribers to the Yeti DNS Project mailing lists,
1178 including the following people who were involved in the
1179 implementation and operation of the Yeti DNS testbed itself:
1181 Tomohiro Ishihara, Antonio Prado, Stephane Bortzmeyer, Mickael
1182 Jouanne, Pierre Beyssac, Joao Damas, Pavel Khramtsov, Ma Yan,
1183 Otmar Lendl, Praveen Misra, Carsten Strotmann, Edwin Gomez, Remi
1184 Gacogne, Guillaume de Lafond, Yves Bovard, Hugo Salgado-Hernandez,
1185 Li Zhen, Daobiao Gong, Runxia Wan.
1187 The editors also acknowledge the assistance of the Independent
1188 Submissions Editorial Board, and of the following reviewers whose
1189 opinions helped improve the clarity of this document:
Subramanian Moonesamy, Joe Abley, Paul Mockapetris
1193 8. References
1195 [hintUpdate]
1196 "Hintfile Auto Update", 2015,
1197 .
1199 [I-D.andrews-tcp-and-ipv6-use-minmtu]
1200 Andrews, M., "TCP Fails To Respect IPV6_USE_MIN_MTU",
1201 draft-andrews-tcp-and-ipv6-use-minmtu-04 (work in
1202 progress), October 2015.
1204 [I-D.muks-dns-message-fragments]
1205 Sivaraman, M., Kerr, S., and D. Song, "DNS message
1206 fragments", draft-muks-dns-message-fragments-00 (work in
1207 progress), July 2015.
1209 [I-D.song-atr-large-resp]
1210 Song, L., "ATR: Additional Truncated Response for Large
1211 DNS Response", draft-song-atr-large-resp-00 (work in
1212 progress), September 2017.
1214 [I-D.taylor-v6ops-fragdrop]
1215 Jaeggli, J., Colitti, L., Kumari, W., Vyncke, E., Kaeo,
1216 M., and T. Taylor, "Why Operators Filter Fragments and
1217 What It Implies", draft-taylor-v6ops-fragdrop-02 (work in
1218 progress), December 2013.
1220 [ICANN2010]
1221 "DNSSEC Key Management Implementation for the Root Zone",
1222 May 2010, .
1226 [ICANN2016]
1227 "Root Zone KSK Rollover Plan", 2016,
1228 .
1231 [ICANN2017]
1232 "2017 KSK Rollover External Test Plan", July 2016,
1233 .
1236 [IPv6-frag-DNS]
1237 "Dealing with IPv6 fragmentation in the DNS", August 2017,
1238 .
1241 [ISC-TN-2003-1]
1242 Abley, J., "Hierarchical Anycast for Global Service
1243 Distribution", March 2003,
1244 .
1246 [ITI2014] "Identifier Technology Innovation Report", May 2014,
1247 .
1250 [KROLL-ISSUE]
1251 "A DNSSEC issue during Yeti KSK rollover", 2016,
1252 .
1255 [RFC1034] Mockapetris, P., "Domain names - concepts and facilities",
1256 STD 13, RFC 1034, DOI 10.17487/RFC1034, November 1987,
1257 .
1259 [RFC1035] Mockapetris, P., "Domain names - implementation and
1260 specification", STD 13, RFC 1035, DOI 10.17487/RFC1035,
1261 November 1987, .
1263 [RFC1995] Ohta, M., "Incremental Zone Transfer in DNS", RFC 1995,
1264 DOI 10.17487/RFC1995, August 1996,
1265 .
1267 [RFC1996] Vixie, P., "A Mechanism for Prompt Notification of Zone
1268 Changes (DNS NOTIFY)", RFC 1996, DOI 10.17487/RFC1996,
1269 August 1996, .
1271 [RFC2826] Internet Architecture Board, "IAB Technical Comment on the
1272 Unique DNS Root", RFC 2826, DOI 10.17487/RFC2826, May
1273 2000, .
1275 [RFC2845] Vixie, P., Gudmundsson, O., Eastlake 3rd, D., and B.
1276 Wellington, "Secret Key Transaction Authentication for DNS
1277 (TSIG)", RFC 2845, DOI 10.17487/RFC2845, May 2000,
1278 .
1280 [RFC5011] StJohns, M., "Automated Updates of DNS Security (DNSSEC)
1281 Trust Anchors", STD 74, RFC 5011, DOI 10.17487/RFC5011,
1282 September 2007, .
1284 [RFC5890] Klensin, J., "Internationalized Domain Names for
1285 Applications (IDNA): Definitions and Document Framework",
1286 RFC 5890, DOI 10.17487/RFC5890, August 2010,
1287 .
1289 [RFC6219] Li, X., Bao, C., Chen, M., Zhang, H., and J. Wu, "The
1290 China Education and Research Network (CERNET) IVI
1291 Translation Design and Deployment for the IPv4/IPv6
1292 Coexistence and Transition", RFC 6219,
1293 DOI 10.17487/RFC6219, May 2011,
1294 .
1296 [RFC6891] Damas, J., Graff, M., and P. Vixie, "Extension Mechanisms
1297 for DNS (EDNS(0))", STD 75, RFC 6891,
1298 DOI 10.17487/RFC6891, April 2013,
1299 .
1301 [RFC7720] Blanchet, M. and L-J. Liman, "DNS Root Name Service
1302 Protocol and Deployment Requirements", BCP 40, RFC 7720,
1303 DOI 10.17487/RFC7720, December 2015,
1304 .
1306 [RFC7872] Gont, F., Linkova, J., Chown, T., and W. Liu,
1307 "Observations on the Dropping of Packets with IPv6
1308 Extension Headers in the Real World", RFC 7872,
1309 DOI 10.17487/RFC7872, June 2016,
1310 .
1312 [RFC8109] Koch, P., Larson, M., and P. Hoffman, "Initializing a DNS
1313 Resolver with Priming Queries", BCP 209, RFC 8109,
1314 DOI 10.17487/RFC8109, March 2017,
1315 .
1317 [RRL] Vixie, P. and V. Schryver, "Response Rate Limiting (RRL)",
1318 June 2012, .
1320 [RSSAC001]
1321 "Service Expectations of Root Servers", December 2015,
1322 .
1325 [RSSAC023]
1326 "History of the Root Server System", November 2016,
1327 .
1330 [TNO2009] Gijsen, B., Jamakovic, A., and F. Roijers, "Root Scaling
1331 Study: Description of the DNS Root Scaling Model",
1332 September 2009,
1333 .
1336 [Wessels2015]
1337 Wessels, D., "Thirteen Years of Old J-Root", 2015,
1338 .
1342 Appendix A. Yeti-Root Hints File
1344 The following hints file (complete and accurate at the time of
1345 writing) causes a DNS resolver to use the Yeti DNS testbed in place
1346 of the production Root Server System and hence participate in
1347 experiments running on the testbed.
1349 Note that some lines have been wrapped in the text that follows in
1350 order to fit within the production constraints of this document.
Wrapped lines are indicated with a backslash character ("\"),
1352 following common convention.
1354 . 3600000 IN NS bii.dns-lab.net
1355 bii.dns-lab.net 3600000 IN AAAA 240c:f:1:22::6
1356 . 3600000 IN NS yeti-ns.tisf.net
1357 yeti-ns.tisf.net 3600000 IN AAAA 2001:559:8000::6
1358 . 3600000 IN NS yeti-ns.wide.ad.jp
1359 yeti-ns.wide.ad.jp 3600000 IN AAAA 2001:200:1d9::35
1360 . 3600000 IN NS yeti-ns.as59715.net
1361 yeti-ns.as59715.net 3600000 IN AAAA \
1362 2a02:cdc5:9715:0:185:5:203:53
1363 . 3600000 IN NS dahu1.yeti.eu.org
1364 dahu1.yeti.eu.org 3600000 IN AAAA \
1365 2001:4b98:dc2:45:216:3eff:fe4b:8c5b
1366 . 3600000 IN NS ns-yeti.bondis.org
1367 ns-yeti.bondis.org 3600000 IN AAAA 2a02:2810:0:405::250
1368 . 3600000 IN NS yeti-ns.ix.ru
1369 yeti-ns.ix.ru 3600000 IN AAAA 2001:6d0:6d06::53
1370 . 3600000 IN NS yeti.bofh.priv.at
1371 yeti.bofh.priv.at 3600000 IN AAAA 2a01:4f8:161:6106:1::10
1372 . 3600000 IN NS yeti.ipv6.ernet.in
1373 yeti.ipv6.ernet.in 3600000 IN AAAA 2001:e30:1c1e:1::333
1374 . 3600000 IN NS yeti-dns01.dnsworkshop.org
1375 yeti-dns01.dnsworkshop.org \
1376 3600000 IN AAAA 2001:1608:10:167:32e::53
1377 . 3600000 IN NS yeti-ns.conit.co
1378 yeti-ns.conit.co 3600000 IN AAAA \
1379 2604:6600:2000:11::4854:a010
1380 . 3600000 IN NS dahu2.yeti.eu.org
1381 dahu2.yeti.eu.org 3600000 IN AAAA 2001:67c:217c:6::2
1382 . 3600000 IN NS yeti.aquaray.com
1383 yeti.aquaray.com 3600000 IN AAAA 2a02:ec0:200::1
1384 . 3600000 IN NS yeti-ns.switch.ch
1385 yeti-ns.switch.ch 3600000 IN AAAA 2001:620:0:ff::29
1386 . 3600000 IN NS yeti-ns.lab.nic.cl
1387 yeti-ns.lab.nic.cl 3600000 IN AAAA 2001:1398:1:21::8001
1388 . 3600000 IN NS yeti-ns1.dns-lab.net
1389 yeti-ns1.dns-lab.net 3600000 IN AAAA 2001:da8:a3:a027::6
1390 . 3600000 IN NS yeti-ns2.dns-lab.net
1391 yeti-ns2.dns-lab.net 3600000 IN AAAA 2001:da8:268:4200::6
1392 . 3600000 IN NS yeti-ns3.dns-lab.net
1393 yeti-ns3.dns-lab.net 3600000 IN AAAA 2400:a980:30ff::6
1394 . 3600000 IN NS \
1395 ca978112ca1bbdcafac231b39a23dc.yeti-dns.net
1396 ca978112ca1bbdcafac231b39a23dc.yeti-dns.net \
1397 3600000 IN AAAA 2c0f:f530::6
1398 . 3600000 IN NS \
1399 3e23e8160039594a33894f6564e1b1.yeti-dns.net
1400 3e23e8160039594a33894f6564e1b1.yeti-dns.net \
1401 3600000 IN AAAA 2803:80:1004:63::1
1402 . 3600000 IN NS \
1403 3f79bb7b435b05321651daefd374cd.yeti-dns.net
1404 3f79bb7b435b05321651daefd374cd.yeti-dns.net \
1405 3600000 IN AAAA 2401:c900:1401:3b:c::6
1406 . 3600000 IN NS \
1407 xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c
1408 xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c \
1409 3600000 IN AAAA 2001:e30:1c1e:10::333
1410 . 3600000 IN NS yeti1.ipv6.ernet.in
1411 yeti1.ipv6.ernet.in 3600000 IN AAAA 2001:e30:187d::333
1412 . 3600000 IN NS yeti-dns02.dnsworkshop.org
1413 yeti-dns02.dnsworkshop.org \
1414 3600000 IN AAAA 2001:19f0:0:1133::53
1415 . 3600000 IN NS yeti.mind-dns.nl
1416 yeti.mind-dns.nl 3600000 IN AAAA 2a02:990:100:b01::53:0
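The following minimal sketch uses the dnspython library to send a
priming query to one of the Yeti-Root servers listed above; the
choice of server is arbitrary, and a representative response is
reproduced in Appendix B.

   import dns.message
   import dns.query

   # bii.dns-lab.net, taken from the hints file above; any other
   # listed server would serve equally well.
   YETI_ROOT_SERVER = "240c:f:1:22::6"

   # Build the priming query (./IN/NS) with EDNS and the DO bit set,
   # as a resolver would at start-up, and send it over UDP.
   query = dns.message.make_query(".", "NS", use_edns=0,
                                  want_dnssec=True, payload=1460)
   response = dns.query.udp(query, YETI_ROOT_SERVER, timeout=5)
   print(response)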
1418 Appendix B. Yeti-Root Server Priming Response
The following is the response of a Yeti-Root server to a priming
query.  The authoritative server runs NSD.
1423 ...
1424 ;; Got answer:
1425 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62391
1426 ;; flags: qr aa rd; QUERY: 1, ANSWER: 26, AUTHORITY: 0, ADDITIONAL: 7
1427 ;; WARNING: recursion requested but not available
1429 ;; OPT PSEUDOSECTION:
1430 ; EDNS: version: 0, flags: do; udp: 1460
1431 ;; QUESTION SECTION:
1432 ;. IN NS
1434 ;; ANSWER SECTION:
1435 . 86400 IN NS bii.dns-lab.net.
1436 . 86400 IN NS yeti.bofh.priv.at.
1437 . 86400 IN NS yeti.ipv6.ernet.in.
1438 . 86400 IN NS yeti.aquaray.com.
1439 . 86400 IN NS yeti.jhcloos.net.
1440 . 86400 IN NS yeti.mind-dns.nl.
1441 . 86400 IN NS dahu1.yeti.eu.org.
1442 . 86400 IN NS dahu2.yeti.eu.org.
1443 . 86400 IN NS yeti1.ipv6.ernet.in.
1444 . 86400 IN NS ns-yeti.bondis.org.
1445 . 86400 IN NS yeti-ns.ix.ru.
1446 . 86400 IN NS yeti-ns.lab.nic.cl.
1447 . 86400 IN NS yeti-ns.tisf.net.
1448 . 86400 IN NS yeti-ns.wide.ad.jp.
1449 . 86400 IN NS yeti-ns.datev.net.
1450 . 86400 IN NS yeti-ns.switch.ch.
1451 . 86400 IN NS yeti-ns.as59715.net.
1452 . 86400 IN NS yeti-ns1.dns-lab.net.
1454 . 86400 IN NS yeti-ns2.dns-lab.net.
1455 . 86400 IN NS yeti-ns3.dns-lab.net.
1456 . 86400 IN NS xn--r2bi1c.xn--h2bv6c0a.xn--h2brj9c.
1457 . 86400 IN NS yeti-dns01.dnsworkshop.org.
1458 . 86400 IN NS yeti-dns02.dnsworkshop.org.
1459 . 86400 IN NS 3f79bb7b435b05321651daefd374cd.yeti-dns.net.
1460 . 86400 IN NS ca978112ca1bbdcafac231b39a23dc.yeti-dns.net.
1461 . 86400 IN RRSIG NS 8 0 86400 (
1462 20171121050105 20171114050105 26253 .
1463 FUvezvZgKtlLzQx2WKyg+D6dw/pITcbuZhzStZfg+LNa
1464 DjLJ9oGIBTU1BuqTujKHdxQn0DcdFh9QE68EPs+93bZr
1465 VlplkmObj8f0B7zTQgGWBkI/K4Tn6bZ1I7QJ0Zwnk1mS
1466 BmEPkWmvo0kkaTQbcID+tMTodL6wPAgW1AdwQUInfy21
1467 p+31GGm3+SU6SJsgeHOzPUQW+dUVWmdj6uvWCnUkzW9p
1468 +5en4+85jBfEOf+qiyvaQwUUe98xZ1TOiSwYvk5s/qiv
1469 AMjG6nY+xndwJUwhcJAXBVmGgrtbiR8GiGZfGqt748VX
1470 4esLNtD8vdypucffem6n0T0eV1c+7j/eIA== )
1472 ;; ADDITIONAL SECTION:
1473 bii.dns-lab.net. 86400 IN AAAA 240c:f:1:22::6
1474 yeti.bofh.priv.at. 86400 IN AAAA 2a01:4f8:161:6106:1::10
1475 yeti.ipv6.ernet.in. 86400 IN AAAA 2001:e30:1c1e:1::333
1476 yeti.aquaray.com. 86400 IN AAAA 2a02:ec0:200::1
1477 yeti.jhcloos.net. 86400 IN AAAA 2001:19f0:5401:1c3::53
1478 yeti.mind-dns.nl. 86400 IN AAAA 2a02:990:100:b01::53:0
1480 ;; Query time: 163 msec
1481 ;; SERVER: 2001:4b98:dc2:45:216:3eff:fe4b:8c5b#53
1482 ;; WHEN: Tue Nov 14 16:45:37 +08 2017
1483 ;; MSG SIZE rcvd: 1222
1485 Appendix C. Active IPv6 Prefixes in Yeti DNS testbed
+----------------------+---------------------------------+----------+
| Prefix               | Originator                      | Location |
+----------------------+---------------------------------+----------+
| 240c::/28            | BII                             | CN       |
| 2001:6d0:6d06::/48   | MSK-IX                          | RU       |
| 2001:1488::/32       | CZ.NIC                          | CZ       |
| 2001:620::/32        | SWITCH                          | CH       |
| 2001:470::/32        | Hurricane Electric, Inc.        | US       |
| 2001:da8:202::/48    | BUPT6-CERNET2                   | CN       |
| 2001:19f0:6c00::/38  | Choopa, LLC                     | US       |
| 2001:da8:205::/48    | BJTU6-CERNET2                   | CN       |
| 2001:62a::/31        | Vienna University Computer      | AT       |
|                      | Center                          |          |
| 2001:67c:217c::/48   | AFNIC                           | FR       |
| 2a02:2478::/32       | Profitbricks GmbH               | DE       |
| 2001:1398:4::/48     | BII                             | CN       |
| 240c::/28            | NIC Chile                       | CL       |
| 2001:4490:dc4c::/46  | NIB (National Internet          | IN       |
|                      | Backbone)                       |          |
| 2001:4b98::/32       | Gandi                           | FR       |
| 2a02:aa8:0:2000::/52 | T-Systems-Eltec                 | ES       |
| 2a03:b240::/32       | Netskin GmbH                    | CH       |
| 2801:1a0::/42        | Universidad de Ibague           | CO       |
| 2a00:1cc8::/40       | ICT Valle Umbra s.r.l.          | IT       |
| 2a02:cdc0::/29       | ORG-CdSB1-RIPE                  | IT       |
+----------------------+---------------------------------+----------+
1513 Appendix D. Tools developed for Yeti DNS testbed
1515 Various tools were developed to support the Yeti DNS testbed, a
1516 selection of which are described briefly below.
1518 YmmV ("Yeti Many Mirror Verifier") is designed to make it easy and
1519 safe for a DNS administrator to capture traffic sent from a resolver
1520 to the Root Server System and to replay it towards Yeti-Root servers.
1521 Responses from both systems are recorded and compared, and
1522 differences are logged. See .
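YmmV itself is not reproduced here, but the following simplified
sketch (using dnspython; the server addresses and the comparison
criteria are choices made only for illustration) conveys the general
approach of sending the same query to both systems and comparing the
responses:

   import dns.message
   import dns.query

   PRODUCTION_ROOT = "2001:503:ba3e::2:30"   # a.root-servers.net
   YETI_ROOT = "240c:f:1:22::6"              # bii.dns-lab.net

   def compare(qname, qtype):
       query = dns.message.make_query(qname, qtype, use_edns=0,
                                      want_dnssec=True)
       prod = dns.query.udp(query, PRODUCTION_ROOT, timeout=5)
       yeti = dns.query.udp(query, YETI_ROOT, timeout=5)
       # A real verifier compares far more than this; the response
       # code and the sorted answer section are enough to show the
       # idea.
       if (prod.rcode() != yeti.rcode() or
               sorted(r.to_text() for r in prod.answer) !=
               sorted(r.to_text() for r in yeti.answer)):
           print("difference observed for %s/%s" % (qname, qtype))

   compare("example.com.", "NS")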
1524 PcapParser is a module used by YmmV which reassembles fragmented IPv6
1525 datagrams and TCP segments from a PCAP archive and extracts DNS
1526 messages contained within them. See .
1529 DNS-layer-fragmentation implements DNS proxies that perform
1530 application-level fragmentation of DNS messages, based on
1531 [I-D.muks-dns-message-fragments]. The idea with these proxies is to
1532 explore splitting DNS messages in the protocol itself, so they will
not be fragmented by the IP layer.  See .
1536 DNS_ATR is an implementation of DNS ATR, as described in
1537 [I-D.song-atr-large-resp]. DNS_ATR acts as a proxy between resolver
1538 and authoritative servers, forwarding queries and responses as a
1539 silent and transparent listener. Responses that are larger than a
1540 nominated threshold (1280 octets by default) trigger additional
1541 truncated responses to be sent immediately following the large
1542 response. See .
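The following much-simplified sketch is not the DNS_ATR
implementation itself; the listening port, upstream address and
delay are assumptions made for illustration.  It shows the core
behaviour: when a forwarded response exceeds the threshold, a small
truncated response is sent immediately after it.

   import socket
   import time

   import dns.flags
   import dns.message
   import dns.query

   UPSTREAM = "2001:db8::53"   # example authoritative (RFC 3849)
   THRESHOLD = 1280            # octets

   sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
   sock.bind(("::", 5300))     # hypothetical listening port

   while True:
       wire, client = sock.recvfrom(4096)
       query = dns.message.from_wire(wire)
       response = dns.query.udp(query, UPSTREAM, timeout=5)
       sock.sendto(response.to_wire(), client)
       if len(response.to_wire()) > THRESHOLD:
           # Follow the (possibly fragmented) large response with a
           # small truncated one; if the large response is lost, this
           # prompts the client to retry over TCP.
           time.sleep(0.01)
           atr = dns.message.make_response(query)
           atr.flags |= dns.flags.TC
           sock.sendto(atr.to_wire(), client)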
1544 Appendix E. Controversy
The Yeti DNS Project, its infrastructure and the various experiments
that have been carried out using that infrastructure have been
described by people involved in the project in many public meetings
at technical venues since its inception.  The mailing lists through
which the operation of the infrastructure has been coordinated are
open to all, and their archives are public.  The project as a whole
has been the subject of robust public discussion.
1554 Some commentators have expressed concern that the Yeti DNS Project
1555 is, in effect, operating an alternate root, challenging the IAB's
1556 comments published in [RFC2826]. Other such alternate roots are
1557 considered to have caused end-user confusion and instability in the
1558 namespace of the DNS by the introduction of new top-level labels or
1559 the different use of top-level labels present in the Root Server
1560 System. The coordinators of the Yeti DNS Project do not consider the
1561 Yeti DNS Project to be an alternate root in this sense, since by
1562 design the namespace enabled by the Yeti-Root Zone is identical to
1563 that of the Root Zone.
Some commentators have expressed concern that the Yeti DNS Project
seeks to influence or subvert administrative policy relating to the
Root Server System, in particular through the use of DNSSEC trust
anchors not published by the IANA and the operation of Yeti-Root
Servers in regions where governments or other organisations have
expressed interest in operating a Root Server.  The coordinators of
the Yeti DNS Project observe that their mandate is entirely
technical and that they have no ambition to influence policy
directly; they do hope, however, that technical findings from the
Yeti DNS Project might act as a useful resource for the wider
technical community.
1576 Appendix F. About This Document
This section (and its subsections) has been included as an aid to
reviewers of this document and should be removed prior to
publication.
1582 F.1. Venue
1584 The authors propose that this document proceed as an Independent
1585 Submission, since it documents work that, although relevant to the
1586 IETF, has been carried out externally to any IETF working group.
1587 However, a suitable venue for discussion of this document is the
1588 dnsop working group.
1590 Information about the Yeti DNS project and discussion relating to
1591 particular experiments described in this document can be found at
1592 .
1594 This document is maintained in GitHub at .
1597 F.2. Revision History
1599 F.2.1. draft-song-yeti-testbed-experience-00 through -03
1601 Change history is available in the public GitHub repository where
1602 this document is maintained: .
1605 F.2.2. draft-song-yeti-testbed-experience-04
Substantial editorial review and rearrangement of text by Joe Abley
at the request of BII.
1610 Added what is intended to be a balanced assessment of the controversy
1611 that has arisen around the Yeti DNS Project, at the request of the
1612 Independent Submissions Editorial Board.
1614 Changed the focus of the document from the description of individual
1615 experiments on a Root-like testbed to the construction and
1616 motivations of the testbed itself, since that better describes the
1617 output of the Yeti DNS Project to date. In the considered opinion of
1618 this reviewer, the novel approaches taken in the construction of the
1619 testbed infrastructure and the technical challenges met in doing so
1620 are useful to record, and the RFC series is a reasonable place to
1621 record operational experiences related to core Internet
1622 infrastructure.
1624 Note that due to draft cut-off deadlines some of the technical
1625 details described in this revision of the document may not exactly
1626 match operational reality; however, this revision provides an
1627 indicative level of detail, focus and flow which it is hoped will be
1628 helpful to reviewers.
1630 F.2.3. draft-song-yeti-testbed-experience-05
1632 Added commentary on IPv6-only operation, IPv6 fragmentation,
1633 applicability to and use by IPv4-only end-users and use of multiple
1634 signers.
1636 F.2.4. draft-song-yeti-testbed-experience-06
1638 Conclusion; tools; editorial changes.
1640 Authors' Addresses
1642 Linjian Song (editor)
1643 Beijing Internet Institute
1644 2508 Room, 25th Floor, Tower A, Time Fortune
1645 Beijing 100028
1646 P. R. China
1648 Email: songlinjian@gmail.com
1649 URI: http://www.biigroup.com/
1651 Dong Liu
1652 Beijing Internet Institute
1653 2508 Room, 25th Floor, Tower A, Time Fortune
1654 Beijing 100028
1655 P. R. China
1657 Email: dliu@biigroup.com
1658 URI: http://www.biigroup.com/
1660 Paul Vixie
1661 TISF
1662 11400 La Honda Road
1663 Woodside, California 94062
1664 US
1666 Email: vixie@tisf.net
1667 URI: http://www.redbarn.org/
1668 Akira Kato
1669 Keio University/WIDE Project
1670 Graduate School of Media Design, 4-1-1 Hiyoshi, Kohoku
1671 Yokohama 223-8526
1672 JAPAN
1674 Email: kato@wide.ad.jp
1675 URI: http://www.kmd.keio.ac.jp/
1677 Shane Kerr
1678 Antoon Coolenlaan 41
1679 Uithoorn 1422 GN
1680 NL
1682 Email: shane@time-travellers.org