Current Meeting Report
2.3.18 DNS Research Measurements (dnsmeas) BOF
DNS Research Measurements BOF
52nd IETF, Salt Lake City
December 12, 2001
Chair: Vern Paxson
Minutes recorded by Mark Allman, with editing by Vern Paxson.
Goals of the BOF
There have been a number of recent DNS measurement studies, looking at different facets of how DNS is operating today.
The meeting is to present and discuss the findings.
The meeting reflects an IRTF activity rather than an IETF one.
In particular, the goal is to understand what's working well and what isn't - not how to fix it. Discussing alternate DNS paradigms was stated to be fully out of scope.
Four different groups of researchers presented their DNS measurement efforts and the results of their analysis to the group. The notes on each presentation here are quite brief, because the slides, available from the meeting proceedings, have the detail.
Nevil Brownlee / kc claffy - CAIDA
This presentation was centered on passive measurement of DNS requests observed at UCSD and at the F root-server.
Brad Karp - ICSI
The research discussed is also a passive measurement study of DNS performance. The study suggests that the retransmit timeout used in DNS is quite conservative and could possibly be reduced. In addition, the study reports that DNS servers are patched for security vulnerabilities more slowly than one would hope, given the importance of DNS to the operation of the network.
Matt Larson - Verisign
This presentation discussed several serious problems with the DNS system. The most severe was the "aggressive delegation requery" problem. In addition, repeated queries to lame DNS servers were also cited as a problem -- with the suggested fix being better caching of the fact that a server is lame. See draft-ietf-dnsop-bad-dns-res-00.txt.
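The suggested fix, remembering that a server is lame for a zone, can be sketched as a small time-bounded cache. This is a hypothetical illustration, not the mechanism in the draft; the class and parameter names are invented for the example.

```python
import time

class LameServerCache:
    """Remember that a nameserver is lame for a zone, for a bounded time.

    Hypothetical sketch: a resolver would consult this cache before
    re-querying a server that previously returned a lame (non-authoritative)
    response for the zone, avoiding the repeated queries noted above.
    """

    def __init__(self, ttl=600.0, clock=time.monotonic):
        self.ttl = ttl          # how long to remember lameness, in seconds
        self.clock = clock      # injectable clock, which eases testing
        self._entries = {}      # (server_ip, zone) -> expiry timestamp

    def mark_lame(self, server_ip, zone):
        self._entries[(server_ip, zone)] = self.clock() + self.ttl

    def is_lame(self, server_ip, zone):
        expiry = self._entries.get((server_ip, zone))
        if expiry is None:
            return False
        if self.clock() >= expiry:   # entry expired; the server may be retried
            del self._entries[(server_ip, zone)]
            return False
        return True
```

Bounding the entry lifetime matters: a server that was misconfigured yesterday may be fixed today, so lameness should be forgotten after the TTL elapses.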
Jaeyeon Jung - MIT
This presentation focused on passive measurements of DNS requests at MIT and at KAIST in Korea. The study produced a wide variety of very detailed statistics about the outcome of the DNS queries observed. More information on this talk is available in a paper in the proceedings of SIGCOMM's Internet Measurement Workshop and in the presentation slides. One of the controversial findings is that low TTLs on A records have little ill effect on DNS scalability - the main scaling effects come from the caching of NS records.
Wondered whether there were caching nameservers between the clients and the measurement boxes in the MIT and ICSI studies.
We assume there are a ton of caching nameservers and the point is that we are observing the behavior in the real world *after* those caching nameservers have done their work.
Wondered what the key insights from the CAIDA work are.
The overall conclusion is that the roots perform pretty well and the GTLDs slightly better.
MIT used to have every Unix box running named. Is this still the case?
Negative caching is for the public good.
Why did the failures in the MIT dataset happen? Did you go back to try to find out? In other words, were the failures transient?
We didn't go back and try to replay queries, we just looked at the error codes in the DNS traffic.
Four years ago, 10-20% of hits on a mirror were from "non-existent" domains (errors on PTR lookups).
DNS is neither significantly hurting nor helping the load on the network; we should keep in mind that changes to DNS are about performance.
Typos can cause a bunch of DNS errors; would be interesting to see if the requests have valid TLDs before recursing.
Yes, a filter on valid TLD could be used to eliminate a bunch of bogus requests.
But it is not going to help a ton.
Bad idea to lock down a list of "good" TLDs.
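The pre-filter discussed above can be sketched as follows. The TLD set here is a tiny illustrative sample, not an authoritative list; as the last comment notes, hard-coding such a list is risky because the set of valid TLDs changes.

```python
# Hypothetical sketch of the suggested filter: check that a query name's
# top-most label is a known TLD before recursing toward the roots,
# discarding bogus requests (typos, local names leaked onto the Internet).
# KNOWN_TLDS is an incomplete sample for illustration only.
KNOWN_TLDS = {"com", "net", "org", "edu", "gov", "mil", "arpa", "uk", "kr"}

def has_valid_tld(qname: str) -> bool:
    """Return True if the query name ends in a known TLD."""
    labels = qname.rstrip(".").lower().split(".")
    return bool(labels[-1]) and labels[-1] in KNOWN_TLDS
```

A resolver applying this check would answer NXDOMAIN locally for names failing it, rather than forwarding them; the trade-off is exactly the list-maintenance risk raised in the discussion.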
Running a local named affects user perception of latency.
Regarding the retransmit timers: the timeout is based on observations from an old network. We have different networks with different applications now, so we need to revisit some of these things.
Note that "no domain found" is a failure, but not a DNS failure.
The biggest help for much of the world would be to get some faster lines.