Date: Thursday 9:30-12:00 - Morning session I
Room: Place du Canada
Dmap: Automating Domain Name Ecosystem Measurements and Applications (Giovane C. M. Moura)
Very often, researchers spend an awful lot of time planning and executing measurements. The complexity of different data formats, and the issues that may emerge, drain energy that could otherwise be spent on research questions. Dmap is a multi-protocol crawler (DNS, TLS, HTTP, HTTPS, SMTP, and page screenshots) that frees researchers from the complexity of running measurements and parsing complex data formats; instead, it automates the measurements and provides a SQL interface to the data. We make it open for researchers.
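To illustrate the appeal of a SQL interface over hand-rolled parsers, here is a minimal sketch using an in-memory SQLite database. The table and column names are hypothetical, invented for illustration; they are not Dmap's actual schema.

```python
import sqlite3

# Hypothetical schema: Dmap's real tables and columns may differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dns_results (
        domain TEXT, qtype TEXT, answer TEXT, rtt_ms REAL
    )""")
conn.executemany(
    "INSERT INTO dns_results VALUES (?, ?, ?, ?)",
    [("example.nl", "A", "93.184.216.34", 12.1),
     ("example.nl", "AAAA", "2606:2800:220:1::", 11.8),
     ("example.org", "A", "93.184.216.34", 25.0)],
)
# A research question becomes one query instead of custom parsing code.
rows = conn.execute(
    "SELECT qtype, COUNT(*), AVG(rtt_ms) FROM dns_results "
    "GROUP BY qtype ORDER BY qtype"
).fetchall()
print(rows)
```

Once crawl results land in such tables, aggregations, joins across protocols, and ad-hoc filters all come for free from the database engine.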
Monitoring DNS with open-source solutions (Javier Bustos, Felipe Espinoza)
NIC Chile is the DNS administrator of the ccTLD .cl, managing over 500,000 domain names on an infrastructure composed of more than 30 servers distributed around the globe (some of them belonging to one of the three anycast clouds used in the name service), each answering around 3,000 queries/sec. In this scenario, we took on the challenge of building a real-time monitoring system for our DNS service using only open-source software.
We reviewed and benchmarked different alternatives: Packetbeat, Collectd, DSC, Fievel, and GoPassiveDNS for data collection; Prometheus, Druid, ClickHouse, InfluxDB, ElasticSearch, and OpenTSDB as DB engines; and Kibana, Grafana, and Graphite Web for visualization. The metrics we wanted were: the five top-queried domains, the mean length of DNS queries, and the number of queries per subnetwork, per operation code (OPCODE), per class (QCLASS), per type (QTYPE), per answer type, per transport protocol (UDP, TCP), and with EDNS active.
With that scenario, we measured:
* CPU used by the DB engine
* RAM usage
* Secondary storage usage
* Time required for data aggregation
We present two compatibility matrices summarizing our findings and a ready-to-use, open-source integrated monitoring system.
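The per-dimension counters listed above are straightforward to compute once queries are parsed into records. A minimal sketch on toy data; the field names are illustrative and not tied to any of the collectors named above (Packetbeat, DSC, etc.):

```python
from collections import Counter
from statistics import mean

# Toy parsed query records with illustrative field names.
queries = [
    {"name": "nic.cl", "qtype": "A", "transport": "UDP", "edns": True},
    {"name": "nic.cl", "qtype": "AAAA", "transport": "UDP", "edns": False},
    {"name": "uchile.cl", "qtype": "A", "transport": "TCP", "edns": True},
]

top_domains = Counter(q["name"] for q in queries).most_common(5)
per_qtype = Counter(q["qtype"] for q in queries)
per_transport = Counter(q["transport"] for q in queries)
edns_count = sum(q["edns"] for q in queries)
mean_qname_len = mean(len(q["name"]) for q in queries)

print(top_domains, per_qtype, per_transport, edns_count, mean_qname_len)
```

The engineering challenge in the talk is not the aggregation logic itself but sustaining it at thousands of queries per second per server, which is what the DB-engine benchmarks address.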
Is Bufferbloat a Privacy Issue? (Brian Trammell)
Following up on a question raised last year in the QUIC working group about the privacy implications of enabling passive measurement of round-trip times, we looked at various ways that Internet latency data might be a privacy threat ("Revisiting the Privacy Implications of Two-Way Internet Latency Data", PAM 2018, Berlin). The most problematic potential issue we found allows "remote load telemetry", leveraging the likelihood that (1) an access network link is the most likely to be congested and (2) that access network link's RTT will vary with load due to bufferbloat to measure a remote access network's activity from anywhere on the Internet. A simple active measurement study shows that remote load telemetry is indeed possible on a significant fraction of networks measured.
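The core observation is that queueing delay from bufferbloat inflates a path's RTT above its unloaded baseline, so sustained RTT inflation signals activity on the remote access link. A minimal sketch of that inference on synthetic RTT samples; the window size and inflation threshold are illustrative values of mine, not taken from the paper:

```python
from statistics import median

def load_signal(rtts_ms, window=10, inflation_ms=20.0):
    """Flag likely remote load: recent median RTT well above the path baseline.

    A loaded, bufferbloated access link adds queueing delay, so a sustained
    rise over the minimum observed RTT suggests the link is busy.
    """
    baseline = min(rtts_ms)             # approximates the unloaded path RTT
    recent = median(rtts_ms[-window:])  # median is robust to lone spikes
    return recent - baseline > inflation_ms

idle = [21, 20, 22, 21, 20, 23, 21, 22, 20, 21, 22, 21]
busy = idle[:4] + [95, 110, 102, 98, 120, 105, 99, 108]  # queue builds up
print(load_signal(idle), load_signal(busy))
```

Since nothing here requires cooperation from the measured network, any vantage point that can observe or elicit RTTs could run this, which is precisely the privacy concern.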
Clusters in the Expanse: De-Aliasing IPv6 Hitlists (Oliver Gasser, Quirin Scheitle, Pawel Foremski, Qasim Lone, Maciej Korczynski, Stephen D. Strowes, Luuk Hendriks, Georg Carle)
Measuring the usable maximum packet size across Internet paths (Ana Custura, Gorry Fairhurst, Iain Learmonth)
To optimise their transmission, Internet endpoints need to know the largest size of packet they can send across a specific Internet path, the Path Maximum Transmission Unit (PMTU). This paper explores the PMTU size experienced across the Internet core, wired and mobile edge networks. Our results show that MSS Clamping has been widely deployed in edge networks, and some webservers artificially reduce their advertised MSS, both of which we expect help avoid PMTUD failure for TCP. The maximum packet size used by a TCP connection is also constrained by the acMSS. MSS Clamping was observed in over 20% of edge networks tested. We find a significant proportion of webservers that advertise a low MSS can still be reached with a 1500 byte packet. We also find more than half of IPv6 webservers do not attempt PMTUD and clamp the MSS to 1280 bytes. Furthermore, we see evidence of black-hole detection mechanisms implemented by over a quarter of IPv6 webservers and almost 15% of IPv4 webservers. We also consider the implications for UDP, which cannot utilise MSS Clamping. The paper provides useful input to the design of a robust PMTUD method that can be appropriate for the growing volume of UDP-based applications, by determining that ICMP quotations can be used to verify sender authenticity.
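The relationship between a link MTU and the TCP MSS is simple arithmetic: the MSS is the MTU minus the fixed IP and TCP header sizes (20 bytes for IPv4, 40 for IPv6, plus 20 for TCP, ignoring options). A quick check of the numbers quoted above:

```python
def mss_for_mtu(mtu, ipv6=False):
    """Largest TCP payload per packet for a given link MTU (no IP/TCP options)."""
    ip_header = 40 if ipv6 else 20
    tcp_header = 20
    return mtu - ip_header - tcp_header

print(mss_for_mtu(1500))             # 1460: standard Ethernet MTU, IPv4
print(mss_for_mtu(1280, ipv6=True))  # 1220: IPv6 minimum MTU, as clamped above
```

So an IPv6 server that clamps its MSS to 1280 is in fact being conservative beyond the 1220-byte payload the minimum MTU allows; UDP gets no equivalent in-band clamping signal, which is why PMTUD robustness matters more there.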
When the Dike breaks: dissecting DNS Defenses during DDoS (Giovane C. M. Moura, John Heidemann, Ricardo de O. Schmidt, Moritz Müller, Marco Davids)
Paper: https://isi.edu/~johnh/PAPERS/Moura18a.pdf
The Internet’s Domain Name System (DNS) is a frequent target of Distributed Denial-of-Service (DDoS) attacks, but such attacks have had very different outcomes—some attacks have disabled major public websites, while the external effects of other attacks have been minimal. While the DNS protocol itself is relatively simple, the system has many moving parts, with multiple levels of caching and retries and replicated servers. This paper uses controlled experiments to examine how these mechanisms affect DNS resilience and latency, exploring both the client side’s DNS user experience, and server-side traffic. We find that, for about 30% of clients, caching is not effective. However, when caches are full they allow about half of clients to ride out server outages. Caching and retries together allow up to half of the clients to tolerate DDoS attacks that result in 90% query loss, and almost all clients to tolerate attacks resulting in 50% packet loss. While clients may get service during an attack, tail-latency increases for clients. For servers, retries during DDoS attacks increase normal traffic up to 8×. Our findings about caching and retries help explain why users see service outages from some real-world DDoS events, but minimal visible effects from others. See also https://labs.ripe.net/Members/giovane_moura/dissecting-dns-defenses-during-ddos-attacks
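A back-of-envelope model shows why retries buy so much resilience: if each query independently fails with probability p and the resolver makes n attempts, the chance of getting through is 1 − pⁿ. The independence assumption is mine, a simplification of the paper's controlled experiments:

```python
def success_probability(loss_rate, tries):
    """P(at least one of `tries` independent query attempts gets through)."""
    return 1 - loss_rate ** tries

# Even at 90% query loss, a few retries lift success well above 10%;
# caching then covers many of the clients that still miss.
print(round(success_probability(0.90, 1), 3))  # 0.1
print(round(success_probability(0.90, 3), 3))  # 0.271
print(round(success_probability(0.50, 3), 3))  # 0.875
```

The flip side, also in the paper's findings, is that every failed attempt is another query hitting the already-stressed servers, which is how retries inflate server-side traffic up to 8×.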
Finding the source of DNS resolver users that were using old DNSSEC keys (Wes Hardaker)
Common best practice for public-key based cryptographic mechanisms dictates regular replacement of trust anchor keys that are used by relying parties to bootstrap Internet security protocols. However, replacement of these trust anchors must be done with extreme caution to prevent large-scale outages caused by relying parties that fail to adopt newly published trust anchors while older keys are transitioned out of use. The top-level trust anchor of the relatively new DNSSEC ecosystem recently delayed its root key rollover procedure due to uncertainty about successful adoption of its newly published DNSKEY.
In an effort to estimate the success of the upcoming DNSSEC Trust Anchor flag-day switchover, we present a case study depicting the difficulty of identifying the root cause of DNSSEC-validating resolvers that have failed to adopt the recently published new DNSSEC root key. Finally, we conclude with key-rollover recommendations for protocols that make use of trusted third parties.
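The rollover in question replaced the root KSK with key tag 19036 (KSK-2010) by the one with key tag 20326 (KSK-2017); those tags are real, but the helper below is a hypothetical illustration of the check an operator might run over reported trust-anchor configurations (e.g. gathered via RFC 8145 key-tag signaling), not tooling from the talk:

```python
OLD_ROOT_KSK = 19036  # key tag of KSK-2010, the outgoing root trust anchor
NEW_ROOT_KSK = 20326  # key tag of KSK-2017, the newly published trust anchor

def stuck_on_old_key(configured_key_tags):
    """A resolver is at risk if it trusts only the old KSK, not the new one."""
    tags = set(configured_key_tags)
    return OLD_ROOT_KSK in tags and NEW_ROOT_KSK not in tags

print(stuck_on_old_key({19036}))         # True: validation breaks at rollover
print(stuck_on_old_key({19036, 20326}))  # False: ready for the rollover
```

The hard part, as the case study shows, is not this check but tracing an at-risk signal back to the actual resolver and operator responsible for it.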