Network Working Group                                       I. Learmonth
Internet-Draft                                               Tor Project
Intended status: Informational                              May 16, 2019
Expires: November 17, 2019

      Guidelines for Performing Safe Measurement on the Internet
           draft-learmonth-pearg-safe-internet-measurement-02

Abstract

   Researchers from industry and academia often use Internet
   measurements as part of their work.  While these measurements can
   give insight into the functioning and usage of the Internet, they
   can come at the cost of user privacy.  This document describes
   guidelines for ensuring that such measurements are carried out
   safely.

Note

   Comments are solicited and should be addressed to the research
   group's mailing list at pearg@irtf.org and/or the author(s).

   The sources for this draft are at:

   https://github.com/irl/draft-safe-internet-measurement
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on November 17, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
1. Introduction

   Performing research using the Internet, as opposed to an isolated
   testbed or simulation platform, means that your research co-exists
   in a space with other users.  This document outlines guidelines for
   academic and industry researchers that might use the Internet as
   part of scientific experimentation.

1.1. Scope of this document

   Following the guidelines contained within this document is not a
   substitute for any institutional ethics review process you may
   have, although these guidelines could help to inform that process.
   Similarly, these guidelines are not legal advice, and local laws
   must also be considered before starting any experiment that could
   have adverse impacts on user privacy.
1.2. Active and passive measurements

   Internet measurement studies can be broadly categorized into two
   groups: active measurements and passive measurements.  Active
   measurements generate traffic.  Performance measurements such as
   TCP throughput testing [RFC6349] and functional measurements such
   as the feature-dependent connectivity failure tests performed by
   [PATHspider] both fall into this category.  Passive measurements,
   by contrast, observe existing traffic.  Passive measurements can
   help to inform new developments in Internet protocols but can also
   carry risk.

   The distinction is not truly binary, and many studies will include
   both active and passive components.  Each of the considerations in
   this document must be carefully evaluated for its applicability,
   regardless of the type of measurement.
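   As an informal illustration (not part of this document's guidance),
   an active measurement can be as simple as timing a TCP handshake to
   a target; the host, port, and timeout a caller passes in below are
   hypothetical examples, not endorsed measurement targets:

```python
# Sketch of a simple active measurement: time a TCP handshake.
import socket
import time

def measure_connect(host, port, timeout=5.0):
    """Return the TCP connect time in seconds, or None on failure."""
    start = time.monotonic()
    try:
        # create_connection performs the full three-way handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None
```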
2. Consent

   Ideally, informed consent would be collected from all users of a
   shared network before measurements were performed on them.  In
   cases where it is practical to do so, this should be done.

   For consent to be informed, all possible risks must be presented to
   the users.  The considerations in this document can be used to
   provide a starting point, although other risks may be present
   depending on the nature of the measurements to be performed.

2.1. Proxy Consent

   In cases where it is not practical to collect informed consent from
   all users of a shared network, it may be possible to obtain proxy
   consent.  Proxy consent may be given by a network operator or
   employer that would be more familiar with the expectations of users
   of a network than the researcher.

2.2. Implied consent

   In larger scale measurements, even proxy consent collection may not
   be practical.  In this case, implied consent may be presumed from
   users for some measurements.  Consider that users of a network will
   have certain expectations of privacy, and those expectations may
   not align with the privacy guarantees offered by the technologies
   they are using.  As a thought experiment, consider how users might
   respond if you asked for their informed consent for the
   measurements you'd like to perform.

   For example, the operator of a web server that is exposed to the
   Internet hosting a popular website would have the expectation that
   it may be included in surveys that look at supported protocols or
   extensions, but would not expect that attempts be made to degrade
   the service with large numbers of simultaneous connections.

   If practical, attempt to obtain informed consent or proxy consent
   from a sample of users to better understand the expectations of
   other users.
3. Safety Considerations

3.1. Use a testbed

   Wherever possible, use a testbed.  An isolated network means that
   there are no other users sharing the infrastructure you are using
   for your experiments.

   When measuring performance, competing traffic can have negative
   effects on the performance of your test traffic, and so the testbed
   approach can also produce more accurate and repeatable results than
   experiments using the public Internet.

   WAN link conditions can be emulated through artificial delays
   and/or packet loss using a tool like [netem].  Competing traffic
   can also be emulated using traffic generators.
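   As a sketch of how such emulation might be configured (assuming a
   Linux host with the tc utility; the delay, jitter, and loss values
   are arbitrary examples, not recommendations):

```python
# Build a tc/netem invocation that adds delay, jitter and loss to a
# network interface.  Run the resulting command as root (e.g. via
# subprocess), and remove the qdisc afterwards with
# "tc qdisc del dev <iface> root".
import shlex

def netem_command(iface, delay_ms=50, jitter_ms=10, loss_pct=0.5):
    """Return the shell command string for the given WAN conditions."""
    return (
        f"tc qdisc add dev {shlex.quote(iface)} root netem "
        f"delay {delay_ms}ms {jitter_ms}ms loss {loss_pct}%"
    )
```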
3.2. Only record your own traffic

   When performing measurements, be sure to only capture traffic that
   you have generated.  Traffic may be identified by IP ranges or by
   some token that is unlikely to be used by other users.

   Again, this can help to improve the accuracy and repeatability of
   your experiment.  [RFC2544], for performance benchmarking, requires
   that any frames received that were not part of the test traffic are
   discarded and not counted in the results.
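   One way such a filter might look (a sketch only; the prefix, drawn
   from a documentation address range, and the marker token are both
   hypothetical examples):

```python
# Keep only traffic that we generated: accept packets whose source
# address falls in our measurement prefix, or whose payload carries a
# marker token we embed in our own probes.
import ipaddress

MY_PREFIX = ipaddress.ip_network("192.0.2.0/24")  # our measurement hosts
MY_TOKEN = b"exp-7f3a"                            # marker in our payloads

def is_own_traffic(src_ip, payload=b""):
    """Return True only for traffic attributable to this experiment."""
    return ipaddress.ip_address(src_ip) in MY_PREFIX or MY_TOKEN in payload
```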
3.3. Be respectful of others' infrastructure

   If your experiment is designed to trigger a response from
   infrastructure that is not your own, consider what the negative
   consequences of that may be.  At the very least your experiment
   will consume bandwidth that may have to be paid for.

   In more extreme circumstances, you could cause traffic to be
   generated that causes legal trouble for the owner of that
   infrastructure.  The Internet is a global network crossing many
   legal jurisdictions, and so what may be legal for you is not
   necessarily legal for everyone.

   If you are sending a lot of traffic quickly, or otherwise generally
   deviate from typical client behaviour, a network may identify this
   as an attack, which means that you will not be collecting results
   that are representative of what a typical client would see.
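   A common way to pace probes so that they stay well below attack-like
   rates is a token bucket; the sketch below is illustrative, and the
   rate and burst values a caller chooses are arbitrary examples:

```python
# Pace outgoing probes with a token bucket rate limiter.
import time

class TokenBucket:
    """Permit at most `rate` probes per second, bursting up to `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        """Return True if a probe may be sent now."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at burst.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```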
3.3.1. Maintain a "Do Not Scan" list

   When performing active measurements on a shared network, maintain a
   list of hosts that you will never scan, regardless of whether they
   appear in your target lists.  When developing tools for performing
   active measurement, or traffic generation for use in a larger
   measurement system, ensure that the tool will support the use of a
   "Do Not Scan" list.

   If complaints are made that request you do not generate traffic
   towards a host or network, you must add that host or network to
   your "Do Not Scan" list, even if no explanation is given or the
   request is automated.

   You may ask the requester for their reasoning if it would be useful
   to your experiment.  This can also be an opportunity to explain
   your research and offer to share any results that may be of
   interest.  If you plan to share the reasoning when publishing your
   measurement results, e.g. in an academic paper, you must seek
   consent for this from the requester.

   Be aware that in publishing your measurement results, it may be
   possible to infer your "Do Not Scan" list from those results.  For
   example, if you measured a well-known list of popular websites,
   then it would be possible to correlate the results with that list
   to determine which are missing.
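   A measurement tool's target filtering against such a list might be
   sketched as follows (the listed networks are placeholders from
   documentation address ranges, not real opt-outs):

```python
# Drop any target that falls inside a "Do Not Scan" network before
# probing begins.
import ipaddress

DO_NOT_SCAN = [
    ipaddress.ip_network("203.0.113.0/24"),   # a network that complained
    ipaddress.ip_network("198.51.100.7/32"),  # a single opted-out host
]

def scannable(targets):
    """Return only the targets that are not on the Do Not Scan list."""
    return [t for t in targets
            if not any(ipaddress.ip_address(t) in net
                       for net in DO_NOT_SCAN)]
```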
3.4. Only collect data that is safe to make public

   When deciding on the data to collect, assume that any data
   collected might become public.  There are many ways that this could
   happen, through operational security mistakes or compulsion by a
   judicial system.

3.5. Minimization

   For all data collected, consider whether or not it is really
   needed.
3.6. Aggregation

   When collecting data, consider if the granularity can be limited by
   using bins or adding noise.  Differential privacy techniques
   provide a principled framework for adding such noise.
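   Both techniques can be sketched as follows (the bin width and noise
   scale are arbitrary illustrative choices; the Laplace sampler shown
   is the basic mechanism used in differential privacy):

```python
# Limit granularity two ways: report only the bin a value falls in,
# and perturb counts with Laplace noise before release.
import math
import random

def to_bin(value, width=10):
    """Map a value to the lower edge of its bin, discarding precision."""
    return (value // width) * width

def laplace_noise(scale=1.0, rng=random):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
```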
3.7. Source Aggregation

   Perform aggregation at the data source, and certainly before
   anything is written to disk.

   [Tor.2017-04-001] presents a case study on the in-memory statistics
   in the software used by the Tor network, as an example.
4. Risk Analysis

   The benefits should outweigh the risks.  Consider auxiliary data
   (e.g. third-party data sets) when assessing the risks.

5. Security Considerations

   Take reasonable security precautions, e.g. about who has access to
   your data sets or experimental systems.

6. IANA Considerations

   This document has no actions for IANA.
7. Acknowledgements

   Many of these considerations are based on those from the
   [TorSafetyBoard], adapted and generalised to be applied to Internet
   research.

   Other considerations are taken from the Menlo Report [MenloReport]
   and its companion document [MenloReportCompanion].
8. Informative References

   [MenloReport]
              Dittrich, D. and E. Kenneally, "The Menlo Report:
              Ethical Principles Guiding Information and Communication
              Technology Research", August 2012.

   [MenloReportCompanion]
              Bailey, M., Dittrich, D., and E. Kenneally, "Applying
              Ethical Principles to Information and Communication
              Technology Research", October 2013.

   [netem]    Hemminger, S., "Network emulation with NetEm", April
              2005.

   [PATHspider]
              Learmonth, I., Trammell, B., Kuehlewind, M., and G.
              Fairhurst, "PATHspider: A tool for active measurement of
              path transparency", DOI 10.1145/2959424.2959441, July
              2016.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
              for Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999.

   [RFC6349]  Constantine, B., Forget, G., Geib, R., and R. Schrage,
              "Framework for TCP Throughput Testing", RFC 6349,
              DOI 10.17487/RFC6349, August 2011.

   [Tor.2017-04-001]
              Herm, K., "Privacy analysis of Tor's in-memory
              statistics", Tor Tech Report 2017-04-001, April 2017.

   [TorSafetyBoard]
              Tor Project, "Tor Research Safety Board".
Author's Address

   Iain R. Learmonth
   Tor Project

   Email: irl@torproject.org