Network Working Group                                       I. Learmonth
Internet-Draft                                               Tor Project
Intended status: Informational                                GG. Grover
Expires: May 20, 2021                    Centre for Internet and Society
                                                       November 16, 2020


      Guidelines for Performing Safe Measurement on the Internet
             draft-irtf-pearg-safe-internet-measurement-04

Abstract
   Researchers from industry and academia often use Internet
   measurements as part of their work.  While these measurements can
   give insight into the functioning and usage of the Internet, they can
   come at the cost of user privacy.  This document describes guidelines
   for ensuring that such measurements can be carried out safely.

Note

   Comments are solicited and should be addressed to the research
   group's mailing list at pearg@irtf.org and/or the author(s).

   The sources for this draft are at:

   https://github.com/irl/draft-safe-internet-measurement
Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on May 20, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.
1.  Introduction

   Performing research using the Internet, as opposed to an isolated
   testbed or simulation platform, means that experiments co-exist in a
   space with other users.  This document outlines guidelines for
   academic and industry researchers that might use the Internet as part
   of scientific experimentation to mitigate risks to the safety of
   other users.

1.1.  Scope of this document

   Following the guidelines contained within this document is not a
   substitute for any institutional ethics review process, although
   these guidelines could help to inform that process.  Similarly, these
   guidelines are not legal advice, and local laws must also be
   considered before starting any experiment that could have adverse
   impacts on user safety.

   The scope of this document is restricted to guidelines that mitigate
   exposure to risks to Internet user safety when measuring properties
   of the Internet: the network, its constituent hosts and links, or its
   users' traffic.

   For the purpose of this document, an Internet user is an individual
   or organisation that uses the Internet to communicate, or maintains
   Internet infrastructure.
1.2.  Threat Model

   A threat is a potential for a security violation, which exists when
   there is a circumstance, capability, action, or event that could
   breach security and cause harm [RFC4949].  Every Internet measurement
   study has the potential to subject Internet users to threat actions,
   or attacks.

   Many of the threats to user safety occur from an instantiation (or
   combination) of the following:

   Surveillance:  An attack whereby an Internet user's information is
   collected.  This type of attack covers not only data but also
   metadata.

   Inadequate protection of collected data:  An attack where data,
   either in flight or at rest, was not adequately protected from
   disclosure.  Failure to adequately protect data to the expectations
   of the user is an attack even if it does not lead to another party
   gaining access to the data.

   Traffic generation:  An attack whereby traffic is generated to
   traverse the Internet.

   Traffic modification:  An attack whereby the Internet traffic of
   users is modified.

   Any conceivable Internet measurement study might be considered an
   attack on an Internet user's safety.  It is always necessary to
   consider the best approach to mitigate the impact of measurements,
   and to balance the risks of measurements against the benefits to
   impacted users.
1.3.  Measurement Studies

   Internet measurement studies can be broadly categorized into two
   groups: active measurements and passive measurements.  Active
   measurements generate or modify traffic while passive measurements
   use surveillance of existing traffic.  The type of measurement is not
   truly binary and many studies will include both active and passive
   components.  The measurement of generated traffic may also lead to
   insights into other users' traffic indirectly.

   XXX On-path/off-path

   XXX One ended/two ended
1.4.  User Impact from Measurement Studies

   Consequences of attacks:

   Breach of privacy:  An impact arising from data collection.  This
   also covers the case of an Internet user's data being shared beyond
   the scope of the consent the user had given.

   Impersonation:  An attack where a user is impersonated during a
   measurement.

   XXX Legal

   XXX Other Retribution

   System corruption:  An attack where generated or modified traffic
   causes the corruption of a system.  This covers cases where a user's
   data may be lost or corrupted, and cases where a user's access to a
   system may be affected.

   XXX Data loss, corruption

   XXX Denial of Service (by which self-censorship is covered)

   XXX Emotional Trauma
2.  Consent

   XXX a user is best placed to balance risks vs. benefits themselves

   In an ideal world, informed consent would be collected from all users
   that may be placed at risk, no matter how small a risk, by an
   experiment.  In cases where it is practical to do so, this should be
   done.

2.1.  Informed Consent

   For consent to be informed, all possible risks must be presented to
   the users.  The considerations in this document can be used to
   provide a starting point, although other risks may be present
   depending on the nature of the measurements to be performed.
2.2.  Informed Consent: Case Study

   A researcher would like to use volunteer-owned mobile devices to
   collect information about local Internet censorship.  Connections
   will be made from the volunteer's device towards known or suspected
   blocked webpages.

   This experiment can carry substantial risk for the user depending on
   the circumstances, from disciplinary action from their employer to
   arrest or imprisonment.  Fully informed consent ensures that any risk
   that is being taken has been carefully considered by the volunteer
   before proceeding.
2.3.  Proxy Consent

   In cases where it is not practical to collect informed consent from
   all users of a shared network, it may be possible to obtain proxy
   consent.  Proxy consent may be given by a network operator or
   employer that would be more familiar with the expectations of users
   of a network than the researcher.

   In some cases, a network operator or employer may have terms of
   service that specifically allow for giving consent to third parties
   to perform certain experiments.
2.4.  Proxy Consent: Case Study

   A researcher would like to perform a packet capture to determine the
   TCP options, and their values, used by all client devices on a
   corporate wireless network.

   The employer may already have terms of service laid out that allow
   them to provide proxy consent for this experiment on behalf of the
   employees (the users of the network).  The purpose of the experiment
   may affect whether or not they are able to provide this consent.
   For example, engineering work on the network may be allowed, whereas
   academic research may not be covered.
2.5.  Implied Consent

   In larger scale measurements, even proxy consent collection may not
   be practical.  In this case, implied consent may be presumed from
   users for some measurements.  Consider that users of a network will
   have certain expectations of privacy and those expectations may not
   align with the privacy guarantees offered by the technologies they
   are using.  As a thought experiment, consider how users might respond
   if asked for their informed consent for the measurements you'd like
   to perform.

   Implied consent should not be considered sufficient for any
   experiment that may collect sensitive or personally identifying
   information.  If practical, attempt to obtain informed consent or
   proxy consent from a sample of users to better understand the
   expectations of other users.
2.6.  Implied Consent: Case Study 1

   A researcher would like to run a measurement campaign to determine
   the maximum supported TLS version on popular web servers.

   The operator of a web server that is exposed to the Internet hosting
   a popular website would have the expectation that it may be included
   in surveys that look at supported protocols or extensions, but would
   not expect that attempts be made to degrade the service with large
   numbers of simultaneous connections.
2.7.  Implied Consent: Case Study 2

   A researcher would like to perform A/B testing for a new protocol
   feature to determine how it affects web performance.  They have
   created two versions of their software and have instrumented both to
   report telemetry back.  These updates will be pushed to users at
   random by the software's auto-update framework.  The telemetry
   consists only of performance metrics and does not contain any
   personally identifying or sensitive information.

   As users expect to receive automatic updates, the effect of changing
   the behaviour of the software is already expected by the user.  If
   users have already been informed that data will be reported back to
   the developers of the software, then again the addition of new
   metrics would be expected.  There are risks in pushing any new
   software update, and the A/B testing technique can reduce the number
   of users that may be adversely affected by a bad update.

   The reduced impact should not be used as an excuse for pushing
   higher-risk updates; only updates that could be considered
   appropriate to push to all users should be A/B tested.  Likewise,
   leaving some users with the old behaviour must also be considered
   appropriate, as those users will not receive the new behaviour
   during the test.

   In the event that something does go wrong with the update, it should
   be easy for a user to discover that they have been part of an
   experiment and roll back the change, allowing for explicit refusal of
   consent to override the presumed implied consent.
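   The discoverability and rollback properties above are easier to
   provide when group assignment is deterministic.  A minimal sketch in
   Python, assuming a stable per-installation identifier; the function
   and identifier names are illustrative, not part of any real
   auto-update framework:

```python
import hashlib

def assign_arm(install_id: str, experiment: str) -> str:
    """Deterministically assign an installation to an A/B arm.

    Hashing a stable installation identifier, rather than drawing a
    random number at update time, means the assignment can be
    recomputed later: a user who suspects a bad update can find out
    which arm they were in, and developers can roll back exactly the
    affected population.
    """
    digest = hashlib.sha256(f"{experiment}:{install_id}".encode()).digest()
    return "treatment" if digest[0] % 2 else "control"

# The same installation always lands in the same arm:
assert assign_arm("device-1234", "tcp-feature-x") == \
    assign_arm("device-1234", "tcp-feature-x")
```

   Salting the hash with the experiment name keeps arm assignments
   independent across experiments.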
3.  Safety Considerations

3.1.  Isolate risk with a dedicated testbed

   Wherever possible, use a testbed.  An isolated network means that
   there are no other users sharing the infrastructure you are using for
   your experiments.

   When measuring performance, competing traffic can have negative
   effects on the performance of your test traffic and so the testbed
   approach can also produce more accurate and repeatable results than
   experiments using the public Internet.

   WAN link conditions can be emulated through artificial delays and/or
   packet loss using a tool like [netem].  Competing traffic can also be
   emulated using traffic generators.
3.2.  Be respectful of others' infrastructure

   If your experiment is designed to trigger a response from
   infrastructure that is not your own, consider what the negative
   consequences of that may be.  At the very least your experiment will
   consume bandwidth that may have to be paid for.

   In more extreme circumstances, you could cause traffic to be
   generated that causes legal trouble for the owner of that
   infrastructure.  The Internet is a global network crossing many legal
   jurisdictions, and so what may be legal for you is not necessarily
   legal for everyone.

   If you are sending a lot of traffic quickly, or otherwise generally
   deviate from typical client behaviour, a network may identify this as
   an attack and block it, which means that you will not be collecting
   results that are representative of what a typical client would see.
3.2.1.  Maintain a "Do Not Scan" list

   When performing active measurements on a shared network, maintain a
   list of hosts that you will never scan, regardless of whether they
   appear in your target lists.  When developing tools for performing
   active measurement, or traffic generation for use in a larger
   measurement system, ensure that the tool will support the use of a
   "Do Not Scan" list.
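   As an illustration, a measurement tool might check every candidate
   target against the list before any traffic is generated.  A minimal
   sketch in Python; the list entries below use the RFC 5737
   documentation ranges and are purely illustrative:

```python
import ipaddress

# Illustrative "Do Not Scan" list: entries may be single hosts or
# whole networks, expressed in CIDR notation.
DO_NOT_SCAN = [ipaddress.ip_network(n)
               for n in ("192.0.2.0/24", "198.51.100.7/32")]

def filter_targets(targets):
    """Drop any target that falls inside a "Do Not Scan" entry,
    regardless of how it ended up in the target list."""
    allowed = []
    for target in targets:
        addr = ipaddress.ip_address(target)
        if any(addr in net for net in DO_NOT_SCAN):
            continue  # never scan this host
        allowed.append(target)
    return allowed

assert filter_targets(["192.0.2.5", "203.0.113.9"]) == ["203.0.113.9"]
```

   Matching on networks rather than individual addresses lets a single
   complaint about a network be honoured for every host inside it.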
   If complaints are made that request you do not generate traffic
   towards a host or network, you must add that host or network to your
   "Do Not Scan" list, even if no explanation is given or the request is
   automated.

   You may ask the requester for their reasoning if it would be useful
   to your experiment.  This can also be an opportunity to explain your
   research and offer to share any results that may be of interest.  If
   you plan to share the reasoning when publishing your measurement
   results, e.g. in an academic paper, you must seek consent for this
   from the requester.

   Be aware that in publishing your measurement results, it may be
   possible to infer your "Do Not Scan" list from those results.  For
   example, if you measured a well-known list of popular websites, then
   it would be possible to correlate the results with that list to
   determine which are missing.
3.3.  Data Minimization

   When collecting, using, disclosing, and storing data from a
   measurement, use only the minimal data necessary to perform a task.
   Collecting less data reduces the amount that can be misused or
   leaked.

   When deciding on the data to collect, assume that any data collected
   might be disclosed.  There are many ways that this could happen,
   including through operational security mistakes or compulsion by a
   judicial system.

   When directly instrumenting a protocol to provide metrics to a
   passive observer, see Section 6.1 of [RFC6973] for data minimization
   considerations specific to this use case.
3.3.1.  Discarding Data

   XXX: Discard data that is not required to perform the task.

   When performing active measurements, be sure to only capture traffic
   that you have generated.  Traffic may be identified by IP ranges or
   by some token that is unlikely to be used by other users.

   Again, this can help to improve the accuracy and repeatability of
   your experiment.  [RFC2544], for performance benchmarking, requires
   that any frames received that were not part of the test traffic are
   discarded and not counted in the results.
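   A capture filter along these lines might retain a record only when
   it matches the experiment's own source addresses or marker token.  A
   minimal sketch in Python; the token, prefixes, and function name are
   illustrative assumptions, not from this document:

```python
# Marker chosen to be unlikely to appear in other users' traffic.
EXPERIMENT_TOKEN = b"exp-7f3a9c41"
OWN_SOURCE_PREFIXES = ("203.0.113.",)  # our measurement hosts

def keep_record(payload: bytes, src_addr: str) -> bool:
    """Retain a captured record only if we generated it: it either
    originates from one of our measurement source addresses or it
    carries our marker token.  Everything else is discarded before
    being written to disk."""
    return src_addr.startswith(OWN_SOURCE_PREFIXES) \
        or EXPERIMENT_TOKEN in payload
```

   Applying such a filter before anything reaches storage means other
   users' traffic is never at rest on the measurement system.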
3.3.2.  Masking Data

   XXX: Mask data that is not required to perform the task.  This is
   particularly useful for the content of traffic: record that a
   particular class of content did or did not exist, or the length of
   the content, but not the content itself.  Content can also be
   replaced with tokens, or encrypted.
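   One way to apply this: record only the content class, the length,
   and a keyed token in place of the content itself.  A minimal sketch
   in Python, assuming a short-lived secret key that is discarded after
   the study (the names are illustrative):

```python
import hmac
import hashlib

MASK_KEY = b"short-lived key; discard after the study"

def mask_record(payload: bytes, content_class: str) -> dict:
    """Record that a class of content was observed and its length,
    plus a keyed token that lets identical payloads be correlated
    across the dataset without the payload itself being retained."""
    token = hmac.new(MASK_KEY, payload, hashlib.sha256).hexdigest()[:16]
    return {"class": content_class, "length": len(payload), "token": token}
```

   Using a keyed HMAC rather than a plain hash means that, once the key
   is destroyed, tokens cannot be reversed by brute-forcing guessable
   payloads.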
3.3.3.  Reduce Accuracy

   XXX: Binning, categorizing, geoip, noise.
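   For addresses, accuracy can be reduced by truncating to a network
   prefix, roughly the granularity a GeoIP-style lookup needs.  A
   minimal sketch in Python; the prefix lengths are illustrative
   choices, not recommendations from this document:

```python
import ipaddress

def truncate_address(addr: str, v4_prefix: int = 24,
                     v6_prefix: int = 48) -> str:
    """Keep only a network-level view of an address by zeroing the
    host bits, so the exact host cannot be recovered from stored
    data."""
    ip = ipaddress.ip_address(addr)
    prefix = v4_prefix if ip.version == 4 else v6_prefix
    net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(net.network_address)

assert truncate_address("203.0.113.57") == "203.0.113.0"
```

   Truncating at the source, before any record is written, means the
   full address never needs to be stored at all.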
3.3.4.  Data Aggregation

   When collecting data, consider if the granularity can be limited by
   using bins or adding noise.  XXX: Differential privacy.

   XXX: Do this at the source, definitely do it before you write to
   disk.

   [Tor.2017-04-001] presents a case-study on the in-memory statistics
   in the software used by the Tor network, as an example.
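   The binning and noise suggestions above can be sketched as follows.
   This is a sketch only; it assumes unit sensitivity (each user
   contributes at most one increment to a count), which is what makes a
   Laplace scale of 1/epsilon the conventional choice:

```python
import random

def bin_value(value: int, width: int = 10) -> int:
    """Round down to a bin edge so only the bin, not the exact
    value, is ever recorded."""
    return (value // width) * width

def noisy_count(count: int, epsilon: float = 0.1) -> float:
    """Add Laplace noise with scale 1/epsilon before a count leaves
    the measuring process.  The difference of two independent
    exponential variables with rate epsilon is Laplace-distributed
    with scale 1/epsilon."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return count + noise
```

   Applying both at the source, before anything is written to disk,
   matches the advice above: the raw values exist only transiently in
   memory.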
4.  Risk Analysis

   The benefits should outweigh the risks.  Consider auxiliary data
   (e.g. third-party data sets) when assessing the risks.

5.  Security Considerations

   Take reasonable security precautions, e.g. about who has access to
   your data sets or experimental systems.

6.  IANA Considerations

   This document has no actions for IANA.
7.  Acknowledgements

   Many of these considerations are based on those from the
   [TorSafetyBoard], adapted and generalised to be applied to Internet
   research.

   Other considerations are taken from the Menlo Report [MenloReport]
   and its companion document [MenloReportCompanion].
8.  Informative References

   [MenloReport]
              Dittrich, D. and E. Kenneally, "The Menlo Report: Ethical
              Principles Guiding Information and Communication
              Technology Research", August 2012.

   [MenloReportCompanion]
              Bailey, M., Dittrich, D., and E. Kenneally, "Applying
              Ethical Principles to Information and Communication
              Technology Research", October 2013.

   [netem]    Hemminger, S., "Network emulation with NetEm", April
              2005.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999.

   [RFC4949]  Shirey, R., "Internet Security Glossary, Version 2",
              RFC 4949, August 2007.

   [RFC6973]  Cooper, A., Tschofenig, H., Aboba, B., Peterson, J.,
              Morris, J., Hansen, M., and R. Smith, "Privacy
              Considerations for Internet Protocols", RFC 6973, July
              2013.

   [Tor.2017-04-001]
              Herm, K., "Privacy analysis of Tor's in-memory
              statistics", Tor Tech Report 2017-04-001, April 2017.

   [TorSafetyBoard]
              Tor Project, "Tor Research Safety Board".
Authors' Addresses

   Iain R. Learmonth
   Tor Project

   Email: irl@torproject.org


   Gurshabad Grover
   Centre for Internet and Society

   Email: gurshabad@cis-india.org