2 Network Working Group I. Learmonth
3 Internet-Draft Tor Project
4 Intended status: Informational GG. Grover
5 Expires: January 13, 2022 Centre for Internet and Society
6 M. Knodel
7 Center for Democracy and Technology
8 July 12, 2021
10 Guidelines for Performing Safe Measurement on the Internet
11 draft-irtf-pearg-safe-internet-measurement-05
13 Abstract
15 Researchers from industry and academia often use Internet
16 measurements as part of their work. While these measurements can
17 give insight into the functioning and usage of the Internet, they can
18 come at the cost of user privacy. This document describes guidelines
19 for ensuring that such measurements can be carried out safely.
21 Note
23 Comments are solicited and should be addressed to the research
24 group's mailing list at pearg@irtf.org and/or the author(s).
26 The sources for this draft are at:
28 https://github.com/irl/draft-safe-internet-measurement
30 Status of This Memo
32 This Internet-Draft is submitted in full conformance with the
33 provisions of BCP 78 and BCP 79.
35 Internet-Drafts are working documents of the Internet Engineering
36 Task Force (IETF). Note that other groups may also distribute
37 working documents as Internet-Drafts. The list of current Internet-
38 Drafts is at https://datatracker.ietf.org/drafts/current/.
40 Internet-Drafts are draft documents valid for a maximum of six months
41 and may be updated, replaced, or obsoleted by other documents at any
42 time. It is inappropriate to use Internet-Drafts as reference
43 material or to cite them other than as "work in progress."
45 This Internet-Draft will expire on January 13, 2022.
47 Copyright Notice
49 Copyright (c) 2021 IETF Trust and the persons identified as the
50 document authors. All rights reserved.
52 This document is subject to BCP 78 and the IETF Trust's Legal
53 Provisions Relating to IETF Documents
54 (https://trustee.ietf.org/license-info) in effect on the date of
55 publication of this document. Please review these documents
56 carefully, as they describe your rights and restrictions with respect
57 to this document.
59 1. Introduction
61 Performing research using the Internet, as opposed to an isolated
62 testbed or simulation platform, means that experiments co-exist in a
63 space with other users. This document outlines guidelines for
64 academic and industry researchers that might use the Internet as part
65 of scientific experimentation to mitigate risks to the safety of
66 other users.
68 1.1. Scope of this document
70 These are guidelines for how to measure the Internet safely. When
71 performing research on a platform shared with live traffic from other
72 users, that research is considered safe if and only if other users
73 are protected from or unlikely to experience danger, risk, or injury,
74 now or in the future, due to the research.
76 Following the guidelines contained within this document is not a
77 substitute for any institutional ethics review process, although
78 these guidelines could help to inform that process. It is
79 particularly important for the growing area of research that includes
80 Internet measurement to better equip review boards to evaluate
81 Internet measurement methods [SIGCOMM], and we hope that this
82 document is part of that larger effort.
84 Similarly, these guidelines are not legal advice and local laws must
85 also be considered before starting any experiment that could have
86 adverse impacts on user safety.
The scope of this document is restricted to guidelines that mitigate
exposure to risks to Internet user safety when measuring properties
of the Internet: the network, its constituent hosts and links, or its
users' traffic.
For the purpose of this document, an Internet user is, most broadly,
an individual or organisation whose data is used in communications
over the Internet, as well as those who use the Internet to
communicate or to maintain Internet infrastructure.
98 1.2. Threat Model
100 A threat is a potential for a security violation, which exists when
101 there is a circumstance, capability, action, or event that could
102 breach security and cause harm [RFC4949]. Every Internet measurement
103 study has the potential to subject Internet users to threat actions,
104 or attacks.
Many of the threats to user safety arise from an instantiation (or
combination) of the following:
109 Surveillance: An attack whereby an Internet user's information is
110 collected. This type of attack covers not only data but also
111 metadata.
113 Inadequate protection of collected data: An attack where data, either
114 in flight or at rest, was not adequately protected from disclosure.
115 Failure to adequately protect data to the expectations of the user is
116 an attack even if it does not lead to another party gaining access to
117 the data.
119 Traffic generation: An attack whereby traffic is generated to
120 traverse the Internet.
122 Traffic modification: An attack whereby the Internet traffic of users
123 is modified.
125 Any conceivable Internet measurement study might be considered an
126 attack on an Internet user's safety. It is always necessary to
127 consider the best approach to mitigate the impact of measurements,
128 and to balance the risks of measurements against the benefits to
129 impacted users.
131 1.3. Measurement Studies
133 Internet measurement studies can be broadly categorized into two
134 groups: active measurements and passive measurements. Active
135 measurements generate or modify traffic while passive measurements
136 use surveillance of existing traffic. The type of measurement is not
137 truly binary and many studies will include both active and passive
138 components. The measurement of generated traffic may also lead to
139 insights into other users' traffic indirectly.
141 XXX On-path/off-path
142 XXX One ended/two ended
144 1.4. User Impact from Measurement Studies
Attacks can have the following consequences for users:

Breach of Privacy: Harm resulting from the collection of a user's
data.  This impact also covers the case of an Internet user's data
being shared beyond that for which the user had given consent.
152 Impersonation: An attack where a user is impersonated during a
153 measurement.
155 XXX Legal
157 XXX Other Retribution
159 System corruption: An attack where generated or modified traffic
160 causes the corruption of a system. This attack covers cases where a
161 user's data may be lost or corrupted, and cases where a user's access
162 to a system may be affected.
164 XXX Data loss, corruption
166 XXX Denial of Service (by which self-censorship is covered)
168 XXX Emotional Trauma
170 2. Consent
172 Accountability and transparency are fundamentally related to consent.
173 As per the Menlo Report, "Accountability demands that research
174 methodology, ethical evaluations, data collected, and results
175 generated should be documented and made available responsibly in
176 accordance with balancing risks and benefits."[MenloReport]
XXX a user is best placed to balance risks vs benefits themselves
180 In an ideal world, informed consent would be collected from all users
181 that may be placed at risk, no matter how small a risk, by an
182 experiment. In cases where it is practical to do so, this should be
183 done.
185 2.1. Informed Consent
187 For consent to be informed, all possible risks must be presented to
188 the users. The considerations in this document can be used to
189 provide a starting point although other risks may be present
190 depending on the nature of the measurements to be performed.
192 2.2. Informed Consent: Case Study
A researcher would like to use volunteer-owned mobile devices to
collect information about local Internet censorship.  Connections
will be made from the volunteer's device towards webpages that are
known, or suspected, to be blocked.
199 This experiment can carry substantial risk for the user depending on
200 the circumstances, from disciplinary action from their employer to
201 arrest or imprisonment. Fully informed consent ensures that any risk
202 that is being taken has been carefully considered by the volunteer
203 before proceeding.
205 2.3. Proxy Consent
207 In cases where it is not practical to collect informed consent from
208 all users of a shared network, it may be possible to obtain proxy
209 consent. Proxy consent may be given by a network operator or
210 employer that would be more familiar with the expectations of users
211 of a network than the researcher.
In some cases, a network operator or employer may have terms of
service that specifically allow for giving consent to third parties
to perform certain experiments.
217 2.4. Proxy Consent: Case Study
A researcher would like to perform a packet capture to determine the
TCP options, and their values, used by all client devices on a
corporate wireless network.
The employer may already have terms of service laid out that allow
them to provide proxy consent for this experiment on behalf of the
employees (the users of the network).  The purpose of the experiment
may affect whether or not they are able to provide this consent: for
example, engineering work on the network may be covered, whereas
academic research may not be.
230 2.5. Implied Consent
232 In larger scale measurements, even proxy consent collection may not
233 be practical. In this case, implied consent may be presumed from
234 users for some measurements. Consider that users of a network will
235 have certain expectations of privacy and those expectations may not
236 align with the privacy guarantees offered by the technologies they
237 are using. As a thought experiment, consider how users might respond
238 if asked for their informed consent for the measurements you'd like
239 to perform.
241 Implied consent should not be considered sufficient for any
242 experiment that may collect sensitive or personally identifying
243 information. If practical, attempt to obtain informed consent or
244 proxy consent from a sample of users to better understand the
245 expectations of other users.
247 2.6. Implied Consent: Case Study 1
249 A researcher would like to run a measurement campaign to determine
250 the maximum supported TLS version on popular web servers.
The operator of a web server that is exposed to the Internet and
hosts a popular website would expect that it may be included in
surveys that look at supported protocols or extensions, but would
not expect attempts to degrade the service with large numbers of
simultaneous connections.
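As an illustration of that distinction, a survey tool can record the
negotiated TLS version from a single, well-behaved connection per
server rather than opening many simultaneous connections.  The
following Python sketch is a hedged example: the function names,
pacing, and error handling are illustrative assumptions, not a
prescribed methodology.

```python
import socket
import ssl
import time

def max_tls_version(host, port=443, timeout=5):
    """Make one ordinary TLS connection and report the negotiated
    version.  With a default client context, the negotiated version
    is the highest version supported by both sides."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

def survey(hosts, delay=1.0):
    """Probe each host once, pausing between probes so that the
    survey never resembles a flood of simultaneous connections."""
    results = {}
    for host in hosts:
        try:
            results[host] = max_tls_version(host)
        except (OSError, ssl.SSLError) as exc:
            results[host] = "error: " + exc.__class__.__name__
        time.sleep(delay)  # rate limit: one connection at a time
    return results
```

For instance, survey(["example.org"]) would make a single connection
to that host and might return {"example.org": "TLSv1.3"}.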
258 2.7. Implied Consent: Case Study 2
A researcher would like to perform A/B testing of a protocol feature
to determine how it affects web performance.  They have created two versions
262 of their software and have instrumented both to report telemetry
263 back. These updates will be pushed to users at random by the
264 software's auto-update framework. The telemetry consists only of
265 performance metrics and does not contain any personally identifying
266 or sensitive information.
As users expect to receive automatic updates, changing the behaviour
of the software falls within existing user expectations.  If users
have already been informed that data will be reported back to the
developers of the software, then the addition of new metrics would
likewise be expected.  There are risks in pushing any new software
update, and the A/B testing technique can reduce the number of users
that may be adversely affected by a bad update.
The reduced impact should not be used as an excuse for pushing
higher-risk updates: only updates that could be considered
appropriate to push to all users should be A/B tested.  Likewise, it
should be considered appropriate for some users to remain with the
old behaviour while others receive the new behaviour.
282 In the event that something does go wrong with the update, it should
283 be easy for a user to discover that they have been part of an
284 experiment and roll back the change, allowing for explicit refusal of
285 consent to override the presumed implied consent.
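One way to make experiment membership both reproducible and
discoverable is deterministic assignment: if arm membership is
derived from a stable identifier, the software can always tell a user
which arm they are in, and an explicit opt-out can force the old
behaviour.  The sketch below is illustrative only; the identifiers,
experiment name, and 50/50 split are assumptions.

```python
import hashlib

def assignment(user_id, experiment, fraction=0.5):
    """Deterministically place a user in arm "B" with the given
    probability by hashing a stable identifier together with the
    experiment name; the same user always lands in the same arm."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return "B" if bucket < fraction else "A"

def effective_arm(user_id, experiment, opted_out=False):
    """Explicit refusal of consent overrides the presumed implied
    consent: an opted-out user always keeps the old behaviour."""
    if opted_out:
        return "A"
    return assignment(user_id, experiment)
```

Because assignment is a pure function of the identifier, a "which
experiments am I in?" page can be computed on demand, and rolling
back is as simple as honouring the opt-out flag.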
287 3. Safety Considerations
289 3.1. Isolate risk with a dedicated testbed
291 Wherever possible, use a testbed. An isolated network means that
292 there are no other users sharing the infrastructure you are using for
293 your experiments.
295 When measuring performance, competing traffic can have negative
296 effects on the performance of your test traffic and so the testbed
297 approach can also produce more accurate and repeatable results than
298 experiments using the public Internet.
300 WAN link conditions can be emulated through artificial delays and/or
301 packet loss using a tool like [netem]. Competing traffic can also be
302 emulated using traffic generators.
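For example, netem is typically driven through the Linux tc command.
The following Python sketch only constructs and runs that command;
the interface name and parameter values are illustrative assumptions,
and applying the qdisc requires root on a Linux testbed host.

```python
import subprocess

def netem_command(interface, delay_ms=None, jitter_ms=None, loss_pct=None):
    """Build a tc invocation that attaches a netem qdisc emulating
    WAN-like delay, jitter, and packet loss to an interface."""
    cmd = ["tc", "qdisc", "add", "dev", interface, "root", "netem"]
    if delay_ms is not None:
        cmd += ["delay", f"{delay_ms}ms"]
        if jitter_ms is not None:
            cmd += [f"{jitter_ms}ms"]  # random variation around the delay
    if loss_pct is not None:
        cmd += ["loss", f"{loss_pct}%"]
    return cmd

def apply_netem(interface, **kwargs):
    """Apply the emulated link conditions (root required)."""
    subprocess.run(netem_command(interface, **kwargs), check=True)
```

For instance, apply_netem("eth0", delay_ms=100, jitter_ms=10,
loss_pct=1) would run "tc qdisc add dev eth0 root netem delay 100ms
10ms loss 1%".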
3.2.  Be respectful of others' infrastructure
306 If your experiment is designed to trigger a response from
307 infrastructure that is not your own, consider what the negative
308 consequences of that may be. At the very least your experiment will
309 consume bandwidth that may have to be paid for.
311 In more extreme circumstances, you could cause traffic to be
312 generated that causes legal trouble for the owner of that
313 infrastructure. The Internet is a global network crossing many legal
314 jurisdictions and so what may be legal for you is not necessarily
315 legal for everyone.
If you are sending a lot of traffic quickly, or otherwise generally
deviate from typical client behaviour, a network may identify this as
an attack, which also means that you will not be collecting results
that are representative of what a typical client would see.
322 3.2.1. Maintain a "Do Not Scan" list
324 When performing active measurements on a shared network, maintain a
325 list of hosts that you will never scan regardless of whether they
326 appear in your target lists. When developing tools for performing
327 active measurement, or traffic generation for use in a larger
328 measurement system, ensure that the tool will support the use of a
329 "Do Not Scan" list.
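A minimal sketch of such tool support, assuming a plain-text list of
addresses and CIDR prefixes (the list format and function names are
illustrative assumptions, not a required design):

```python
import ipaddress

def load_do_not_scan(lines):
    """Parse a "Do Not Scan" list of hosts and networks, one entry
    per line, given as IP addresses or CIDR prefixes."""
    networks = []
    for line in lines:
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue
        # A bare address is treated as a single-host network.
        networks.append(ipaddress.ip_network(entry, strict=False))
    return networks

def filter_targets(targets, do_not_scan):
    """Drop any target that falls inside a "Do Not Scan" entry,
    regardless of how it got onto the target list."""
    allowed = []
    for target in targets:
        addr = ipaddress.ip_address(target)
        if any(addr in net for net in do_not_scan):
            continue
        allowed.append(target)
    return allowed
```

Applying the filter at the last moment before traffic generation
ensures that entries added mid-campaign take effect immediately.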
331 If complaints are made that request you do not generate traffic
332 towards a host or network, you must add that host or network to your
333 "Do Not Scan" list, even if no explanation is given or the request is
334 automated.
336 You may ask the requester for their reasoning if it would be useful
337 to your experiment. This can also be an opportunity to explain your
338 research and offer to share any results that may be of interest. If
339 you plan to share the reasoning when publishing your measurement
340 results, e.g. in an academic paper, you must seek consent for this
341 from the requester.
343 Be aware that in publishing your measurement results, it may be
344 possible to infer your "Do Not Scan" list from those results. For
345 example, if you measured a well-known list of popular websites then
346 it would be possible to correlate the results with that list to
347 determine which are missing.
349 3.3. Data Minimization
351 When collecting, using, disclosing, and storing data from a
352 measurement, use only the minimal data necessary to perform a task.
353 Reducing the amount of data reduces the amount of data that can be
354 misused or leaked.
356 When deciding on the data to collect, assume that any data collected
357 might be disclosed. There are many ways that this could happen,
358 through operation security mistakes or compulsion by a judicial
359 system.
When directly instrumenting a protocol to provide metrics to a
passive observer, see Section 6.1 of [RFC6973] for data minimization
considerations specific to this use case.
365 3.3.1. Discarding Data
Discard data that is not required to perform the task.
369 When performing active measurements be sure to only capture traffic
370 that you have generated. Traffic may be identified by IP ranges or
371 by some token that is unlikely to be used by other users.
373 Again, this can help to improve the accuracy and repeatability of
374 your experiment. [RFC2544], for performance benchmarking, requires
375 that any frames received that were not part of the test traffic are
376 discarded and not counted in the results.
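As a hedged sketch, assuming (hypothetically) that generated traffic
is recognisable either by a reserved source prefix or by a token
embedded in the payload:

```python
import ipaddress

# Hypothetical prefix from which all of our test traffic originates.
TEST_SOURCE = ipaddress.ip_network("203.0.113.0/24")
# Hypothetical token embedded in the payload of generated packets.
TEST_TOKEN = b"measurement-run-42"

def is_test_traffic(src_addr, payload):
    """Keep a captured packet only if it can be attributed to our own
    generated traffic, by source prefix or by an embedded token."""
    if ipaddress.ip_address(src_addr) in TEST_SOURCE:
        return True
    return TEST_TOKEN in payload

def filter_capture(packets):
    """packets: iterable of (src_addr, payload) pairs.  Anything that
    is not our own traffic is discarded immediately, never stored."""
    return [p for p in packets if is_test_traffic(*p)]
```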
378 3.3.2. Masking Data
Mask data that is not required to perform the task.  This is
particularly useful for the content of traffic: record that a
particular class of content existed or did not exist, or the length
of the content, without recording the content itself.  Content can
also be replaced with tokens, or encrypted.
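The following Python sketch illustrates these ideas; the content
classes, key handling, and record layout are illustrative
assumptions, not a prescribed scheme:

```python
import hashlib
import hmac

# Hypothetical per-experiment secret; destroying it when the
# experiment ends makes tokens unlinkable to the original values.
MASKING_KEY = b"per-experiment secret"

# Hypothetical content classes of interest to the experiment.
CONTENT_CLASSES = (b"GET ", b"POST ", b"CONNECT ")

def mask_record(payload):
    """Record only what the task needs: whether a class of content
    was present, and the length, never the content itself."""
    return {
        "length": len(payload),
        "classes": [c.decode().strip() for c in CONTENT_CLASSES if c in payload],
    }

def tokenize(value):
    """Replace a value with a keyed token: equal values map to equal
    tokens (so they can still be counted), but the value itself is
    not recoverable without the key."""
    return hmac.new(MASKING_KEY, value, hashlib.sha256).hexdigest()[:16]
```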
386 3.3.3. Reduce Accuracy
Reduce the accuracy of the data you record where full precision is
not required for the task, for example by binning values into
ranges, mapping them to categories, reducing IP addresses to a
coarse geographic location, or adding noise.
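Two simple helpers illustrate the idea (the prefix length and bin
edges are arbitrary assumptions for illustration):

```python
import ipaddress

def truncate_ipv4(addr, prefix=24):
    """Reduce an IPv4 address to its containing prefix, e.g. a /24,
    so the record identifies a network rather than a host."""
    return str(ipaddress.ip_network(f"{addr}/{prefix}", strict=False))

def bin_duration(seconds, edges=(1, 10, 60, 600)):
    """Replace an exact duration with a coarse bucket label."""
    for edge in edges:
        if seconds < edge:
            return f"<{edge}s"
    return f">={edges[-1]}s"
```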
390 3.3.4. Data Aggregation
When collecting data, consider if the granularity can be limited by
using bins or adding noise.  Differential privacy techniques can
bound what can be learned about any individual from the aggregated
data.  Perform aggregation at the source where possible, and in any
case before the data is written to disk.
398 [Tor.2017-04-001] presents a case-study on the in-memory statistics
399 in the software used by the Tor network, as an example.
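As a hedged sketch of source-side, in-memory aggregation: keeping
only a running count and adding Laplace noise before publication is
the basic mechanism behind differentially private counts.  The
epsilon value and class layout below are illustrative assumptions.

```python
import random

def laplace_noise(scale):
    """The difference of two independent exponential samples is
    Laplace-distributed with the same scale."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

class Aggregator:
    """Keep only a running count in memory; the individual events are
    discarded immediately and never written to disk."""

    def __init__(self):
        self.count = 0

    def observe(self, _event):
        self.count += 1  # the event itself is not retained

    def publish(self, epsilon=0.1):
        """Release the count with Laplace noise calibrated for a
        counting query (sensitivity 1), as in epsilon-differential
        privacy; smaller epsilon means more noise."""
        return round(self.count + laplace_noise(1.0 / epsilon))
```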
401 4. Risk Analysis
The benefits of a measurement should outweigh the risks it poses.
When assessing the risks, consider auxiliary data (e.g. third-party
data sets) that could be combined with the collected data to
increase the risk to users.
406 5. Security Considerations
Take reasonable security precautions, e.g. restricting who has
access to your data sets or experimental systems.
411 6. IANA Considerations
413 This document has no actions for IANA.
415 7. Acknowledgements
Many of these considerations are based on those from the
[TorSafetyBoard], adapted and generalised to apply to Internet
research.
421 Other considerations are taken from the Menlo Report [MenloReport]
422 and its companion document [MenloReportCompanion].
424 8. Informative References
426 [MenloReport]
427 Dittrich, D. and E. Kenneally, "The Menlo Report: Ethical
428 Principles Guiding Information and Communication
429 Technology Research", August 2012,
430 .
433 [MenloReportCompanion]
434 Bailey, M., Dittrich, D., and E. Kenneally, "Applying
435 Ethical Principles to Information and Communication
436 Technology Research", October 2013,
437 .
[netem]    Hemminger, S., "Network Emulation with NetEm", April 2005.
442 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
443 Network Interconnect Devices", RFC 2544,
444 DOI 10.17487/RFC2544, March 1999,
445 .
447 [RFC4949] Shirey, R., "Internet Security Glossary, Version 2",
448 August 2007, .
450 [RFC6973] Cooper, A., Tschofenig, H., Aboba, B., Peterson, J.,
451 Morris, J., Hansen, M., and R. Smith, "Privacy
452 Considerations for Internet Protocols", RFC 6973, July
453 2013, .
455 [SIGCOMM] Jones, B., Ensafi, R., Feamster, N., Paxson, V., and N.
456 Weaver, "Ethical Concerns for Censorship Measurement",
457 August 2015,
458 .
461 [Tor.2017-04-001]
462 Herm, K., "Privacy analysis of Tor's in-memory
463 statistics", Tor Tech Report 2017-04-001, April 2017,
464 .
467 [TorSafetyBoard]
468 Tor Project, "Tor Research Safety Board",
469 .
471 Authors' Addresses
473 Iain R. Learmonth
474 Tor Project
476 Email: irl@torproject.org
478 Gurshabad Grover
479 Centre for Internet and Society
481 Email: gurshabad@cis-india.org
483 Mallory Knodel
484 Center for Democracy and Technology
486 Email: mknodel@cdt.org