INTERNET-DRAFT                                              Marc Linsner
Intended Status: Informational                             Cisco Systems
Expires: August 15, 2015                                  Philip Eardley
                                                        Trevor Burbridge
                                                                      BT
                                                          Frode Sorensen
                                                                    Nkom
                                                       February 11, 2015

             Large-Scale Broadband Measurement Use Cases
                      draft-ietf-lmap-use-cases-06

Abstract

   Measuring broadband performance on a large scale is important for
   network diagnostics by providers and users, as well as for public
   policy.
   Understanding the various scenarios and users of measuring
   broadband performance is essential to the development of the Large-
   scale Measurement of Broadband Performance (LMAP) framework,
   information model and protocol. This document details two use cases
   that can assist in developing that framework. The details of the
   measurement metrics themselves are beyond the scope of this
   document.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright and License Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document. Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  3
   2  Use Cases . . . . . . . . . . . . . . . . . . . . . . . . . .  3
     2.1  Internet Service Provider (ISP) Use Case . . . . . . . .  3
     2.2  Regulator Use Case . . . . . . . . . . . . . . . . . . .  4
   3  Details of ISP Use Case . . . . . . . . . . . . . . . . . . .  5
     3.1  Understanding the quality experienced by customers . . .  5
     3.2  Understanding the impact and operation of new devices
          and technology . . . . . . . . . . . . . . . . . . . . .  6
     3.3  Design and planning  . . . . . . . . . . . . . . . . . .  6
     3.4  Monitoring Service Level Agreements  . . . . . . . . . .  7
     3.5  Identifying, isolating and fixing network problems . . .  7
   4  Details of Regulator Use Case . . . . . . . . . . . . . . . .  8
     4.1  Providing transparent performance information  . . . . .  8
     4.2  Measuring broadband deployment . . . . . . . . . . . . .  9
     4.3  Monitoring traffic management practices  . . . . . . . .  9
   5  Implementation Options  . . . . . . . . . . . . . . . . . . . 10
   6  Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 11
   7  Security Considerations . . . . . . . . . . . . . . . . . . . 13
   8  IANA Considerations . . . . . . . . . . . . . . . . . . . . . 14
   Contributors  . . . . . . . . . . . . . . . . . . . . . . . . . 14
   Informative References  . . . . . . . . . . . . . . . . . . . . 14
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . 17

1 Introduction

   This document describes two use cases for the Large-scale
   Measurement of Broadband Performance (LMAP). The use cases contained
   in this document are (1) the Internet Service Provider Use Case and
   (2) the Regulator Use Case. In the first, a network operator wants
   to understand the performance of the network and the quality
   experienced by customers, whilst in the second, a regulator wants to
   provide information on the performance of the ISPs in their
   jurisdiction.
   There are other use cases that are not the focus of the initial LMAP
   work; for example, end users would like to use measurements to help
   identify problems in their home network and to monitor the
   performance of their broadband provider. It is expected that the
   same mechanisms are applicable.

   Large-scale measurements raise several security concerns, including
   privacy issues. These are summarized in Section 7 and considered in
   further detail in [framework].

2 Use Cases

   From the LMAP perspective, there is no difference between fixed
   service and mobile (cellular) service used for Internet access.
   Hence, similar measurements will take place on both fixed and mobile
   networks. Fixed services include technologies such as Digital
   Subscriber Line (DSL), Cable, and Carrier Ethernet. Mobile services
   include all those advertised as 2G, 3G, 4G, and Long-Term Evolution
   (LTE). A metric defined to measure end-to-end services will execute
   similarly on all access technologies. Other metrics may be access-
   technology specific. The LMAP architecture covers both IPv4 and
   IPv6 networks.

2.1 Internet Service Provider (ISP) Use Case

   A network operator needs to understand the performance of their
   networks, the performance of their suppliers (downstream and
   upstream networks), the performance of Internet access services, and
   the impact that such performance has on the experience of their
   customers. Largely, the processes that ISPs operate (which are based
   on network measurement) include:

   o  Identifying, isolating and fixing problems, which may be in the
      network, with the service provider, or in the end user equipment.
      Such problems may be common to a point in the network topology
      (e.g. a single exchange), common to a vendor or equipment type
      (e.g. line card or home gateway) or unique to a single user line
      (e.g. copper access).
      Part of this process may also involve helping users understand
      whether the problem exists in their home network or with a third-
      party application service, rather than with their broadband (BB)
      product.

   o  Design and planning. Through monitoring the end user experience,
      the ISP can design and plan their network to ensure specified
      levels of user experience. Services may be moved closer to end
      users, services upgraded, the impact of QoS assessed, or more
      capacity deployed at certain locations. Service Level Agreements
      (SLAs) may be defined at network or product boundaries.

   o  Understanding the quality experienced by customers. The network
      operator would like to gain better insight into the end-to-end
      performance experienced by its customers. "End-to-end" could, for
      instance, incorporate home and enterprise networks, and the
      impact of peering, caching and Content Delivery Networks (CDNs).

   o  Understanding the impact and operation of new devices and
      technology. As a new product is deployed, or a new technology
      introduced into the network, it is essential that its operation
      and its impact are measured. This also helps to quantify the
      advantage that the new technology brings and to support the
      business case for larger roll-out.

2.2 Regulator Use Case

   A regulator may want to evaluate the performance of the Internet
   access services offered by operators.

   While each jurisdiction responds to distinct consumer, industry, and
   regulatory concerns, much commonality exists in the need to produce
   datasets that can be used to compare multiple Internet access
   service providers, diverse technical solutions, geographic and
   regional distributions, and marketed and provisioned levels and
   combinations of broadband Internet access services.

   Regulators may want to publish performance measures of different
   ISPs as background information for end users.
   They may also want to track the growth of high-speed broadband
   deployment, or to monitor the traffic management practices of
   Internet providers.

   A regulator's role in the development and enforcement of broadband
   Internet access service policies requires that the measurement
   approaches meet a high level of verifiability, accuracy and
   provider-independence to support valid and meaningful comparisons of
   Internet access service performance. Standards can help regulators'
   shared needs for scalable, cost-effective, scientifically robust
   solutions to the measurement and collection of broadband Internet
   access service performance information.

3 Details of ISP Use Case

3.1 Understanding the quality experienced by customers

   Operators want to understand the quality of experience (QoE) of
   their broadband customers. The understanding can be gained through a
   "panel", i.e. measurement probes deployed to several customers. A
   probe is a device or piece of software that makes measurements and
   reports the results, under the control of the measurement system.
   Implementation options are discussed in Section 5. The panel needs
   to include a representative sample of the operator's technologies
   and broadband speeds. For instance, it might encompass speeds
   ranging from sub-8 Mbps to over 100 Mbps. The operator would like an
   end-to-end view of the service, rather than just the access portion.
   This involves relating the pure network parameters to something like
   a 'mean opinion score' [MOS], which will be service dependent (for
   instance, web browsing QoE is largely determined by latency above a
   few Mb/s).

   An operator will also want compound metrics such as "reliability",
   which might involve packet loss, DNS failures, re-training of the
   line, video streaming under-runs, etc.

   The operator really wants to understand the end-to-end service
   experience.
   However, the home network (Ethernet, WiFi, powerline) is highly
   variable and outside its control. To date, operators (and
   regulators) have instead measured performance from the home gateway.
   However, mobile operators clearly must include the wireless link in
   the measurement.

   Active measurements are the most obvious approach, i.e., special
   measurement traffic is sent by - and to - the probe. In order not to
   degrade the service of the customer, the measurement traffic should
   only be sent when the user is silent, and it should not reduce the
   customer's data allowance. The other approach is passive measurement
   of the customer's ordinary traffic; the advantage is that it
   measures what the customer actually does, but it creates extra
   variability (different traffic mixes give different results) and, in
   particular, it raises privacy concerns. [RFC6973] discusses privacy
   considerations for Internet protocols in general, whilst [framework]
   discusses them specifically for large-scale measurement systems.

   From an operator's viewpoint, understanding customer experience
   enables it to offer better services. Also, simple metrics can be
   more easily understood by senior managers who make investment
   decisions and by sales and marketing.

3.2 Understanding the impact and operation of new devices and
    technology

   Another type of measurement is to test new capabilities before they
   are rolled out. For example, the operator may want to:

   o  Check whether a customer can be upgraded to a new broadband
      option;

   o  Understand the impact of IPv6 before it is made available to
      customers. Questions such as these could be assessed: will v6
      packets get through? What will the latency be to major websites?
      What transition mechanisms will be most appropriate?

   o  Check whether a new capability can be signaled using TCP options
      (how often will it be blocked by a middlebox?
      - along the lines of the experiments described in [Extend TCP]);

   o  Investigate a quality of service mechanism (e.g. checking whether
      Diffserv markings are respected on some path); and so on.

3.3 Design and planning

   Operators can use large-scale measurements to help with their
   network planning - proactive activities to improve the network.

   For example, by probing from several different vantage points, the
   operator can see that a particular group of customers has
   performance below that expected during peak hours, which should help
   capacity planning. Naturally, operators already have tools to help
   with this - a network element reports its individual utilization
   (and perhaps other parameters). However, making measurements across
   a path rather than at a point may make it easier to understand the
   network. There may also be parameters like bufferbloat that are not
   currently reported by equipment and/or that are intrinsically path
   metrics.

   With information gained from measurement results, capacity planning
   and network design can be more effective. Such planning typically
   uses simulations to emulate the measured performance of the current
   network and understand the likely impact of new capacity and
   potential changes to the topology. Simulations, informed by data
   from a limited panel of probes, can help quantify the advantage that
   a new technology brings and support the business case for larger
   roll-out.

   It may also be possible to use probes to run stress tests for risk
   analysis. For example, an operator could run a carefully controlled
   and limited experiment in which probing is used to assess the
   potential impact if some new application becomes popular.

3.4 Monitoring Service Level Agreements

   Another example is that the operator may want to monitor performance
   where there is a service level agreement (SLA).
   This could be with its own customers; enterprises, in particular,
   may have an SLA. The operator can proactively spot when the service
   is degrading close to the SLA limit, and gather information that
   will enable more informed conversations with the customer at
   contract renewal.

   An operator may also want to monitor the performance of its
   suppliers, to check whether they meet their SLA or to compare two
   suppliers if it is dual-sourcing. These could include its transit
   operator, CDNs, peering partners, video sources, a local network
   provider (for a global operator in countries where it doesn't have
   its own network), or even the whole network for a virtual operator.

   Through a better understanding of its own network and its suppliers,
   the operator should be able to focus investment more effectively -
   in the right place at the right time with the right technology.

3.5 Identifying, isolating and fixing network problems

   Operators can use large-scale measurements to help identify a fault
   more rapidly and decide how to solve it.

   Operators already have test and diagnostic tools, where a network
   element reports some problem or failure to a management system.
   However, many issues are not caused by a point failure but by
   something wider, and so will trigger too many alarms, whilst other
   issues will cause degradation rather than failure and so not trigger
   any alarm. Large-scale measurements can help provide a more nuanced
   view that helps network management to identify and fix problems more
   rapidly and accurately. The network management tools may use
   simulations to emulate the network and so help identify a fault and
   assess possible solutions.

   An operator can obtain useful information without measuring the
   performance of every broadband line. By measuring a subset, the
   operator can identify problems that affect a group of customers.
   For example, the issue could be at a shared point in the network
   topology (such as an exchange), or common to a vendor or equipment
   type; for instance, [IETF85-Plenary] describes a case where a
   particular home gateway upgrade had caused a (mistaken!) drop in
   line rate.

   A more extensive deployment of the measurement capability to every
   broadband line would enable an operator to identify issues unique to
   a single customer. Overall, large-scale measurements can help an
   operator fix the fault more rapidly and/or allow the affected
   customers to be informed about what is happening. More accurate
   information enables the operator to reassure customers and take more
   rapid and effective action to cure the problem.

   Often customers experience poor broadband due to problems in the
   home network - the ISP's network is fine. For example, they may have
   moved too far away from their wireless access point. Anecdotally, a
   large fraction of customer calls about fixed BB problems are due to
   in-home wireless issues. These issues are expensive and frustrating
   for an operator, as they are extremely hard to diagnose and solve.
   The operator would like to narrow down whether the problem is in the
   home (with the home network, edge device or home gateway), in the
   operator's network, or with an application service. The operator
   would like two capabilities. Firstly, self-help tools that customers
   use to improve their own service or understand its performance
   better, for example to re-position their devices for better WiFi
   coverage. Secondly, on-demand tests that the operator can run
   instantly - so the call center person answering the phone (or
   e-chat) could trigger a test and get the result whilst the customer
   is still in an on-line session.
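   As an illustration of localizing a fault by grouping subset
   measurements (as in the exchange and home-gateway examples above),
   the following is a minimal sketch; the field names, tolerance and
   sample data are illustrative assumptions, not part of any LMAP
   specification:

   ```python
   # Sketch: group per-line speed results by shared topology point
   # (exchange) and by home-gateway model, then flag any group whose
   # median rate falls well below the expected baseline.
   from collections import defaultdict
   from statistics import median

   def flag_degraded_groups(results, baseline_mbps, tolerance=0.8):
       """results: dicts with 'exchange', 'gateway_model', 'rate_mbps'.
       Returns group keys whose median rate < tolerance * baseline."""
       groups = defaultdict(list)
       for r in results:
           groups[("exchange", r["exchange"])].append(r["rate_mbps"])
           groups[("gateway", r["gateway_model"])].append(r["rate_mbps"])
       return [key for key, rates in groups.items()
               if median(rates) < tolerance * baseline_mbps]

   results = [
       {"exchange": "EX1", "gateway_model": "HG-A", "rate_mbps": 38.0},
       {"exchange": "EX1", "gateway_model": "HG-A", "rate_mbps": 39.0},
       {"exchange": "EX1", "gateway_model": "HG-B", "rate_mbps": 37.0},
       {"exchange": "EX1", "gateway_model": "HG-B", "rate_mbps": 36.0},
       {"exchange": "EX2", "gateway_model": "HG-A", "rate_mbps": 12.0},
       {"exchange": "EX2", "gateway_model": "HG-B", "rate_mbps": 11.0},
   ]
   # Only the EX2 exchange group is degraded in this sample.
   print(flag_degraded_groups(results, baseline_mbps=40.0))
   ```

   A real measurement system would of course use many more lines per
   group and robust statistics, but the principle - degradation common
   to a topology point or equipment type stands out against the panel -
   is the same.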
4 Details of Regulator Use Case

4.1 Providing transparent performance information

   Some regulators publish information about the quality of the various
   Internet access services provided in their national market. Quality
   information about service offers could include speed, delay, and
   jitter. Such information can be published to facilitate end users'
   choice of service provider and offer. Regulators may also check the
   accuracy of the marketing claims of Internet service providers, and
   may encourage all ISPs to use the same metrics in their service
   level contracts. The goal of these transparency mechanisms is to
   promote competition for end users and potentially also to help
   content, application, service and device providers develop their
   Internet offerings.

   The published information needs to be:

   o  Accurate - the measurement results must be correct and not
      influenced by errors or side effects. The results should be
      reproducible and consistent over time.

   o  Comparable - common metrics should be used across different ISPs
      and service offerings, and over time, so that measurement results
      can be compared.

   o  Meaningful - the metrics used for measurements need to reflect
      what end users value about their broadband Internet access
      service.

   o  Reliable - the number and distribution of measurement agents, and
      the statistical processing of the raw measurement data, need to
      be appropriate.

   In practical terms, regulators may measure network performance from
   users towards multiple content and application providers, including
   dedicated test measurement servers. Measurement probes are
   distributed to a 'panel' of selected end users. The panel covers all
   the operators and packages in the market, spread over urban,
   suburban and rural areas, and often includes both fixed and mobile
   Internet access.
   Periodic tests running on the probes can, for example, measure
   actual speed at peak and off-peak hours, as well as other detailed
   quality metrics like delay and jitter. The collected data then goes
   through statistical analysis, deriving estimates for the whole
   population. Summary information, such as a service quality index, is
   published regularly, perhaps alongside more detailed information.

   The regulator can also enable end users to monitor the performance
   of their own broadband Internet access service. They might use this
   information to check that the performance meets that specified in
   their contract, or to understand whether their current subscription
   is the most appropriate.

4.2 Measuring broadband deployment

   Regulators may also want to monitor the improvement over time of
   actual broadband Internet access performance in a specific country
   or region. The motivation is often to evaluate the effect of
   stimulated growth over time, when a government has set a strategic
   goal for high-speed broadband deployment, whether in absolute terms
   or benchmarked against other countries. An example of such an
   initiative is [DAE]. The actual measurements can be made in the same
   way as described in Section 4.1.

4.3 Monitoring traffic management practices

   A regulator may want to monitor traffic management practices, or
   compare the performance of the Internet access service with
   specialized services offered in parallel to, but separate from, the
   Internet access service (for example, IPTV). A regulator could
   monitor for departures from application agnosticism, such as
   blocking or throttling of traffic from specific applications, or
   preferential treatment of specific applications. A measurement
   system could send, or passively monitor, application-specific
   traffic and then measure in detail the transfer of the different
   packets.
   Whilst it is relatively easy to measure port blocking, it is a
   research topic how to detect other types of differentiated
   treatment. The paper "Glasnost: Enabling End Users to Detect Traffic
   Differentiation" [M-Labs NSDI 2010] and the follow-on tool Glasnost
   [Glasnost] are examples of work in this area.

   A regulator could also monitor the performance of the broadband
   service over time, to try to detect whether the specialized service
   is provided at the expense of the Internet access service.
   Comparison between ISPs or between different countries may also be
   relevant for this kind of evaluation.

   The motivation for a regulator monitoring such traffic management
   practices is that regulatory approaches related to net neutrality
   and the open Internet have been introduced in some jurisdictions.
   Examples of such efforts are the Internet policy outlined by the
   Body of European Regulators for Electronic Communications Guidelines
   for quality of service [BEREC Guidelines] and the US FCC Preserving
   the Open Internet Report and Order [FCC R&O]. Although legal
   challenges can change the status of policy, the take-away for LMAP
   purposes is that policy-makers are looking for measurement solutions
   to assist them in discovering biased treatment of traffic flows. The
   exact definitions and requirements vary from one jurisdiction to
   another.

5 Implementation Options

   There are several ways of implementing a measurement system. The
   choice may be influenced by the details of the particular use case
   and what the most important criteria are for the regulator, ISP or
   third party operating the measurement system.

   One type of probe is a special hardware device that is connected
   directly to the home gateway. The devices are deployed to a
   carefully selected panel of end users, and they perform measurements
   according to a defined schedule.
   The schedule can run throughout the day, to allow continuous
   assessment of the network. Careful design ensures that measurements
   do not detrimentally impact the home user experience or corrupt the
   results by testing when the user is also using the broadband line.
   The system is therefore tightly controlled by the operator of the
   measurement system. One advantage of this approach is that it is
   possible to get reliable benchmarks for the performance of a network
   with only a few devices. One disadvantage is that it would be
   expensive to deploy hardware devices on a mass scale sufficient to
   understand the performance of the network at the granularity of a
   single broadband user.

   Another type of probe involves implementing the measurement
   capability as a webpage or an "app" that end users are encouraged to
   download onto their mobile phone or computing device. Measurements
   are triggered by the end user; for example, the user interface may
   have a button to "test my broadband now". One advantage of this
   approach is that the performance is measured to the end user, rather
   than to the home gateway, and so includes the home network. Another
   difference is that the system is much more loosely controlled, as
   the panel of end users and the schedule of tests are determined by
   the end users themselves rather than by the measurement system. It
   would be easier to achieve large scale; however, it is harder to get
   comparable benchmarks, as the measurements are affected by the home
   network, and the population is self-selecting and so potentially
   biased towards those who think they have a problem. This could be
   alleviated by stimulating widespread downloading of the app and by
   careful post-processing of the results to reduce biases.

   There are several other possibilities.
   For example, as a variant on the first approach, the measurement
   capability could be implemented as software embedded in the home
   gateway, which would make it more viable to have the capability on
   every user line. As a variant on the second approach, the end user
   could initiate measurements in response to a request from the
   measurement system.

   The operator of the measurement system should be careful to ensure
   that measurements do not detrimentally impact users. Potential
   issues include:

   o  Measurement traffic generated on a particular user's line may
      impact that end user's quality of experience. The danger is
      greater for measurements that generate a lot of traffic over a
      lengthy period.

   o  The measurement traffic may impact that particular user's bill or
      traffic cap.

   o  The measurement traffic from several end users may, in
      combination, congest a shared link.

   o  The traffic associated with the control and reporting of
      measurements may overload the network. The danger is greater
      where the traffic associated with many end users is synchronized.

6 Conclusions

   Large-scale measurements of broadband performance are useful for
   both network operators and regulators. Network operators would like
   to use measurements to help them better understand the quality
   experienced by their customers, identify problems in the network and
   design network improvements. Regulators would like to use
   measurements to help promote competition between network operators,
   stimulate the growth of broadband access and monitor 'net
   neutrality'. There are other use cases that are not the focus of the
   initial LMAP charter (although it is expected that the mechanisms
   developed would be readily applied); for example, end users would
   like to use measurements to help identify problems in their home
   network and to monitor the performance of their broadband provider.
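   Two of the safeguards discussed in Section 5 - running active tests
   only when the user's line is quiet, and de-synchronizing reporting
   traffic across probes - can be sketched as follows. All function
   names and thresholds here are illustrative assumptions, not part of
   the LMAP framework:

   ```python
   # Sketch of two measurement-impact safeguards (hypothetical names
   # and thresholds).
   import random

   def report_time(nominal_epoch, spread_seconds=3600):
       # Spread each probe's report uniformly over a window after the
       # nominal time, so many probes do not report simultaneously and
       # congest a shared link or overload the collector.
       return nominal_epoch + random.uniform(0, spread_seconds)

   def line_is_quiet(recent_user_bytes, threshold_bytes=10_000):
       # Run an active test only when recent user traffic is (nearly)
       # zero, so the test neither degrades the user's experience nor
       # is corrupted by competing user traffic.
       return recent_user_bytes < threshold_bytes

   print(line_is_quiet(512))      # quiet line: test may run
   print(line_is_quiet(250_000))  # user is busy: skip the test
   ```

   A deployed system would also need to account for the user's data
   allowance and for coordination across probes sharing a link, as
   noted above.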
   From consideration of the various use cases, several common themes
   emerge, whilst there are also some detailed differences. These
   characteristics guide the development of LMAP's framework,
   information model and protocol.

   A measurement capability is needed across a wide range of
   heterogeneous environments. Tests may be needed in the home network,
   in the ISP's network or beyond; they may measure a fixed or wireless
   network; they may measure just the access network or a path across
   several networks, at least some of which are not operated by the
   measurement provider.

   There is a role for both standardized and non-standardized
   measurements. For example, a regulator would like to publish
   standardized performance metrics for all network operators, whilst
   an ISP may need its own tests to understand some feature special to
   its network. Most use cases need active measurements, which create
   and measure specific test traffic, but some need passive
   measurements of the end user's traffic.

   Regardless of the tests being operated, there needs to be a way to
   demand or schedule the tests. Most use cases need a regular schedule
   of measurements, but sometimes ad hoc testing is needed, for example
   for troubleshooting. It needs to be ensured that measurements do not
   affect the user experience and are not affected by user traffic
   (unless desired). In addition, there needs to be a common way to
   collect the results. Standardization of this control and reporting
   functionality allows the operator of a measurement system to buy the
   various components from different vendors.

   After the measurement results are collected, they need to be
   understood and analyzed. Often it is sufficient to measure only a
   small subset of end users, but per-line fault diagnosis requires the
   ability to test every individual line.
   Analysis requires an accurate definition and understanding of where
   the test points are, as well as contextual information about the
   topology, line, product and the subscriber's contract. The actual
   analysis of results is beyond the scope of LMAP, as is the key
   challenge of how to integrate the measurement system into a network
   operator's existing tools for diagnostics and network planning.

   Finally, the test data, along with any associated network, product
   or subscriber contract data, is commercial or private information
   and needs to be protected.

7 Security Considerations

   Large-scale measurements raise several potential security, privacy
   (data protection) [RFC6973] and business sensitivity issues.

   1. A malicious party may try to gain control of probes to launch DoS
      (Denial of Service) attacks at a target. A DoS attack could be
      targeted at a particular end user or set of end users, a certain
      network, or a specific service provider.

   2. A malicious party may try to gain control of probes to create a
      platform for pervasive monitoring [RFC7258], or for more targeted
      monitoring. [RFC7258] summarises the threats as: "an attack may
      change the content of the communication, record the content or
      external characteristics of the communication, or through
      correlation with other communication events, reveal information
      the parties did not intend to be revealed." For example, a
      malicious party could distribute to the probes a new measurement
      test that recorded (and later reported) information of maleficent
      interest. Similar concerns also arise if the measurement results
      are intercepted or corrupted.
* From the end user's perspective, the concerns include a malicious party monitoring the traffic they send and receive, who they communicate with and the websites they visit, and information about their behaviour such as when they are at home and the location of their devices. Some of the concerns may be greater when the MA is on the end user's device rather than on their home gateway.

* From the network operator's perspective, the concerns include the leakage of commercially sensitive information about the design and operation of their network, their customers and suppliers. Some threats are indirect; for example, the attacker could reconnoitre potential weaknesses, such as open ports and paths through the network, enabling it to launch an attack later.

* From the regulator's perspective, the concerns include distortion of the measurement tests or alteration of the measurement results. Also, a malicious network operator could try to identify the broadband lines that the regulator was measuring and prioritise that traffic ("game the system").

3. A measurement system may fail to obtain the end user's informed consent, fail to specify a specific purpose in the consent, or use the collected information for secondary purposes beyond those specified.

4. A measurement system may fail to indicate who is responsible for the collection and processing of personal data and who is responsible for fulfilling the rights of users. The responsible party (often termed the "data controller") should, as good practice, consider issues such as defining: the purpose for which the data is collected and used; how the data is stored, accessed, and processed; how long it is retained; and how the end user can view, update, and even delete their personal data.
If anonymized personal data is shared with a third party, the data controller should consider the possibility that the third party can de-anonymize it by combining it with other information.

These security and privacy issues will need to be considered carefully by any measurement system. In the context of LMAP, the [framework] considers them further, along with some potential mitigations. Other LMAP documents will specify protocol(s) that enable the measurement system to instruct a probe about what measurements to make and that enable the probe to report the measurement results. Those documents will need to discuss solutions to the security and privacy issues. However, the protocol documents will not consider the actual usage of the measurement information; many use cases can be envisaged and, earlier in this document, we have described some likely ones for the network operator and regulator.

8 IANA Considerations

None

Contributors

The information in this document is partially derived from text written by the following contributors:

James Miller  jamesmilleresquire@gmail.com

Rachel Huang  rachel.huang@huawei.com

Informative References

[IETF85-Plenary] Crawford, S., "Large-Scale Active Measurement of Broadband Networks", http://www.ietf.org/proceedings/85/slides/slides-85-iesg-opsandtech-7.pdf, 'example' from slide 18

[Extend TCP] Honda, M., Nishida, Y., Raiciu, C., Greenhalgh, A., Handley, M., and H. Tokuda, "Is it Still Possible to Extend TCP?", Proc. ACM Internet Measurement Conference (IMC), Berlin, Germany, November 2011. http://www.ietf.org/proceedings/82/slides/IRTF-1.pdf

[framework] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., Aitken, P., and A. Akhter,
"A framework for large-scale 672 measurement platforms (LMAP)", 673 http://datatracker.ietf.org/doc/draft-ietf-lmap-framework/ 675 [RFC6973] Cooper, A., Tschofenig, H.z., Aboba, B., Peterson, J., 676 Morris, J., Hansen, M., and R. Smith, "Privacy 677 Considerations for Internet Protocols", RFC 6973, July 678 2013. 680 [RFC7258] Farrell, S., Tschofenig, H., "PPervasive Monitoring Is an 681 Attack", RFC 7258, May 2014. 683 [FCC R&O] United States Federal Communications Commission, 10-201, 684 "Preserving the Open Internet, Broadband Industries 685 Practices, Report and Order", 686 http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-10- 687 201A1.pdf 689 [BEREC Guidelines] Body of European Regulators for Electronic 690 Communications, "BEREC Guidelines for quality of service 691 in the scope of net neutrality", 692 http://berec.europa.eu/eng/document_register/ 693 subject_matter/berec/download/0/1101-berec-guidelines-for- 694 quality-of-service-_0.pdf 696 [M-Labs NSDI 2010] M-Lab, "Glasnost: Enabling End Users to Detect 697 Traffic Differentiation", 698 http://www.measurementlab.net/download/AMIfv945ljiJXzG- 699 fgUrZSTu2hs1xRl5Oh-rpGQMWL305BNQh- 700 BSq5oBoYU4a7zqXOvrztpJhK9gwk5unOe-fOzj4X-vOQz_HRrnYU- 701 aFd0rv332RDReRfOYkJuagysstN3GZ__lQHTS8_UHJTWkrwyqIUjffVeDxQ/ 703 [Glasnost] M-Lab tool "Glasnost", http://mlab-live.appspot.com/tools/ 704 glasnost 706 [P.800] ITU-T, "SERIES P: TELEPHONE TRANSMISSION QUALITY Methods for 707 objective and subjective assessment of quality", 708 https://www.itu.int/rec/dologin_pub.asp?lang=e&id=T-REC- 709 P.800-199608-I!!PDF-E&type=items 711 [MOS] Wikipedia, "Mean Opinion Score", 712 http://en.wikipedia.org/wiki/Mean_opinion_score 714 [DAE] Digital Agenda for Europe, COM(2010)245 final, Communication 715 from the Commission to the European Parliament, the 716 Council, the European Economic and Social Committee and 717 the Committee of the Regions, http://eur- 718 lex.europa.eu/legal- 719 
content/EN/TXT/PDF/?uri=CELEX:52010DC0245&from=EN

Authors' Addresses

Marc Linsner
Cisco Systems, Inc.
Marco Island, FL
USA

Email: mlinsner@cisco.com

Philip Eardley
BT
B54 Room 77, Adastral Park, Martlesham
Ipswich, IP5 3RE
UK

Email: philip.eardley@bt.com

Trevor Burbridge
BT
B54 Room 77, Adastral Park, Martlesham
Ipswich, IP5 3RE
UK

Email: trevor.burbridge@bt.com

Frode Sorensen
Norwegian Communications Authority (Nkom)
Lillesand
Norway

Email: frode.sorensen@nkom.no