INTERNET-DRAFT                                             Marc Linsner
Intended Status: Informational                            Cisco Systems
Expires: April 5, 2014                                   Philip Eardley
                                                        Trevor Burbridge
                                                                     BT
                                                        October 2, 2013

             Large-Scale Broadband Measurement Use Cases
                   draft-linsner-lmap-use-cases-04

Abstract

Measuring broadband performance on a large scale is important for network diagnostics by providers and users, as well as for public policy. To conduct such measurements, user networks gather data, either on their own initiative or as instructed by a measurement controller, and then upload the measurement results to a designated measurement server. Understanding the various scenarios and users of broadband performance measurement is essential to developing the system requirements. The details of the measurement metrics themselves are beyond the scope of this document.

Status of this Memo

This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/1id-abstracts.html

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html

Copyright and License Notice

Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1 Introduction
  1.1 Terminology
2 Use Cases
  2.1 Internet Service Provider (ISP) Use Case
  2.2 Regulators
    2.2.1 Measurement Providers
    2.2.2 Benchmarking and competitor insight
  2.3 Fixed and Mobile Service
3 Details of ISP Use Case
  3.1 Existing Capabilities and Shortcomings
  3.2 Understanding the quality experienced by customers
  3.3 Understanding the impact and operation of new devices and technology
  3.4 Design and planning
  3.5 Identifying, isolating and fixing network problems
  3.6 Comparison with the regulator use case
  3.7 Conclusions
4 Security Considerations
5 IANA Considerations
Appendix A. End User Use Case
Contributors
Normative References
Authors' Addresses

1 Introduction

[LMAP-REQ] describes three use cases to be considered when deriving the requirements for a large-scale measurement solution. This document describes those use cases in further detail and adds further use cases.

1.1 Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].

2 Use Cases

2.1 Internet Service Provider (ISP) Use Case

An ISP, or indeed any other network operator, needs to understand the performance of its networks, the performance of its suppliers (downstream and upstream networks), the performance of services, and the impact that such performance has on the experience of its customers. In addition, it may also want visibility of its competitors' networks and services, in order to be able to benchmark and improve its own offerings. The main measurement-based processes that ISPs operate include:

o Identifying, isolating and fixing problems in the network, in services, or with CPE and end user equipment. Such problems may be common to a point in the network topology (e.g. a single exchange), common to a vendor or equipment type (e.g. a line card or home gateway), or unique to a single user line (e.g. copper access). Part of this process may also be helping users understand whether a problem lies in their home network or with an over-the-top service, rather than with their broadband (BB) product.
o Design and planning. By identifying the end user experience, the ISP can design and plan its network to ensure specified levels of user experience. Services may be moved closer to end users, services upgraded, the impact of QoS assessed, or more capacity deployed at certain locations. SLAs may be defined at network or product boundaries.

o Benchmarking and competitor insight. Operating sample panels across competitor products can enable an ISP to assess where it stands in the market, identify opportunities where competing products use different technology, and assess the performance of network suppliers that are common to both operators.

o Understanding the quality experienced by customers. Alongside benchmarking competitors, an operator can gain better insight into users' service through a sample panel of its own customers. The end-to-end perspective matters, across home/enterprise networks, peering points, CDNs, etc.

o Understanding the impact and operation of new devices and technology. As a new product is deployed, or a new technology introduced into the network, it is essential that its operation and impact on other services is measured. This also helps to quantify the advantage that the new technology brings and supports the business case for larger roll-out.

2.2 Regulators

Regulators in jurisdictions around the world are responding to consumers' adoption of broadband technology for traditional telecommunications and media services by reviewing historical approaches to regulating these industries and services, and in some cases modifying existing approaches or developing new solutions.

Some jurisdictions have responded to a perceived need for better information about broadband performance, for use in developing regulatory policies and approaches for broadband technologies, by establishing large-scale measurement programs. Programs such as the U.S. Federal Communications Commission's Measuring Broadband America, U.K. Ofcom's UK Broadband Speeds reports, and a growing list of others employ a diverse set of operational and technical approaches to gathering data in scientifically and statistically robust ways, in order to analyse and report on diverse aspects of broadband performance.

While each jurisdiction responds to distinct consumer, industry and regulatory concerns, much commonality exists in the need to produce datasets that can compare multiple broadband providers, diverse technical solutions, geographic and regional distributions, and marketed and provisioned levels and combinations of broadband services.
Regulators' role in the development and enforcement of broadband policies also requires that the measurement approaches meet a high standard of verifiability, accuracy and fairness, to support valid and meaningful comparisons of broadband performance.

LMAP standards could answer regulators' shared needs by providing scalable, cost-effective, scientifically robust solutions for measuring and collecting broadband performance information.

2.2.1 Measurement Providers

In some jurisdictions, measurement is carried out by a measurement provider. Measurement providers measure network performance from users to multiple content providers, in order to show the performance of the actual network. Users need to know the performance of the network they are using; in addition, they need to know the performance of other ISPs serving the same location, as input when selecting a network. Measurement providers publish the measurement results together with the measurement methods and parameters used.

2.2.2 Benchmarking and competitor insight

An operator may want to check that the results reported by the regulator match its own belief about how its network is performing. There is considerable variation in underlying line performance for customers on (say) a nominal 20Mb/s service, so it is possible for two panels of ~100 probes to produce different results.

An operator may also want a more detailed understanding of its competitors, beyond that reported by the regulator - probably by getting a third party to establish a panel of probes in its rival ISPs. Measurements could, for example, help an operator: target its marketing by showing that it is 'best for video streaming' but 'worst for web browsing'; gain detailed insight into the strengths and weaknesses of different access technologies (DSL vs cable vs wireless); understand market segments that it currently doesn't serve; and so on.

The characteristics of large scale measurements that emerge from these examples are very similar to those of the use case above:

1. Averaged data (over say 1 month) is generally OK.

2. A panel (subset) of only a few customers is OK.

3. Both active and passive measurements are possible, though the former seem easier.

4. Regularly scheduled tests are fine (providing active tests back off if the customer is using the line). Scheduling can be done some time ahead ('starting tomorrow, run the following test every day'); see the sketch after this list.

5. The performance metrics are whatever the operator wants to benchmark. As well as QoE measures, it may want to measure some network-specific parameters.

6. The performance of different network segments matters, including end-to-end, as well as the performance of the access link.
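As a concrete illustration of item 4, a schedule can be expressed declaratively and the next run time computed from it. The sketch below (Python) is illustrative only: the field names and the test name are invented and do not correspond to any LMAP protocol element.

<CODE BEGINS>
# Illustrative only: a declarative schedule of the kind described in
# item 4 ('starting tomorrow, run the following test every day').
# Field names and the test name are invented, not LMAP elements.
import math
from datetime import datetime, timedelta

schedule = {
    "test": "download_rate",                # hypothetical test name
    "start": datetime(2013, 10, 3, 2, 0),   # 'tomorrow' at 02:00
    "period": timedelta(days=1),            # 'every day'
}

def next_run(sched, now):
    """Return the first scheduled run time at or after 'now'."""
    if now <= sched["start"]:
        return sched["start"]
    periods = math.ceil((now - sched["start"]) / sched["period"])
    return sched["start"] + periods * sched["period"]

print(next_run(schedule, datetime(2013, 10, 14, 9, 30)))
# -> 2013-10-15 02:00:00
<CODE ENDS>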
2.3 Fixed and Mobile Service

From a consumer perspective, the differentiation between fixed broadband and mobile (cellular) service is blurring, as the applications used are very similar. Hence, similar measurements will take place on both fixed and mobile broadband services.

3 Details of ISP Use Case

3.1 Existing Capabilities and Shortcomings

In order to get reliable benchmarks, some ISPs use vendor-provided hardware measurement platforms that connect directly to the home gateway. These devices typically perform a continuous test schedule, allowing the operation of the network to be assessed continually throughout the day. Careful design ensures that they do not detrimentally impact the home user experience or corrupt the test results by testing when the user is also using the broadband line.

While the test capabilities of such probes are good, they are simply too expensive to deploy at mass scale to enable a detailed understanding of network performance (e.g. to the granularity of a single backhaul or a single user line). In addition, there is no easy way to operate similar tests on other devices (e.g. a set-top box) or to manage application-level tests (such as IPTV) using the same control and reporting framework.

ISPs also use speed and other diagnostic tests from user-owned devices (such as PCs, tablets or smartphones). These often use browser-based technology to conduct tests against servers in the ISP network to confirm the operation of the user's BB access line. These tests can be helpful for a user to understand whether their BB line has a problem, and for dialogue with a helpdesk. However, they are not able to perform continuous testing, and the uncontrolled device and home network mean that results are not comparable. Producing statistics across such tests is very dangerous, as the population is self-selecting (e.g. those who think they have a problem).

Faced with a gap in current vendor offerings, some ISPs have taken the approach of placing proprietary test capabilities on their home gateway and other consumer device offerings (such as set-top boxes). This also means that different device platforms may have different and largely incomparable tests, developed by different company sub-divisions and managed by different systems.

3.2 Understanding the quality experienced by customers

Operators want to understand the quality of experience (QoE) of their broadband customers. This understanding can be gained through a "panel", i.e. a measurement probe deployed to a few hundred or thousand of an operator's customers. The panel needs to be a representative sample for each of the operator's technologies (FTTP, FTTC, ADSL...) and broadband options (80Mb/s, 20Mb/s, basic...), with roughly 100 probes for each. The operator would like an end-to-end view of the service, rather than (say) just the access portion. So, as well as simple network statistics like speed and loss rates, it wants to understand what the service feels like to the customer. This involves relating the pure network parameters to something like a 'mean opinion score', which will be service dependent (for instance, web browsing QoE is largely determined by latency once speed is above a few Mb/s).

An operator will also want compound metrics such as "reliability", which might involve packet loss, DNS failures, re-training of the line, video streaming under-runs, etc.

The operator really wants to understand the end-to-end service experience. However, the home network (Ethernet, wifi, powerline) is highly variable and outside its control. To date, operators (and regulators) have instead measured performance from the home gateway. However, mobile operators clearly must include the wireless link in the measurement.

Active measurements are the most obvious approach, i.e. special measurement traffic is sent by - and to - the probe. In order not to degrade the service of the customer, the measurement traffic should only be sent when the user is silent, and it shouldn't reduce the customer's data allowance.
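The 'send only when the user is silent' rule can be approximated by watching the line's byte counters and deferring the test while user traffic is visible. The sketch below (Python) assumes a Linux-style probe that can read /proc/net/dev; the interface name, threshold and back-off values are invented for illustration.

<CODE BEGINS>
# Illustrative sketch: defer an active test while user traffic is
# present. Assumes Linux /proc/net/dev counters are available; the
# interface name and 'quiet line' threshold are invented.
import time

IFACE = "eth0"               # hypothetical WAN-facing interface
QUIET_BYTES_PER_SEC = 2000   # treat the line as idle below this rate

def rx_tx_bytes(iface):
    """Return (rx_bytes, tx_bytes) for iface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, rest = line.partition(":")
            if sep and name.strip() == iface:
                fields = rest.split()
                return int(fields[0]), int(fields[8])
    raise LookupError(iface)

def line_is_quiet(iface, interval=1.0):
    rx1, tx1 = rx_tx_bytes(iface)
    time.sleep(interval)
    rx2, tx2 = rx_tx_bytes(iface)
    return ((rx2 - rx1) + (tx2 - tx1)) / interval < QUIET_BYTES_PER_SEC

def run_when_quiet(test, retries=5, backoff_s=60):
    """Run 'test' only when the line is idle; give up after 'retries'."""
    for _ in range(retries):
        if line_is_quiet(IFACE):
            return test()
        time.sleep(backoff_s)    # user is active: back off and retry
    return None                  # skip this cycle rather than degrade QoE
<CODE ENDS>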
The other approach is passive measurement of the customer's real traffic. Its advantage is that it measures what the customer actually does, but it creates extra variability (different traffic mixes give different results) and, in particular, it raises privacy concerns.

From an operator's viewpoint, understanding customers better enables it to offer better services. Also, simple metrics can be more easily understood by the senior managers who make investment decisions and by sales and marketing.

The characteristics of large scale measurements that emerge from these examples:

1. Averaged data (over say 1 month) is generally OK.

2. A panel (subset) of only a few customers is OK.

3. Both active and passive measurements are possible, though the former seem easier.

4. Regularly scheduled tests are fine (providing active tests back off if the customer is using the line). Scheduling can be done some time ahead ('starting tomorrow, run the following test every day').

5. The operator needs to devise metrics and compound measures that represent the QoE.

6. The end-to-end service matters, and not (just) the access link performance.

3.3 Understanding the impact and operation of new devices and technology

Another type of measurement is to test new capabilities and services before they are rolled out. For example, the operator may want to: check whether a customer can be upgraded to a new broadband option; understand the impact of IPv6 before making it available to customers (will v6 packets get through, what will the latency be to major websites, which transition mechanisms will be most appropriate?); check whether a new capability can be signalled using TCP options (how often will it be blocked by a middlebox? - along the lines of existing experiments [Extend TCP]); investigate a quality of service mechanism (e.g. checking whether Diffserv markings are respected on some path); and so on.

The characteristics of large scale measurements that emerge from these examples are:

1. New tests need to be devised that test a prospective capability.

2. Most of the tests are probably simply "send one packet and record what happens", so an occasional one-off test is sufficient (see the sketch after this list).

3. A panel (subset) of only a few customers is probably OK to gain an understanding of the impact of a new technology, but it may be necessary to check an individual line where the roll-out is per customer.

4. An active measurement is needed.
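To make item 2 concrete, the sketch below (Python) implements a one-off active probe for the Diffserv example: it sends a single UDP datagram with the EF code point set and records whether a reply arrives and how long it took. The target address (a documentation address) and the assumption of a UDP echo service at the far end are invented; checking whether the marking itself survived the path would additionally require a cooperating server that reports the DSCP it received.

<CODE BEGINS>
# Illustrative one-off probe: send one EF-marked packet and record
# what happens. The target and the echo service are assumptions.
import socket, time

TARGET = ("192.0.2.1", 7)   # documentation address; assumes UDP echo
EF_TOS = 0xB8               # DSCP EF (46) in the upper six TOS bits

def one_off_probe(target, timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
    s.settimeout(timeout)
    start = time.monotonic()
    try:
        s.sendto(b"lmap-probe", target)
        s.recvfrom(2048)
        return {"result": "reply",
                "rtt_ms": (time.monotonic() - start) * 1000.0}
    except socket.timeout:
        return {"result": "no reply (lost, blocked or filtered)"}
    finally:
        s.close()

print(one_off_probe(TARGET))
<CODE ENDS>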
3.4 Design and planning

Operators can use large scale measurements to help with their network planning - proactive activities to improve the network.

For example, by probing from several different vantage points the operator can see that a particular group of customers has performance below that expected during peak hours, which should help capacity planning. Naturally, operators already have tools to help with this - a network element reports its individual utilisation (and perhaps other parameters). However, making measurements across a path rather than at a point may make it easier to understand the network. There may also be parameters like bufferbloat that aren't currently reported by equipment and/or that are intrinsically path metrics.

With better information, capacity planning and network design can be more effective. Such planning typically uses simulations to emulate the measured performance of the current network and understand the likely impact of new capacity and potential changes to the topology. It may also be possible to run stress tests for risk analysis, for example 'if a whizzy new application (or device) becomes popular, which parts of my network would struggle, what would be the impact on other services, and how many customers would be affected?'. What-if simulations could help quantify the advantage that a new technology brings and support the business case for larger roll-out. This approach should allow good results with measurements from a limited panel of customers.

Another example is that the operator may want to monitor performance where there is a service level agreement. This could be with its own customers; enterprises especially may have an SLA. The operator can proactively spot when the service is degrading towards the SLA limit, and obtain information that enables more informed conversations with the customer at contract renewal.

An operator may also want to monitor the performance of its suppliers, to check whether they meet their SLA or to compare two suppliers if it is dual-sourcing. These could include its transit operator, CDNs, peering partners, video sources, a local network provider (for a global operator in countries where it doesn't have its own network), or even the whole network in the case of a virtual operator.

Through a better understanding of its own network and its suppliers, the operator should be able to focus investment more effectively - in the right place at the right time with the right technology.

The characteristics of large scale measurements emerging from these examples:

1. A key challenge is how to integrate results from measurements into existing network planning and management tools.

2. New tests may need to be devised for the what-if and risk analysis scenarios.

3. Capacity constraints first reveal themselves during atypical events (early warning), so measurements should be averaged over a much shorter time than in the sub use case discussed above (see the sketch after this list).

4. A panel (subset) of only a few customers is OK for most of the examples, but it should probably be larger than for the QoE sub use case (Section 3.2), and the operator may also want to change regularly who is in the subset, in order to sample the revealing outliers.

5. Measurements over a segment of the network ("end-to-middle") are needed, in order to refine understanding, as well as end-to-end measurements.

6. The primary interest is in measuring specific network performance parameters rather than QoE.

7. Regularly scheduled tests are fine.

8. Active measurements are needed; passive ones probably aren't.
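A minimal sketch (Python) of the early-warning idea in item 3 and the SLA paragraph above: average a metric over a short window and flag when it drifts towards an agreed limit. The window size, warning margin and sample values are all invented for illustration.

<CODE BEGINS>
# Illustrative early-warning check: short-window average against an
# SLA bound. Window, margin and sample values are invented.
from collections import deque

class SlaWatch:
    def __init__(self, limit_ms, margin=0.8, window=12):
        self.limit_ms = limit_ms             # agreed latency bound
        self.margin = margin                 # warn at 80% of the bound
        self.samples = deque(maxlen=window)  # short window (item 3)

    def add(self, latency_ms):
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        if avg >= self.limit_ms * self.margin:
            return ("WARNING: windowed mean %.1f ms is nearing the "
                    "SLA bound of %d ms" % (avg, self.limit_ms))
        return None

watch = SlaWatch(limit_ms=50)
for latency in [30, 34, 39, 43, 46, 48]:     # a degrading trend
    alert = watch.add(latency)
    if alert:
        print(alert)
<CODE ENDS>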
3.5 Identifying, isolating and fixing network problems

Operators can use large scale measurements to help identify a fault more rapidly and to decide how to solve it.

Operators already have test and diagnostic tools, where a network element reports some problem or failure to a management system. However, many issues are not caused by a point failure but by something wider, and so will trigger too many alarms, whilst other issues cause degradation rather than failure and so trigger no alarm at all. Large scale measurements can help provide a more nuanced view that helps network management to identify and fix problems more rapidly and accurately. The network management tools may use simulations to emulate the network and so help identify a fault and assess possible solutions.

One example was described in [IETF85-Plenary]. The operator was running a measurement panel for the reasons discussed in Section 3.2. It was noticed that the performance of some lines had unexpectedly degraded. This led to a detailed (off-line) investigation, which discovered that a particular home gateway upgrade had caused a (mistaken!) drop in line rate.

Another example is that occasionally some internal network management event (like re-routing) can be customer-affecting (of course this is unusual). Such an event affects a whole group of customers, for instance those on the same DSLAM. Understanding this will help an operator fix the fault more rapidly, and/or allow the affected customers to be informed about what is happening, and/or allow the operator to request that they re-set their home hub (required to cure some conditions). More accurate information enables the operator to reassure customers and take more rapid and effective action to cure the problem.

There may also be problems unique to a single user line (e.g. copper access) that need to be identified.

Often customers experience poor broadband due to problems in the home network - the ISP's network is fine. For example, they may have moved too far away from their wireless access point. Perhaps 80% of customer calls about fixed BB problems are due to in-home wireless issues. These issues are expensive and frustrating for an operator, as they are extremely hard to diagnose and solve. The operator would like to narrow down whether the problem is in the home (with the home network, edge device or home gateway), in the operator's network, or with an over-the-top service. The operator would like two capabilities. Firstly, self-help tools that customers use to improve their own service or understand its performance better, for example to re-position their devices for better wifi coverage. Secondly, on-demand tests that the operator can run instantly, so that the call centre agent answering the phone (or e-chat) could trigger a test and get the result whilst the customer is still on the line (see the sketch after the list below).

The characteristics of large scale measurements emerging from these examples:

1. A key challenge is how to integrate results from measurements into the operator's existing test and diagnostics systems.

2. Results from the tests shouldn't be averaged.

3. Tests are generally run on an ad hoc basis, i.e. specific requests for immediate action.

4. "End-to-middle" measurements, i.e. across a specific network segment, are very relevant.

5. The primary interest is in measuring specific network performance parameters and not QoE.

6. New tests are needed, for example to check the home network (i.e. the connection from the home hub to a set-top box or to a tablet on wifi).

7. Active measurements are critical. Passive ones may be useful to help understand exactly what the customer is experiencing.
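No standard control protocol existed for such on-demand tests at the time of writing; the sketch below (Python) only shows the general shape of the exchange - a helpdesk tool asks a measurement agent to run a named test now and waits for the result in-call. The JSON field names, the faked result and the lack of message framing are all simplifications for illustration.

<CODE BEGINS>
# Illustrative only: an on-demand ('run it now') test request to a
# measurement agent. Field names are invented; this is not LMAP.
import json, socket, threading

def mock_ma(server_sock):
    """Stand-in MA: accept one request, 'run' the test, reply."""
    conn, _ = server_sock.accept()
    request = json.loads(conn.recv(4096).decode())
    result = {"test": request["test"], "line": request["line"],
              "download_mbps": 17.3}         # invented figure
    conn.sendall(json.dumps(result).encode())
    conn.close()

server = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=mock_ma, args=(server,), daemon=True).start()

# Call-centre side: trigger the test and wait for the result in-call.
with socket.create_connection(server.getsockname()) as c:
    c.sendall(json.dumps({"test": "download_rate",
                          "line": "L1234"}).encode())
    print(json.loads(c.recv(4096).decode()))
<CODE ENDS>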
3.6 Comparison with the regulator use case

Today an increasing number of regulators measure the performance of broadband operators. Typically they deploy a few thousand probes, each of which is connected directly to the broadband customer's home gateway and periodically measures the performance of that line. The regulator ensures it has a set of probes that covers the different ISPs and their different technology types and contract speeds, so that it can publish statistically reasonable average performances. Publicising the results stimulates competition and so pressurises ISPs to improve their broadband service.

The operator use case has similarities to, but several significant differences from, the regulator one:

o Performance metrics: A regulator and an operator are generally interested in the same performance metrics. Both would like standardised metrics, though this is more important for regulators.

o Sampling: The regulator wants an average across a representative sample of broadband customers (per operator, per type of BB contract). The operator also wants to measure individual lines with a problem.

o Timeliness: The regulator wants to know the (averaged) performance last quarter (say). For fault identification and fixing, the operator would like to know the performance at this moment, and also to instruct a test to be run at this moment (so the requirement is on both the testing and the reporting). Also, when testing the impact of new devices and technology, the operator is gaining insight about future performance.

o Scheduling: The regulator wants to run scheduled tests ('measure download rate every hour'). The operator also wants to run one-off tests; perhaps also the result of one test would trigger the operator to run a specific follow-up test.

o Pre-processing: A regulator would like standard ways of processing the collected data, to remove outlier measurements and to aggregate results, because these choices can significantly affect the final "averaged" result (see the sketch after this list). Pre-processing is not important for an operator.

o Historic data: The regulator wants to track how the (averaged) performance of each operator changes on (say) a quarterly basis. The operator would like detailed, recent historic data (e.g. for a customer with an intermittent fault over the last week).

o Scope: To date, regulators have measured the performance of access lines. An operator also wants to understand the performance of the home (or enterprise) network and of the end-to-end service, i.e. including backbone, core, peering and transit, CDNs and application/content servers.

o Control of testing and reporting: The operator wants detailed control. The regulator contracts out the whole measurement operation, and 'control' will be via negotiation with its contractor.

o Politics: A regulator has to take account of government targets (e.g. the UK government: "Our ambition (by 2015) is to provide superfast broadband (24Mbps) to at least 90 per cent of premises in the UK and to provide universal access to standard broadband with a speed of at least 2Mbps."). This may affect the metrics the regulator wants to measure, and certainly affects how they interpret results. The operator is more focused on winning market share.
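To make the pre-processing bullet concrete, the sketch below (Python) applies one possible - and here arbitrarily chosen - convention: discard the highest and lowest 10% of samples, then average per ISP. The panel data is invented; the point is that the processing rule can change the published ranking, which is why regulators would want such rules standardised.

<CODE BEGINS>
# Illustrative pre-processing: trim outliers, then average per ISP.
# The 10% trimming rule and the sample data are arbitrary choices.
def trimmed_mean(samples, trim=0.1):
    s = sorted(samples)
    k = int(len(s) * trim)                 # drop k values at each end
    kept = s[k:len(s) - k] if k else s
    return sum(kept) / len(kept)

panel = {   # invented download rates (Mb/s) from two ~10-probe panels
    "ISP-A": [18.2, 18.9, 19.1, 19.4, 19.6, 19.7, 19.8, 20.1, 20.2, 3.1],
    "ISP-B": [17.0, 17.4, 17.9, 18.2, 18.5, 18.8, 19.0, 19.2, 19.5, 19.9],
}

for isp, samples in panel.items():
    print("%s  raw mean %.1f  trimmed mean %.1f"
          % (isp, sum(samples) / len(samples), trimmed_mean(samples)))
# ISP-A's single 3.1 Mb/s outlier drags its raw mean below ISP-B's,
# yet its trimmed mean is the higher of the two - the processing rule
# decides the published ranking.
<CODE ENDS>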
3.7 Conclusions

There is a clear need, from an ISP point of view, to deploy a single coherent measurement capability across a wide number of heterogeneous devices, both in their own networks and in the home environment. These tests need to be able to operate from a wide number of locations towards a set of interoperable test points in the ISP's own network, as well as spanning supplier and competitor networks.

Regardless of the tests being operated, there needs to be a way to demand or schedule the tests and, critically, to ensure that such tests do not affect each other, are not affected by user traffic (unless this is desired), and do not affect the user experience. In addition, there needs to be a common way to collect and understand the results of such tests across different devices, to enable correlation and comparison between any network or service parameters.

Since network and service performance needs to be understood and analysed in the context of topology, line, product or contract information, it is critical that the test points are accurately defined and authenticated.

Finally, the test data, along with any associated network, product or contract data, is commercially sensitive or private information and needs to be protected.

4 Security Considerations

The transport of Controller to Measurement Agent (MA) traffic, and of MA to Collector traffic, must be protected in flight, and each entity must be known to and trusted by the other.

It is imperative that data identifying the end user is protected. Identifying data includes the end user's name, the time and location of the MA, and any attributes of a service - such as service location, including an IP address - that could be used to reconstruct physical location.
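The in-flight protection and mutual trust described above map naturally onto mutually authenticated TLS. A minimal sketch (Python), assuming certificates have already been provisioned to each party under a private CA; the file names are placeholders and no LMAP message format is implied.

<CODE BEGINS>
# Illustrative only: mutual TLS so that Controller and MA are each
# known and trusted to the other. File names are placeholders.
import socket, ssl

# Controller/Collector side: require and verify a client certificate.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("controller-cert.pem", "controller-key.pem")
server_ctx.load_verify_locations("measurement-ca.pem")
server_ctx.verify_mode = ssl.CERT_REQUIRED   # reject unknown MAs

# MA side: present a client certificate and verify the controller
# against the same private CA (hostname checking is on by default).
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_cert_chain("ma-cert.pem", "ma-key.pem")
client_ctx.load_verify_locations("measurement-ca.pem")

def ma_connect(host, port):
    raw = socket.create_connection((host, port))
    return client_ctx.wrap_socket(raw, server_hostname=host)
<CODE ENDS>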
5 IANA Considerations

TBD

Appendix A. End User Use Case

End users may want to determine whether their network is performing according to the specifications (e.g. service level agreements) offered by their Internet service provider, or they may want to diagnose whether components of their network path are impaired. End users may perform measurements on their own, using measurement infrastructure they provide themselves or infrastructure offered by a third party, or they may work directly with their network or application provider to diagnose a specific performance problem. Depending on the circumstances, measurements may occur at specific pre-defined intervals, or may be triggered manually. A system administrator may perform such measurements on behalf of the user. Example use cases of end-user-initiated performance measurements include:

o An end user may wish to perform diagnostics prior to calling their ISP to report a problem. The end user could connect an MA to different points of their home network and trigger manual tests. Different attachment points could include their in-home 802.11 network or an Ethernet port on the back of their BB modem.

o An OTT service provider or ISP may deploy an MA within its service platform to provide the end user with a capability to diagnose service issues. For instance, a video streaming service may include a manually initiated MA within its platform that has the Controller and Collector predefined. The end user could initiate performance tests manually, with results forwarded to both the provider and the end user via other means, such as a UI or email.

Contributors

The information in this document is partially derived from text written by the following contributors:

James Miller  jamesmilleresquire@gmail.com

Rachel Huang  rachel.huang@huawei.com

Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[LMAP-REQ] Schulzrinne, H., "Large-Scale Measurement of Broadband Performance: Use Cases, Architecture and Protocol Requirements", draft-schulzrinne-lmap-requirements, September 2012.

[IETF85-Plenary] Crawford, S., "Large-Scale Active Measurement of Broadband Networks", http://www.ietf.org/proceedings/85/slides/slides-85-iesg-opsandtech-7.pdf, 'example' from slide 18.

[Extend TCP] Honda, M., Nishida, Y., Raiciu, C., Greenhalgh, A., Handley, M., and H. Tokuda, "Is it Still Possible to Extend TCP?", Proc. ACM Internet Measurement Conference (IMC), November 2011, Berlin, Germany. http://www.ietf.org/proceedings/82/slides/IRTF-1.pdf

Authors' Addresses

Marc Linsner
Marco Island, FL
USA

EMail: mlinsner@cisco.com

Philip Eardley
BT
B54 Room 77, Adastral Park, Martlesham
Ipswich, IP5 3RE
UK

Email: philip.eardley@bt.com

Trevor Burbridge
BT
B54 Room 77, Adastral Park, Martlesham
Ipswich, IP5 3RE
UK

Email: trevor.burbridge@bt.com