INTERNET-DRAFT                                              Marc Linsner
Intended Status: Informational                             Cisco Systems
Expires: January 16, 2014                                 Philip Eardley
                                                        Trevor Burbridge
                                                                      BT
                                                           July 15, 2013

             Large-Scale Broadband Measurement Use Cases
                    draft-linsner-lmap-use-cases-03

Abstract

   Measuring broadband performance on a large scale is important for
   network diagnostics by providers and users, as well as for public
   policy.  To conduct such measurements, user networks gather data,
   either on their own initiative or as instructed by a measurement
   controller, and then upload the measurement results to a designated
   measurement server.  Understanding the various scenarios in which
   broadband performance is measured, and the users of such
   measurements, is essential to the development of the system
   requirements.  The details of the measurement metrics themselves
   are beyond the scope of this document.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

Copyright and License Notice

   Copyright (c) 2013 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1  Introduction
      1.1  Terminology
   2  Use Cases
      2.1  Internet Service Provider (ISP) Use Case
      2.2  End User Network Diagnostics
      2.3  Regulators
   3  Details of ISP Use Case
      3.1  Existing Capabilities and Shortcomings
      3.2  Understanding the quality experienced by customers
      3.3  Benchmarking and competitor insight
      3.4  Understanding the impact and operation of new devices and
           technology
      3.5  Design and planning
      3.6  Identifying, isolating and fixing network problems
      3.7  Comparison with the regulator use case
      3.8  Conclusions
   4  Security Considerations
   5  IANA Considerations
   6  Contributors
   7  References
      7.1  Normative References
      7.2  Informative References
   Authors' Addresses

1 Introduction

   The large-scale measurement requirements document [LMAP-REQ]
   describes three use cases to be considered in deriving the
   requirements used in developing the solution.  This document
   describes those use cases in further detail and adds further use
   cases.

1.1 Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

2 Use Cases

2.1 Internet Service Provider (ISP) Use Case

   An ISP, or indeed any other network operator, needs to understand
   the performance of its networks, the performance of its suppliers
   (downstream and upstream networks), the performance of services,
   and the impact that such performance has on the experience of its
   customers.  In addition, it may also desire visibility of its
   competitors' networks and services in order to benchmark and
   improve its own offerings.  Broadly, the measurement-based
   processes that ISPs operate include:

   o  Identifying, isolating and fixing problems in the network, in
      services, or with CPE and end user equipment.  Such problems may
      be common to a point in the network topology (e.g. a single
      exchange), common to a vendor or equipment type
      (e.g. line card or home gateway), or unique to a single user
      line (e.g. copper access).  Part of this process may also be
      helping users understand whether the problem lies in their home
      network or with an over-the-top service, rather than with their
      broadband product.

   o  Design and planning.  By identifying the end user experience,
      the ISP can design and plan its network to ensure specified
      levels of user experience.  Services may be moved closer to end
      users, services may be upgraded, the impact of QoS may be
      assessed, or more capacity may be deployed at certain locations.
      SLAs may be defined at network or product boundaries.

   o  Benchmarking and competitor insight.  Operating sample panels
      across competitor products can enable an ISP to assess where it
      sits in the market, identify opportunities where other products
      operate different technology, and assess the performance of
      network suppliers that are common to both operators.

   o  Understanding the quality experienced by customers.  Alongside
      benchmarking competitors, an operator can gain better insight
      into its users' service through a sample panel of its own
      customers.  The end-to-end perspective matters, across
      home/enterprise networks, peering points, CDNs, etc.

   o  Understanding the impact and operation of new devices and
      technology.  As a new product is deployed, or a new technology
      is introduced into the network, it is essential that its
      operation and impact on other services is measured.  This also
      helps to quantify the advantage that the new technology brings
      and to support the business case for a larger roll-out.

2.2 End User Network Diagnostics

   End users may want to determine whether their network is performing
   according to the specifications (e.g. service level agreements)
   offered by their Internet service provider, or they may want to
   diagnose whether components of their network path are impaired.
   End users may perform measurements on their own, using measurement
   infrastructure they provide themselves or infrastructure offered by
   a third party, or they may work directly with their network or
   application provider to diagnose a specific performance problem.
   Depending on the circumstances, measurements may occur at specific
   pre-defined intervals or may be triggered manually.  A system
   administrator may perform such measurements on behalf of the user.
   Example use cases of end-user-initiated performance measurements
   include:

   o  An end user may wish to perform diagnostics prior to calling
      their ISP to report a problem.  Hence, the end user could
      connect a Measurement Agent (MA) to different points of their
      home network and trigger manual tests (a sketch of such a test
      follows at the end of this section).  Different attachment
      points could include the in-home 802.11 network or an Ethernet
      port on the back of the broadband modem.

   o  An over-the-top (OTT) service provider or an ISP may deploy an
      MA within its service platform to give the end user a capability
      to diagnose service issues.  For instance, a video streaming
      service may include a manually initiated MA within its platform,
      with the Controller and Collector predefined.  The end user
      could initiate performance tests manually, with results
      forwarded to both the provider and the end user by other means,
      such as a UI or email.
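   Such a manually triggered test need not be complex.  The following
   Python sketch is illustrative only: the test server name is
   hypothetical, and no particular control protocol is assumed.  It
   estimates round-trip latency from whichever attachment point the MA
   currently uses, by timing TCP connection setup:

   <CODE BEGINS>
   # Illustrative MA latency test: time TCP connection setup to a
   # test point.  The server name is hypothetical; a real MA would be
   # told its test points by the Controller.
   import socket
   import time

   TEST_SERVER = ("test-point.example.net", 80)  # hypothetical
   SAMPLES = 10

   def connect_rtt(server, timeout=2.0):
       """Return TCP connect time in milliseconds, or None on failure."""
       start = time.monotonic()
       try:
           with socket.create_connection(server, timeout=timeout):
               return (time.monotonic() - start) * 1000.0
       except OSError:
           return None

   results = [connect_rtt(TEST_SERVER) for _ in range(SAMPLES)]
   rtts = [r for r in results if r is not None]
   if rtts:
       print("min/avg/max %.1f/%.1f/%.1f ms, %d of %d attempts failed"
             % (min(rtts), sum(rtts) / len(rtts), max(rtts),
                len(results) - len(rtts), len(results)))
   else:
       print("no successful connections")
   <CODE ENDS>

   Running the same test first over the in-home 802.11 network and
   then from the Ethernet port on the broadband modem lets the user
   see whether added latency or loss comes from the home network or
   from the access line.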
2.3 Regulators

   Regulators in jurisdictions around the world are responding to
   consumers' adoption of broadband technology solutions for
   traditional telecommunications and media services by reviewing the
   historical approaches to regulating these industries and services,
   and in some cases modifying existing approaches or developing new
   ones.

   Some jurisdictions have responded to a perceived need for greater
   information about broadband performance in the development of
   regulatory policies and approaches for broadband technologies by
   developing large-scale measurement programs.  Programs such as the
   U.S. Federal Communications Commission's Measuring Broadband
   America, U.K. Ofcom's UK Broadband Speeds reports, and a growing
   list of others employ a diverse set of operational and technical
   approaches to gathering data in scientifically and statistically
   robust ways, to support analysis and reporting on diverse aspects
   of broadband performance.

   While each jurisdiction responds to distinct consumer, industry,
   and regulatory concerns, much commonality exists in the need to
   produce datasets that can compare multiple broadband providers,
   diverse technical solutions, geographic and regional distributions,
   and marketed and provisioned levels and combinations of broadband
   services.

   Regulators' role in the development and enforcement of broadband
   policies also requires that the measurement approaches meet a high
   standard of verifiability, accuracy and fairness, to support valid
   and meaningful comparisons of broadband performance.

   LMAP standards could answer regulators' shared needs by providing
   scalable, cost-effective, scientifically robust solutions for the
   measurement and collection of broadband performance information.

   The main consumers of this use case are regulators.

3 Details of ISP Use Case

3.1 Existing Capabilities and Shortcomings

   In order to get reliable benchmarks, some ISPs use vendor-provided
   hardware measurement platforms that connect directly to the home
   gateway.  These devices typically perform a continuous test
   schedule, allowing the operation of the network to be continually
   assessed throughout the day.  Careful design ensures that they do
   not detrimentally impact the home user experience or corrupt the
   test results by testing when the user is also using the broadband
   line.  While the test capabilities of such probes are good, they
   are simply too expensive to deploy at mass scale to enable a
   detailed understanding of network performance (e.g. to the
   granularity of a single backhaul or a single user line).  In
   addition there is no easy way to operate similar tests on other
   devices (e.g. set-top boxes) or to manage application-level tests
   (such as IPTV) using the same control and reporting framework.

   ISPs also use speed and other diagnostic tests from user-owned
   devices (such as PCs, tablets or smartphones).  These often use
   browser-based technology to conduct tests against servers in the
   ISP network to confirm the operation of the user's broadband access
   line.  These tests can be helpful for a user to understand whether
   their broadband line has a problem, and for dialogue with a
   helpdesk.  However, they are not able to perform continuous
   testing, and the uncontrolled device and home network mean that
   results are not comparable.
   Producing statistics across such tests is misleading, as the
   population is self-selecting (e.g. those who think they have a
   problem).

   Faced with a gap in current vendor offerings, some ISPs have taken
   the approach of placing proprietary test capabilities on their home
   gateway and other consumer device offerings (such as set-top
   boxes).  This also means that different device platforms may have
   different and largely incomparable tests, developed by different
   company sub-divisions and managed by different systems.

3.2 Understanding the quality experienced by customers

   Operators want to understand the quality of experience (QoE) of
   their broadband customers.  The understanding can be gained through
   a "panel", i.e. a measurement probe deployed to a few hundred or a
   few thousand of its customers.  The panel needs to be a
   representative sample for each of the operator's technologies
   (FTTP, FTTC, ADSL, ...) and broadband options (80Mb/s, 20Mb/s,
   basic, ...), roughly 100 probes for each.  The operator would like
   an end-to-end view of the service, rather than (say) just the
   access portion.  So as well as simple network statistics like speed
   and loss rates, it wants to understand what the service feels like
   to the customer.  This involves relating the pure network
   parameters to something like a 'mean opinion score', which will be
   service dependent (for instance, web browsing QoE is largely
   determined by latency once the access speed exceeds a few Mb/s).

   An operator will also want compound metrics such as "reliability",
   which might involve packet loss, DNS failures, re-training of the
   line, video streaming under-runs, etc.

   The operator really wants to understand the end-to-end service
   experience.  However, the home network (Ethernet, wifi, powerline)
   is highly variable and outside its control.  To date, operators
   (and regulators) have instead measured performance from the home
   gateway.  However, mobile operators clearly must include the
   wireless link in the measurement.

   Active measurements are the most obvious approach, i.e. special
   measurement traffic is sent by - and to - the probe.  In order not
   to degrade the service of the customer, the measurement traffic
   should only be sent when the user's line is otherwise idle, and it
   should not reduce the customer's data allowance.  The other
   approach is passive measurement of the customer's real traffic; the
   advantage is that it measures what the customer actually does, but
   it introduces extra variability (different traffic mixes give
   different results) and, especially, it raises privacy concerns.

   From an operator's viewpoint, understanding customers better
   enables it to offer better services.  Also, simple metrics can be
   more easily understood by senior managers who make investment
   decisions, and by sales and marketing.

   The characteristics of large scale measurements that emerge from
   these examples:

   1. Averaged data (over, say, 1 month) is generally OK.

   2. A panel (subset) of only a few customers is OK.

   3. Both active and passive measurements are possible, though the
      former seem easier.

   4. Regularly scheduled tests are fine (provided active tests back
      off if the customer is using the line).  Scheduling can be done
      some time ahead ('starting tomorrow, run the following test
      every day').

   5. The operator needs to devise metrics and compound measures that
      represent the QoE (a sketch of such a compound measure follows
      this list).

   6. The end-to-end service matters, and not (just) the access link
      performance.
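   As an illustration of item 5, the following sketch folds raw
   network parameters into a single compound score.  All weights and
   thresholds here are invented for illustration; in practice they
   would be calibrated, per service, against real user opinion scores:

   <CODE BEGINS>
   # Illustrative compound QoE metric.  The weights and caps below
   # are invented for this sketch, not a standard formula.
   def compound_score(latency_ms, loss_pct, dns_failures, retrains):
       """Fold raw monthly network parameters into a 1..5 scale."""
       score = 5.0
       score -= min(2.0, latency_ms / 100.0)  # latency dominates web QoE
       score -= min(1.5, 0.5 * loss_pct)      # packet loss
       score -= 0.25 * dns_failures           # failed name lookups
       score -= 0.50 * retrains               # re-trainings of the line
       return max(1.0, score)

   # A line averaging 60 ms latency and 0.2% loss, with one DNS
   # failure and no retrains over the month:
   print(round(compound_score(60, 0.2, 1, 0), 2))   # -> 4.05
   <CODE ENDS>

   The point is not the particular formula, but that the raw
   parameters are reduced to a single figure that a non-specialist can
   track from month to month.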
3.3 Benchmarking and competitor insight

   An operator may want to check that the results reported by the
   regulator match its own view of how its network is performing.
   There is quite a lot of variation in underlying line performance
   for customers on (say) a nominal 20Mb/s service, so it is possible
   for two panels of roughly 100 probes to produce different results.

   An operator may also want a more detailed understanding of its
   competitors, beyond that reported by the regulator - probably by
   getting a third party to establish a panel of probes in its rival
   ISPs.  Measurements could, for example, help an operator: target
   its marketing by showing that it is 'best for video streaming' but
   'worst for web browsing'; gain detailed insight into the strengths
   and weaknesses of different access technologies (DSL vs cable vs
   wireless); understand market segments that it currently doesn't
   serve; and so on.

   The characteristics of large scale measurements that emerge from
   these examples are very similar to the sub use case above:

   1. Averaged data (over, say, 1 month) is generally OK.

   2. A panel (subset) of only a few customers is OK.

   3. Both active and passive measurements are possible, though the
      former seem easier.

   4. Regularly scheduled tests are fine (provided active tests back
      off if the customer is using the line).  Scheduling can be done
      some time ahead ('starting tomorrow, run the following test
      every day').

   5. The performance metrics are whatever the operator wants to
      benchmark.  As well as QoE measures, it may want to measure some
      network-specific parameters.

   6. As well as the performance of the access link, the performance
      of different network segments matters, including end-to-end.

3.4 Understanding the impact and operation of new devices and
    technology

   Another type of measurement is to test new capabilities and
   services before they are rolled out.  For example, the operator may
   want to: check whether a customer can be upgraded to a new
   broadband option; understand the impact of IPv6 before making it
   available to its customers (will v6 packets get through, what will
   the latency be to major websites, which transition mechanisms will
   be most appropriate?); check whether a new capability can be
   signaled using TCP options (how often will it be blocked by a
   middlebox? - along the lines of some existing experiments
   [Extend TCP]; a sketch of such a test follows the list below);
   investigate a quality of service mechanism (e.g. checking whether
   Diffserv markings are respected on some path); and so on.

   The characteristics of large scale measurements that emerge from
   these examples are:

   1. New tests need to be devised that test a prospective capability.

   2. Most of the tests are probably simple - "send one packet and
      record what happens" - so an occasional one-off test is
      sufficient.

   3. A panel (subset) of only a few customers is probably OK to gain
      an understanding of the impact of a new technology, but it may
      be necessary to check an individual line where the roll-out is
      per customer.

   4. An active measurement is needed.
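   As an illustration of the TCP-option test mentioned above, the
   following sketch uses the scapy packet library (and so needs
   raw-socket privileges) to send a TCP SYN carrying an experimental
   option and report whether the handshake still completes.  The
   target host is hypothetical, and option kind 254 (an experimental
   option number from RFC 4727) merely stands in for the prospective
   capability:

   <CODE BEGINS>
   # Illustrative middlebox test: does a SYN carrying an unfamiliar
   # TCP option still elicit a SYN/ACK?  Requires root and scapy.
   from scapy.all import IP, TCP, sr1

   TARGET = "test-point.example.net"   # hypothetical test server

   def option_survives(dst, dport=80):
       syn = IP(dst=dst) / TCP(dport=dport, flags="S",
                               options=[(254, b"\x01")])
       reply = sr1(syn, timeout=3, verbose=False)
       if reply is None:
           return "no reply - SYN possibly dropped by a middlebox"
       tcp = reply.getlayer(TCP)
       if tcp is not None and tcp.flags.S and tcp.flags.A:
           return "SYN/ACK received - option-bearing SYN got through"
       return "unexpected reply: " + reply.summary()

   print(option_survives(TARGET))
   <CODE ENDS>

   A fuller test, as in the [Extend TCP] experiments, would also have
   the cooperating server report whether the option arrived intact or
   was stripped in transit.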
3.5 Design and planning

   Operators can use large scale measurements to help with their
   network planning - proactive activities to improve the network.

   For example, by probing from several different vantage points the
   operator can see that a particular group of customers has
   performance below that expected during peak hours, which should
   help capacity planning.  Naturally, operators already have tools to
   help with this - a network element reports its individual
   utilisation (and perhaps other parameters).  However, making
   measurements across a path, rather than at a point, may make it
   easier to understand the network.  There may also be parameters,
   like bufferbloat, that aren't currently reported by equipment
   and/or that are intrinsically path metrics.

   It may also be possible to run stress tests for risk analysis, for
   example: 'if this whizzy new application (or device) becomes
   popular, which parts of my network would struggle, what would be
   the impact on other services, and how many customers would be
   affected?'

   Another example is that the operator may want to monitor
   performance where there is a service level agreement (SLA).  This
   could be with its own customers; enterprises especially may have an
   SLA.  The operator can proactively spot when the service is
   degrading towards the SLA limit (a sketch of such a check follows
   the list below), and obtain information that will enable more
   informed conversations with the customer at contract renewal.

   An operator may also want to monitor the performance of its
   suppliers, to check whether they meet their SLA or to compare two
   suppliers if it is dual-sourcing.  These could include its transit
   operator, CDNs, peering partners, video sources, or a local network
   provider (for a global operator in countries where it doesn't have
   its own network) - even the whole network, for a virtual operator.

   Through a better understanding of its own network and its
   suppliers, the operator should be able to focus investment more
   effectively - in the right place at the right time with the right
   technology.  What-if tests could help quantify the advantage that a
   new technology brings and support the business case for a larger
   roll-out.

   The characteristics of large scale measurements emerging from these
   examples:

   1. A key challenge is how to integrate results from measurements
      into existing network planning and management tools.

   2. New tests may need to be devised for the what-if and risk
      analysis scenarios.

   3. Capacity constraints first reveal themselves during atypical
      events (early warning).  So measurements should be averaged over
      a much shorter time than in the sub use cases discussed above.

   4. A panel (subset) of only a few customers is OK for most of the
      examples, but it should probably be larger than for the QoE sub
      use case (Section 3.2), and the operator may also want to change
      regularly who is in the subset, in order to sample the revealing
      outliers.

   5. Measurements over a segment of the network ("end-to-middle") are
      needed, in order to refine understanding, as well as end-to-end
      measurements.

   6. The primary interest is in measuring specific network
      performance parameters rather than QoE.

   7. Regularly scheduled tests are fine.

   8. Active measurements are needed; passive ones probably aren't.
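   The SLA monitoring described above can be as simple as tracking the
   headroom between a rolling average and the contractual limit.  A
   minimal sketch, assuming an SLA expressed as a cap on average
   latency (both the SLA form and the 80% warning threshold are
   invented for illustration):

   <CODE BEGINS>
   # Illustrative SLA headroom check: warn when the rolling average
   # of a measured parameter drifts towards the contractual limit.
   def sla_headroom(samples_ms, sla_limit_ms, warn_fraction=0.8):
       """Return (average, status) for a window of latency samples."""
       avg = sum(samples_ms) / len(samples_ms)
       if avg > sla_limit_ms:
           return avg, "SLA breached"
       if avg > warn_fraction * sla_limit_ms:
           return avg, "approaching SLA limit"
       return avg, "ok"

   # A week of daily averages against a 50 ms latency SLA:
   print(sla_headroom([35, 38, 40, 41, 43, 44, 46], 50))
   # -> (41.0, 'approaching SLA limit')
   <CODE ENDS>

   The same pattern applies to monitoring a supplier's SLA; only the
   measured parameter and the direction of the comparison change.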
3.6 Identifying, isolating and fixing network problems

   Operators can use large scale measurements to help identify a fault
   more rapidly and decide how to solve it.

   Operators already have test and diagnostic tools, where a network
   element reports some problem or failure to a management system.
   However, many issues are not caused by a point failure but by
   something wider, and so will trigger too many alarms, whilst other
   issues will cause degradation rather than failure and so trigger no
   alarm at all.  Large scale measurements can help provide a more
   nuanced view that helps network management to identify and fix
   problems more rapidly and accurately.

   One example was described in [IETF85-Plenary].  The operator was
   running a measurement panel for the reasons discussed in Section
   3.2.  It was noticed that the performance of some lines had
   unexpectedly degraded.  This led to a detailed (off-line)
   investigation, which discovered that a particular home gateway
   upgrade had caused a (mistaken!) drop in line rate.  (A sketch of
   this kind of screening appears at the end of this section.)

   Another example is that occasionally some internal network
   management event (such as re-routing) can be customer-affecting (of
   course this is unusual).  This affects a whole group of customers,
   for instance those on the same DSLAM.  Understanding this will help
   an operator fix the fault more rapidly, and/or allow the affected
   customers to be informed about what is happening, and/or allow them
   to be asked to re-set their home hub (required to cure some
   conditions).  More accurate information enables the operator to
   reassure customers and take more rapid and effective action to cure
   the problem.

   There may also be problems unique to a single user line (e.g.
   copper access) that need to be identified.

   Often customers experience poor broadband due to problems in the
   home network - the ISP's network is fine.  For example, they may
   have moved too far away from their wireless access point.  Perhaps
   80% of customer calls about fixed broadband problems are due to
   in-home wireless issues.  These issues are expensive and
   frustrating for an operator, as they are extremely hard to diagnose
   and solve.  The operator would like to narrow down whether the
   problem is in the home (with the home network, edge device or home
   gateway), in the operator's network, or with an over-the-top
   service.  The operator would like two capabilities.  Firstly,
   self-help tools that customers can use to improve their own service
   or understand its performance better, for example to re-position
   their devices for better wifi coverage.  Secondly, on-demand tests
   that the operator can run instantly - so the call centre person
   answering the phone (or e-chat) could trigger a test and get the
   result whilst the customer is still on the line.

   The characteristics of large scale measurements emerging from these
   examples:

   1. A key challenge is how to integrate results from measurements
      into the operator's existing test and diagnostics systems.

   2. Results from the tests shouldn't be averaged.

   3. Tests are generally run on an ad hoc basis, i.e. as specific
      requests for immediate action.

   4. "End-to-middle" measurements, i.e. across a specific network
      segment, are very relevant.

   5. The primary interest is in measuring specific network
      performance parameters and not QoE.

   6. New tests are needed, for example to check the home network
      (i.e. the connection from the home hub to a set top box or to a
      tablet on wifi).

   7. Active measurements are critical.  Passive ones may be useful to
      help understand exactly what the customer is experiencing.
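   The [IETF85-Plenary] example above suggests the kind of screening
   involved: compare each line's recent measurements against its own
   baseline, then group the degraded lines by a shared attribute to
   expose a common cause.  A minimal sketch, with invented field names
   and data:

   <CODE BEGINS>
   # Illustrative degradation screen: flag lines whose recent average
   # rate has dropped well below their own baseline, then group the
   # flagged lines by a shared attribute (here, a home gateway
   # firmware version) to expose a common cause.
   from collections import Counter

   def degraded(baseline_mbps, recent_mbps, tolerance=0.85):
       return recent_mbps < tolerance * baseline_mbps

   def screen(lines):
       """lines: dicts with 'baseline', 'recent' and 'firmware' keys."""
       flagged = [l for l in lines
                  if degraded(l["baseline"], l["recent"])]
       return Counter(l["firmware"] for l in flagged)

   panel = [
       {"baseline": 18.2, "recent": 17.9, "firmware": "v4.1"},
       {"baseline": 17.5, "recent": 12.0, "firmware": "v4.2"},
       {"baseline": 19.0, "recent": 13.1, "firmware": "v4.2"},
   ]
   print(screen(panel))   # -> Counter({'v4.2': 2})
   <CODE ENDS>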
3.7 Comparison with the regulator use case

   Today an increasing number of regulators measure the performance of
   broadband operators.  Typically they deploy a few thousand probes,
   each of which is connected directly to the broadband customer's
   home gateway and periodically measures the performance of that
   line.  The regulator ensures that the set of probes covers the
   different ISPs and their different technology types and contract
   speeds, so that it can publish statistically reasonable average
   performances.  Publicising the results stimulates competition and
   so pressurises ISPs to improve their broadband service.

   The operator use case has similarities with, but several
   significant differences from, the regulator one:

   o  Performance metrics: A regulator and an operator are generally
      interested in the same performance metrics.  Both would like
      standardised metrics, though this is more important for
      regulators.

   o  Sampling: The regulator wants an average across a representative
      sample of broadband customers (per operator, per type of
      broadband contract).  The operator additionally wants to measure
      individual lines with a problem.

   o  Timeliness: The regulator wants to know the (averaged)
      performance last quarter (say).  For fault identification and
      fixing, the operator would like to know the performance at this
      moment, and also to instruct a test to be run at this moment (so
      the requirement is on both the testing and the reporting).
      Also, when testing the impact of new devices and technology, the
      operator is gaining insight about future performance.

   o  Scheduling: The regulator wants to run scheduled tests ('measure
      download rate every hour').  The operator also wants to run
      one-off tests; perhaps the result of one test would trigger the
      operator to run a specific follow-up test.

   o  Pre-processing: A regulator would like standard ways of
      processing the collected data, to remove outlier measurements
      and aggregate results, because this can significantly affect the
      final "averaged" result (a sketch follows this list).
      Pre-processing is not important for an operator.

   o  Historic data: The regulator wants to track how the (averaged)
      performance of each operator changes on (say) a quarterly basis.
      The operator would like detailed, recent historic data (e.g. for
      a customer with an intermittent fault over the last week).

   o  Scope: To date, regulators have measured the performance of
      access lines.  An operator also wants to understand the
      performance of the home (or enterprise) network and of the
      end-to-end service, i.e. including backbone, core, peering and
      transit, CDNs and application/content servers.

   o  Control of testing and reporting: The operator wants detailed
      control.  The regulator contracts out the whole measurement
      operation, and 'control' will be via negotiation with its
      contractor.

   o  Politics: A regulator has to take account of government targets
      (e.g. the UK government: "Our ambition (by 2015) is to provide
      superfast broadband (24Mbps) to at least 90 per cent of premises
      in the UK and to provide universal access to standard broadband
      with a speed of at least 2Mbps.").  This may affect the metrics
      the regulator wants to measure, and certainly affects how it
      interprets results.  The operator is more focused on winning
      market share.
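   As an illustration of the pre-processing bullet, a regulator might
   standardise on a trimmed mean, discarding the extreme measurements
   before averaging so that a single pathological test run cannot skew
   a published figure.  A minimal sketch (the 10% trim fraction is an
   arbitrary illustrative choice):

   <CODE BEGINS>
   # Illustrative pre-processing: drop the extreme measurements at
   # both ends before averaging.  The 10% trim is arbitrary.
   def trimmed_mean(values, trim_fraction=0.10):
       ordered = sorted(values)
       k = int(len(ordered) * trim_fraction)
       kept = ordered[k:len(ordered) - k] if k else ordered
       return sum(kept) / len(kept)

   # Ten download-speed samples (Mb/s) with two outliers:
   speeds = [19.8, 20.1, 20.3, 19.9, 2.4, 20.0, 20.2, 19.7, 20.1, 55.0]
   print(round(trimmed_mean(speeds), 2))
   # -> 20.01 (the 2.4 and 55.0 samples are dropped)
   <CODE ENDS>

   Standardising such a step matters because two parties averaging the
   same raw data with different outlier rules can publish noticeably
   different "average" figures.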
3.8 Conclusions

   From an ISP's point of view there is a clear need to deploy a
   single coherent measurement capability across a wide number of
   heterogeneous devices, both in its own network and in the home
   environment.  These tests need to be able to operate from a wide
   number of locations towards a set of interoperable test points in
   the ISP's own network, as well as spanning supplier and competitor
   networks.

   Regardless of the tests being operated, there needs to be a way to
   demand or schedule the tests and, critically, to ensure that such
   tests do not affect each other, are not affected by user traffic
   (unless desired), and do not affect the user experience.  In
   addition, there needs to be a common way to collect and understand
   the results of such tests across different devices, to enable
   correlation and comparison of any network or service parameters.

   Since network and service performance needs to be understood and
   analysed alongside topology, line, product or contract information,
   it is critical that the test points are accurately defined and
   authenticated.

   Finally, the test data, along with any associated network, product
   or contract data, is commercially sensitive or private information
   and needs to be protected.

4 Security Considerations

   The traffic between the Controller and an MA, and between an MA and
   a Collector, must be protected in flight, and each entity must be
   known to, and trusted by, the others.

   It is imperative that data identifying the end user is protected.
   Identifying data includes the end user's name, the time and
   location of the MA, and any attributes of a service, such as
   service location - including any IP address that could be used to
   reconstruct physical location.

5 IANA Considerations

   TBD

6 Contributors

   The information in this document is partially derived from text
   written by the following contributor:

   James Miller  jamesmilleresquire@gmail.com

7 References

7.1 Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

7.2 Informative References

   [LMAP-REQ] Schulzrinne, H., "Large-Scale Measurement of Broadband
              Performance: Use Cases, Architecture and Protocol
              Requirements", draft-schulzrinne-lmap-requirements,
              September 2012.

   [IETF85-Plenary]
              Crawford, S., "Large-Scale Active Measurement of
              Broadband Networks",
              http://www.ietf.org/proceedings/85/slides/slides-85-
              iesg-opsandtech-7.pdf ('example' from slide 18).

   [Extend TCP]
              Honda, M., Nishida, Y., Raiciu, C., Greenhalgh, A.,
              Handley, M., and H. Tokuda, "Is it Still Possible to
              Extend TCP?", Proc. ACM Internet Measurement Conference
              (IMC), November 2011, Berlin, Germany.
              http://www.ietf.org/proceedings/82/slides/IRTF-1.pdf

Authors' Addresses

   Marc Linsner
   Marco Island, FL
   USA

   EMail: mlinsner@cisco.com

   Philip Eardley
   BT
   B54 Room 77, Adastral Park, Martlesham
   Ipswich, IP5 3RE
   UK

   Email: philip.eardley@bt.com

   Trevor Burbridge
   BT
   B54 Room 77, Adastral Park, Martlesham
   Ipswich, IP5 3RE
   UK

   Email: trevor.burbridge@bt.com