Network Working Group                                          A. Morton
Internet-Draft                                                 AT&T Labs
Intended status: Informational                         September 6, 2021
Expires: March 10, 2022

 Dream-Pipe or Pipe-Dream: What Do Users Want (and how can we assure it)?
                     draft-morton-ippm-pipe-dream-01

Abstract

   This memo addresses the problem of defining relevant properties and
   metrics with the goal of improving Internet access for all users.
   Where the fundamental metrics are well-defined, a framework to
   standardize new metrics exists and has been used with success.
   Users consider reliability to be important, as well as latency and
   capacity; it really depends on whom you ask and on their current
   experiences.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on March 10, 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
     1.1.  What's happening with users today?
     1.2.  User expectations and how they can change
     1.3.  How fast can you go?
   2.  Dream-Pipe/Pipe-Dream
   3.  Metrics to Assess Fundamental Properties
   4.  Interpreting the Measurements
   5.  Work together, as always!
   6.  Displaying or Reporting Results
   7.  Summary
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgments
   11. References
     11.1.  References
     11.2.  More References
   Author's Address

1.  Introduction

   This memo addresses the problem of defining relevant properties and
   metrics with the goal of improving Internet access for all users.
   Much has already been done, and it's important to have a common
   foundation as we move forward.  There is certainly more to
   understand about the problem and the approaches to a solution.

1.1.  What's happening with users today?

   Part of the motivation for examining metrics in greater detail at
   this time is the belief that "Internet speed" is no longer the only
   service dimension that matters to users.  A small sample of recent
   surveys follows.

   In a 2021 UK EY study [EY-Study] (summary in Advanced Television),
   "Decoding the digital home", Chapter 2, a survey of 2500 subscribers
   found that "Fifty-eight per cent of UK households believe broadband
   reliability is more important than speed".
   Also, "the appetite for a consistent connection aligns with
   perceptions that broadband reliability declined during the pandemic
   - 29 per cent across all households, rising to 46 per cent in
   households with children aged up to 11 years."  If the family is
   on-line more often, they will notice more outages.

   Reliability is surely important, but other factors come into play:
   "...nearly half (47 per cent) don't think upgrading to higher-speed
   packages is worth the cost.  Meanwhile, 29 per cent say they don't
   understand what broadband speed means in practice." [EY-Study]
   Price trade-offs and knowledge about the communication details play
   a role when discussing speed.  Reliability issues are experienced by
   everyone.

   All the findings above are predicated on the current "speeds" being
   delivered.  For example, "Fifty-two percent of rural users are
   frustrated that the fastest speeds are not available in their area,
   and only 52 per cent believe they are getting value for money from
   their current broadband package." [EY-Study]  The proportion of
   urban users who saw value was 10-15% higher.  The available
   broadband speed often defines the development gap in world-wide
   surveys, and users in under-served areas are likely glad to see
   increases [N-Africa].

   Another survey described in [DontKnow] found that 36% of Americans
   *don't* know their Internet speed.  A higher percentage of females
   did not know (47%) than males (25%).  Older Americans (55+) and
   those in lower-income households are other demographics where not
   knowing the service speed was more common than the overall average.
   Those who did not know their speed tended to be satisfied with the
   speed they receive; only about 20% of them indicated
   dissatisfaction.
   Latency wasn't mentioned in [EY-Study], but "While the survey
   indicates that most households ultimately want 'the basics' to work
   well, those that do consider additional features as part of a
   broadband bundle favour privacy and security (48 per cent),
   reflecting wider anxieties and concerns relating to data protection
   experienced during the pandemic."  Many users want their service
   provider to provide all the services.

1.2.  User expectations and how they can change

   User expectations are greatly influenced by their current
   experiences, and by what is technologically feasible at the time.
   Users have a view of the levels of availability, quality, and
   utility that constitute the overall experience in their situation of
   use (e.g., stationary in their home).

   When we insert a communication network as an integral part of an
   activity (a task or form of entertainment), the expectation doesn't
   change or make allowances without a trade-off between the new
   features and the new situation.  For example, take the home but add
   the ability to travel: a stretch limousine fills some of the need
   but with reduced capabilities (the refrigerator is smaller, there
   are no beds or lavatory; there is less of everything).  But we have
   a TV!

   Let's assume the TV uses analog transmission for this discussion.
   The TV is smaller and experiences reception outages, so we are
   suddenly aware of radio characteristics like fading and multipath
   (especially with earlier analog transmissions, there is no buffering
   at all).  The limo is really nicely appointed, but it's not all we
   dreamed of (or possibly even expected), especially the TV reception,
   because we wanted to watch a play-off game on this trip!

   Where did they go wrong providing the TV in the limo?  They made the
   communication channel a much more obvious part of the viewing
   experience.  And the first-time users didn't expect it.
   They inserted unexpected impairments in the communication channel by
   adding mobility.  The TV designer might not have been aware of the
   moving use case; their "portable TV" means you can pick up the TV
   easily, not watch TV at high speed, so needed features were not
   provided (a tracking antenna, to start with...).

   Since the goal of the workshop is to improve Internet access for
   *all* users, we have set ourselves a difficult task.  Some users
   might want a dedicated pipe to communicate in the ways they choose
   or access content that they have identified.  Other users are
   willing to share a pool of communication resources to communicate
   only when and where they want, for the potential benefits of being
   able to communicate more widely, spontaneously, and possibly for
   less cost than the dedicated pipe.  It seems that we should focus on
   the subset of performance attributes that will benefit as many users
   as possible, and over which our constituent organizations have some
   control.

1.3.  How fast can you go?

   To some, the maximum bit rate remains the primary goal.

   The Internet speed record previously held by University College
   London (178 Tbps) was topped in July 2021 by a team of researchers
   from the National Institute of Information and Communications
   Technology (NICT) in Japan.  The new world record for Internet speed
   is 319 Tbps, using 4-core fiber [WorldRecord].

   Higher link rates and subscriber rates may not be everything to
   users, but there can be a cross-over dependency on latency
   performance.  Packet serialization time is reduced at higher link
   speeds, in direct proportion to the increased rate.  Bursts of large
   packets arriving for one stream affect the buffer time for packets
   in other streams that arrive behind them in a single FIFO queue, but
   again the problem is relieved by using higher link rates.
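   As a rough sketch of the serialization-time effect above (the packet
   size and link rates here are illustrative assumptions, not part of
   any IPPM method):

```python
def serialization_time_us(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto the link, in microseconds."""
    return packet_bytes * 8 / link_bps * 1e6

# A 1500-byte packet occupies a 10 Mbps link for about 1200 us,
# but a 1 Gbps link for only about 12 us.
print(serialization_time_us(1500, 10e6))
print(serialization_time_us(1500, 1e9))
```

   The hundred-fold reduction is exactly the ratio of the link rates,
   which is why higher rates also relieve the queueing effects
   described above.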
   For example, early VoIP used very low bit rate codecs: 8 kbps was
   common, and so were 10 Mbps LANs.  But mixing bursty and periodic
   traffic meant unreliable delivery and delay variation for the
   latter.  Packet marking, prioritization of multiple queues, and
   queue management methods helped.  More rate headroom and adaptation
   to the network impairments was also welcome.

   Instead of providing more capacity for a single user, today's
   Gigabit services support more users efficiently on the same access
   service.  So higher rates can improve the other important
   dimensions of performance.

2.  Dream-Pipe/Pipe-Dream

   Perhaps the ideal view of point-to-point communications is a pipe
   that illustrates many fundamental communication properties.  A
   dream-pipe, so to speak.

   o  The pipe is always ready to support communications when the user
      desires.

   o  Material that enters the pipe leaves the pipe in a sufficiently
      unadulterated form.

   o  The pipe has sufficient and dedicated capacity to deliver
      information at the same rate at which it entered (once the
      diameter of the pipe has been chosen).

   The apparent rigidity of the pipe model helps identify additional
   properties that are needed:

   o  The capacity requirement may change/increase over time, even from
      session to session, so a user might anticipate a growing need for
      communications, or anticipate use by more than one user, by
      choosing a larger capacity pipe than they currently need.

   o  When we say "delivered in sufficiently unadulterated form", we
      could mean:

      1.  a perfect reproduction (the communication of all messages
          with the same timing and order in which they started), or

      2.  a reliable byte stream delivery, or

      3.  communication of most messages with sufficient timeliness to
          reconstruct the original messages, or

      4.  even a perceptual interpretation of a portion of the decoded
          messages delivered in a sufficiently timely way (for human-
          to-human voice communication: the users understood what was
          said, the conversation was sufficiently interactive as though
          they were in the same room (imperceptible latency), and each
          user can identify the remote speaker's voice).  But the
          overall tolerance for imperfections in the pipe depends on
          the particular use case and many design choices.

   o  The pipe itself must be ready for use, and the systems at the
      source and destination ends of the pipe must be equally ready and
      operational; all the systems required between users are
      responsible for success.

   o  The pipe may be invisible!  Radio access has its own advantages
      and challenges.

   So, a short list of network properties that contribute to good user
   experience is:

   1.  Available always when needed

   2.  Sufficient capacity when needed

   3.  No apparent loss

   4.  No apparent latency (which implies both low and consistent
       latency).

   Networking and geographic reality tells us that we are unlikely to
   see all properties at once, for all time, AOE (anywhere on Earth);
   that's the extreme Pipe-Dream.

   But attaining a good user experience level in different
   communication activities/scenarios likely implies different demands
   among the four properties above, and places different relative
   importance on each of these properties.

   Author's Note: I'm positing a rigid pipe as the object of an
   idealistic idea, or "dream pipe".  I hope there won't be any
   confusion about the play on words with "pipe dream"; the dream pipe
   is a pipe dream, especially when the dream includes zero latency
   between any two points.  I'm not talking about a hose or flexible
   pipe, either.

3.  Metrics to Assess Fundamental Properties

   The IETF has been working the problem of standardizing metrics for
   the Internet and the communication streams it transports for well
   over 20 years.  Many other organizations have been successfully
   working in this area as well, and hopefully they will identify their
   literature and key results for review.

   The IETF's efforts to define IP-Layer and Transport-Layer
   performance metrics and methods have largely been carried out in the
   IPPM working group (IP Performance Metrics and, later, IP
   Performance Measurements).  Beginning as a somewhat joint effort
   with the performance-focused Benchmarking Methodology Working Group
   (BMWG), IPPM was chartered individually in 1997.  IPPM has extensive
   literature relevant to Internet measurement.  The IPPM working group
   has a strong foundation in its Framework [RFC2330], which has been
   updated over time with [RFC7312] and [RFC8468].  What we can
   *standardize and measure*, we have as a basis to evaluate and to
   determine whether we have made it better (or not).

   The problem that initiated the IPPM work turned out to be the most
   difficult to solve (Bulk Transfer Capacity, BTC [RFC3148]), and has
   taken the longest.  Meanwhile, the standards for fundamental metrics
   other than BTC turned out to be sufficiently challenging.

   Here is a list of fundamental packet transfer metrics, specified in
   RFCs:

   o  Connectivity [RFC2678]

   o  One-Way Delay [RFC7679], STD 81

   o  One-Way Loss [RFC7680], STD 82

   o  Round-Trip Delay [RFC2681]

   o  Round-Trip Loss [RFC6673]

   o  Reordering [RFC4737]

   o  Duplication [RFC5560]

   The metrics and methods above were specified with considerable
   flexibility, so that they could be applied in a range of specific
   circumstances.
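   To make two of the fundamental metrics concrete, here is a toy
   evaluation of one-way delay and one-way loss from timestamp records
   (a hedged sketch assuming synchronized clocks and a chosen loss
   timeout; the RFCs above define the metrics precisely):

```python
def one_way_metrics(sent, received, loss_timeout=3.0):
    """Derive one-way delay singletons and one-way loss from
    send/receive timestamp records keyed by sequence number.
    A packet that never arrives, or arrives after loss_timeout,
    is counted as lost (late packets are treated as lost)."""
    delays, lost = {}, []
    for seq, t_send in sorted(sent.items()):
        t_recv = received.get(seq)
        if t_recv is None or t_recv - t_send > loss_timeout:
            lost.append(seq)
        else:
            delays[seq] = t_recv - t_send
    return delays, lost

# Packet 2 was never received; packets 1 and 3 yield delay singletons.
sent = {1: 0.00, 2: 0.02, 3: 0.04}
received = {1: 0.03, 3: 0.09}
delays, lost = one_way_metrics(sent, received)
print(lost)  # [2]
```

   The "loss timeout" illustrates a point the standards make formally:
   loss and delay are linked, because a delay threshold decides when a
   packet stops being "late" and becomes "lost".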
   One of the most flexible metrics is IP Packet Delay Variation
   [RFC3393], which is a "derived metric" in that it requires One-Way
   Delay measurements for assessment.  The powerful feature of
   [RFC3393] is the selection function, which permits comparing the
   delays of any pair of packets in the stream.  Fortunately, the
   performance community predominantly uses one of two forms of delay
   variation: the inter-packet delay variation (IPDV) and packet delay
   variation (PDV) forms.  These forms are defined, and their efficacy
   for measurement uses compared, in [RFC5481], along with many other
   considerations and measurement forms/processing.

   It is possible to create new derived metrics at the IP layer, and to
   measure similar quantities (loss, delay, reordering) at other layers
   [ForAll] [RFC6390] [RFC6076].

   Another example of a derived metric uses loss instead of delay.
   This metric, which does not have a direct parallel in the IETF
   literature, is the Stream Block metric found in ITU-T Recommendation
   Y.1540 [Y.1540] (virtually all the IP-layer metrics are found in
   that one standard).  This metric assigns consecutive packets to
   multi-packet blocks, and assesses the number of lost packets in a
   block as a surrogate for a higher-layer process's ability to
   maintain good communication.  For example, a Forward Error
   Correction process might be able to replace 2 lost packets in any
   order, but not 3.  There is a parallel to retransmission rate limits
   when attempting to maintain continued loss-free delivery with
   buffering to allow for the retransmission time.

   Network and Bulk Transport Capacity metrics have been chartered and
   progressed over twenty years.
   The performance community has seen the development of informative
   definitions in [RFC3148], the Framework for Bulk Transport Capacity
   (BTC); [RFC5136] for Network Capacity; Maximum IP-Layer Capacity (in
   RFC9097-to-be); and the Experimental metric definitions and methods
   in [RFC8337], Model-Based Metrics for BTC.

   One quantity that could be measured without too much controversy is
   the Maximum IP-Layer Capacity (judging by the adoption in standards
   bodies of RFC9097-to-be) [I-D.ietf-ippm-capacity-metric-method].
   This is the basis for many service specifications (and there is a
   technology- or configuration-limited "ground truth" for the
   measurement), and it can be tested simply, with minimal reliance on
   end systems.  The method deploys a feedback channel from the
   receiver to control the sender's transmission rate in near-real-
   time, and searches for the maximum.

   The "invisible" radio dream-pipe presents a challenge in terms of
   results variability for all metrics, and adds at least one critical
   input parameter: location.  It may be that the variability with
   location and time provides key metrics to help users understand
   radio coverage (in addition to the signal strength portrayed by the
   number of "bars").

   A network property that is very high on the list in Section 2 is
   Availability.  There is treatment of this important property as a
   metric in the IETF (Connectivity [RFC2678]), and somewhat different
   details in an alternate definition (ITU-T Point-to-Point IP Service
   Availability [Y.1540]), but not much deployment for such an
   important pre-requisite to the rest of the metrics.  Both
   Connectivity and Availability rely on packet loss measurements; in
   fact, they can be considered *derived metrics*, adding time
   constraints and/or loss ratio thresholds to the fundamental loss
   measurements on a packet stream.

   Sometimes the end systems decide whether the path is available or
   not.
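   The derived-availability idea above (adding a loss-ratio threshold
   to loss measurements over consecutive time windows) can be sketched
   as follows; the window size and threshold are illustrative
   assumptions, not the normative [RFC2678] or Y.1540 procedures:

```python
def availability_states(loss_flags, window, loss_threshold=0.2):
    """Classify consecutive measurement windows as available (True)
    or unavailable (False): a window whose packet loss ratio exceeds
    the threshold is declared unavailable.  loss_flags holds one
    entry per packet: 1 = lost, 0 = delivered."""
    states = []
    for i in range(0, len(loss_flags) - window + 1, window):
        w = loss_flags[i:i + window]
        states.append(sum(w) / window <= loss_threshold)
    return states

# Three 5-packet windows with 0, 3, and 1 lost packets:
flags = [0, 0, 0, 0, 0,  1, 1, 1, 0, 0,  0, 1, 0, 0, 0]
print(availability_states(flags, 5))  # [True, False, True]
```

   The choice of window length and threshold is exactly where the IETF
   and ITU-T definitions differ in their details.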
   In one on-line gaming system, reception of an ICMP Destination
   Unreachable message caused complete and immediate termination of the
   session, with loss of all progress and resources accumulated.
   Customer service centers sometimes experienced call-in overload from
   users seeking to restore their player environment.  But the
   Destination Unreachable condition was temporary, and resolved by
   routing updates in a few seconds.  The ultimate fix was to delay the
   session's reaction to Destination Unreachable for a few seconds, and
   recovery was automatic.  So, when end systems play a role in the
   definition of connectivity or availability, they must also be
   cognizant that all automatic failure detection and restoration
   requires some amount of time.  Since failures are inevitable, the
   dream-pipe heals itself, too (and doesn't confuse end systems or
   users with error messages that "sound final").

   Most measurement systems begin their process with a Source-
   Destination packet exchange prior to actual measurements.  If this
   pre-measurement exchange fails, then the test is not conducted (and
   is re-tried later).  But the most useful information for assessing
   continuous connectivity/Availability is the record of test set-up
   success or failure over time.  Measurement systems that make this
   information readily available do a more complete job of network
   characterization than others.

   We cannot leave the topic of metrics without mentioning the equally
   important topic of measurement streams for Active Metrics and
   Measurements [RFC7799].  When attempting to measure the
   characteristics of VoIP streams, the IETF agreed on ways to produce
   periodic streams in an acceptable way [RFC3432].  Many measured
   results depend completely on the stream characteristics, and inter-
   packet delay variation is a great example.  If the stream contains
   packet bursts, are the bursts preserved in transit?
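   One way to probe that question, as a sketch with made-up timestamps:
   compare the inter-packet gaps of the stream as sent and as received.

```python
def inter_packet_gaps(timestamps):
    """Gaps between consecutive packet timestamps, in seconds."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

# Two back-to-back packet pairs sent; did the near-zero gaps survive?
send_times = [0.000, 0.001, 0.020, 0.021]
recv_times = [0.030, 0.036, 0.050, 0.051]
print(inter_packet_gaps(send_times))  # first gap ~1 ms (a burst)
print(inter_packet_gaps(recv_times))  # first gap ~6 ms (burst dispersed)
```

   Here the first burst was dispersed in transit while the second
   survived, which is the kind of stream-dependent behavior the text
   refers to.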
   We would ask later on whether preserving burst spacing matters to
   the communication quality or to the user's experience.  The answer
   is likely a matter of degree, and dependent on the communications
   activity itself.

   When we consider the topic of new test streams in the context of
   Gigabit and higher access speeds intended to support multiple users,
   we might consider the notion of a "standard single user's stream
   set".  Then we might measure how many simultaneous standard users
   can be supported with sufficient network performance: an indicator
   of each user's experience in the "dream-pipe".  Of course, the
   details of a standard user's traffic would change over time, so we
   couldn't argue over the current year's definition for a year...  The
   facility to register the new standard user test streams would be a
   key part of such a solution.

4.  Interpreting the Measurements

   If we had measured only the fundamental metrics, we might ask: "We
   saw a small proportion of packet losses; did this matter to users?
   Did losses affect their satisfaction during their activity in any
   way?  Did any users experience an outage at this loss level?"

   The likely role for derived metrics is to improve results
   interpretation by measuring a new quantity, possibly in the context
   of a newly-defined packet stream, thereby making the process of
   results interpretation easier to perform.

   For example, Delay Variation metrics had many possible formulations,
   but two main forms emerged.  RFC 5481 [RFC5481] compared the IPDV
   and PDV forms against tasks that network and application designers
   were facing at the time.  The RFC describes measurement
   considerations and results interpretation from a purely objective
   point of view.
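   The two forms can be sketched side by side (a simplified
   illustration of the [RFC5481] definitions, with an assumed fixed
   high percentile rather than the RFC's full treatment):

```python
def ipdv(delays):
    """Inter-Packet Delay Variation: differences between the one-way
    delays of consecutive packets in the stream."""
    return [d2 - d1 for d1, d2 in zip(delays, delays[1:])]

def pdv(delays, percentile=0.99):
    """Packet Delay Variation: the delay at a high percentile of the
    distribution minus the minimum delay."""
    s = sorted(delays)
    idx = min(len(s) - 1, int(percentile * len(s)))
    return s[idx] - s[0]

delays = [0.020, 0.022, 0.021, 0.035, 0.020]  # one-way delays (s)
print(ipdv(delays))
print(pdv(delays))
```

   IPDV yields a series (useful for studying packet-to-packet
   dynamics), while PDV yields a single range-like number.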
   The most useful result interpretation was to show how PDV (a
   characterization of the delay distribution from the minimum to a
   high percentile) could be used to determine the size of the de-
   jitter buffer needed for the tested path.

   For other fundamental IP-Layer metrics, there is some amount of
   discussion of best practices and interpretation in each of the IETF
   IPPM Metric RFCs.

   Many researchers (working in ITU-T Study Group 12 and elsewhere)
   take the information that can be derived from packet-layer
   measurements, plus higher layers when available, and produce
   objective estimates of user satisfaction by modeling user Mean
   Opinion Score (MOS) variation over a range of conditions.  The
   process determines the user MOS through formal subjective testing in
   laboratories (or, more recently, prompted by COVID-19 conditions, in
   crowd-sourced scenarios).  The corresponding objective models of
   user satisfaction are often determined through competition among
   several candidates, where the goal is to seek the most accurate
   model possible at the time.  The modeling efforts often produce new
   derived metrics that facilitate automated interpretation.  The main
   drawback is that the process described above takes significant time
   when conducted in the context of an industry standards body.  So,
   user activities that are not particularly demanding of network
   performance do not receive much attention from researchers doing
   modeling; their performance is assumed to come along for the ride
   (but in a system with multiple queues or other categorizations, each
   activity's requirements need to be quantified).  Nevertheless, a
   process to take in network measurements and produce a measure of
   user satisfaction is well understood and used.
   The output of these objective models can rightly be called Quality
   of Experience (QoE), because a set of users' opinions is an inherent
   part of the result (without users, you don't have QoE!  That's how
   QoE is defined, and how it differs from QoS and network
   performance).

5.  Work together, as always!

   "What are the best ways to communicate these properties to service
   providers and network operators?"  Work together with service
   providers and network operators.  Everyone has a stake in our
   future.

   There are quite a few networking professionals whose day job is
   network operation, and who are participating now.

6.  Displaying or Reporting Results

   Whether we present a single figure of merit or a set of relevant
   measurements on a dashboard summary, each number requires a frame of
   reference.  This is true for everyone, not just everyday users.

   One example of a solid reference for results comes from the
   fundamental benchmarking specification, [RFC2544].  The measured
   Throughput (as defined in [RFC2544]) must be compared to the maximum
   theoretical frame rate of the layer-2 technology (accounting for
   frame size, inter-frame gap, preamble, etc.).  Tests with a small
   frame size may not achieve the maximum frame rate due to header
   processing rate limitations, and tabulating the maximum with the
   results makes this fact very clear.  Other reference levels can be
   made available, such as the capacity required to support 4k video
   (~25 Mbps), etc.

   Some people will simply want to know whether the measurement result
   is good, bad, or somewhere in between.  We can follow common
   practice here and use colors (green, red, and yellow in between), or
   present the number on a gauge with suitable color cues.  But we need
   to know the use case or the service specification accurately to do
   this.
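   The theoretical-maximum comparison above can be sketched for
   Ethernet, where every frame also costs preamble and inter-frame gap
   time on the wire (standard Ethernet overhead values; the link rate
   and frame size chosen here are illustrative):

```python
PREAMBLE_BYTES = 8   # preamble plus start-of-frame delimiter
IFG_BYTES = 12       # minimum inter-frame gap

def max_frame_rate(link_bps, frame_bytes):
    """Theoretical maximum Ethernet frame rate for one frame size."""
    wire_bits = (frame_bytes + PREAMBLE_BYTES + IFG_BYTES) * 8
    return link_bps / wire_bits

# The classic reference point: 64-byte frames on 10 Mbps Ethernet.
print(int(max_frame_rate(10e6, 64)))  # 14880 frames per second
```

   Tabulating this ceiling next to a measured Throughput makes it
   obvious when small-frame results are limited by header processing
   rather than by the link itself.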
   In fact, a portion of user testing is prompted by subscribing to a
   new service provider or a new level of service (higher speed?), or
   by a perception of poor performance suggesting that trouble-shooting
   may be necessary; even an apparent outage may prompt test attempts
   using alternate devices and networks.

   This last testing scenario is the most interesting: how can we help
   users when they encounter a problem?  It's usually most important to
   isolate the problem in the complex network.  When user-to-network-
   host results are failing, can the next step remove some of the
   components and check them in isolation (while keeping in mind that
   the acceptance level for a sub-network is a part of the end-to-end
   budget for performance)?  Is it impossible to reach a particular
   far-end host?  Does the access network appear to be unavailable, or
   is the problem related to interference on the WiFi radio network?
   With test hosts placed at strategic points in the path, it may be
   possible to segment the problem a user is experiencing.

7.  Summary

   When all the components that comprise a user's activity work well
   together, there are no surprises.  Given that a large proportion of
   user expectations are met at some point in time, the metrics and
   performance levels that characterize the network's contribution to
   overall satisfaction are what we want to describe and maintain.
   Users consider reliability to be important, as well as latency and
   capacity; it really depends on whom you ask and on their current
   experiences.  End-system designers have a role to play in the
   process by recognizing the realities of packet networks and
   compensating for them: the dream-pipe absolutes are still a pipe-
   dream today.

   There are many fundamental metrics already defined.  But we might
   find that we need new metrics that make interpreting the results
   easier!
The notion of *derived metrics* has been applied 559 successfully. Test streams with a known bias toward a particular 560 class of user streams can also be a useful basis for performance 561 measurement. Where the fundamental metrics are well-defined, a 562 framework to standardize new metrics and active test streams exists 563 and has been used with success. Metrics can be defined that immediately 564 improve our understanding of the performance presented to users, but 565 understanding user satisfaction requires that user opinions be part 566 of the development process. 568 All users, both knowledgeable and newcomers, need a frame of 569 reference to understand what numerical measurements are telling them. 570 The clues from the expected measurement range, the results from the 571 recent past, or the theoretical maximum value all have their place. 572 If users are willing, measurements should help them isolate their 573 current issue to one or more networks and/or components in the user- 574 to-X path. 576 If we break the problem down by specific communications activities 577 and look for specific metrics for each one, it could take a long time 578 to complete. Perhaps a categorization of the performance metrics, 579 numeric criteria, and reliability of a "pseudo-dream-pipe" for a set 580 of communication activities that have similar needs is a way to move 581 ahead. 583 8. Security Considerations 585 Active metrics and measurements have a long history of security 586 considerations. The security considerations that apply to any active 587 measurement of live paths are relevant here. See [RFC4656] and 588 [RFC5357]. 590 When considering the privacy of those involved in measurement or those 591 whose traffic is measured, the sensitive information available to 592 potential observers is greatly reduced when using the active techniques 593 within this scope of work. Passive observations of user 594 traffic for measurement purposes raise many privacy issues.
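The frames of reference listed above (the theoretical maximum and the results from the recent past) can be sketched as a small summary function. The reference levels and example values below are assumptions for illustration only, not standardized criteria.

```python
# Illustrative sketch only: placing a single measurement in a frame
# of reference, as the text suggests. Reference values are examples.

def frame_of_reference(measurement, theoretical_max, recent_results):
    """Summarize a result against its context: the fraction of the
    theoretical maximum achieved, and the change versus the average
    of recent results."""
    recent_avg = sum(recent_results) / len(recent_results)
    return {
        "fraction_of_max": measurement / theoretical_max,
        "vs_recent_avg": measurement - recent_avg,
    }

# Example: a 750 Mbps result on a nominal 1000 Mbps service,
# compared with three recent measurements.
ctx = frame_of_reference(750.0, 1000.0, [700.0, 720.0, 710.0])
print(ctx)  # {'fraction_of_max': 0.75, 'vs_recent_avg': 40.0}
```

Either clue alone can mislead; presenting both gives knowledgeable users and newcomers alike a sense of whether a number is good, bad, or in-between.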
We refer 595 the reader to the privacy considerations described in the Large-Scale 596 Measurement of Broadband Performance (LMAP) Framework [RFC7594], 597 which covers both active and passive techniques. 599 9. IANA Considerations 601 This memo makes no requests of IANA. 603 10. Acknowledgments 605 Thanks to all the folks who have worked on performance metric 606 development, many of whom are the authors of the references. Many 607 more have provided their insights along the way. It's rewarding to 608 travel with others (even for a short time), and to meet new people on 609 the journey. 611 11. References 613 11.1. Normative References 615 [I-D.ietf-ippm-capacity-metric-method] 616 Morton, A., Geib, R., and L. Ciavattone, "Metrics and 617 Methods for One-way IP Capacity", draft-ietf-ippm- 618 capacity-metric-method-12 (work in progress), June 2021. 620 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 621 "Framework for IP Performance Metrics", RFC 2330, 622 DOI 10.17487/RFC2330, May 1998, 623 <https://www.rfc-editor.org/info/rfc2330>. 625 [RFC2678] Mahdavi, J. and V. Paxson, "IPPM Metrics for Measuring 626 Connectivity", RFC 2678, DOI 10.17487/RFC2678, September 627 1999, <https://www.rfc-editor.org/info/rfc2678>. 629 [RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip 630 Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681, 631 September 1999, <https://www.rfc-editor.org/info/rfc2681>. 633 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 634 Metric for IP Performance Metrics (IPPM)", RFC 3393, 635 DOI 10.17487/RFC3393, November 2002, 636 <https://www.rfc-editor.org/info/rfc3393>. 638 [RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network 639 performance measurement with periodic streams", RFC 3432, 640 DOI 10.17487/RFC3432, November 2002, 641 <https://www.rfc-editor.org/info/rfc3432>. 643 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. 644 Zekauskas, "A One-way Active Measurement Protocol 645 (OWAMP)", RFC 4656, DOI 10.17487/RFC4656, September 2006, 646 <https://www.rfc-editor.org/info/rfc4656>. 648 [RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, 649 S., and J.
Perser, "Packet Reordering Metrics", RFC 4737, 650 DOI 10.17487/RFC4737, November 2006, 651 <https://www.rfc-editor.org/info/rfc4737>. 653 [RFC5357] Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J. 654 Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)", 655 RFC 5357, DOI 10.17487/RFC5357, October 2008, 656 <https://www.rfc-editor.org/info/rfc5357>. 658 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation 659 Applicability Statement", RFC 5481, DOI 10.17487/RFC5481, 660 March 2009, <https://www.rfc-editor.org/info/rfc5481>. 662 [RFC5560] Uijterwaal, H., "A One-Way Packet Duplication Metric", 663 RFC 5560, DOI 10.17487/RFC5560, May 2009, 664 <https://www.rfc-editor.org/info/rfc5560>. 666 [RFC6076] Malas, D. and A. Morton, "Basic Telephony SIP End-to-End 667 Performance Metrics", RFC 6076, DOI 10.17487/RFC6076, 668 January 2011, <https://www.rfc-editor.org/info/rfc6076>. 670 [RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New 671 Performance Metric Development", BCP 170, RFC 6390, 672 DOI 10.17487/RFC6390, October 2011, 673 <https://www.rfc-editor.org/info/rfc6390>. 675 [RFC6673] Morton, A., "Round-Trip Packet Loss Metrics", RFC 6673, 676 DOI 10.17487/RFC6673, August 2012, 677 <https://www.rfc-editor.org/info/rfc6673>. 679 [RFC7679] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, 680 Ed., "A One-Way Delay Metric for IP Performance Metrics 681 (IPPM)", STD 81, RFC 7679, DOI 10.17487/RFC7679, January 682 2016, <https://www.rfc-editor.org/info/rfc7679>. 684 [RFC7680] Almes, G., Kalidindi, S., Zekauskas, M., and A. Morton, 685 Ed., "A One-Way Loss Metric for IP Performance Metrics 686 (IPPM)", STD 82, RFC 7680, DOI 10.17487/RFC7680, January 687 2016, <https://www.rfc-editor.org/info/rfc7680>. 689 11.2. Informative References 691 [DontKnow] 692 Advanced Television, "Survey: Just 36% of US know their 693 broadband speed (title is incorrect)", June 2021, 694 <https://advanced-television.com/2021/06/25/survey-just-36-of-us-know-their-broadband-speed/>. 698 [EY-Study] 699 Advanced Television, "Study: UK homes favour broadband 700 reliability over speed", June 2021, 701 <https://advanced-television.com/2021/06/25/study-uk-homes-favour-broadband-reliability-over-speed/>. 705 [ForAll] Morton, A., "Performance Metrics for All", IEEE 706 Internet Computing, vol. 13, no. 4, pp. 82-86, 707 DOI 10.1109/MIC.2009.87, July 2009.
710 [N-Africa] 711 Libya Herald, "Internet speeds improved in North Africa 712 over last four quarters", June 2021, 713 <https://www.libyaherald.com/2021/06/08/internet-speeds-improved-in-north-africa-over-last-four-quarters/>. 718 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 719 Network Interconnect Devices", RFC 2544, 720 DOI 10.17487/RFC2544, March 1999, 721 <https://www.rfc-editor.org/info/rfc2544>. 723 [RFC3148] Mathis, M. and M. Allman, "A Framework for Defining 724 Empirical Bulk Transfer Capacity Metrics", RFC 3148, 725 DOI 10.17487/RFC3148, July 2001, 726 <https://www.rfc-editor.org/info/rfc3148>. 728 [RFC5136] Chimento, P. and J. Ishac, "Defining Network Capacity", 729 RFC 5136, DOI 10.17487/RFC5136, February 2008, 730 <https://www.rfc-editor.org/info/rfc5136>. 732 [RFC7312] Fabini, J. and A. Morton, "Advanced Stream and Sampling 733 Framework for IP Performance Metrics (IPPM)", RFC 7312, 734 DOI 10.17487/RFC7312, August 2014, 735 <https://www.rfc-editor.org/info/rfc7312>. 737 [RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 738 Aitken, P., and A. Akhter, "A Framework for Large-Scale 739 Measurement of Broadband Performance (LMAP)", RFC 7594, 740 DOI 10.17487/RFC7594, September 2015, 741 <https://www.rfc-editor.org/info/rfc7594>. 743 [RFC7799] Morton, A., "Active and Passive Metrics and Methods (with 744 Hybrid Types In-Between)", RFC 7799, DOI 10.17487/RFC7799, 745 May 2016, <https://www.rfc-editor.org/info/rfc7799>. 747 [RFC8337] Mathis, M. and A. Morton, "Model-Based Metrics for Bulk 748 Transport Capacity", RFC 8337, DOI 10.17487/RFC8337, March 749 2018, <https://www.rfc-editor.org/info/rfc8337>. 751 [RFC8468] Morton, A., Fabini, J., Elkins, N., Ackermann, M., and V. 752 Hegde, "IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for 753 the IP Performance Metrics (IPPM) Framework", RFC 8468, 754 DOI 10.17487/RFC8468, November 2018, 755 <https://www.rfc-editor.org/info/rfc8468>. 757 [WorldRecord] 758 Fibre Systems, "Internet speed record of 319Tb/s reached", 759 July 2021, 760 <https://www.fibre-systems.com/news/internet-speed-record-319tbs-reached?utm_source=Adestra&utm_medium=email&utm_content=Internet%20speed%20record%20of%20319Tb%2Fs%20reached&utm_campaign=FS%20July%202021%20Newsline&utm_term=Fibre%20Systems>. 769 [Y.1540] ITU-T Recommendation Y.1540, "Internet protocol data 770 communication service - IP packet transfer and 771 availability performance parameters", December 2019, 772 <https://www.itu.int/rec/T-REC-Y.1540>. 774 Author's Address 776 Al Morton 777 AT&T Labs 778 200 Laurel Avenue South 779 Middletown, NJ 07748 780 USA 782 Phone: +1 732 420 1571 783 Fax: +1 732 368 1192 784 Email: acm@research.att.com