Network Working Group                                         M. Bagnulo
Internet-Draft                                                       UC3M
Intended status: Best Current Practice                          B. Claise
Expires: April 20, 2016                               Cisco Systems, Inc.
                                                                P. Eardley
                                                                        BT
                                                                 A. Morton
                                                                 AT&T Labs
                                                                 A. Akhter
                                                               Consultant
                                                         October 18, 2015

                   Registry for Performance Metrics
                  draft-ietf-ippm-metric-registry-05

Abstract

   This document defines the format for the Performance Metrics
   registry and defines the IANA Registry for Performance Metrics.
   This document also gives a set of guidelines for Registered
   Performance Metric requesters and reviewers.
24 Status of This Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on April 20, 2016. 41 Copyright Notice 43 Copyright (c) 2015 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 59 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 60 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 61 4. Motivation for a Performance Metrics Registry . . . . . . . . 7 62 4.1. Interoperability . . . . . . . . . . . . . . . . . . . . 7 63 4.2. Single point of reference for Performance Metrics . . . . 8 64 4.3. Side benefits . . . . . . . . . . . . . . . . . . . . . . 8 65 5. Criteria for Performance Metrics Registration . . . . . . . . 8 66 6. Performance Metric Registry: Prior attempt . . . . . . . . . 9 67 6.1. Why this Attempt Will Succeed . . . . . . . . . . . . . . 10 68 7. Definition of the Performance Metric Registry . . . . . . . . 10 69 7.1. Summary Category . . . . . . . . . . . . . . . . . . . . 11 70 7.1.1. Identifier . . . . . . . . . . . . . . . . . . . . . 11 71 7.1.2. Name . . . . . . . . . . . . . . . . . . . . . . . . 12 72 7.1.3. URI . . . . . . . . . . . . . . . . . . . . . . . . . 13 73 7.1.4. Description . . . . . . . . . . . . . . . . . . . . . 13 74 7.2. Metric Definition Category . . . . . . . . . . . . . . . 13 75 7.2.1. Reference Definition . . . . . . . . . . . . . . . . 13 76 7.2.2. Fixed Parameters . . . . . . . . . . . . . . . . . . 13 77 7.3. Method of Measurement Category . . . . . . . . . . . . . 14 78 7.3.1. Reference Method . . . . . . . . . . . . . . . . . . 14 79 7.3.2. Packet Generation Stream . . . . . . . . . . . . . . 14 80 7.3.3. Traffic Filter . . . . . . . . . . . . . . . . . . . 15 81 7.3.4. Sampling Distribution . . . . . . . . . . . . . . . . 16 82 7.3.5. Run-time Parameters . . . . . . . . . . . . . . . . . 16 83 7.3.6. Role . . . . . . . . . . . . . . . . . . . . . . . . 17 84 7.4. Output Category . . . . . . . . . . . . . . . . . . . . . 17 85 7.4.1. Type . . . . . . . . . . . . . . . . . . . . . . . . 17 86 7.4.2. Reference Definition . . . . . . . . . . . . . . . . 17 87 7.4.3. Metric Units . . . . . . . . . . . . . . . . . . . . 18 88 7.5. Administrative information . . . . . . . . . . . . . . . 18 89 7.5.1. Status . . . . . . . 
. . . . . . . . . . . . . . . . 18 90 7.5.2. Requester . . . . . . . . . . . . . . . . . . . . . . 18 91 7.5.3. Revision . . . . . . . . . . . . . . . . . . . . . . 18 92 7.5.4. Revision Date . . . . . . . . . . . . . . . . . . . . 18 93 7.6. Comments and Remarks . . . . . . . . . . . . . . . . . . 18 94 8. The Life-Cycle of Registered Performance Metrics . . . . . . 18 95 8.1. Adding new Performance Metrics to the Performance Metrics 96 Registry . . . . . . . . . . . . . . . . . . . . . . . . 19 98 8.2. Revising Registered Performance Metrics . . . . . . . . . 19 99 8.3. Deprecating Registered Performance Metrics . . . . . . . 21 100 9. Security considerations . . . . . . . . . . . . . . . . . . . 22 101 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 102 11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 22 103 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 22 104 12.1. Normative References . . . . . . . . . . . . . . . . . . 22 105 12.2. Informative References . . . . . . . . . . . . . . . . . 23 106 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 26 108 1. Introduction 110 The IETF specifies and uses Performance Metrics of protocols and 111 applications transported over its protocols. Performance metrics are 112 such an important part of the operations of IETF protocols that 113 [RFC6390] specifies guidelines for their development. 115 The definition and use of Performance Metrics in the IETF happens in 116 various working groups (WG), most notably: 118 The "IP Performance Metrics" (IPPM) WG is the WG primarily 119 focusing on Performance Metrics definition at the IETF. 121 The "Metric Blocks for use with RTCP's Extended Report Framework" 122 (XRBLOCK) WG recently specified many Performance Metrics related 123 to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611], 124 which establishes a framework to allow new information to be 125 conveyed in RTCP, supplementing the original report blocks defined 126 in "RTP: A Transport Protocol for Real-Time Applications", 127 [RFC3550]. 129 The "Benchmarking Methodology" WG (BMWG) defined many Performance 130 Metrics for use in laboratory benchmarking of inter-networking 131 technologies. 133 The "IP Flow Information eXport" (IPFIX) concluded WG specified an 134 IANA process for new Information Elements. Some Performance 135 Metrics related Information Elements are proposed on regular 136 basis. 138 The "Performance Metrics for Other Layers" (PMOL) concluded WG, 139 defined some Performance Metrics related to Session Initiation 140 Protocol (SIP) voice quality [RFC6035]. 142 It is expected that more Performance Metrics will be defined in the 143 future, not only IP-based metrics, but also metrics which are 144 protocol-specific and application-specific. 146 However, despite the importance of Performance Metrics, there are two 147 related problems for the industry. First, how to ensure that when 148 one party requests another party to measure (or report or in some way 149 act on) a particular Performance Metric, then both parties have 150 exactly the same understanding of what Performance Metric is being 151 referred to. Second, how to discover which Performance Metrics have 152 been specified, so as to avoid developing new Performance Metric that 153 is very similar, but not quite inter-operable. The problems can be 154 addressed by creating a registry of performance metrics. 
The usual way in which the IETF organizes namespaces is with Internet Assigned Numbers Authority (IANA) registries, and there is currently no Performance Metrics Registry maintained by the IANA.

   This document therefore requests that IANA create and maintain a Performance Metrics Registry, according to the maintenance procedures and the Performance Metrics Registry format defined in this memo.  Although the Registry format is primarily for use by IANA, any other organization that wishes to create a Performance Metrics Registry MAY use the format for its purposes.  The authors make no guarantee of the format's applicability to any possible set of Performance Metrics envisaged by other organizations.  In the rest of this document, unless we explicitly say so, we will refer to the IANA-maintained Performance Metrics Registry as simply the Performance Metrics Registry.

2. Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

   Performance Metric:  A Performance Metric is a quantitative measure of performance, targeted to an IETF-specified protocol or targeted to an application transported over an IETF-specified protocol.  Examples of Performance Metrics are the FTP response time for a complete file download, the DNS response time to resolve the IP address, a database logging time, etc.  This definition is consistent with the definition of metric in [RFC2330] and broader than the definition of performance metric in [RFC6390].

   Registered Performance Metric:  A Registered Performance Metric is a Performance Metric expressed as an entry in the Performance Metric Registry, administered by IANA.  Such a performance metric has met all the registry review criteria defined in this document in order to be included in the registry.

   Performance Metrics Registry:  The IANA registry containing Registered Performance Metrics.

   Proprietary Registry:  A set of metrics that are registered in a proprietary registry, as opposed to the Performance Metrics Registry.

   Performance Metrics Experts:  The Performance Metrics Experts are a group of designated experts [RFC5226] selected by the IESG to validate the Performance Metrics before updating the Performance Metrics Registry.  The Performance Metrics Experts work closely with IANA.

   Parameter:  An input factor defined as a variable in the definition of a Performance Metric.  A numerical or other specified factor forming one of a set that defines a metric or sets the conditions of its operation.  All Parameters must be known in order to measure using a metric and to interpret the results.  There are two types of Parameters: Fixed and Run-time Parameters.  For the Fixed Parameters, the value of the variable is specified in the Performance Metrics Registry entry, and different Fixed Parameter values result in different Registered Performance Metrics.  For the Run-time Parameters, the value of the variable is defined when the metric measurement method is executed, and a given Registered Performance Metric supports multiple values for the parameter.
Although Run-time Parameters do not change the fundamental nature of the Performance Metric's definition, some have substantial influence on the network property being assessed and on the interpretation of the results.

      Note: Consider the case of packet loss in the following two Active Measurement Method cases.  The first case is packet loss as background loss, where the Run-time Parameter set includes a very sparse Poisson stream, and only characterizes the times when packets were lost.  Actual user streams likely see much higher loss at these times, due to tail drop or radio errors.  The second case is packet loss as the inverse of throughput, where the Run-time Parameter set includes a very dense, bursty stream, and characterizes the loss experienced by a stream that approximates a user stream.  These are both "loss metrics", but the difference in interpretation of the results is highly dependent on the Run-time Parameters (at least), to the extreme where we are actually using loss to infer its complement: delivered throughput.

   Active Measurement Method:  Methods of Measurement conducted on traffic which serves only the purpose of measurement and is generated for that reason alone, and whose traffic characteristics are known a priori.  A detailed definition of Active Measurement Method is provided in [I-D.ietf-ippm-active-passive].  Examples of Active Measurement Methods are the measurement methods for the one-way delay metric defined in [RFC2679] and the one for round-trip delay defined in [RFC2681].

   Passive Measurement Method:  Methods of Measurement conducted on network traffic, generated either from the end users or from network elements, that would exist regardless of whether the measurement was being conducted or not.  One characteristic of Passive Measurement Methods is that sensitive information may be observed, and as a consequence, stored in the measurement system.  A detailed definition of Passive Measurement Method is provided in [I-D.ietf-ippm-active-passive].

3. Scope

   This document is meant mainly for two different audiences.  For those defining new Registered Performance Metrics, it provides specifications and best practices to be used in deciding which Registered Performance Metrics are useful for a measurement study, instructions for writing the text for each column of the Registered Performance Metrics, and information on the supporting documentation required for the new Performance Metrics Registry entry (up to and including the publication of one or more RFCs or I-Ds describing it).  For the appointed Performance Metrics Experts and for IANA personnel administering the new IANA Performance Metric Registry, it defines a set of acceptance criteria against which these proposed Registered Performance Metrics should be evaluated.  In addition, this document may be useful to other organizations defining a Performance Metric registry of their own, which can rely on the Performance Metric registry format defined in this document.

   This Performance Metric Registry is applicable to Performance Metrics issued from Active Measurement, Passive Measurement, and any other form of Performance Metric.  This registry is designed to encompass Performance Metrics developed throughout the IETF and especially for the technologies specified in the following working groups: IPPM, XRBLOCK, IPFIX, and BMWG.
This document analyzes a prior attempt to set up a Performance Metric Registry, and the reasons why this design was inadequate [RFC6248].  Finally, this document gives a set of guidelines for requesters and expert reviewers of candidate Registered Performance Metrics.

   This document makes no attempt to populate the Performance Metrics Registry with initial entries.  It does provide a few examples that are merely illustrations and should not be included in the registry at this point in time.

   Based on [RFC5226] Section 4.3, this document is processed as Best Current Practice (BCP) [RFC2026].

4. Motivation for a Performance Metrics Registry

   In this section, we detail several motivations for the Performance Metric Registry.

4.1. Interoperability

   As with any IETF registry, the primary use of a registry is to manage a namespace for its use within one or more protocols.  In the particular case of the Performance Metric Registry, there are two types of protocols that will use the Performance Metrics in the Performance Metrics Registry during their operation (by referring to the Index values):

   o  Control protocol: this type of protocol is used to allow one entity to request that another entity perform a measurement using a specific metric defined by the Performance Metrics Registry.  One particular example is the LMAP framework [RFC7594].  Using the LMAP terminology, the Performance Metrics Registry is used in the LMAP Control protocol to allow a Controller to request a measurement task from one or more Measurement Agents.  In order to enable this use case, the entries of the Performance Metric Registry must be well enough defined to allow a Measurement Agent implementation to trigger a specific measurement task upon the reception of a control protocol message.  This requirement heavily constrains the type of entries that are acceptable for the Performance Metric Registry.

   o  Report protocol: this type of protocol is used to allow an entity to report measurement results to another entity.  By referencing a specific Performance Metrics Registry entry, it is possible to properly characterize the measurement result data being reported.  Using the LMAP terminology, the Performance Metrics Registry is used in the Report protocol to allow a Measurement Agent to report measurement results to a Collector.

   It should be noted that the LMAP framework explicitly allows for using not only the IANA-maintained Performance Metrics Registry but also other registries containing Performance Metrics, either defined by other organizations or private ones.  However, others who are creating Registries to be used in the context of an LMAP framework are encouraged to use the Registry format defined in this document, because this makes it easier for developers of LMAP Measurement Agents (MAs) to programmatically use information found in those other Registries' entries.

4.2. Single point of reference for Performance Metrics

   A Performance Metrics Registry serves as a single point of reference for Performance Metrics defined in different working groups in the IETF.  As we mentioned earlier, there are several WGs that define Performance Metrics in the IETF and it is hard to keep track of all of them.
This results in multiple definitions of similar Performance Metrics that attempt to measure the same phenomena but in slightly different (and incompatible) ways.  Having a registry would allow both the IETF community and external people to have a single list of relevant Performance Metrics defined by the IETF (and others, where appropriate).  The single list is also an essential aspect of communication about Performance Metrics, where different entities that request measurements, execute measurements, and report the results can benefit from a common understanding of the referenced Performance Metric.

4.3. Side benefits

   There are a couple of side benefits of having such a registry.  First, the Performance Metrics Registry could serve as an inventory of useful and used Performance Metrics that are normally supported by different implementations of measurement agents.  Second, the results of measurements using the Performance Metrics would be comparable even if they are performed by different implementations and in different networks, as the Performance Metric is properly defined.  BCP 176 [RFC6576] examines whether the results produced by independent implementations are equivalent in the context of evaluating the completeness and clarity of metric specifications.  This BCP defines the standards track advancement testing for (active) IPPM metrics, and the same process will likely suffice to determine whether Registered Performance Metrics are sufficiently well specified to result in comparable (or equivalent) results.  Registered Performance Metrics which have undergone such testing SHOULD be noted, with a reference to the test results.

5. Criteria for Performance Metrics Registration

   It is neither possible nor desirable to populate the Performance Metrics Registry with all combinations of Parameters of all Performance Metrics.  The Registered Performance Metrics should be:

   1. interpretable by the user,

   2. implementable by the software designer,

   3. deployable by network operators,

   4. accurate, for interoperability and deployment across vendors,

   5. operationally useful, so that they have significant industry interest and/or have seen deployment,

   6. sufficiently tightly defined, so that different values for the Run-time Parameters do not change the fundamental nature of the measurement, nor change the practicality of its implementation.

   In essence, there needs to be evidence that a candidate Registered Performance Metric has significant industry interest, or has seen deployment, and there is agreement that the candidate Registered Performance Metric serves its intended purpose.

6. Performance Metric Registry: Prior attempt

   There was a previous attempt to define a metric registry, RFC 4148 [RFC4148].  However, it was obsoleted by RFC 6248 [RFC6248] because it was "found to be insufficiently detailed to uniquely identify IPPM metrics... [there was too much] variability possible when characterizing a metric exactly", which led to the RFC 4148 registry having "very few users, if any".

   A couple of interesting additional quotes from RFC 6248 might help understand the issues related to that registry.

   1.
"It is not believed to be feasible or even useful to register 412 every possible combination of Type P, metric parameters, and 413 Stream parameters using the current structure of the IPPM Metrics 414 Registry." 416 2. "The registry structure has been found to be insufficiently 417 detailed to uniquely identify IPPM metrics." 419 3. "Despite apparent efforts to find current or even future users, 420 no one responded to the call for interest in the RFC 4148 421 registry during the second half of 2010." 423 The current approach learns from this by tightly defining each 424 Registered Performance Metric with only a few variable (Run-time) 425 Parameters to be specified by the measurement designer, if any. The 426 idea is that entries in the Performance Metrics Registry stem from 427 different measurement methods which require input (Run-time) 428 parameters to set factors like source and destination addresses 429 (which do not change the fundamental nature of the measurement). The 430 downside of this approach is that it could result in a large number 431 of entries in the Performance Metrics Registry. There is agreement 432 that less is more in this context - it is better to have a reduced 433 set of useful metrics rather than a large set of metrics, some with 434 with questionable usefulness. 436 6.1. Why this Attempt Will Succeed 438 As mentioned in the previous section, one of the main issues with the 439 previous registry was that the metrics contained in the registry were 440 too generic to be useful. This document specifies stricter criteria 441 for performance metric registration (see section 6), and imposes a 442 group of Performance Metrics Experts that will provide guidelines to 443 assess if a Performance Metric is properly specified. 445 Another key difference between this attempt and the previous one is 446 that in this case there is at least one clear user for the 447 Performance Metrics Registry: the LMAP framework and protocol. 448 Because the LMAP protocol will use the Performance Metrics Registry 449 values in its operation, this actually helps to determine if a metric 450 is properly defined. In particular, since we expect that the LMAP 451 control protocol will enable a controller to request a measurement 452 agent to perform a measurement using a given metric by embedding the 453 Performance Metric Registry value in the protocol, a metric is 454 properly specified if it is defined well-enough so that it is 455 possible (and practical) to implement the metric in the measurement 456 agent. This was the failure of the previous attempt: a registry 457 entry with an undefined Type-P (section 13 of RFC 2330 [RFC2330]) 458 allows implementation to be ambiguous. 460 7. Definition of the Performance Metric Registry 462 In this section we define the columns of the Performance Metric 463 Registry. This Performance Metric Registry is applicable to 464 Performance Metrics issued from Active Measurement, Passive 465 Measurement, and any other form of Performance Metric. Because of 466 that, it may be the case that some of the columns defined are not 467 applicable for a given type of metric. If this is the case, the 468 column(s) SHOULD be populated with the "NA" value (Non Applicable). 469 However, the "NA" value MUST NOT be used by any metric in the 470 following columns: Identifier, Name, URI, Status, Requester, 471 Revision, Revision Date, Description. In addition, it may be 472 possible that, in the future, a new type of metric requires 473 additional columns. 
Should that be the case, it is possible to add new columns to the registry.  The specification defining the new column(s) must define how to populate the new column(s) for existing entries.

   The columns of the Performance Metric Registry are defined next.  The columns are grouped into "Categories" to facilitate the use of the registry.  Categories are described at the 7.x heading level, and columns are at the 7.x.y heading level.  The figure below illustrates this organization.  An entry (row) therefore gives a complete description of a Registered Performance Metric.

   Each column serves as a check-list item and helps to avoid omissions during registration and expert review.

   Registry Categories and Columns, shown as
                     Category
                     ------------------
                     Column | Column |

   Summary
   -------------------------------
   Identifier | Name | URIs | Description |

   Metric Definition
   -----------------------------------------
   Reference Definition | Fixed Parameters |

   Method of Measurement
   ---------------------------------------------------------------------
   Reference | Packet     | Traffic | Sampling     | Run-time   | Role |
   Method    | Generation | Filter  | Distribution | Parameters |      |
             | Stream     |

   Output
   -----------------------------
   | Type | Reference  | Units |
   |      | Definition |       |

   Administrative Information
   ----------------------------------
   Status |Request | Rev | Rev.Date |

   Comments and Remarks
   --------------------

7.1. Summary Category

7.1.1. Identifier

   A numeric identifier for the Registered Performance Metric.  This identifier MUST be unique within the Performance Metric Registry.

   The Registered Performance Metric unique identifier is a 16-bit integer (range 0 to 65535).  When adding newly Registered Performance Metrics to the Performance Metric Registry, IANA should assign the lowest available identifier to the next Registered Performance Metric.

7.1.2. Name

   As the name of a Registered Performance Metric is the first thing a potential implementor will use when determining whether it is suitable for a given application, it is important to be as precise and descriptive as possible.

   New names of Registered Performance Metrics:

   1. "MUST be chosen carefully to describe the Registered Performance Metric and the context in which it will be used."

   2. "MUST be unique within the Performance Metric Registry."

   3. "MUST use capital letters for the first letter of each component.  All other letters MUST be lowercase, even for acronyms.  Exceptions are made for acronyms containing a mixture of lowercase and capital letters, such as 'IPv4' and 'IPv6'."

   4. MUST use '_' between each component of the Registered Performance Metric name.

   5. MUST start with the prefix Act_ for an active measurement Registered Performance Metric.

   6. MUST start with the prefix Pas_ for a passive monitoring Registered Performance Metric.

   7. Other types of Performance Metric should define a proper prefix for identifying the type.

   8. The remaining rules for naming are left for the Performance Metric Experts to determine as they gather experience, so this is an area of planned update by a future RFC.

   An example is "Act_UDP_Latency_Poisson_mean" for an active monitoring UDP latency metric using a Poisson stream of packets and producing the mean as output.
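   As an informal illustration of some of the rules above, the sketch below (Python, not part of the registry specification) checks a candidate name against rules 2 and 4-6 only; rule 3 (capitalization and its acronym exceptions) and the remaining rules left to the Performance Metric Experts are deliberately not encoded.  The function and variable names are hypothetical.

      # Minimal sketch: checks naming rules 2 and 4-6 above.  Rule 3
      # (capitalization and acronym exceptions) is intentionally omitted.
      EXISTING_NAMES = set()     # hypothetical: names already registered

      def check_name(name):
          if name in EXISTING_NAMES:
              return "not unique within the registry (rule 2)"
          if not (name.startswith("Act_") or name.startswith("Pas_")):
              return "missing a type prefix such as Act_ or Pas_ (rules 5-7)"
          if any(component == "" for component in name.split("_")):
              return "components must be separated by single '_' (rule 4)"
          return "ok"

      print(check_name("Act_UDP_Latency_Poisson_mean"))   # prints: ok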
Some examples of names of passive metrics might be: Pas_L3_L4_Octets (Layer 3 and 4 level accounting of bytes observed), Pas_DNS_RTT (Round Trip Time of DNS query/response in observed traffic), and Pas_L3_TCP_RTT (Passively observed round trip time in the TCP handshake, organized with L3 addresses).

7.1.3. URI

   The URIs column MUST contain a URI [RFC3986] that uniquely identifies the metric.  This URI is a URN [RFC2141].  The URI is automatically generated by prepending the prefix urn:ietf:params:ippm:metric: to the metric name.  The resulting URI is globally unique.

   The URIs column MUST contain a second URI which is a URL [RFC3986] and uniquely identifies and locates the metric entry so it is accessible through the Internet.  The URL points to a file containing the information of exactly one registry entry.  The separate files for different entries can be more easily edited and re-used when preparing new entries.  The exact composition of each metric URL will be determined by IANA, but there will be some overlap with the URN described above.

7.1.4. Description

   A Registered Performance Metric description is a written representation of a particular Performance Metrics Registry entry.  It supplements the Registered Performance Metric name to help Performance Metrics Registry users select relevant Registered Performance Metrics.

7.2. Metric Definition Category

   This category includes columns to prompt all necessary details related to the metric definition, including the RFC reference and values of input factors, called fixed parameters, which are left open in the RFC but have a particular value defined by the performance metric.

7.2.1. Reference Definition

   This entry provides a reference (or references) to the relevant section(s) of the document(s) that define the metric, as well as any supplemental information needed to ensure an unambiguous definition for implementations.  The reference needs to be an immutable document, such as an RFC; for other standards bodies, it is likely to be necessary to reference a specific, dated version of a specification.

7.2.2. Fixed Parameters

   Fixed Parameters are Parameters whose value must be specified in the Performance Metrics Registry.  The measurement system uses these values.

   Where referenced metrics supply a list of Parameters as part of their descriptive template, a sub-set of the Parameters will be designated as Fixed Parameters.  For example, for active metrics, Fixed Parameters determine most or all of the IPPM Framework convention "packets of Type-P" as described in [RFC2330], such as transport protocol, payload length, TTL, etc.  An example for passive metrics is an RTP packet loss calculation that relies on the validation of a packet as RTP, which is a multi-packet validation controlled by MIN_SEQUENTIAL as defined by [RFC3550].  Varying MIN_SEQUENTIAL values can alter the loss report, and this value could be set as a Fixed Parameter.

   In any case, Parameters MUST have well-defined names.  For human readers, the hanging indent style will work, and the names and definitions that do not appear in the Reference Method Specification should appear in this column.

   A Parameter which is a Fixed Parameter for one Performance Metrics Registry entry may be designated as a Run-time Parameter for another Performance Metrics Registry entry.
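   As a non-normative sketch of how the Summary and Metric Definition columns above fit together, the Python fragment below builds the URN described in Section 7.1.3 and represents some example Fixed Parameters; the entry fields and parameter values shown are assumptions for illustration only, not registry content.

      # Sketch only: the URN prefix comes from Section 7.1.3; the entry
      # fields and Fixed Parameter values are illustrative assumptions.
      URN_PREFIX = "urn:ietf:params:ippm:metric:"

      def metric_urn(name):
          # The URN is generated by prepending the prefix to the name.
          return URN_PREFIX + name

      example_entry = {
          "Name": "Act_UDP_Latency_Poisson_mean",
          "URI": metric_urn("Act_UDP_Latency_Poisson_mean"),
          "Fixed Parameters": {          # Type-P style factors (examples)
              "transport protocol": "UDP",
              "payload length": 100,     # bytes, assumed value
          },
      }

      print(example_entry["URI"])
      # urn:ietf:params:ippm:metric:Act_UDP_Latency_Poisson_mean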
7.3. Method of Measurement Category

   This category includes columns for references to relevant sections of the RFC(s) and any supplemental information needed to ensure an unambiguous method for implementations.

7.3.1. Reference Method

   This entry provides references to relevant sections of the RFC(s) describing the method of measurement, as well as any supplemental information needed to ensure unambiguous interpretation for implementations referring to the RFC text.

   Specifically, this section should include pointers to pseudocode or actual code that could be used for an unambiguous implementation.

7.3.2. Packet Generation Stream

   This column applies to Performance Metrics that generate traffic as part of their Measurement Method, including but not necessarily limited to Active metrics.  The generated traffic is referred to as a stream, and this column describes its characteristics.

   Each entry for this column contains the following information:

   o  Value: the name of the packet stream scheduling discipline

   o  Reference: the specification where the stream is defined

   The packet generation stream may require parameters such as the average packet rate and the distribution truncation value for streams with Poisson-distributed inter-packet sending times.  In case such parameters are needed, they should be included either in the Fixed Parameters column or in the Run-time Parameters column, depending on whether they are fixed or are an input for the metric.

   The simplest example of stream specification is Singleton scheduling (see [RFC2330]), where a single atomic measurement is conducted.  Each atomic measurement could consist of sending a single packet (such as a DNS request) or sending several packets (for example, to request a webpage).  Other streams support a series of atomic measurements in a "sample", with a schedule defining the timing between each transmitted packet and subsequent measurement.  Principally, two different streams are used in IPPM metrics: Poisson distributed as described in [RFC2330] and Periodic as described in [RFC3432].  Both Poisson and Periodic have their own unique parameters, and the relevant set of parameter names and values should be included either in the Fixed Parameters column or in the Run-time Parameters column.

7.3.3. Traffic Filter

   This column applies to Performance Metrics that observe packets flowing through (the device with) the measurement agent, i.e., traffic that is not necessarily addressed to the measurement agent.  This includes but is not limited to Passive Metrics.  The filter specifies the traffic that is measured.  This includes protocol field values/ranges, such as address ranges, and flow or session identifiers.

   The traffic filter itself depends on the needs of the metric and on a balance between the operator's measurement needs and the user's need for privacy.  Mechanics for conveying the filter criteria might be the BPF (Berkeley Packet Filter) or PSAMP [RFC5475] Property Match Filtering, which reuses IPFIX [RFC7012].  An example BPF string for matching TCP/80 traffic to remote destination net 192.0.2.0/24 would be "dst net 192.0.2.0/24 and tcp dst port 80".  More complex filter engines might be supported by the implementation, for example allowing matching using Deep Packet Inspection (DPI) technology.
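   As an informal illustration of applying such a BPF filter, the sketch below (which assumes the third-party scapy Python library and sufficient capture privileges, and anticipates the Type/Value structure described next) counts packets matching the example filter string above during a short observation window.

      # Sketch only: scapy is an assumption of this example; any
      # BPF-capable capture tool could be used instead.
      from scapy.all import sniff

      traffic_filter = {
          "Type": "BPF",
          "Value": "dst net 192.0.2.0/24 and tcp dst port 80",
      }

      # Observe for 10 seconds and count packets matching the filter.
      packets = sniff(filter=traffic_filter["Value"], timeout=10)
      print(len(packets), "packets matched the traffic filter")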
The traffic filter includes the following information:

   Type: the type of traffic filter used, e.g., BPF, PSAMP, OpenFlow rule, etc., as defined by a normative reference.

   Value: the actual set of rules expressed.

7.3.4. Sampling Distribution

   The sampling distribution defines which of the packets that match the traffic filter are actually used for the measurement.  One possibility is "all", which implies that all packets matching the Traffic Filter are considered, but there may be other sampling strategies.  It includes the following information:

   Value: the name of the sampling distribution.

   Reference definition: pointer to the specification where the sampling distribution is properly defined.

   The sampling distribution may require parameters.  In case such parameters are needed, they should be included either in the Fixed Parameters column or in the Run-time Parameters column, depending on whether they are fixed or are an input for the metric.

   Sampling and Filtering Techniques for IP Packet Selection are documented in the PSAMP (Packet Sampling) document [RFC5475], while the Framework for Packet Selection and Reporting [RFC5474] provides more background information.  The sampling distribution parameters might be expressed in terms of the Information Model for Packet Sampling Exports [RFC5477] and the Flow Selection Techniques [RFC7014].

7.3.5. Run-time Parameters

   Run-time Parameters are Parameters that must be determined, configured into the measurement system, and reported with the results for the context to be complete.  However, the values of these parameters are not specified in the Performance Metrics Registry (unlike the Fixed Parameters); rather, these parameters are listed as an aid to the measurement system implementer or user (they must be left as variables, and supplied on execution).

   Where metrics supply a list of Parameters as part of their descriptive template, a sub-set of the Parameters will be designated as Run-time Parameters.

   Parameters MUST have well-defined names.  For human readers, the hanging indent style will work, and the names and definitions that do not appear in the Reference Method Specification should appear in this column.

   A Data Format for each Run-time Parameter MUST be specified in this column, to simplify the control and implementation of measurement devices.  For example, parameters that include an IPv4 address can be encoded as a 32-bit integer (i.e., a binary, base64-encoded value) or as the ip-address type defined in [RFC6991].  The actual encoding(s) used must be explicitly defined for each Run-time Parameter.

   Examples of Run-time Parameters include IP addresses, measurement point designations, start times and end times for measurement, and other information essential to the method of measurement.

7.3.6. Role

   In some methods of measurement, there may be several roles defined; e.g., in a one-way packet delay active measurement, there is one measurement agent that generates the packets and another one that receives them.  This column contains the name of the role for this particular entry.  In the previous example, there should be two entries in the registry, one for each role, so that a measurement agent instructed to perform the one-way delay source metric knows that it is supposed to generate packets.
The values for this field are defined in the reference method of measurement.

7.4. Output Category

   For entries which involve a stream and many singleton measurements, a statistic may be specified in this column to summarize the results to a single value.  If the complete set of measured singletons is output, this will be specified here.

   Some metrics embed one specific statistic in the reference metric definition, while others allow several output types or statistics.

7.4.1. Type

   This column contains the name of the output type.  The output type defines the type of result that the metric produces.  It can be the raw results or it can be some form of statistic.  The specification of the output type must define the format of the output.  In some systems, format specifications will simplify both measurement implementation and collection/storage tasks.  Note that if two different statistics are required from a single measurement (for example, both "Xth percentile mean" and "Raw"), then a new output type must be defined ("Xth percentile mean AND Raw").

7.4.2. Reference Definition

   This column contains a pointer to the specification where the output type is defined.

7.4.3. Metric Units

   The measured results must be expressed using some standard dimension or units of measure.  This column provides the units.

   When a sample of singletons (see [RFC2330] for definitions of these terms) is collected, this entry will specify the units for each measured value.

7.5. Administrative information

7.5.1. Status

   The status of the specification of this Registered Performance Metric.  Allowed values are 'current' and 'deprecated'.  All newly defined Registered Performance Metrics have 'current' status.

7.5.2. Requester

   The requester for the Registered Performance Metric.  The requester MAY be a document, such as an RFC, or a person.

7.5.3. Revision

   The revision number of a Registered Performance Metric, starting at 0 for Registered Performance Metrics at the time of definition and incremented by one for each revision.

7.5.4. Revision Date

   The date of acceptance or of the most recent revision for the Registered Performance Metric.

7.6. Comments and Remarks

   Besides providing additional details which do not appear in other categories, this open Category (single column) allows for unforeseen issues to be addressed by simply updating this informational entry.

8. The Life-Cycle of Registered Performance Metrics

   Once a Performance Metric or set of Performance Metrics has been identified for a given application, candidate Performance Metrics Registry entry specifications in accordance with Section 7 are submitted to IANA to follow the process for review by the Performance Metric Experts, as defined below.  This process is also used for other changes to the Performance Metric Registry, such as deprecation or revision, as described later in this section.

   It is also desirable that the author(s) of a candidate Performance Metrics Registry entry seek review in the relevant IETF working group, or offer the opportunity for review on the WG mailing list.

8.1.
Adding new Performance Metrics to the Performance Metrics Registry 869 Requests to change Registered Performance Metrics in the Performance 870 Metric Registry are submitted to IANA, which forwards the request to 871 a designated group of experts (Performance Metric Experts) appointed 872 by the IESG; these are the reviewers called for by the Expert Review 873 RFC5226 policy defined for the Performance Metric Registry. The 874 Performance Metric Experts review the request for such things as 875 compliance with this document, compliance with other applicable 876 Performance Metric-related RFCs, and consistency with the currently 877 defined set of Registered Performance Metrics. 879 Authors are expected to review compliance with the specifications in 880 this document to check their submissions before sending them to IANA. 882 The Performance Metric Experts should endeavor to complete referred 883 reviews in a timely manner. If the request is acceptable, the 884 Performance Metric Experts signify their approval to IANA, which 885 updates the Performance Metric Registry. If the request is not 886 acceptable, the Performance Metric Experts can coordinate with the 887 requester to change the request to be compliant. The Performance 888 Metric Experts may also choose in exceptional circumstances to reject 889 clearly frivolous or inappropriate change requests outright. 891 This process should not in any way be construed as allowing the 892 Performance Metric Experts to overrule IETF consensus. Specifically, 893 any Registered Performance Metrics that were added with IETF 894 consensus require IETF consensus for revision or deprecation. 896 Decisions by the Performance Metric Experts may be appealed as in 897 Section 7 of RFC5226. 899 8.2. Revising Registered Performance Metrics 901 A request for Revision is only permissible when the changes maintain 902 backward-compatibility with implementations of the prior Performance 903 Metrics Registry entry describing a Registered Performance Metric 904 (entries with lower revision numbers, but the same Identifier and 905 Name). 907 The purpose of the Status field in the Performance Metric Registry is 908 to indicate whether the entry for a Registered Performance Metric is 909 'current' or 'deprecated'. 911 In addition, no policy is defined for revising IANA Performance 912 Metric entries or addressing errors therein. To be certain, changes 913 and deprecations within the Performance Metric Registry are not 914 encouraged, and should be avoided to the extent possible. However, 915 in recognition that change is inevitable, the provisions of this 916 section address the need for revisions. 918 Revisions are initiated by sending a candidate Registered Performance 919 Metric definition to IANA, as in Section 8, identifying the existing 920 Performance Metrics Registry entry. 922 The primary requirement in the definition of a policy for managing 923 changes to existing Registered Performance Metrics is avoidance of 924 interoperability problems; Performance Metric Experts must work to 925 maintain interoperability above all else. Changes to Registered 926 Performance Metrics may only be done in an inter-operable way; 927 necessary changes that cannot be done in a way to allow 928 interoperability with unchanged implementations must result in the 929 creation of a new Registered Performance Metric and possibly the 930 deprecation of the earlier metric. 932 A change to a Registered Performance Metric is held to be backward- 933 compatible only when: 935 1. 
"it involves the correction of an error that is obviously only 936 editorial; or" 938 2. "it corrects an ambiguity in the Registered Performance Metric's 939 definition, which itself leads to issues severe enough to prevent 940 the Registered Performance Metric's usage as originally defined; 941 or" 943 3. "it corrects missing information in the metric definition without 944 changing its meaning (e.g., the explicit definition of 'quantity' 945 semantics for numeric fields without a Data Type Semantics 946 value); or" 948 4. "it harmonizes with an external reference that was itself 949 corrected." 951 If an Performance Metric revision is deemed permissible by the 952 Performance Metric Experts, according to the rules in this document, 953 IANA makes the change in the Performance Metric Registry. The 954 requester of the change is appended to the requester in the 955 Performance Metrics Registry. 957 Each Registered Performance Metric in the Performance Metrics 958 Registry has a revision number, starting at zero. Each change to a 959 Registered Performance Metric following this process increments the 960 revision number by one. 962 When a revised Registered Performance Metric is accepted into the 963 Performance Metric Registry, the date of acceptance of the most 964 recent revision is placed into the revision Date column of the 965 registry for that Registered Performance Metric. 967 Where applicable, additions to Registered Performance Metrics in the 968 form of text Comments or Remarks should include the date, but such 969 additions may not constitute a revision according to this process. 971 Older version(s) of the updated metric entries are kept in the 972 registry for archival purposes. The older entries are kept with all 973 fields unmodified (version, revision date) except for the status 974 field that is changed to "Deprecated". 976 8.3. Deprecating Registered Performance Metrics 978 Changes that are not permissible by the above criteria for Registered 979 Performance Metric's revision may only be handled by deprecation. A 980 Registered Performance Metric MAY be deprecated and replaced when: 982 1. "the Registered Performance Metric definition has an error or 983 shortcoming that cannot be permissibly changed as in 984 Section Revising Registered Performance Metrics; or" 986 2. "the deprecation harmonizes with an external reference that was 987 itself deprecated through that reference's accepted deprecation 988 method; or" 990 A request for deprecation is sent to IANA, which passes it to the 991 Performance Metric Expert for review. When deprecating an 992 Performance Metric, the Performance Metric description in the 993 Performance Metric Registry must be updated to explain the 994 deprecation, as well as to refer to any new Performance Metrics 995 created to replace the deprecated Performance Metric. 997 The revision number of a Registered Performance Metric is incremented 998 upon deprecation, and the revision Date updated, as with any 999 revision. 1001 The use of deprecated Registered Performance Metrics should result in 1002 a log entry or human-readable warning by the respective application. 1004 Names and Metric ID of deprecated Registered Performance Metrics must 1005 not be reused. 1007 The deprecated entries are kept with all fields unmodified, except 1008 the version, revision date, and the status field (changed to 1009 "Deprecated"). 1011 9. Security considerations 1013 This draft doesn't introduce any new security considerations for the 1014 Internet. 
However, the definition of Performance Metrics may 1015 introduce some security concerns, and should be reviewed with 1016 security in mind. 1018 10. IANA Considerations 1020 This document specifies the procedure for Performance Metrics 1021 Registry setup. IANA is requested to create a new registry for 1022 Performance Metrics called "Registered Performance Metrics" with the 1023 columns defined in Section 7. 1025 New assignments for Performance Metric Registry will be administered 1026 by IANA through Expert Review [RFC5226], i.e., review by one of a 1027 group of experts, the Performance Metric Experts, appointed by the 1028 IESG upon recommendation of the Transport Area Directors. The 1029 experts can be initially drawn from the Working Group Chairs and 1030 document editors of the Performance Metrics Directorate among other 1031 sources of experts. 1033 The Identifier values from 64512 to 65536 are reserved for private 1034 use. The name starting with the prefix Priv- are reserved for 1035 private use. 1037 This document requests the allocation of the URI prefix 1038 urn:ietf:params:ippm:metric for the purpose of generating URIs for 1039 Registered Performance Metrics. 1041 11. Acknowledgments 1043 Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading 1044 some brainstorming sessions on this topic. Thanks to Barbara Stark 1045 and Juergen Schoenwaelder for the detailed feedback and suggestions. 1047 12. References 1049 12.1. Normative References 1051 [RFC2026] Bradner, S., "The Internet Standards Process -- Revision 1052 3", BCP 9, RFC 2026, DOI 10.17487/RFC2026, October 1996, 1053 . 1055 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1056 Requirement Levels", BCP 14, RFC 2119, 1057 DOI 10.17487/RFC2119, March 1997, 1058 . 1060 [RFC2141] Moats, R., "URN Syntax", RFC 2141, DOI 10.17487/RFC2141, 1061 May 1997, . 1063 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1064 "Framework for IP Performance Metrics", RFC 2330, 1065 DOI 10.17487/RFC2330, May 1998, 1066 . 1068 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1069 Resource Identifier (URI): Generic Syntax", STD 66, 1070 RFC 3986, DOI 10.17487/RFC3986, January 2005, 1071 . 1073 [RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics 1074 Registry", BCP 108, RFC 4148, DOI 10.17487/RFC4148, August 1075 2005, . 1077 [RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an 1078 IANA Considerations Section in RFCs", BCP 26, RFC 5226, 1079 DOI 10.17487/RFC5226, May 2008, 1080 . 1082 [RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics 1083 (IPPM) Registry of Metrics Are Obsolete", RFC 6248, 1084 DOI 10.17487/RFC6248, April 2011, 1085 . 1087 [RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New 1088 Performance Metric Development", BCP 170, RFC 6390, 1089 DOI 10.17487/RFC6390, October 2011, 1090 . 1092 [RFC6576] Geib, R., Ed., Morton, A., Fardid, R., and A. Steinmitz, 1093 "IP Performance Metrics (IPPM) Standard Advancement 1094 Testing", BCP 176, RFC 6576, DOI 10.17487/RFC6576, March 1095 2012, . 1097 12.2. Informative References 1099 [RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way 1100 Delay Metric for IPPM", RFC 2679, DOI 10.17487/RFC2679, 1101 September 1999, . 1103 [RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip 1104 Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681, 1105 September 1999, . 1107 [RFC3393] Demichelis, C. and P. 
Chimento, "IP Packet Delay Variation 1108 Metric for IP Performance Metrics (IPPM)", RFC 3393, 1109 DOI 10.17487/RFC3393, November 2002, 1110 . 1112 [RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network 1113 performance measurement with periodic streams", RFC 3432, 1114 DOI 10.17487/RFC3432, November 2002, 1115 . 1117 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 1118 Jacobson, "RTP: A Transport Protocol for Real-Time 1119 Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550, 1120 July 2003, . 1122 [RFC3611] Friedman, T., Ed., Caceres, R., Ed., and A. Clark, Ed., 1123 "RTP Control Protocol Extended Reports (RTCP XR)", 1124 RFC 3611, DOI 10.17487/RFC3611, November 2003, 1125 . 1127 [RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session 1128 Description Protocol", RFC 4566, DOI 10.17487/RFC4566, 1129 July 2006, . 1131 [RFC5474] Duffield, N., Ed., Chiou, D., Claise, B., Greenberg, A., 1132 Grossglauser, M., and J. Rexford, "A Framework for Packet 1133 Selection and Reporting", RFC 5474, DOI 10.17487/RFC5474, 1134 March 2009, . 1136 [RFC5475] Zseby, T., Molina, M., Duffield, N., Niccolini, S., and F. 1137 Raspall, "Sampling and Filtering Techniques for IP Packet 1138 Selection", RFC 5475, DOI 10.17487/RFC5475, March 2009, 1139 . 1141 [RFC5477] Dietz, T., Claise, B., Aitken, P., Dressler, F., and G. 1142 Carle, "Information Model for Packet Sampling Exports", 1143 RFC 5477, DOI 10.17487/RFC5477, March 2009, 1144 . 1146 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation 1147 Applicability Statement", RFC 5481, DOI 10.17487/RFC5481, 1148 March 2009, . 1150 [RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, 1151 "Network Time Protocol Version 4: Protocol and Algorithms 1152 Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010, 1153 . 1155 [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, 1156 "Session Initiation Protocol Event Package for Voice 1157 Quality Reporting", RFC 6035, DOI 10.17487/RFC6035, 1158 November 2010, . 1160 [RFC6776] Clark, A. and Q. Wu, "Measurement Identity and Information 1161 Reporting Using a Source Description (SDES) Item and an 1162 RTCP Extended Report (XR) Block", RFC 6776, 1163 DOI 10.17487/RFC6776, October 2012, 1164 . 1166 [RFC6792] Wu, Q., Ed., Hunt, G., and P. Arden, "Guidelines for Use 1167 of the RTP Monitoring Framework", RFC 6792, 1168 DOI 10.17487/RFC6792, November 2012, 1169 . 1171 [RFC7003] Clark, A., Huang, R., and Q. Wu, Ed., "RTP Control 1172 Protocol (RTCP) Extended Report (XR) Block for Burst/Gap 1173 Discard Metric Reporting", RFC 7003, DOI 10.17487/RFC7003, 1174 September 2013, . 1176 [RFC7012] Claise, B., Ed. and B. Trammell, Ed., "Information Model 1177 for IP Flow Information Export (IPFIX)", RFC 7012, 1178 DOI 10.17487/RFC7012, September 2013, 1179 . 1181 [RFC7014] D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow 1182 Selection Techniques", RFC 7014, DOI 10.17487/RFC7014, 1183 September 2013, . 1185 [RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 1186 Aitken, P., and A. Akhter, "A Framework for Large-Scale 1187 Measurement of Broadband Performance (LMAP)", RFC 7594, 1188 DOI 10.17487/RFC7594, September 2015, 1189 . 1191 [I-D.ietf-ippm-active-passive] 1192 Morton, A., "Active and Passive Metrics and Methods (and 1193 everything in-between, or Hybrid)", draft-ietf-ippm- 1194 active-passive-01 (work in progress), September 2015. 
1196 [RFC6991] Schoenwaelder, J., Ed., "Common YANG Data Types", 1197 RFC 6991, DOI 10.17487/RFC6991, July 2013, 1198 . 1200 Authors' Addresses 1202 Marcelo Bagnulo 1203 Universidad Carlos III de Madrid 1204 Av. Universidad 30 1205 Leganes, Madrid 28911 1206 SPAIN 1208 Phone: 34 91 6249500 1209 Email: marcelo@it.uc3m.es 1210 URI: http://www.it.uc3m.es 1212 Benoit Claise 1213 Cisco Systems, Inc. 1214 De Kleetlaan 6a b1 1215 1831 Diegem 1216 Belgium 1218 Email: bclaise@cisco.com 1220 Philip Eardley 1221 BT 1222 Adastral Park, Martlesham Heath 1223 Ipswich 1224 ENGLAND 1226 Email: philip.eardley@bt.com 1228 Al Morton 1229 AT&T Labs 1230 200 Laurel Avenue South 1231 Middletown, NJ 1232 USA 1234 Email: acmorton@att.com 1235 Aamer Akhter 1236 Consultant 1237 118 Timber Hitch 1238 Cary, NC 1239 USA 1241 Email: aakhter@gmail.com