2 Network Working Group M. Bagnulo 3 Internet-Draft UC3M 4 Intended status: Best Current Practice B. Claise 5 Expires: September 22, 2016 Cisco Systems, Inc. 6 P. Eardley 7 BT 8 A. Morton 9 AT&T Labs 10 A. Akhter 11 Consultant 12 March 21, 2016 14 Registry for Performance Metrics 15 draft-ietf-ippm-metric-registry-06 17 Abstract 19 This document defines the format for the Performance Metrics registry 20 and defines the IANA Registry for Performance Metrics. This document 21 also gives a set of guidelines for Registered Performance Metric 22 requesters and reviewers. 
24 Status of This Memo 26 This Internet-Draft is submitted in full conformance with the 27 provisions of BCP 78 and BCP 79. 29 Internet-Drafts are working documents of the Internet Engineering 30 Task Force (IETF). Note that other groups may also distribute 31 working documents as Internet-Drafts. The list of current Internet- 32 Drafts is at http://datatracker.ietf.org/drafts/current/. 34 Internet-Drafts are draft documents valid for a maximum of six months 35 and may be updated, replaced, or obsoleted by other documents at any 36 time. It is inappropriate to use Internet-Drafts as reference 37 material or to cite them other than as "work in progress." 39 This Internet-Draft will expire on September 22, 2016. 41 Copyright Notice 43 Copyright (c) 2016 IETF Trust and the persons identified as the 44 document authors. All rights reserved. 46 This document is subject to BCP 78 and the IETF Trust's Legal 47 Provisions Relating to IETF Documents 48 (http://trustee.ietf.org/license-info) in effect on the date of 49 publication of this document. Please review these documents 50 carefully, as they describe your rights and restrictions with respect 51 to this document. Code Components extracted from this document must 52 include Simplified BSD License text as described in Section 4.e of 53 the Trust Legal Provisions and are provided without warranty as 54 described in the Simplified BSD License. 56 Table of Contents 58 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 59 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 60 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 61 4. Motivation for a Performance Metrics Registry . . . . . . . . 7 62 4.1. Interoperability . . . . . . . . . . . . . . . . . . . . 7 63 4.2. Single point of reference for Performance Metrics . . . . 8 64 4.3. Side benefits . . . . . . . . . . . . . . . . . . . . . . 8 65 5. Criteria for Performance Metrics Registration . . . . . . . . 8 66 6. Performance Metric Registry: Prior attempt . . . . . . . . . 9 67 6.1. Why this Attempt Will Succeed . . . . . . . . . . . . . . 10 68 7. Definition of the Performance Metric Registry . . . . . . . . 10 69 7.1. Summary Category . . . . . . . . . . . . . . . . . . . . 11 70 7.1.1. Identifier . . . . . . . . . . . . . . . . . . . . . 11 71 7.1.2. Name . . . . . . . . . . . . . . . . . . . . . . . . 12 72 7.1.3. URIs . . . . . . . . . . . . . . . . . . . . . . . . 13 73 7.1.4. Description . . . . . . . . . . . . . . . . . . . . . 13 74 7.2. Metric Definition Category . . . . . . . . . . . . . . . 13 75 7.2.1. Reference Definition . . . . . . . . . . . . . . . . 13 76 7.2.2. Fixed Parameters . . . . . . . . . . . . . . . . . . 14 77 7.3. Method of Measurement Category . . . . . . . . . . . . . 14 78 7.3.1. Reference Method . . . . . . . . . . . . . . . . . . 14 79 7.3.2. Packet Stream Generation . . . . . . . . . . . . . . 15 80 7.3.3. Traffic Filter . . . . . . . . . . . . . . . . . . . 15 81 7.3.4. Sampling Distribution . . . . . . . . . . . . . . . . 16 82 7.3.5. Run-time Parameters . . . . . . . . . . . . . . . . . 16 83 7.3.6. Role . . . . . . . . . . . . . . . . . . . . . . . . 17 84 7.4. Output Category . . . . . . . . . . . . . . . . . . . . . 17 85 7.4.1. Type . . . . . . . . . . . . . . . . . . . . . . . . 18 86 7.4.2. Reference Definition . . . . . . . . . . . . . . . . 18 87 7.4.3. Metric Units . . . . . . . . . . . . . . . . . . . . 18 88 7.5. Administrative information . . . . . . . . . . . . . . . 18 89 7.5.1. Status . . . . . . 
. . . . . . . . . . . . . . . . . 18 90 7.5.2. Requester . . . . . . . . . . . . . . . . . . . . . . 18 91 7.5.3. Revision . . . . . . . . . . . . . . . . . . . . . . 18 92 7.5.4. Revision Date . . . . . . . . . . . . . . . . . . . . 19 93 7.6. Comments and Remarks . . . . . . . . . . . . . . . . . . 19 94 8. The Life-Cycle of Registered Performance Metrics . . . . . . 19 95 8.1. Adding new Performance Metrics to the Performance Metrics 96 Registry . . . . . . . . . . . . . . . . . . . . . . . . 19 98 8.2. Revising Registered Performance Metrics . . . . . . . . . 20 99 8.3. Deprecating Registered Performance Metrics . . . . . . . 21 100 9. Security considerations . . . . . . . . . . . . . . . . . . . 22 101 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 102 11. Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . 23 103 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 23 104 12.1. Normative References . . . . . . . . . . . . . . . . . . 23 105 12.2. Informative References . . . . . . . . . . . . . . . . . 24 106 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 26 108 1. Introduction 110 The IETF specifies and uses Performance Metrics of protocols and 111 applications transported over its protocols. Performance metrics are 112 such an important part of the operations of IETF protocols that 113 [RFC6390] specifies guidelines for their development. 115 The definition and use of Performance Metrics in the IETF happens in 116 various working groups (WG), most notably: 118 The "IP Performance Metrics" (IPPM) WG is the WG primarily 119 focusing on Performance Metrics definition at the IETF. 121 The "Metric Blocks for use with RTCP's Extended Report Framework" 122 (XRBLOCK) WG recently specified many Performance Metrics related 123 to "RTP Control Protocol Extended Reports (RTCP XR)" [RFC3611], 124 which establishes a framework to allow new information to be 125 conveyed in RTCP, supplementing the original report blocks defined 126 in "RTP: A Transport Protocol for Real-Time Applications", 127 [RFC3550]. 129 The "Benchmarking Methodology" WG (BMWG) defined many Performance 130 Metrics for use in laboratory benchmarking of inter-networking 131 technologies. 133 The "IP Flow Information eXport" (IPFIX) concluded WG specified an 134 IANA process for new Information Elements. Some Performance 135 Metrics related Information Elements are proposed on a regular 136 basis. 138 The "Performance Metrics for Other Layers" (PMOL) concluded WG 139 defined some Performance Metrics related to Session Initiation 140 Protocol (SIP) voice quality [RFC6035]. 142 It is expected that more Performance Metrics will be defined in the 143 future, not only IP-based metrics, but also metrics which are 144 protocol-specific and application-specific. 146 However, despite the importance of Performance Metrics, there are two 147 related problems for the industry. First, how to ensure that when 148 one party requests another party to measure (or report or in some way 149 act on) a particular Performance Metric, then both parties have 150 exactly the same understanding of what Performance Metric is being 151 referred to. Second, how to discover which Performance Metrics have 152 been specified, so as to avoid developing a new Performance Metric that 153 is very similar, but not quite inter-operable. The problems can be 154 addressed by creating a registry of performance metrics. 
The usual 155 way in which the IETF organizes namespaces is with Internet Assigned 156 Numbers Authority (IANA) registries, and there is currently no 157 Performance Metrics Registry maintained by the IANA. 159 This document therefore requests that IANA create and maintain a 160 Performance Metrics Registry, according to the maintenance procedures 161 and the Performance Metrics Registry format defined in this memo. 162 Although the Registry format is primarily for use by IANA, any other 163 organization that wishes to create a Performance Metrics Registry MAY 164 use the same format for their purposes. The authors make no 165 guarantee of the format's applicability to any possible set of 166 Performance Metrics envisaged by other organizations, but encourage 167 others to apply it. In the remainder of this document, unless we 168 explicitly say so, we will refer to the IANA-maintained Performance 169 Metrics Registry as simply the Performance Metrics Registry. 171 2. Terminology 173 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 174 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 175 "OPTIONAL" in this document are to be interpreted as described in 176 [RFC2119]. 178 Performance Metric: A Performance Metric is a quantitative measure 179 of performance, targeted to an IETF-specified protocol or targeted 180 to an application transported over an IETF-specified protocol. 181 Examples of Performance Metrics are the FTP response time for a 182 complete file download, the DNS response time to resolve the IP 183 address, a database logging time, etc. This definition is 184 consistent with the definition of metric in [RFC2330] and broader 185 than the definition of performance metric in [RFC6390]. 187 Registered Performance Metric: A Registered Performance Metric is a 188 Performance Metric expressed as an entry in the Performance Metric 189 Registry, administered by IANA. Such a performance metric has met 190 all the registry review criteria defined in this document in order 191 to be included in the registry. 193 Performance Metrics Registry: The IANA registry containing 194 Registered Performance Metrics. 196 Proprietary Registry: A set of metrics that are registered in a 197 proprietary registry, as opposed to the Performance Metrics Registry. 199 Performance Metrics Experts: The Performance Metrics Experts is a 200 group of designated experts [RFC5226] selected by the IESG to 201 validate the Performance Metrics before updating the Performance 202 Metrics Registry. The Performance Metrics Experts work closely 203 with IANA. 205 Parameter: An input factor defined as a variable in the definition 206 of a Performance Metric. A numerical or other specified factor 207 forming one of a set that defines a metric or sets the conditions 208 of its operation. All Parameters must be known to measure using a 209 metric and interpret the results. There are two types of 210 Parameters: Fixed and Run-time Parameters. For the Fixed 211 Parameters, the value of the variable is specified in the 212 Performance Metrics Registry entry, and different Fixed Parameter 213 values result in different Registered Performance Metrics. For 214 the Run-time Parameters, the value of the variable is defined when 215 the metric measurement method is executed, and a given Registered 216 Performance Metric supports multiple values for the parameter. 
217 Although Run-time Parameters do not change the fundamental nature 218 of the Performance Metric's definition, some have substantial 219 influence on the network property being assessed and the 220 interpretation of the results. 222 Note: Consider the case of packet loss in the following two 223 Active Measurement Method cases. The first case is packet loss 224 as background loss where the Run-time Parameter set includes a 225 very sparse Poisson stream, and only characterizes the times 226 when packets were lost. Actual user streams likely see much 227 higher loss at these times, due to tail drop or radio errors. 228 The second case is packet loss as inverse of throughput where 229 the Run-time Parameter set includes a very dense, bursty 230 stream, and characterizes the loss experienced by a stream that 231 approximates a user stream. These are both "loss metrics", but 232 the difference in interpretation of the results is highly 233 dependent on the Run-time Parameters (at least), to the extreme 234 where we are actually using loss to infer its complement: 235 delivered throughput. 237 Active Measurement Method: Methods of Measurement conducted on 238 traffic which serves only the purpose of measurement and is 239 generated for that reason alone, and whose traffic characteristics 240 are known a priori. A detailed definition of Active Measurement 241 Method is provided in [RFC7799][I-D.ietf-ippm-active-passive]. 242 Examples of Active Measurement Methods are the measurement methods 243 for the one-way delay metric defined in [RFC2679] and the one for 244 round-trip delay defined in [RFC2681]. 246 Passive Measurement Method: Methods of Measurement conducted on 247 network traffic, generated either from the end users or from 248 network elements that would exist regardless of whether the 249 measurement was being conducted or not. One characteristic of 250 Passive Measurement Methods is that sensitive information may be 251 observed, and as a consequence, stored in the measurement system. 252 A detailed definition of Passive Measurement Method is provided in 253 [RFC7799] [I-D.ietf-ippm-active-passive]. 255 3. Scope 257 This document is meant mainly for two different audiences. For those 258 defining new Registered Performance Metrics, it provides 259 specifications and best practices to be used in deciding which 260 Registered Performance Metrics are useful for a measurement study, 261 instructions for writing the text for each column of the Registered 262 Performance Metrics, and information on the supporting documentation 263 required for the new Performance Metrics Registry entry (up to and 264 including the publication of one or more RFCs or I-Ds describing it). 265 For the appointed Performance Metrics Experts and for IANA personnel 266 administering the new IANA Performance Metric Registry, it defines a 267 set of acceptance criteria against which these proposed Registered 268 Performance Metrics should be evaluated. In addition, this document 269 may be useful for other organizations that are defining a Performance 270 Metric registry of their own, which can rely on the Performance Metric 271 registry format defined in this document. 273 This Performance Metric Registry is applicable to Performance Metrics 274 issued from Active Measurement, Passive Measurement, and any other 275 form of Performance Metric. 
This registry is designed to encompass 276 Performance Metrics developed throughout the IETF and especially for 277 the technologies specified in the following working groups: IPPM, 278 XRBLOCK, IPFIX, and BMWG. This document analyzes a prior attempt to 279 set up a Performance Metric Registry, and the reasons why this design 280 was inadequate [RFC6248]. Finally, this document gives a set of 281 guidelines for requesters and expert reviewers of candidate 282 Registered Performance Metrics. 284 This document makes no attempt to populate the Performance Metrics 285 Registry with initial entries. It does provide a few examples that 286 are merely illustrations and should not be included in the registry 287 at this point in time. 289 Based on [RFC5226] Section 4.3, this document is processed as Best 290 Current Practice (BCP) [RFC2026]. 292 4. Motivation for a Performance Metrics Registry 294 In this section, we detail several motivations for the Performance 295 Metric Registry. 297 4.1. Interoperability 299 As with any IETF registry, the primary use for a registry is to manage a 300 namespace for its use within one or more protocols. In the 301 particular case of the Performance Metric Registry, there are two 302 types of protocols that will use the Performance Metrics in the 303 Performance Metrics Registry during their operation (by referring to 304 the Identifier values): 306 o Control protocol: this type of protocol is used to allow one 307 entity to request another entity to perform a measurement using a 308 specific metric defined by the Performance Metrics Registry. One 309 particular example is the LMAP framework [RFC7594]. Using the 310 LMAP terminology, the Performance Metrics Registry is used in the 311 LMAP Control protocol to allow a Controller to request a 312 measurement task from one or more Measurement Agents. In order to 313 enable this use case, the entries of the Performance Metric 314 Registry must be well enough defined to allow a Measurement Agent 315 implementation to trigger a specific measurement task upon the 316 reception of a control protocol message. This requirement heavily 317 constrains the type of entries that are acceptable for the 318 Performance Metric Registry. 320 o Report protocol: This type of protocol is used to allow an entity 321 to report measurement results to another entity. By referring 322 to a specific Performance Metrics Registry entry, it is possible to 323 properly characterize the measurement result data being reported. 324 Using the LMAP terminology, the Performance Metrics Registry is 325 used in the Report protocol to allow a Measurement Agent to report 326 measurement results to a Collector. 328 It should be noted that the LMAP framework explicitly allows for 329 using not only the IANA-maintained Performance Metrics Registry but 330 also other registries containing Performance Metrics, either defined 331 by other organizations or private ones. However, others who are 332 creating Registries to be used in the context of an LMAP framework 333 are encouraged to use the Registry format defined in this document, 334 because this makes it easier for developers of LMAP Measurement 335 Agents (MAs) to programmatically use information found in those other 336 Registries' entries. 338 4.2. Single point of reference for Performance Metrics 340 A Performance Metrics Registry serves as a single point of reference 341 for Performance Metrics defined in different working groups in the 342 IETF. 
As we mentioned earlier, there are several WGs that define 343 Performance Metrics in the IETF and it is hard to keep track of all 344 of them. This results in multiple definitions of similar Performance 345 Metrics that attempt to measure the same phenomena but in slightly 346 different (and incompatible) ways. Having a registry would allow 347 both the IETF community and external people to have a single list of 348 relevant Performance Metrics defined by the IETF (and others, where 349 appropriate). The single list is also an essential aspect of 350 communication about Performance Metrics, where different entities 351 that request measurements, execute measurements, and report the 352 results can benefit from a common understanding of the referenced 353 Performance Metric. 355 4.3. Side benefits 357 There are a couple of side benefits of having such a registry. 358 First, the Performance Metrics Registry could serve as an inventory 359 of useful and used Performance Metrics that are normally supported 360 by different implementations of measurement agents. Second, the 361 results of measurements using the Performance Metrics would be 362 comparable even if they are performed by different implementations 363 and in different networks, as the Performance Metric is properly 364 defined. BCP 176 [RFC6576] examines whether the results produced by 365 independent implementations are equivalent in the context of 366 evaluating the completeness and clarity of metric specifications. 367 This BCP defines the standards track advancement testing for (active) 368 IPPM metrics, and the same process will likely suffice to determine 369 whether Registered Performance Metrics are sufficiently well 370 specified to result in comparable (or equivalent) results. 371 Registered Performance Metrics which have undergone such testing 372 SHOULD be noted, with a reference to the test results. 374 5. Criteria for Performance Metrics Registration 376 It is neither possible nor desirable to populate the Performance 377 Metrics Registry with all combinations of Parameters of all 378 Performance Metrics. The Registered Performance Metrics should be: 380 1. interpretable by the user, 382 2. implementable by the software designer, 384 3. deployable by network operators, 385 4. accurate, for interoperability and deployment across vendors, 387 5. operationally useful, so that it has significant industry 388 interest and/or has seen deployment, 390 6. sufficiently tightly defined, so that different values for the 391 Run-time Parameters do not change the fundamental nature of the 392 measurement, nor change the practicality of its implementation. 394 In essence, there needs to be evidence that a candidate Registered 395 Performance Metric has significant industry interest, or has seen 396 deployment, and there is agreement that the candidate Registered 397 Performance Metric serves its intended purpose. 399 6. Performance Metric Registry: Prior attempt 401 There was a previous attempt to define a metric registry, RFC 4148 402 [RFC4148]. However, it was obsoleted by RFC 6248 [RFC6248] because 403 it was "found to be insufficiently detailed to uniquely identify IPPM 404 metrics... [there was too much] variability possible when 405 characterizing a metric exactly" which led to the RFC4148 registry 406 having "very few users, if any". 408 A couple of interesting additional quotes from RFC 6248 might help 409 understand the issues related to that registry. 411 1. 
"It is not believed to be feasible or even useful to register 412 every possible combination of Type P, metric parameters, and 413 Stream parameters using the current structure of the IPPM Metrics 414 Registry." 416 2. "The registry structure has been found to be insufficiently 417 detailed to uniquely identify IPPM metrics." 419 3. "Despite apparent efforts to find current or even future users, 420 no one responded to the call for interest in the RFC 4148 421 registry during the second half of 2010." 423 The current approach learns from this by tightly defining each 424 Registered Performance Metric with only a few variable (Run-time) 425 Parameters to be specified by the measurement designer, if any. The 426 idea is that entries in the Performance Metrics Registry stem from 427 different measurement methods which require input (Run-time) 428 parameters to set factors like source and destination addresses 429 (which do not change the fundamental nature of the measurement). The 430 downside of this approach is that it could result in a large number 431 of entries in the Performance Metrics Registry. There is agreement 432 that less is more in this context - it is better to have a reduced 433 set of useful metrics rather than a large set of metrics, some with 434 with questionable usefulness. 436 6.1. Why this Attempt Will Succeed 438 As mentioned in the previous section, one of the main issues with the 439 previous registry was that the metrics contained in the registry were 440 too generic to be useful. This document specifies stricter criteria 441 for performance metric registration (see section 6), and imposes a 442 group of Performance Metrics Experts that will provide guidelines to 443 assess if a Performance Metric is properly specified. 445 Another key difference between this attempt and the previous one is 446 that in this case there is at least one clear user for the 447 Performance Metrics Registry: the LMAP framework and protocol. 448 Because the LMAP protocol will use the Performance Metrics Registry 449 values in its operation, this actually helps to determine if a metric 450 is properly defined. In particular, since we expect that the LMAP 451 control protocol will enable a controller to request a measurement 452 agent to perform a measurement using a given metric by embedding the 453 Performance Metric Registry value in the protocol, a metric is 454 properly specified if it is defined well-enough so that it is 455 possible (and practical) to implement the metric in the measurement 456 agent. This was the failure of the previous attempt: a registry 457 entry with an undefined Type-P (section 13 of RFC 2330 [RFC2330]) 458 allows implementation to be ambiguous. 460 7. Definition of the Performance Metric Registry 462 In this section we define the columns of the Performance Metric 463 Registry. This Performance Metric Registry is applicable to 464 Performance Metrics issued from Active Measurement, Passive 465 Measurement, and any other form of Performance Metric. Because of 466 that, it may be the case that some of the columns defined are not 467 applicable for a given type of metric. If this is the case, the 468 column(s) SHOULD be populated with the "NA" value (Non Applicable). 469 However, the "NA" value MUST NOT be used by any metric in the 470 following columns: Identifier, Name, URI, Status, Requester, 471 Revision, Revision Date, Description. In addition, it may be 472 possible that, in the future, a new type of metric requires 473 additional columns. 
Should that be the case, it is possible to add 474 new columns to the registry. The specification defining the new 475 column(s) must define how to populate the new column(s) for existing 476 entries. 478 The columns of the Performance Metric Registry are defined next. The 479 columns are grouped into "Categories" to facilitate the use of the 480 registry. Categories are described at the 7.x heading level, and 481 columns are at the 7.x.y heading level. The Figure below illustrates 482 this organization. An entry (row) therefore gives a complete 483 description of a Registered Performance Metric. 485 Each column serves as a check-list item and helps to avoid omissions 486 during registration and expert review. 488 Registry Categories and Columns, shown as 489 Category 490 ------------------ 491 Column | Column | 493 Summary 494 ------------------------------- 495 Identifier | Name | URIs | Description | 497 Metric Definition 498 ----------------------------------------- 499 Reference Definition | Fixed Parameters | 501 Method of Measurement 502 --------------------------------------------------------------------- 503 Reference | Packet | Traffic | Sampling | Run-time | Role | 504 Method | Stream | Filter | Distribution | Parameters | | 505 | Generation | 506 Output 507 ----------------------------- 508 | Type | Reference | Units | 509 | | Definition | | 511 Administrative Information 512 ---------------------------------- 513 Status |Request | Rev | Rev.Date | 515 Comments and Remarks 516 -------------------- 518 7.1. Summary Category 520 7.1.1. Identifier 522 A numeric identifier for the Registered Performance Metric. This 523 identifier MUST be unique within the Performance Metric Registry. 525 The Registered Performance Metric unique identifier is a 16-bit 526 integer (range 0 to 65535). When adding new Registered Performance 527 Metrics to the Performance Metric Registry, IANA should assign the 528 lowest available identifier to the next Registered Performance 529 Metric. 531 7.1.2. Name 533 As the name of a Registered Performance Metric is the first thing a 534 potential implementor will use when determining whether it is 535 suitable for a given application, it is important to be as precise 536 and descriptive as possible. 538 New names of Registered Performance Metrics: 540 1. "MUST be chosen carefully to describe the Registered Performance 541 Metric and the context in which it will be used." 543 2. "MUST be unique within the Performance Metric Registry." 545 3. "MUST use capital letters for the first letter of each component. 546 All other letters MUST be lowercase, even for acronyms. 547 Exceptions are made for acronyms containing a mixture of 548 lowercase and capital letters, such as 'IPv4' and 'IPv6'." 550 4. MUST use '_' between each component of the Registered Performance 551 Metric name. 553 5. MUST start with the prefix Act_ for an active measurement Registered 554 Performance Metric. 556 6. MUST start with the prefix Pas_ for a passive monitoring Registered 557 Performance Metric. 559 7. Other types of Performance Metric should define a proper prefix 560 for identifying the type. 562 8. The remaining rules for naming are left for the Performance 563 Metric Experts to determine as they gather experience, so this is 564 an area of planned update by a future RFC. 566 An example is "Act_UDP_Latency_Poisson_mean" for an active 567 measurement of a UDP latency metric using a Poisson stream of packets 568 and producing the mean as output. 
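As a non-normative aid for prospective requesters, the naming rules above can be approximated in a short Python sketch. The function name and the regular expression below are illustrative assumptions, not part of this specification, and the acronym exception in rule 3 is deliberately not checked because it cannot be verified mechanically.

   import re

   # Illustrative check of rules 4-6 above: a known type prefix
   # ("Act" or "Pas") followed by '_'-separated components.
   NAME_RE = re.compile(r"^(Act|Pas)(_[A-Za-z0-9]+)+$")

   def name_is_plausible(name):
       """Return True if 'name' loosely follows the naming rules."""
       return NAME_RE.match(name) is not None

   assert name_is_plausible("Act_UDP_Latency_Poisson_mean")
   assert not name_is_plausible("udp latency")  # no prefix, no '_' separators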
570 Some examples of names of passive metrics might be: Pas_L3_L4_Octets 571 (Layer 3 and 4 level accounting of bytes observed), Pas_DNS_RTT 572 (Round Trip Time of the DNS query and response in observed traffic), and 573 Pas_L3_TCP_RTT (Passively observed round trip time in TCP handshake 574 organized with L3 addresses). 576 7.1.3. URIs 578 The URIs column MUST contain a URI [RFC3986] that uniquely identifies 579 the metric. This URI is a URN [RFC2141]. The URI is automatically 580 generated by prepending the prefix urn:ietf:params:ippm:metric: to 581 the metric name. The resulting URI is globally unique. 583 The URIs column MUST contain a second URI which is a URL [RFC3986] 584 and uniquely identifies and locates the metric entry so it is 585 accessible through the Internet. The URL points to a file containing 586 the human-readable information of exactly one registry entry. 587 Ideally, the file will be HTML-formatted and contain URLs to 588 referenced sections of HTML-ized RFCs. The separate files for 589 different entries can be more easily edited and re-used when 590 preparing new entries. The exact composition of each metric URL will 591 be determined by IANA, but there will be some overlap with the URN 592 described above. The major sections of 593 [I-D.ietf-ippm-initial-registry] provide an example in HTML form 594 (sections 4 and above). 596 7.1.4. Description 598 A Registered Performance Metric description is a written 599 representation of a particular Performance Metrics Registry entry. 600 It supplements the Registered Performance Metric name to help 601 Performance Metrics Registry users select relevant Registered 602 Performance Metrics. 604 7.2. Metric Definition Category 606 This category includes columns to prompt all necessary details 607 related to the metric definition, including the RFC reference and 608 values of input factors, called Fixed Parameters, which are left open 609 in the RFC but have a particular value defined by the performance 610 metric. 612 7.2.1. Reference Definition 614 This entry provides a reference (or references) to the relevant 615 section(s) of the document(s) that define the metric, as well as any 616 supplemental information needed to ensure an unambiguous definition 617 for implementations. The reference needs to be an immutable 618 document, such as an RFC; for other standards bodies, it is likely to 619 be necessary to reference a specific, dated version of a 620 specification. 622 7.2.2. Fixed Parameters 624 Fixed Parameters are Parameters whose value must be specified in the 625 Performance Metrics Registry. The measurement system uses these 626 values. 628 Where referenced metrics supply a list of Parameters as part of their 629 descriptive template, a sub-set of the Parameters will be designated 630 as Fixed Parameters. For example, for active metrics, Fixed 631 Parameters determine most or all of the IPPM Framework convention 632 "packets of Type-P" as described in [RFC2330], such as transport 633 protocol, payload length, TTL, etc. An example for passive metrics 634 is an RTP packet loss calculation that relies on the validation of a 635 packet as RTP, which is a multi-packet validation controlled by 636 MIN_SEQUENTIAL as defined by [RFC3550]. Varying MIN_SEQUENTIAL 637 values can alter the loss report, and this value could be set as a 638 Fixed Parameter. 640 Parameters MUST have well-defined names. 
For human readers, the 641 hanging indent style is preferred, and the names and definitions that 642 do not appear in the Reference Method Specification MUST appear in 643 this column. 645 Parameters MUST have a well-specified data format. 647 A Parameter which is a Fixed Parameter for one Performance Metrics 648 Registry entry may be designated as a Run-time Parameter for another 649 Performance Metrics Registry entry. 651 7.3. Method of Measurement Category 653 This category includes columns for references to relevant sections of 654 the RFC(s) and any supplemental information needed to ensure an 655 unambiguous method for implementations. 657 7.3.1. Reference Method 659 This entry provides references to relevant sections of the RFC(s) 660 describing the method of measurement, as well as any supplemental 661 information needed to ensure unambiguous interpretation for 662 implementations referring to the RFC text. 664 Specifically, this section should include pointers to pseudocode or 665 actual code that could be used for an unambiguous implementation. 667 7.3.2. Packet Stream Generation 669 This column applies to Performance Metrics that generate traffic as 670 part of their Measurement Method, including but not 671 necessarily limited to Active metrics. The generated traffic is 672 referred to as a stream, and this column describes its characteristics. 674 Each entry for this column contains the following information: 676 o Value: The name of the packet stream scheduling discipline 678 o Reference: the specification where the stream is defined 680 The packet generation stream may require parameters such as the 681 average packet rate and distribution truncation value for streams 682 with Poisson-distributed inter-packet sending times. In case such 683 parameters are needed, they should be included either in the Fixed 684 Parameters column or in the Run-time Parameters column, depending on 685 whether they will be fixed or will be an input for the metric. 687 The simplest example of stream specification is Singleton scheduling 688 (see [RFC2330]), where a single atomic measurement is conducted. 689 Each atomic measurement could consist of sending a single packet 690 (such as a DNS request) or sending several packets (for example, to 691 request a webpage). Other streams support a series of atomic 692 measurements in a "sample", with a schedule defining the timing 693 between each transmitted packet and subsequent measurement. 694 Principally, two different streams are used in IPPM metrics: Poisson 695 distributed as described in [RFC2330] and Periodic as described in 696 [RFC3432]. Both Poisson and Periodic have their own unique 697 parameters, and the relevant set of parameter names and values 698 should be included either in the Fixed Parameters column or in the 699 Run-time Parameters column. 701 7.3.3. Traffic Filter 703 This column applies to Performance Metrics that observe packets 704 flowing through (the device with) the measurement agent, i.e., traffic that is 705 not necessarily addressed to the measurement agent. This includes 706 but is not limited to Passive Metrics. The filter specifies the 707 traffic that is measured. This includes protocol field values/ 708 ranges, such as address ranges, and flow or session identifiers. 710 The traffic filter itself depends on the needs of the metric and a 711 balance between the operator's measurement needs and the user's need for privacy. 
712 Mechanics for conveying the filter criteria might be the BPF (Berkeley 713 Packet Filter) or PSAMP [RFC5475] Property Match Filtering, which 714 reuses IPFIX [RFC7012]. An example BPF string for matching TCP/80 715 traffic to remote destination net 192.0.2.0/24 would be "dst net 716 192.0.2.0/24 and tcp dst port 80". More complex filter engines might 717 be supported by the implementation, for example allowing matching 718 using Deep Packet Inspection (DPI) technology. 720 The traffic filter includes the following information: 722 Type: the type of traffic filter used, e.g., BPF, PSAMP, OpenFlow 723 rule, etc., as defined by a normative reference 725 Value: the actual set of rules expressed 727 7.3.4. Sampling Distribution 729 The sampling distribution defines which of the packets that match 730 the traffic filter are actually used for the 731 measurement. One possibility is "all", which implies that all packets 732 matching the Traffic Filter are considered, but there may be other 733 sampling strategies. It includes the following information: 735 Value: the name of the sampling distribution 737 Reference definition: pointer to the specification where the 738 sampling distribution is properly defined. 740 The sampling distribution may require parameters. In case such 741 parameters are needed, they should be included either in the Fixed 742 Parameters column or in the Run-time Parameters column, depending on 743 whether they will be fixed or will be an input for the metric. 745 Sampling and Filtering Techniques for IP Packet Selection are 746 documented in the PSAMP (Packet Sampling) specification [RFC5475], while the 747 Framework for Packet Selection and Reporting [RFC5474] provides more 748 background information. The sampling distribution parameters might 749 be expressed in terms of the Information Model for Packet Sampling 750 Exports [RFC5477] and the Flow Selection Techniques [RFC7014]. 752 7.3.5. Run-time Parameters 754 Run-Time Parameters are Parameters that must be determined, 755 configured into the measurement system, and reported with the results 756 for the context to be complete. However, the values of these 757 parameters are not specified in the Performance Metrics Registry (as 758 the values of the Fixed Parameters are); rather, these parameters are listed as an aid 759 to the measurement system implementer or user (they must be left as 760 variables, and supplied on execution). 762 Where metrics supply a list of Parameters as part of their 763 descriptive template, a sub-set of the Parameters will be designated 764 as Run-Time Parameters. 766 Parameters MUST have well-defined names. For human readers, the 767 hanging indent style is preferred, and the names and definitions that 768 do not appear in the Reference Method Specification MUST appear in 769 this column. 771 A Data Format for each Run-time Parameter MUST be specified in this 772 column, to simplify the control and implementation of measurement 773 devices. For example, parameters that include an IPv4 address can be 774 encoded as a 32-bit integer (i.e., a binary, base64-encoded value) or as an ip- 775 address as defined in [RFC6991]. The actual encoding(s) used must be 776 explicitly defined for each Run-time Parameter. IPv6 addresses and 777 options MUST be accommodated, allowing Registered Metrics to be used 778 in either address family. 
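As a non-normative illustration of the Data Format requirement above, the following Python sketch shows two plausible encodings of an IP-address Run-time Parameter. The use of the standard ipaddress module is an assumption made for the example only, and the registry entry itself (not this sketch) states which encoding an implementation must use.

   import ipaddress

   # Illustrative only: textual (RFC 6991 ip-address style) and
   # 32-bit integer encodings of an address Run-time Parameter.
   v4 = ipaddress.ip_address("192.0.2.1")      # IPv4 documentation address
   v4_text = str(v4)                           # "192.0.2.1"
   v4_uint32 = int(v4)                         # 3221225985 (fits in 32 bits)

   v6 = ipaddress.ip_address("2001:db8::1")    # IPv6 documentation address
   v6_text = str(v6)                           # the text form accommodates IPv6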
780 Examples of Run-time Parameters include IP addresses, measurement 781 point designations, start times and end times for measurement, and 782 other information essential to the method of measurement. 784 7.3.6. Role 786 In some methods of measurement, there may be several roles defined; 787 e.g., in a one-way packet delay active measurement, there is one 788 measurement agent that generates the packets and another that 789 receives the packets. This column contains the name of the role for 790 this particular entry. In the previous example, there should be two 791 entries in the registry, one for each role, so that when a 792 measurement agent is instructed to perform the one-way delay source 793 metric, it knows that it is supposed to generate packets. The values for 794 this field are defined in the reference method of measurement. 796 7.4. Output Category 798 For entries which involve a stream and many singleton measurements, a 799 statistic may be specified in this column to summarize the results to 800 a single value. If the complete set of measured singletons is 801 output, this will be specified here. 803 Some metrics embed one specific statistic in the reference metric 804 definition, while others allow several output types or statistics. 806 7.4.1. Type 808 This column contains the name of the output type. The output type 809 defines a single type of result that the metric produces. It can be 810 the raw results (packet send times and singleton metrics), or it can 811 be a summary statistic. The specification of the output type MUST 812 define the format of the output. In some systems, format 813 specifications will simplify both measurement implementation and 814 collection/storage tasks. Note that if two different statistics are 815 required from a single measurement (for example, both "Xth percentile 816 mean" and "Raw"), then a new output type must be defined ("Xth 817 percentile mean AND Raw"). 819 7.4.2. Reference Definition 821 This column contains a pointer to the specification where the output 822 type is defined. 824 7.4.3. Metric Units 826 The measured results must be expressed using some standard dimension 827 or units of measure. This column provides the units. 829 When a sample of singletons (see [RFC2330] for definitions of these 830 terms) is collected, this entry will specify the units for each 831 measured value. 833 7.5. Administrative information 835 7.5.1. Status 837 The status of the specification of this Registered Performance 838 Metric. Allowed values are 'current' and 'deprecated'. All newly 839 defined Registered Performance Metrics have 'current' status. 841 7.5.2. Requester 843 The requester for the Registered Performance Metric. The requester 844 MAY be a document, such as an RFC, or a person. 846 7.5.3. Revision 848 The revision number of a Registered Performance Metric, starting at 0 849 for Registered Performance Metrics at the time of definition and 850 incremented by one for each revision. 852 7.5.4. Revision Date 854 The date of acceptance or of the most recent revision for the Registered 855 Performance Metric. 857 7.6. Comments and Remarks 859 Besides providing additional details which do not appear in other 860 categories, this open Category (single column) allows for unforeseen 861 issues to be addressed by simply updating this informational entry. 863 8. 
The Life-Cycle of Registered Performance Metrics 865 Once a Performance Metric or set of Performance Metrics has been 866 identified for a given application, candidate Performance Metrics 867 Registry entry specifications in accordance with Section 7 are 868 submitted to IANA to follow the process for review by the Performance 869 Metric Experts, as defined below. This process is also used for 870 other changes to the Performance Metric Registry, such as deprecation 871 or revision, as described later in this section. 873 It is also desirable that the author(s) of a candidate Performance 874 Metrics Registry entry seek review in the relevant IETF working 875 group, or offer the opportunity for review on the WG mailing list. 877 8.1. Adding new Performance Metrics to the Performance Metrics Registry 879 Requests to change Registered Performance Metrics in the Performance 880 Metric Registry are submitted to IANA, which forwards the request to 881 a designated group of experts (Performance Metric Experts) appointed 882 by the IESG; these are the reviewers called for by the Expert Review 883 RFC5226 policy defined for the Performance Metric Registry. The 884 Performance Metric Experts review the request for such things as 885 compliance with this document, compliance with other applicable 886 Performance Metric-related RFCs, and consistency with the currently 887 defined set of Registered Performance Metrics. 889 Authors are expected to review compliance with the specifications in 890 this document to check their submissions before sending them to IANA. 892 The Performance Metric Experts should endeavor to complete referred 893 reviews in a timely manner. If the request is acceptable, the 894 Performance Metric Experts signify their approval to IANA, which 895 updates the Performance Metric Registry. If the request is not 896 acceptable, the Performance Metric Experts can coordinate with the 897 requester to change the request to be compliant. The Performance 898 Metric Experts may also choose in exceptional circumstances to reject 899 clearly frivolous or inappropriate change requests outright. 901 This process should not in any way be construed as allowing the 902 Performance Metric Experts to overrule IETF consensus. Specifically, 903 any Registered Performance Metrics that were added with IETF 904 consensus require IETF consensus for revision or deprecation. 906 Decisions by the Performance Metric Experts may be appealed as in 907 Section 7 of RFC5226. 909 8.2. Revising Registered Performance Metrics 911 A request for Revision is only permissible when the changes maintain 912 backward-compatibility with implementations of the prior Performance 913 Metrics Registry entry describing a Registered Performance Metric 914 (entries with lower revision numbers, but the same Identifier and 915 Name). 917 The purpose of the Status field in the Performance Metric Registry is 918 to indicate whether the entry for a Registered Performance Metric is 919 'current' or 'deprecated'. 921 In addition, no policy is defined for revising IANA Performance 922 Metric entries or addressing errors therein. To be certain, changes 923 and deprecations within the Performance Metric Registry are not 924 encouraged, and should be avoided to the extent possible. However, 925 in recognition that change is inevitable, the provisions of this 926 section address the need for revisions. 
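The revision bookkeeping that the remainder of this section specifies (increment the revision number, update the Revision Date, and keep the prior entry with only its status changed to "Deprecated") can be summarized in a minimal, non-normative Python sketch; the field and function names below are illustrative assumptions, not an IANA data model.

   from dataclasses import dataclass, replace
   from datetime import date

   @dataclass(frozen=True)
   class Entry:                 # illustrative subset of registry columns
       identifier: int
       name: str
       revision: int
       revision_date: date
       status: str = "current"

   def revise(old, today):
       """Return (archived prior entry, new revision), per Section 8.2."""
       archived = replace(old, status="Deprecated")
       revised = replace(old, revision=old.revision + 1, revision_date=today)
       return archived, revised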
928 Revisions are initiated by sending a candidate Registered Performance 929 Metric definition to IANA, as in Section 8, identifying the existing 930 Performance Metrics Registry entry. 932 The primary requirement in the definition of a policy for managing 933 changes to existing Registered Performance Metrics is avoidance of 934 interoperability problems; Performance Metric Experts must work to 935 maintain interoperability above all else. Changes to Registered 936 Performance Metrics may only be done in an inter-operable way; 937 necessary changes that cannot be done in a way to allow 938 interoperability with unchanged implementations must result in the 939 creation of a new Registered Performance Metric and possibly the 940 deprecation of the earlier metric. 942 A change to a Registered Performance Metric is held to be backward- 943 compatible only when: 945 1. "it involves the correction of an error that is obviously only 946 editorial; or" 948 2. "it corrects an ambiguity in the Registered Performance Metric's 949 definition, which itself leads to issues severe enough to prevent 950 the Registered Performance Metric's usage as originally defined; 951 or" 953 3. "it corrects missing information in the metric definition without 954 changing its meaning (e.g., the explicit definition of 'quantity' 955 semantics for numeric fields without a Data Type Semantics 956 value); or" 958 4. "it harmonizes with an external reference that was itself 959 corrected." 961 If a Performance Metric revision is deemed permissible by the 962 Performance Metric Experts, according to the rules in this document, 963 IANA makes the change in the Performance Metric Registry. The 964 requester of the change is appended to the requester in the 965 Performance Metrics Registry. 967 Each Registered Performance Metric in the Performance Metrics 968 Registry has a revision number, starting at zero. Each change to a 969 Registered Performance Metric following this process increments the 970 revision number by one. 972 When a revised Registered Performance Metric is accepted into the 973 Performance Metric Registry, the date of acceptance of the most 974 recent revision is placed into the Revision Date column of the 975 registry for that Registered Performance Metric. 977 Where applicable, additions to Registered Performance Metrics in the 978 form of text Comments or Remarks should include the date, but such 979 additions may not constitute a revision according to this process. 981 Older version(s) of the updated metric entries are kept in the 982 registry for archival purposes. The older entries are kept with all 983 fields unmodified (version, revision date) except for the status 984 field, which is changed to "Deprecated". 986 8.3. Deprecating Registered Performance Metrics 988 Changes that are not permissible by the above criteria for Registered 989 Performance Metric revision may only be handled by deprecation. A 990 Registered Performance Metric MAY be deprecated and replaced when: 992 1. "the Registered Performance Metric definition has an error or 993 shortcoming that cannot be permissibly changed as in 994 Section 8.2 (Revising Registered Performance Metrics); or" 996 2. "the deprecation harmonizes with an external reference that was 997 itself deprecated through that reference's accepted deprecation 998 method." 1000 A request for deprecation is sent to IANA, which passes it to the 1001 Performance Metric Expert for review. 
When deprecating a 1002 Performance Metric, the Performance Metric description in the 1003 Performance Metric Registry must be updated to explain the 1004 deprecation, as well as to refer to any new Performance Metrics 1005 created to replace the deprecated Performance Metric. 1007 The revision number of a Registered Performance Metric is incremented 1008 upon deprecation, and the Revision Date updated, as with any 1009 revision. 1011 The use of deprecated Registered Performance Metrics should result in 1012 a log entry or human-readable warning by the respective application. 1014 Names and Metric IDs of deprecated Registered Performance Metrics must 1015 not be reused. 1017 The deprecated entries are kept with all fields unmodified, except 1018 the version, revision date, and the status field (changed to 1019 "Deprecated"). 1021 9. Security considerations 1023 This document does not introduce any new security considerations for the 1024 Internet. However, the definition of Performance Metrics may 1025 introduce some security concerns, and should be reviewed with 1026 security in mind. 1028 10. IANA Considerations 1030 This document specifies the procedure for Performance Metrics 1031 Registry setup. IANA is requested to create a new registry for 1032 Performance Metrics called "Registered Performance Metrics" with the 1033 columns defined in Section 7. 1035 New assignments for the Performance Metric Registry will be administered 1036 by IANA through Expert Review [RFC5226], i.e., review by one of a 1037 group of experts, the Performance Metric Experts, appointed by the 1038 IESG upon recommendation of the Transport Area Directors. The 1039 experts can be initially drawn from the Working Group Chairs and 1040 document editors of the Performance Metrics Directorate among other 1041 sources of experts. 1043 The Identifier values from 64512 to 65535 are reserved for private 1044 use. Names starting with the prefix Priv- are reserved for 1045 private use. 1047 This document requests the allocation of the URI prefix 1048 urn:ietf:params:ippm:metric for the purpose of generating URIs for 1049 Registered Performance Metrics. 1051 11. Acknowledgments 1053 Thanks to Brian Trammell and Bill Cerveny, IPPM chairs, for leading 1054 some brainstorming sessions on this topic. Thanks to Barbara Stark 1055 and Juergen Schoenwaelder for the detailed feedback and suggestions. 1057 12. References 1059 12.1. Normative References 1061 [RFC2026] Bradner, S., "The Internet Standards Process -- Revision 1062 3", BCP 9, RFC 2026, DOI 10.17487/RFC2026, October 1996, 1063 . 1065 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1066 Requirement Levels", BCP 14, RFC 2119, 1067 DOI 10.17487/RFC2119, March 1997, 1068 . 1070 [RFC2141] Moats, R., "URN Syntax", RFC 2141, DOI 10.17487/RFC2141, 1071 May 1997, . 1073 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1074 "Framework for IP Performance Metrics", RFC 2330, 1075 DOI 10.17487/RFC2330, May 1998, 1076 . 1078 [RFC3986] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform 1079 Resource Identifier (URI): Generic Syntax", STD 66, 1080 RFC 3986, DOI 10.17487/RFC3986, January 2005, 1081 . 1083 [RFC4148] Stephan, E., "IP Performance Metrics (IPPM) Metrics 1084 Registry", BCP 108, RFC 4148, DOI 10.17487/RFC4148, August 1085 2005, . 1087 [RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an 1088 IANA Considerations Section in RFCs", BCP 26, RFC 5226, 1089 DOI 10.17487/RFC5226, May 2008, 1090 . 
1092 [RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics 1093 (IPPM) Registry of Metrics Are Obsolete", RFC 6248, 1094 DOI 10.17487/RFC6248, April 2011, 1095 . 1097 [RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New 1098 Performance Metric Development", BCP 170, RFC 6390, 1099 DOI 10.17487/RFC6390, October 2011, 1100 . 1102 [RFC6576] Geib, R., Ed., Morton, A., Fardid, R., and A. Steinmitz, 1103 "IP Performance Metrics (IPPM) Standard Advancement 1104 Testing", BCP 176, RFC 6576, DOI 10.17487/RFC6576, March 1105 2012, . 1107 12.2. Informative References 1109 [RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way 1110 Delay Metric for IPPM", RFC 2679, DOI 10.17487/RFC2679, 1111 September 1999, . 1113 [RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip 1114 Delay Metric for IPPM", RFC 2681, DOI 10.17487/RFC2681, 1115 September 1999, . 1117 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 1118 Metric for IP Performance Metrics (IPPM)", RFC 3393, 1119 DOI 10.17487/RFC3393, November 2002, 1120 . 1122 [RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network 1123 performance measurement with periodic streams", RFC 3432, 1124 DOI 10.17487/RFC3432, November 2002, 1125 . 1127 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 1128 Jacobson, "RTP: A Transport Protocol for Real-Time 1129 Applications", STD 64, RFC 3550, DOI 10.17487/RFC3550, 1130 July 2003, . 1132 [RFC3611] Friedman, T., Ed., Caceres, R., Ed., and A. Clark, Ed., 1133 "RTP Control Protocol Extended Reports (RTCP XR)", 1134 RFC 3611, DOI 10.17487/RFC3611, November 2003, 1135 . 1137 [RFC4566] Handley, M., Jacobson, V., and C. Perkins, "SDP: Session 1138 Description Protocol", RFC 4566, DOI 10.17487/RFC4566, 1139 July 2006, . 1141 [RFC5474] Duffield, N., Ed., Chiou, D., Claise, B., Greenberg, A., 1142 Grossglauser, M., and J. Rexford, "A Framework for Packet 1143 Selection and Reporting", RFC 5474, DOI 10.17487/RFC5474, 1144 March 2009, . 1146 [RFC5475] Zseby, T., Molina, M., Duffield, N., Niccolini, S., and F. 1147 Raspall, "Sampling and Filtering Techniques for IP Packet 1148 Selection", RFC 5475, DOI 10.17487/RFC5475, March 2009, 1149 . 1151 [RFC5477] Dietz, T., Claise, B., Aitken, P., Dressler, F., and G. 1152 Carle, "Information Model for Packet Sampling Exports", 1153 RFC 5477, DOI 10.17487/RFC5477, March 2009, 1154 . 1156 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation 1157 Applicability Statement", RFC 5481, DOI 10.17487/RFC5481, 1158 March 2009, . 1160 [RFC5905] Mills, D., Martin, J., Ed., Burbank, J., and W. Kasch, 1161 "Network Time Protocol Version 4: Protocol and Algorithms 1162 Specification", RFC 5905, DOI 10.17487/RFC5905, June 2010, 1163 . 1165 [RFC6035] Pendleton, A., Clark, A., Johnston, A., and H. Sinnreich, 1166 "Session Initiation Protocol Event Package for Voice 1167 Quality Reporting", RFC 6035, DOI 10.17487/RFC6035, 1168 November 2010, . 1170 [RFC6776] Clark, A. and Q. Wu, "Measurement Identity and Information 1171 Reporting Using a Source Description (SDES) Item and an 1172 RTCP Extended Report (XR) Block", RFC 6776, 1173 DOI 10.17487/RFC6776, October 2012, 1174 . 1176 [RFC6792] Wu, Q., Ed., Hunt, G., and P. Arden, "Guidelines for Use 1177 of the RTP Monitoring Framework", RFC 6792, 1178 DOI 10.17487/RFC6792, November 2012, 1179 . 1181 [RFC7003] Clark, A., Huang, R., and Q. 
Wu, Ed., "RTP Control 1182 Protocol (RTCP) Extended Report (XR) Block for Burst/Gap 1183 Discard Metric Reporting", RFC 7003, DOI 10.17487/RFC7003, 1184 September 2013, . 1186 [RFC7012] Claise, B., Ed. and B. Trammell, Ed., "Information Model 1187 for IP Flow Information Export (IPFIX)", RFC 7012, 1188 DOI 10.17487/RFC7012, September 2013, 1189 . 1191 [RFC7014] D'Antonio, S., Zseby, T., Henke, C., and L. Peluso, "Flow 1192 Selection Techniques", RFC 7014, DOI 10.17487/RFC7014, 1193 September 2013, . 1195 [RFC7594] Eardley, P., Morton, A., Bagnulo, M., Burbridge, T., 1196 Aitken, P., and A. Akhter, "A Framework for Large-Scale 1197 Measurement of Broadband Performance (LMAP)", RFC 7594, 1198 DOI 10.17487/RFC7594, September 2015, 1199 . 1201 [I-D.ietf-ippm-active-passive] 1202 Morton, A., "Active and Passive Metrics and Methods (and 1203 everything in-between, or Hybrid)", draft-ietf-ippm- 1204 active-passive-06 (work in progress), January 2016. 1206 [I-D.ietf-ippm-initial-registry] 1207 Morton, A., Bagnulo, M., Eardley, P., and K. D'Souza, 1208 "Initial Performance Metric Registry Entries", draft-ietf- 1209 ippm-initial-registry-00 (work in progress), March 2016. 1211 [RFC6991] Schoenwaelder, J., Ed., "Common YANG Data Types", 1212 RFC 6991, DOI 10.17487/RFC6991, July 2013, 1213 . 1215 Authors' Addresses 1217 Marcelo Bagnulo 1218 Universidad Carlos III de Madrid 1219 Av. Universidad 30 1220 Leganes, Madrid 28911 1221 SPAIN 1223 Phone: 34 91 6249500 1224 Email: marcelo@it.uc3m.es 1225 URI: http://www.it.uc3m.es 1226 Benoit Claise 1227 Cisco Systems, Inc. 1228 De Kleetlaan 6a b1 1229 1831 Diegem 1230 Belgium 1232 Email: bclaise@cisco.com 1234 Philip Eardley 1235 BT 1236 Adastral Park, Martlesham Heath 1237 Ipswich 1238 ENGLAND 1240 Email: philip.eardley@bt.com 1242 Al Morton 1243 AT&T Labs 1244 200 Laurel Avenue South 1245 Middletown, NJ 1246 USA 1248 Email: acmorton@att.com 1250 Aamer Akhter 1251 Consultant 1252 118 Timber Hitch 1253 Cary, NC 1254 USA 1256 Email: aakhter@gmail.com