2 Network Working Group A. Morton 3 Internet-Draft AT&T Labs 4 Intended status: Informational May 31, 2015 5 Expires: December 2, 2015
7 Considerations for Benchmarking Virtual Network Functions and Their 8 Infrastructure 9 draft-ietf-bmwg-virtual-net-00
11 Abstract
13 The Benchmarking Methodology Working Group (BMWG) has traditionally conducted 14 laboratory characterization of dedicated physical implementations of 15 internetworking functions. This memo investigates additional 16 considerations when network functions are virtualized and performed 17 on commodity off-the-shelf hardware.
19 Version NOTES:
21 Addressed Barry Constantine's comments throughout the draft, see:
23 http://www.ietf.org/mail-archive/web/bmwg/current/msg03167.html
25 AND, comments from the extended discussion during the IETF-92 BMWG 26 session:
28 1 & 2: General Purpose HW and why we care to a greater degree about 29 "what's in the black box" in this benchmarking context.
31 3: System Under Test description = platform and VNFs and...
33 4.1 Scale and capacity benchmarks still needed.
35 4.4 Compromise on appearance of capacity and the 3x3 Matrix
37 new 4.5, Power consumption
39 Requirements Language
41 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 42 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 43 document are to be interpreted as described in RFC 2119 [RFC2119].
45 Status of This Memo
47 This Internet-Draft is submitted in full conformance with the 48 provisions of BCP 78 and BCP 79.
50 Internet-Drafts are working documents of the Internet Engineering 51 Task Force (IETF). Note that other groups may also distribute 52 working documents as Internet-Drafts. The list of current Internet- 53 Drafts is at http://datatracker.ietf.org/drafts/current/.
55 Internet-Drafts are draft documents valid for a maximum of six months 56 and may be updated, replaced, or obsoleted by other documents at any 57 time. It is inappropriate to use Internet-Drafts as reference 58 material or to cite them other than as "work in progress."
60 This Internet-Draft will expire on December 2, 2015.
62 Copyright Notice
64 Copyright (c) 2015 IETF Trust and the persons identified as the 65 document authors. All rights reserved.
67 This document is subject to BCP 78 and the IETF Trust's Legal 68 Provisions Relating to IETF Documents 69 (http://trustee.ietf.org/license-info) in effect on the date of 70 publication of this document. Please review these documents 71 carefully, as they describe your rights and restrictions with respect 72 to this document. Code Components extracted from this document must 73 include Simplified BSD License text as described in Section 4.e of 74 the Trust Legal Provisions and are provided without warranty as 75 described in the Simplified BSD License.
77 Table of Contents
79 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 80 2. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 81 3. Considerations for Hardware and Testing . . . . . . . . . . . 5 82 3.1. Hardware Components . . . . . . . . . . . . . . . . . . . 5 83 3.2. Configuration Parameters . . . . . . . . . . . . . . . . 5 84 3.3. Testing Strategies . . . . . . . . . . . . . . . . . . . 6 85 3.4. Attention to Shared Resources . . . . . . . . . . . . . . 7 86 4. Benchmarking Considerations . . . . . . . . . . . . . . . . . 7 87 4.1. Comparison with Physical Network Functions . . . . . . . 7 88 4.2. Continued Emphasis on Black-Box Benchmarks . . . . . . . 8 89 4.3. New Benchmarks and Related Metrics . . . . . . . . . . . 8 90 4.4. Assessment of Benchmark Coverage . . . . . . . . . . . . 9 91 4.5. Power Consumption . . . . . . . . . . . . . . . . . . . . 11 92 5. Security Considerations . . . . . . . . . . . . . . . . . . . 12 93 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 12 94 7. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 12 95 8. References . . . . . . . . . . . . . . . . . . . . . . . . . 12 96 8.1. Normative References . . . . . . . . . . . . . . . . . . 12 97 8.2. Informative References . . . . . . . . . . . . . . . . . 14 98 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 14
100 1. Introduction
102 The Benchmarking Methodology Working Group (BMWG) has traditionally 103 conducted laboratory characterization of dedicated physical 104 implementations of internetworking functions (or physical network 105 functions, PNFs). The Black-box Benchmarks of Throughput, Latency, 106 Forwarding Rates, and others have served our industry for many years.
107 [RFC1242] and [RFC2544] are the cornerstones of the work.
109 An emerging set of service provider and vendor development goals is 110 to reduce costs while increasing the flexibility of network devices and 111 drastically accelerating their deployment. Network Function 112 Virtualization (NFV) has the promise to achieve these goals, and 113 therefore has garnered much attention. It now seems certain that 114 some network functions will be virtualized; following the success of 115 cloud computing and virtual desktops (both supported by sufficient network 116 path capacity, performance, and widespread deployment), many of the 117 same techniques will help achieve NFV.
119 In the context of Virtualized Network Functions (VNFs), the supporting 120 Infrastructure requires general-purpose computing systems, storage 121 systems, networking systems, virtualization support systems (such as 122 hypervisors), and management systems for the virtual and physical 123 resources. There will be many potential suppliers of Infrastructure 124 systems and significant flexibility in configuring the systems for 125 best performance. There are also many potential suppliers of VNFs, 126 adding to the combinations possible in this environment. The 127 separation of hardware and software suppliers has profound 128 implications for benchmarking activities: much more of the internal 129 configuration of the black-box device under test (DUT) must now be 130 specified and reported with the results, to foster both repeatability 131 and comparison testing at a later time.
133 Consider the following User Story as further background and 134 motivation:
136 "I'm designing and building my NFV Infrastructure platform. The 137 first steps were easy because I had a small number of categories of 138 VNFs to support and the VNF vendor gave HW recommendations that I 139 followed. Now I need to deploy more VNFs from new vendors, and there 140 are different hardware recommendations. How well will the new VNFs 141 perform on my existing hardware? Which among several new VNFs in a 142 given category are most efficient in terms of the capacity they deliver? 143 And, when I operate multiple categories of VNFs (and PNFs) 144 *concurrently* on a hardware platform such that they share resources, 145 what are the new performance limits, and what are the software design 146 choices I can make to optimize my chosen hardware platform? 147 Conversely, what hardware platform upgrades should I pursue to 148 increase the capacity of these concurrently operating VNFs?"
150 See http://www.etsi.org/technologies-clusters/technologies/nfv for 151 more background; for example, the white papers there may be a useful 152 starting place. The Performance and Portability Best Practices 153 [NFV.PER001] are particularly relevant to BMWG. There are documents 154 available in the Open Area http://docbox.etsi.org/ISG/NFV/Open/ 155 Latest_Drafts/, including drafts describing Infrastructure aspects and 156 service quality.
158 2. Scope
160 BMWG will consider the new topic of Virtual Network Functions and 161 related Infrastructure to ensure that common issues are recognized 162 from the start, using background materials from industry and SDOs 163 (e.g., IETF, ETSI NFV).
165 This memo investigates additional methodological considerations 166 necessary when benchmarking VNFs instantiated and hosted in general- 167 purpose hardware, using bare-metal hypervisors or other isolation 168 environments such as Linux containers.
An essential consideration is 169 benchmarking physical and virtual network functions in the same way 170 when possible, thereby allowing direct comparison. Benchmarking 171 combinations of physical and virtual devices and 172 functions in a System Under Test is also within scope.
174 A clearly related goal is to investigate benchmarks for the capacity of a general- 175 purpose platform to host a plurality of VNF instances. 176 Existing networking technology benchmarks will also be 177 considered for adaptation to NFV and closely associated technologies.
179 A non-goal is any overlap with traditional computer benchmark 180 development and its specific metrics (SPECmark suites such as 181 SPECCPU).
183 A colossal non-goal is any form of architecture development related 184 to NFV and associated technologies in BMWG, consistent with all 185 chartered work since BMWG began in 1989.
187 3. Considerations for Hardware and Testing
189 This section lists the new considerations that must be addressed to 190 benchmark VNF(s) and their supporting infrastructure. The System 191 Under Test (SUT) is composed of the hardware platform components, the 192 VNFs installed, and many other supporting systems. It is critical to 193 document all aspects of the SUT to foster repeatability.
195 3.1. Hardware Components
197 New hardware devices will become part of the test set-up:
199 1. High-volume server platforms (general-purpose, possibly with 200 virtual technology enhancements).
202 2. Storage systems with large capacity, high speed, and high 203 reliability.
205 3. Network Interface ports specially designed for efficient service 206 of many virtual NICs.
208 4. High-capacity Ethernet switches.
210 Labs conducting comparisons of different VNFs may be able to use the 211 same hardware platform over many studies, until the steady march of 212 innovation overtakes its capabilities (as happens with the lab's 213 traffic generation and testing devices today).
215 3.2. Configuration Parameters
217 It will be necessary to configure and document the settings for the 218 entire general-purpose platform to ensure repeatability and foster 219 future comparisons, including:
221 o number of server blades (shelf occupation)
223 o CPUs
225 o caches
227 o storage system
229 o I/O
231 as well as configurations that support the devices that host the VNF 232 itself:
234 o Hypervisor (or other forms of virtual function hosting) 235 o Virtual Machine (VM)
237 o Infrastructure Virtual Network (which interconnects Virtual 238 Machines with physical network interfaces, or with each other 239 through virtual switches, for example)
241 and finally, the VNF itself, with items such as:
243 o specific function being implemented in the VNF
245 o reserved resources for each function (e.g., CPU pinning)
247 o number of VNFs (or sub-VNF components, each with its own VM) in 248 the service function chain (see section 1.1 of [RFC7498] for a 249 definition of service function chain)
251 o number of physical interfaces and links transited in the service 252 function chain
254 In the physical device benchmarking context, most of the 255 corresponding infrastructure configuration choices were determined by 256 the vendor. Although the platform itself is now one of the 257 configuration variables, it is important to maintain emphasis on the 258 networking benchmarks and capture the platform variables as input 259 factors.
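As an informal illustration only (not part of any required methodology), the configuration items above might be captured in a machine-readable record that accompanies reported results, to foster repeatability and later comparison. The field names and values in the following Python sketch are hypothetical assumptions, not a defined schema.

   import json

   # Hypothetical record of SUT configuration, archived alongside the
   # benchmark results; field names are illustrative only.
   sut_config = {
       "platform": {
           "server_blades": 4,             # shelf occupation
           "cpus": "2x 10-core @ 2.4 GHz",
           "cache": "25 MB L3 per socket",
           "storage": "2x 800 GB SSD, RAID-1",
           "io": "4x 10GbE NIC",
       },
       "hosting": {
           "hypervisor": "bare-metal hypervisor, version X",
           "vm_profile": "4 vCPU, 8 GB RAM",
           "infrastructure_virtual_network": "virtual switch, version Y",
       },
       "vnf": {
           "function": "virtual router",
           "reserved_resources": {"cpu_pinning": [2, 3, 4, 5]},
           "vnfs_in_service_function_chain": 2,
           "physical_interfaces_transited": 2,
       },
   }

   print(json.dumps(sut_config, indent=2))  # include with reported results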
261 3.3. Testing Strategies
263 The concept of characterizing performance at capacity limits may 264 change. For example:
266 1. It may be more representative of system capacity to characterize 267 the case where Virtual Machines (VMs) hosting the VNFs are 268 operating at 50% utilization, and therefore sharing the "real" 269 processing power across many VMs.
271 2. Another important case stems from the need for partitioning 272 functions. A noisy neighbor (a VM hosting a VNF in an infinite 273 loop) would ideally be isolated, and the performance of other VMs 274 would continue according to their specifications.
276 3. System errors will likely occur as transients, implying a 277 distribution of performance characteristics with a long tail 278 (like latency), leading to the need for longer-term tests of each 279 set of configuration and test parameters.
281 4. The desire for elasticity and flexibility among network functions 282 will include tests where there is constant flux in the number of 283 VM instances. Requests for and instantiation of new VMs, along 284 with releases of VMs hosting VNFs that are no longer needed, 285 would be a normal operational condition. In other words, 286 benchmarking should include scenarios with production life cycle 287 management of VMs and their VNFs and network connectivity in- 288 progress, as well as static configurations.
290 5. All physical things can fail, and benchmarking efforts can also 291 examine recovery aided by the virtual architecture with different 292 approaches to resiliency.
294 3.4. Attention to Shared Resources
296 Since many components of the new NFV Infrastructure are virtual, the test 297 set-up design must be based on prior knowledge of the interactions and dependencies 298 within the various resource domains in the System Under Test (SUT). 299 For example, a virtual machine performing the role of a traditional 300 tester function, such as generating and/or receiving traffic, should 301 avoid sharing any SUT resources with the Device Under Test (DUT). 302 Otherwise, the results will have unexpected dependencies not 303 encountered in physical device benchmarking.
305 Note: The term "tester" has traditionally referred to devices 306 dedicated to testing in BMWG literature. In this new context, 307 "tester" additionally refers to functions dedicated to testing, which 308 may be either virtual or physical. "Tester" has never referred to 309 the individuals performing the tests.
311 The shared-resource aspect of test design remains one of the critical 312 challenges to overcome in order to produce useful results. 313 The physical test device remains a solid foundation for comparison 314 against results obtained using combinations of physical and virtual test 315 functions, or using only virtual testers when necessary to 316 assess virtual interfaces and other virtual functions.
318 4. Benchmarking Considerations
320 This section discusses considerations related to benchmarks 321 applicable to VNFs and their associated technologies.
323 4.1. Comparison with Physical Network Functions
325 In order to compare the performance of VNFs and system 326 implementations with their physical counterparts, identical 327 benchmarks must be used. Since BMWG has already developed 328 specifications for many network functions, there will be re-use of 329 existing benchmarks through references, while allowing for the 330 possibility of benchmark curation during development of new 331 methodologies. Consideration should be given to quantifying the 332 number of parallel VNFs required to achieve comparable scale/capacity 333 with a given physical device, or to determining whether some limit of scale was 334 reached before the VNFs could achieve a comparable level.
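For illustration only, this comparison might be quantified as in the following sketch; all values, including the per-instance result assumed to be measured at 50% VM utilization, are hypothetical and ignore shared-resource effects.

   import math

   # Hypothetical benchmark results, for illustration only.
   pnf_throughput_mpps = 42.0   # physical device forwarding rate
   vnf_throughput_mpps = 3.5    # one VNF instance at 50% VM utilization
   cores_per_instance = 2       # reserved (pinned) cores per instance
   cores_available = 20         # cores the platform can dedicate to VNFs

   # Instances needed to match the PNF result.
   instances_needed = math.ceil(pnf_throughput_mpps / vnf_throughput_mpps)

   # Scale limit imposed by the platform's core budget.
   instances_possible = cores_available // cores_per_instance

   if instances_needed <= instances_possible:
       print(f"{instances_needed} parallel instances may reach PNF capacity")
   else:
       print(f"scale limit reached at {instances_possible} instances")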
Again, 335 implementations based on different hypervisors or other virtual 336 function hosting remain critical factors in performance 337 assessment.
339 4.2. Continued Emphasis on Black-Box Benchmarks
341 When the network functions under test are based on Open Source code, 342 there may be a tendency to rely on internal measurements to some 343 extent, especially when the externally-observable phenomena only 344 support an inference of internal events (such as routing protocol 345 convergence observed in the dataplane). Examples include CPU/Core 346 utilization and Memory Committed/used. However, external observations 347 remain essential as the basis for benchmarks. Internal observations 348 with fixed specification and interpretation may be provided in 349 parallel, to assist the development of operations procedures when the 350 technology is deployed, for example. Internal metrics and 351 measurements from Open Source implementations may be the only direct 352 source of performance results in a desired dimension, but 353 corroborating external observations are still required to assure that the 354 integrity of measurement discipline was maintained for all reported 355 results.
357 A related aspect of benchmark development arises when the scope includes 358 multiple approaches to a common function under the same benchmark. 359 For example, there are many ways to arrange for activation of a 360 network path between interface points, and the activation times can be 361 compared if the start-to-stop activation interval has a generic and 362 unambiguous definition. Thus, generic benchmark definitions are 363 preferred over technology- or protocol-specific definitions where 364 possible.
366 4.3. New Benchmarks and Related Metrics
368 There will be new classes of benchmarks needed for network design and 369 for assistance when developing operational practices (possibly automated 370 management and orchestration of deployment scale). Examples follow 371 in the paragraphs below, many of which are prompted by the goals of 372 increased elasticity and flexibility of the network functions, along 373 with accelerated deployment times.
375 Time to deploy VNFs: In cases where the general-purpose hardware is 376 already deployed and ready for service, it is valuable to know the 377 response time when a management system is tasked with "standing up" 378 hundreds of virtual machines and the VNFs they will host.
380 Time to migrate VNFs: In cases where a rack or shelf of hardware must 381 be removed from active service, it is valuable to know the response 382 time when a management system is tasked with "migrating" some number 383 of virtual machines and the VNFs they currently host to alternate 384 hardware that will remain in service.
386 Time to create a virtual network in the general-purpose 387 infrastructure: This is a somewhat simplified version of existing 388 benchmarks for convergence time, in that the process is initiated by 389 a request from (centralized or distributed) control, rather than 390 inferred from network events (e.g., link failure). The successful response 391 time would remain dependent on dataplane observations to confirm that 392 the network is ready to perform.
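A minimal sketch of how the "Time to deploy VNFs" benchmark above might be measured is shown below; the management-system client and its methods are hypothetical placeholders rather than a defined API, and a complete method would also confirm readiness in the dataplane.

   import time

   def time_to_deploy_vnfs(mgmt, vm_count):
       """Measure the interval from the deployment request until the
       management system reports all requested VM/VNF instances ready.
       'mgmt' is a hypothetical management-system client object."""
       start = time.monotonic()
       request_id = mgmt.deploy(vm_count)      # request N VMs and their VNFs
       while not mgmt.all_ready(request_id):   # poll reported readiness
           time.sleep(0.1)
       return time.monotonic() - start

   # Example usage (hypothetical client):
   # elapsed = time_to_deploy_vnfs(mgmt_client, 100)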
394 Also, it appears to be valuable to measure traditional packet 395 transfer performance metrics during the assessment of traditional and 396 new benchmarks, including metrics that may be used to support service 397 engineering, such as the Spatial Composition metrics found in 398 [RFC6049]. Examples include the Mean one-way delay in Section 4.1 of 399 [RFC6049], Packet Delay Variation (PDV) in [RFC5481], and Packet 400 Reordering [RFC4737] [RFC4689].
402 4.4. Assessment of Benchmark Coverage
404 It can be useful to organize benchmarks according to their applicable 405 life cycle stage and the performance criteria they intend to assess. 406 The table below provides a way to organize benchmarks such that there 407 is a clear indication of coverage for the intersection of life cycle 408 stages and performance criteria.
410 |----------------------------------------------------------|
411 |                |           |            |                |
412 |                |   SPEED   |  ACCURACY  |  RELIABILITY   |
413 |                |           |            |                |
414 |----------------------------------------------------------|
415 |                |           |            |                |
416 |   Activation   |           |            |                |
417 |                |           |            |                |
418 |----------------------------------------------------------|
419 |                |           |            |                |
420 |   Operation    |           |            |                |
421 |                |           |            |                |
422 |----------------------------------------------------------|
423 |                |           |            |                |
424 |  De-activation |           |            |                |
425 |                |           |            |                |
426 |----------------------------------------------------------|
427 For example, the "Time to deploy VNFs" benchmark described above 428 would be placed in the intersection of Activation and Speed, making 429 it clear that there are other potential performance criteria to 430 benchmark, such as the "percentage of unsuccessful VM/VNF stand-ups" 431 in a set of 100 attempts. This example emphasizes that the 432 Activation and De-activation life cycle stages are key areas for NFV 433 and related infrastructure, and it encourages expansion beyond 434 traditional benchmarks for normal operation. Thus, reviewing the 435 benchmark coverage using this table (sometimes called the 3x3 matrix) 436 can be a worthwhile exercise in BMWG.
438 In one of the first applications of the 3x3 matrix in BMWG, we 439 discovered that metrics on measured size, capacity, or scale do not 440 easily match one of the three columns above. Following discussion, 441 this was resolved in two ways:
443 o Add a column, Scalability, for use when categorizing benchmarks.
445 o If using the matrix to report results in an organized way, keep 446 size, capacity, and scale metrics separate from the 3x3 matrix and 447 incorporate them in the report with other qualifications of the 448 results.
450 This approach encourages use of the 3x3 matrix to organize reports of 451 results, where the capacity at which the various metrics were 452 measured could be included in the title of the matrix (and results 453 for multiple capacities would result in separate 3x3 matrices, if 454 there were sufficient measurements/results to organize in that way).
456 For example, results for each VM and VNF could appear in the 3x3 457 matrix, organized to illustrate resource occupation (CPU cores) in a 458 particular physical computing system, as shown below.
460                  VNF#1
461               .-----------.
462               |__|__|__|__|
463    Core 1     |__|__|__|__|
464               |__|__|__|__|
465               |  |  |  |  |
466               '-----------'
467                  VNF#2
468               .-----------.
469               |__|__|__|__|
470    Cores 2-5  |__|__|__|__|
471               |__|__|__|__|
472               |  |  |  |  |
473               '-----------'
474                  VNF#3          VNF#4          VNF#5
475               .-----------.  .-----------.  .-----------.
476               |__|__|__|__|  |__|__|__|__|  |__|__|__|__|
477    Core 6     |__|__|__|__|  |__|__|__|__|  |__|__|__|__|
478               |__|__|__|__|  |__|__|__|__|  |__|__|__|__|
479               |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
480               '-----------'  '-----------'  '-----------'
481                  VNF#6
482               .-----------.
483               |__|__|__|__|
484    Core 7     |__|__|__|__|
485               |__|__|__|__|
486               |  |  |  |  |
487               '-----------'
489 The combination of tables above could be built incrementally, 490 beginning with VNF#1 and one core, then adding VNFs according to 491 their supporting core assignments. X-Y plots of critical benchmarks 492 would also provide insight into the effect of increased HW utilization. 493 All VNFs might be of the same type, or, to match a production 494 environment, there could be VNFs of multiple types and categories. In 495 this figure, VNFs #3-#5 are assumed to require small CPU resources, 496 while VNF#2 requires 4 cores to perform its function.
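As a concrete illustration of this style of reporting (a sketch only; the benchmark names, values, and capacity qualification are hypothetical), results for one VNF might be organized by life cycle stage and performance criterion, with the measured capacity kept in the title rather than in the matrix cells:

   # Hypothetical results for one VNF, organized in the 3x3 matrix; the
   # capacity at which the metrics were measured is kept in the title,
   # separate from the matrix cells, as discussed above.
   report = {
       "title": "VNF#1 (Core 1), measured at 50% VM utilization",
       "matrix": {
           ("Activation",    "Speed"):       "time to deploy VNF: 180 s",
           ("Activation",    "Reliability"): "unsuccessful stand-ups: 2 in 100",
           ("Operation",     "Speed"):       "mean one-way delay: 350 us",
           ("Operation",     "Accuracy"):    "frame loss ratio: 1.0e-5",
           ("De-activation", "Speed"):       "time to release VM: 12 s",
       },
   }

   print(report["title"])
   for (stage, criterion), result in report["matrix"].items():
       print(f"  {stage:<13} | {criterion:<11} | {result}")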
498 4.5. Power Consumption
500 Although work to benchmark the power consumption of physical network 501 functions in a meaningful way remains incomplete, the desire to measure 502 the physical infrastructure supporting the virtual functions only 503 adds to the need. Both maximum power consumption and dynamic power 504 consumption (with varying load?) would be useful.
506 >>> ADD REC from Dallas meeting...
508 5. Security Considerations
510 Benchmarking activities as described in this memo are limited to 511 technology characterization of a Device Under Test/System Under Test 512 (DUT/SUT) using controlled stimuli in a laboratory environment, with 513 dedicated address space and the constraints specified in the sections 514 above.
516 The benchmarking network topology will be an independent test setup 517 and MUST NOT be connected to devices that may forward the test 518 traffic into a production network, or misroute traffic to the test 519 management network.
521 Further, benchmarking is performed on a "black-box" basis, relying 522 solely on measurements observable external to the DUT/SUT.
524 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 525 benchmarking purposes. Any implications for network security arising 526 from the DUT/SUT SHOULD be identical in the lab and in production 527 networks.
529 6. IANA Considerations
531 No IANA Action is requested at this time.
533 7. Acknowledgements
535 The author acknowledges an encouraging conversation on this topic 536 with Mukhtiar Shaikh and Ramki Krishnan in November 2013. Bhavani 537 Parise and Ilya Varlashkin have provided useful suggestions to expand 538 these considerations. Bhuvaneswaran Vengainathan has already tried 539 the 3x3 matrix with the SDN controller draft, and contributed to many 540 discussions. Scott Bradner quickly pointed out shared resource 541 dependencies in an early vSwitch measurement proposal, and the topic 542 was included here as a key consideration. Further development was 543 encouraged by Barry Constantine's comments following the IETF-92 BMWG 544 session: the session itself was an affirmation for this memo, with 545 many interested inputs from Scott, Ramki, Barry, Bhuvan, Jacob Rapp, 546 and others.
548 8. References
550 8.1. Normative References
552 [NFV.PER001] 553 "Network Function Virtualization: Performance and 554 Portability Best Practices", Group Specification ETSI GS 555 NFV-PER 001 V1.1.1 (2014-06), June 2014.
557 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 558 Requirement Levels", BCP 14, RFC 2119, March 1997.
560 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M.
Mathis, 561 "Framework for IP Performance Metrics", RFC 2330, May 562 1998.
564 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 565 Network Interconnect Devices", RFC 2544, March 1999.
567 [RFC2679] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way 568 Delay Metric for IPPM", RFC 2679, September 1999.
570 [RFC2680] Almes, G., Kalidindi, S., and M. Zekauskas, "A One-way 571 Packet Loss Metric for IPPM", RFC 2680, September 1999.
573 [RFC2681] Almes, G., Kalidindi, S., and M. Zekauskas, "A Round-trip 574 Delay Metric for IPPM", RFC 2681, September 1999.
576 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 577 Metric for IP Performance Metrics (IPPM)", RFC 3393, 578 November 2002.
580 [RFC3432] Raisanen, V., Grotefeld, G., and A. Morton, "Network 581 performance measurement with periodic streams", RFC 3432, 582 November 2002.
584 [RFC4689] Poretsky, S., Perser, J., Erramilli, S., and S. Khurana, 585 "Terminology for Benchmarking Network-layer Traffic 586 Control Mechanisms", RFC 4689, October 2006.
588 [RFC4737] Morton, A., Ciavattone, L., Ramachandran, G., Shalunov, 589 S., and J. Perser, "Packet Reordering Metrics", RFC 4737, 590 November 2006.
592 [RFC5357] Hedayat, K., Krzanowski, R., Morton, A., Yum, K., and J. 593 Babiarz, "A Two-Way Active Measurement Protocol (TWAMP)", 594 RFC 5357, October 2008.
596 [RFC5905] Mills, D., Martin, J., Burbank, J., and W. Kasch, "Network 597 Time Protocol Version 4: Protocol and Algorithms 598 Specification", RFC 5905, June 2010.
600 [RFC7498] Quinn, P. and T. Nadeau, "Problem Statement for Service 601 Function Chaining", RFC 7498, April 2015.
603 8.2. Informative References
605 [RFC1242] Bradner, S., "Benchmarking terminology for network 606 interconnection devices", RFC 1242, July 1991.
608 [RFC5481] Morton, A. and B. Claise, "Packet Delay Variation 609 Applicability Statement", RFC 5481, March 2009.
611 [RFC6049] Morton, A. and E. Stephan, "Spatial Composition of 612 Metrics", RFC 6049, January 2011.
614 [RFC6248] Morton, A., "RFC 4148 and the IP Performance Metrics 615 (IPPM) Registry of Metrics Are Obsolete", RFC 6248, April 616 2011.
618 [RFC6390] Clark, A. and B. Claise, "Guidelines for Considering New 619 Performance Metric Development", BCP 170, RFC 6390, 620 October 2011.
622 Author's Address
624 Al Morton 625 AT&T Labs 626 200 Laurel Avenue South 627 Middletown, NJ 07748 628 USA
630 Phone: +1 732 420 1571 631 Fax: +1 732 368 1192 632 Email: acmorton@att.com 633 URI: http://home.comcast.net/~acmacm/