Network Working Group                                          A. Morton
Internet-Draft                                                 AT&T Labs
Intended status: Informational                            March 16, 2017
Expires: September 17, 2017


  Considerations for Benchmarking Virtual Network Functions and Their
                             Infrastructure
                     draft-ietf-bmwg-virtual-net-05

Abstract

   The Benchmarking Methodology Working Group has traditionally
   conducted laboratory characterization of dedicated physical
   implementations of internetworking functions.  This memo
   investigates additional considerations when network functions are
   virtualized and performed in general-purpose hardware.

Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 17, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .  2
   2.  Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
   3.  Considerations for Hardware and Testing  . . . . . . . . . .  4
     3.1.  Hardware Components . . . . . . . . . . . . . . . . . .  4
     3.2.  Configuration Parameters  . . . . . . . . . . . . . . .  5
     3.3.  Testing Strategies  . . . . . . . . . . . . . . . . . .  6
     3.4.  Attention to Shared Resources . . . . . . . . . . . . .  7
   4.  Benchmarking Considerations . . . . . . . . . . . . . . . .  7
     4.1.  Comparison with Physical Network Functions  . . . . . .  7
     4.2.  Continued Emphasis on Black-Box Benchmarks  . . . . . .  8
     4.3.  New Benchmarks and Related Metrics  . . . . . . . . . .  8
     4.4.  Assessment of Benchmark Coverage  . . . . . . . . . . .  9
     4.5.  Power Consumption . . . . . . . . . . . . . . . . . . . 12
   5.  Security Considerations . . . . . . . . . . . . . . . . . . 12
   6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . 12
   7.  Acknowledgements  . . . . . . . . . . . . . . . . . . . . . 12
   8.  Version history . . . . . . . . . . . . . . . . . . . . . . 13
   9.  References  . . . . . . . . . . . . . . . . . . . . . . . . 13
     9.1.  Normative References  . . . . . . . . . . . . . . . . . 14
     9.2.  Informative References  . . . . . . . . . . . . . . . . 14
   Author's Address  . . . . . . . . . . . . . . . . . . . . . . . 15

1.  Introduction

   The Benchmarking Methodology Working Group (BMWG) has traditionally
   conducted laboratory characterization of dedicated physical
   implementations of internetworking functions (or physical network
   functions, PNFs).  The black-box benchmarks of Throughput, Latency,
   Forwarding Rates, and others have served our industry for many
   years.  [RFC1242] and [RFC2544] are the cornerstones of this work.

   An emerging set of service provider and vendor development goals is
   to reduce costs while increasing the flexibility of network devices,
   and to drastically accelerate their deployment.  Network Function
   Virtualization (NFV) promises to achieve these goals and has
   therefore garnered much attention.  It now seems certain that some
   network functions will be virtualized, following the success of
   cloud computing and virtual desktops supported by sufficient network
   path capacity, performance, and widespread deployment; many of the
   same techniques will help achieve NFV.

   In the context of Virtualized Network Functions (VNFs), the
   supporting infrastructure requires general-purpose computing
   systems, storage systems, networking systems, virtualization support
   systems (such as hypervisors), and management systems for the
   virtual and physical resources.  There will be many potential
   suppliers of infrastructure systems and significant flexibility in
   configuring the systems for best performance.  There are also many
   potential suppliers of VNFs, adding to the combinations possible in
   this environment.
   The separation of hardware and software suppliers has a profound
   implication on benchmarking activities: much more of the internal
   configuration of the black-box device under test (DUT) must now be
   specified and reported with the results, to foster both
   repeatability and comparison testing at a later time.

   Consider the following User Story as further background and
   motivation:

   "I'm designing and building my NFV Infrastructure platform.  The
   first steps were easy because I had a small number of categories of
   VNFs to support and the VNF vendor gave HW recommendations that I
   followed.  Now I need to deploy more VNFs from new vendors, and
   there are different hardware recommendations.  How well will the new
   VNFs perform on my existing hardware?  Which among several new VNFs
   in a given category are most efficient in terms of capacity they
   deliver?  And, when I operate multiple categories of VNFs (and PNFs)
   *concurrently* on a hardware platform such that they share
   resources, what are the new performance limits, and what are the
   software design choices I can make to optimize my chosen hardware
   platform?  Conversely, what hardware platform upgrades should I
   pursue to increase the capacity of these concurrently operating
   VNFs?"

   See http://www.etsi.org/technologies-clusters/technologies/nfv for
   more background, for example, the white papers there may be a useful
   starting place.  The Performance and Portability Best Practices
   [NFV.PER001] are particularly relevant to BMWG.  There are documents
   available in the Open Area
   http://docbox.etsi.org/ISG/NFV/Open/Latest_Drafts/ including drafts
   describing Infrastructure aspects and service quality.

2.  Scope

   At the time of this writing, BMWG is considering the new topic of
   Virtual Network Functions and related Infrastructure to ensure that
   common issues are recognized from the start, using background
   materials from industry and SDOs (e.g., IETF, ETSI NFV).

   This memo investigates additional methodological considerations
   necessary when benchmarking VNFs instantiated and hosted in
   general-purpose hardware, using bare metal hypervisors [BareMetal]
   or other isolation environments such as Linux containers.  An
   essential consideration is benchmarking physical and virtual network
   functions in the same way when possible, thereby allowing direct
   comparison.  Benchmarking combinations of physical and virtual
   devices and functions in a System Under Test is another topic of
   keen interest.

   A clearly related goal: the benchmarks for the capacity of a
   general-purpose platform to host a plurality of VNF instances should
   be investigated.  Existing networking technology benchmarks will
   also be considered for adaptation to NFV and closely associated
   technologies.

   A non-goal is any overlap with traditional computer benchmark
   development and their specific metrics (SPECmark suites such as
   SPECCPU).

   A continued non-goal is any form of architecture development related
   to NFV and associated technologies in BMWG, consistent with all
   chartered work since BMWG began in 1989.

3.  Considerations for Hardware and Testing

   This section lists the new considerations which must be addressed to
   benchmark VNF(s) and their supporting infrastructure.  The System
   Under Test (SUT) is composed of the hardware platform components,
   the VNFs installed, and many other supporting systems.
   It is critical to document all aspects of the SUT to foster
   repeatability.

3.1.  Hardware Components

   New hardware components will become part of the test set-up.

   1.  High-volume server platforms (general-purpose, possibly with
       virtual technology enhancements).

   2.  Storage systems with large capacity, high speed, and high
       reliability.

   3.  Network Interface ports specially designed for efficient service
       of many virtual NICs.

   4.  High-capacity Ethernet switches.

   The components above are subjects for development of specialized
   benchmarks focused on the special demands of network function
   deployment.

   Labs conducting comparisons of different VNFs may be able to use the
   same hardware platform over many studies, until the steady march of
   innovations overtakes their capabilities (as happens with the lab's
   traffic generation and testing devices today).

3.2.  Configuration Parameters

   It will be necessary to configure and document the settings for the
   entire general-purpose platform to ensure repeatability and foster
   future comparisons, including but clearly not limited to the
   following:

   o  number of server blades (shelf occupation)

   o  CPUs

   o  caches

   o  memory

   o  storage system

   o  I/O

   as well as configurations that support the devices which host the
   VNF itself:

   o  Hypervisor (or other forms of virtual function hosting)

   o  Virtual Machine (VM)

   o  Infrastructure Virtual Network (which interconnects Virtual
      Machines with physical network interfaces, or with each other
      through virtual switches, for example)

   and finally, the VNF itself, with items such as:

   o  specific function being implemented in the VNF

   o  reserved resources for each function (e.g., CPU pinning and
      Non-Uniform Memory Access (NUMA) node assignment)

   o  number of VNFs (or sub-VNF components, each with its own VM) in
      the service function chain (see Section 1.1 of [RFC7498] for a
      definition of service function chain)

   o  number of physical interfaces and links transited in the service
      function chain

   In the physical device benchmarking context, most of the
   corresponding infrastructure configuration choices were determined
   by the vendor.  Although the platform itself is now one of the
   configuration variables, it is important to maintain emphasis on the
   networking benchmarks and capture the platform variables as input
   factors.
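   As an illustration of the configuration reporting discussed above,
   the following minimal sketch (in Python) captures platform, hosting,
   and VNF parameters in a machine-readable record that can be archived
   with the benchmark results.  All field names and example values are
   assumptions made for this sketch; this memo does not prescribe a
   schema.

      # Illustrative only: field names, values, and the JSON layout are
      # assumptions for this sketch, not a schema defined by this memo.
      from dataclasses import dataclass, field, asdict
      import json

      @dataclass
      class SutConfig:
          server_blades: int = 2               # shelf occupation
          cpu_model: str = "unspecified"
          cores_per_socket: int = 0
          cache_mb: int = 0
          memory_gb: int = 0
          storage_system: str = "unspecified"
          io_devices: list = field(default_factory=list)
          hypervisor: str = "unspecified"      # or other VNF hosting
          infrastructure_virtual_network: str = "unspecified"
          vnf_function: str = "unspecified"
          cpu_pinning: dict = field(default_factory=dict)
          service_chain_vnfs: int = 1
          physical_links_in_chain: int = 2

      config = SutConfig(cpu_model="example-x86_64", cores_per_socket=8,
                         memory_gb=64, hypervisor="example-hypervisor",
                         cpu_pinning={"vnf1": {"cores": [2, 3],
                                               "numa_node": 0}})
      print(json.dumps(asdict(config), indent=2))  # archived with results

   Keeping such a record as an input factor alongside each result set
   makes later comparison across platforms considerably easier.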
3.3.  Testing Strategies

   The concept of characterizing performance at capacity limits may
   change.  For example:

   1.  It may be more representative of system capacity to characterize
       the case where Virtual Machines (VMs, hosting the VNF) are
       operating at 50% utilization, and therefore sharing the "real"
       processing power across many VMs.

   2.  Another important case stems from the need for partitioning
       functions.  A noisy neighbor (a VM hosting a VNF in an infinite
       loop) would ideally be isolated, and the performance of other
       VMs would continue according to their specifications.

   3.  System errors will likely occur as transients, implying a
       distribution of performance characteristics with a long tail
       (like latency), leading to the need for longer-term tests of
       each set of configuration and test parameters.

   4.  The desire for elasticity and flexibility among network
       functions will include tests where there is constant flux in the
       number of VM instances, the resources the VMs require, and the
       set-up/tear-down of network paths that support VM connectivity.
       Requests for and instantiation of new VMs, along with releases
       of VMs hosting VNFs that are no longer needed, would be a normal
       operational condition.  In other words, benchmarking should
       include scenarios with production life cycle management of VMs
       and their VNFs and network connectivity in progress, including
       VNF scaling up/down operations, as well as static
       configurations.

   5.  All physical things can fail, and benchmarking efforts can also
       examine recovery aided by the virtual architecture with
       different approaches to resiliency.

   6.  The sheer number of test conditions and configuration
       combinations encourages increased efficiency, including
       automated testing arrangements, combination sub-sampling through
       an understanding of inter-relationships, and machine-readable
       test results.

3.4.  Attention to Shared Resources

   Since many components of the new NFV Infrastructure are virtual,
   test set-up design must have prior knowledge of interactions and
   dependencies within the various resource domains in the System Under
   Test (SUT).  For example, a virtual machine performing the role of a
   traditional tester function, such as generating and/or receiving
   traffic, should avoid sharing any SUT resources with the Device
   Under Test (DUT).  Otherwise, the results will have unexpected
   dependencies not encountered in physical device benchmarking.

   Note: The term "tester" has traditionally referred to devices
   dedicated to testing in BMWG literature.  In this new context,
   "tester" additionally refers to functions dedicated to testing,
   which may be either virtual or physical.  "Tester" has never
   referred to the individuals performing the tests.

   The shared-resource aspect of test design remains one of the
   critical challenges to overcome in a way that produces useful
   results.  Benchmarking set-ups may designate isolated resources for
   the DUT and other critical support components (such as the
   host/kernel) as the first baseline step, and then add other loading
   processes.  The added complexity of each set-up leads to
   shared-resource testing scenarios, where the characteristics of the
   competing load (in terms of memory, storage, and CPU utilization)
   will directly affect the benchmarking results (and the variability
   of the results), but the results should reconcile with the baseline.

   The physical test device remains a solid foundation to compare with
   results using combinations of physical and virtual test functions,
   or results using only virtual testers when necessary to assess
   virtual interfaces and other virtual functions.
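   The reconciliation step described above can be automated, and doing
   so also produces the machine-readable results mentioned in
   Section 3.3.  The following minimal sketch (Python) compares results
   from a shared-resource scenario against the isolated-resource
   baseline; the metric names, example values, and the 15% tolerance
   are assumptions chosen for illustration, not recommendations of this
   memo.

      def reconciles_with_baseline(baseline, loaded, tolerance=0.15):
          """Flag benchmark results from a shared-resource scenario that
          deviate from the isolated-resource baseline by more than the
          (illustrative) tolerance."""
          report = {}
          for metric, base_value in baseline.items():
              loaded_value = loaded.get(metric)
              if loaded_value is None:
                  report[metric] = "missing"
                  continue
              deviation = abs(loaded_value - base_value) / base_value
              report[metric] = ("ok" if deviation <= tolerance
                                else "deviates by {:.0%}".format(deviation))
          return report

      # Hypothetical results: DUT with isolated resources, then the same
      # DUT with a competing load sharing the platform.
      baseline = {"throughput_mpps": 9.8, "mean_latency_us": 40.0}
      with_load = {"throughput_mpps": 8.9, "mean_latency_us": 55.0}
      print(reconciles_with_baseline(baseline, with_load))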
4.  Benchmarking Considerations

   This section discusses considerations related to benchmarks
   applicable to VNFs and their associated technologies.

4.1.  Comparison with Physical Network Functions

   In order to compare the performance of VNFs and system
   implementations with their physical counterparts, identical
   benchmarks must be used.  Since BMWG has already developed
   specifications for many network functions, there will be re-use of
   existing benchmarks through references, while allowing for the
   possibility of benchmark curation during development of new
   methodologies.  Consideration should be given to quantifying the
   number of parallel VNFs required to achieve comparable
   scale/capacity with a given physical device, or whether some limit
   of scale was reached before the VNFs could achieve the comparable
   level.  Again, implementations based on different hypervisors or
   other virtual function hosting remain critical factors in
   performance assessment.

4.2.  Continued Emphasis on Black-Box Benchmarks

   When the network functions under test are based on Open Source code,
   there may be a tendency to rely on internal measurements to some
   extent, especially when the externally observable phenomena only
   support an inference of internal events (such as routing protocol
   convergence observed in the dataplane).  Examples include CPU/core
   utilization, network utilization, storage utilization, and memory
   committed/used.  These "white-box" metrics provide one view of the
   resource footprint of a VNF.  Note: the resource utilization metrics
   do not easily match the 3x4 matrix described in Section 4.4 below.

   However, external observations remain essential as the basis for
   benchmarks.  Internal observations with fixed specification and
   interpretation may be provided in parallel (as auxiliary metrics),
   to assist the development of operations procedures when the
   technology is deployed, for example.  Internal metrics and
   measurements from Open Source implementations may be the only direct
   source of performance results in a desired dimension, but
   corroborating external observations are still required to assure
   that the integrity of measurement discipline was maintained for all
   reported results.

   A related aspect of benchmark development is where the scope
   includes multiple approaches to a common function under the same
   benchmark.  For example, there are many ways to arrange for
   activation of a network path between interface points, and the
   activation times can be compared if the start-to-stop activation
   interval has a generic and unambiguous definition.  Thus, generic
   benchmark definitions are preferred over technology- or
   protocol-specific definitions where possible.
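   To make the distinction concrete, the following minimal sketch
   (Python) records auxiliary "white-box" resource samples on the host
   in parallel with an externally observed benchmark run.  The psutil
   library and the run_external_benchmark() placeholder are assumptions
   for illustration only; the reported benchmark itself must still be
   derived from observations external to the DUT/SUT, with these
   samples reported separately as auxiliary metrics.

      import threading
      import time
      import psutil   # assumed available on the host under observation

      samples = []

      def sample_resources(stop_event, interval=1.0):
          # Auxiliary metrics only; reported separately from benchmarks.
          while not stop_event.is_set():
              samples.append({
                  "t": time.time(),
                  "cpu_percent": psutil.cpu_percent(interval=None),
                  "mem_percent": psutil.virtual_memory().percent,
              })
              time.sleep(interval)

      stop = threading.Event()
      collector = threading.Thread(target=sample_resources, args=(stop,))
      collector.start()
      # run_external_benchmark()   # placeholder for black-box measurement
      time.sleep(5)
      stop.set()
      collector.join()
      print("collected", len(samples), "auxiliary samples")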
4.3.  New Benchmarks and Related Metrics

   There will be new classes of benchmarks needed for network design
   and assistance when developing operational practices (possibly
   automated management and orchestration of deployment scale).
   Examples follow in the paragraphs below, many of which are prompted
   by the goals of increased elasticity and flexibility of the network
   functions, along with accelerated deployment times.

   o  Time to deploy VNFs: In cases where the general-purpose hardware
      is already deployed and ready for service, it is valuable to know
      the response time when a management system is tasked with
      "standing-up" hundreds of virtual machines and the VNFs they will
      host (see the sketch at the end of this section).

   o  Time to migrate VNFs: In cases where a rack or shelf of hardware
      must be removed from active service, it is valuable to know the
      response time when a management system is tasked with "migrating"
      some number of virtual machines and the VNFs they currently host
      to alternate hardware that will remain in-service.

   o  Time to create a virtual network in the general-purpose
      infrastructure: This is a somewhat simplified version of existing
      benchmarks for convergence time, in that the process is initiated
      by a request from (centralized or distributed) control, rather
      than inferred from network events (link failure).  The successful
      response time would remain dependent on dataplane observations to
      confirm that the network is ready to perform.

   o  Effect of verification measurements on performance: A complete
      VNF, or something as simple as a new policy to implement in a
      VNF, is instantiated.  The action to verify instantiation of the
      VNF or policy could affect performance during normal operation.

   Also, it appears to be valuable to measure traditional packet
   transfer performance metrics during the assessment of traditional
   and new benchmarks, including metrics that may be used to support
   service engineering, such as the Spatial Composition metrics found
   in [RFC6049].  Examples include Mean One-Way Delay in Section 4.1 of
   [RFC6049], Packet Delay Variation (PDV) in [RFC5481], and Packet
   Reordering [RFC4737] [RFC4689].
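   As one way to realize the "time to deploy VNFs" benchmark sketched
   above, the following Python fragment times a batch instantiation
   request until the management system reports all instances ready.
   The orchestrator object and its create_vnf_instance() and is_ready()
   methods are hypothetical stand-ins for a real management and
   orchestration API, and a dataplane check should still confirm
   readiness, as noted for the virtual-network case above.

      import time

      def time_to_deploy(orchestrator, count=100, poll_interval=1.0,
                         timeout=600.0):
          """Return the elapsed seconds from the bulk request until the
          (hypothetical) management system reports all instances ready."""
          start = time.monotonic()
          requests = [orchestrator.create_vnf_instance()
                      for _ in range(count)]
          while time.monotonic() - start < timeout:
              if all(orchestrator.is_ready(r) for r in requests):
                  return time.monotonic() - start
              time.sleep(poll_interval)
          raise TimeoutError("not all instances ready within %.0f s"
                             % timeout)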
4.4.  Assessment of Benchmark Coverage

   It can be useful to organize benchmarks according to their
   applicable life cycle stage and the performance criteria they were
   designed to assess.  The table below (derived from [X3.102]) provides
   a way to organize benchmarks such that there is a clear indication
   of coverage for the intersection of life cycle stages and
   performance criteria.

   |------------------------------------------------------|
   |               |           |            |             |
   |               |   SPEED   |  ACCURACY  | RELIABILITY |
   |               |           |            |             |
   |------------------------------------------------------|
   |               |           |            |             |
   | Activation    |           |            |             |
   |               |           |            |             |
   |------------------------------------------------------|
   |               |           |            |             |
   | Operation     |           |            |             |
   |               |           |            |             |
   |------------------------------------------------------|
   |               |           |            |             |
   | De-activation |           |            |             |
   |               |           |            |             |
   |------------------------------------------------------|

   For example, the "Time to deploy VNFs" benchmark described above
   would be placed in the intersection of Activation and Speed, making
   it clear that there are other potential performance criteria to
   benchmark, such as the "percentage of unsuccessful VM/VNF stand-ups"
   in a set of 100 attempts.  This example emphasizes that the
   Activation and De-activation life cycle stages are key areas for NFV
   and related infrastructure, and it encourages expansion beyond
   traditional benchmarks for normal operation.  Thus, reviewing the
   benchmark coverage using this table (sometimes called the 3x3
   matrix) can be a worthwhile exercise in BMWG.

   In one of the first applications of the 3x3 matrix in BMWG
   [I-D.ietf-bmwg-sdn-controller-benchmark-meth], we discovered that
   metrics on measured size, capacity, or scale do not easily match one
   of the three columns above.  Following discussion, this was resolved
   in two ways:

   o  Add a column, Scale, for use when categorizing and assessing the
      coverage of benchmarks (without measured results).  Examples of
      this use are found in
      [I-D.ietf-bmwg-sdn-controller-benchmark-meth] and
      [I-D.vsperf-bmwg-vswitch-opnfv].  This is the 3x4 matrix.

   o  If using the matrix to report results in an organized way, keep
      size, capacity, and scale metrics separate from the 3x3 matrix
      and incorporate them in the report with other qualifications of
      the results.

   Note: The resource utilization (e.g., CPU) metrics do not fit in the
   matrix.  They are not benchmarks, and omitting them confirms their
   status as auxiliary metrics.  Resource assignments are configuration
   parameters, and these are reported separately.

   This approach encourages use of the 3x3 matrix to organize reports
   of results, where the capacity at which the various metrics were
   measured could be included in the title of the matrix (and results
   for multiple capacities would result in separate 3x3 matrices, if
   there were sufficient measurements/results to organize in that way).

   For example, results for each VM and VNF could appear in the 3x3
   matrix, organized to illustrate resource occupation (CPU cores) in a
   particular physical computing system, as shown below.

                    VNF#1
                 .-----------.
                 |__|__|__|__|
    Core 1       |__|__|__|__|
                 |__|__|__|__|
                 |  |  |  |  |
                 '-----------'

                    VNF#2
                 .-----------.
                 |__|__|__|__|
    Cores 2-5    |__|__|__|__|
                 |__|__|__|__|
                 |  |  |  |  |
                 '-----------'

                    VNF#3          VNF#4          VNF#5
                 .-----------.  .-----------.  .-----------.
                 |__|__|__|__|  |__|__|__|__|  |__|__|__|__|
    Core 6       |__|__|__|__|  |__|__|__|__|  |__|__|__|__|
                 |__|__|__|__|  |__|__|__|__|  |__|__|__|__|
                 |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
                 '-----------'  '-----------'  '-----------'

                    VNF#6
                 .-----------.
                 |__|__|__|__|
    Core 7       |__|__|__|__|
                 |__|__|__|__|
                 |  |  |  |  |
                 '-----------'

   The combination of tables above could be built incrementally,
   beginning with VNF#1 and one core, then adding VNFs according to
   their supporting core assignments.  X-Y plots of critical benchmarks
   would also provide insight into the effect of increased HW
   utilization.  All VNFs might be of the same type, or, to match a
   production environment, there could be VNFs of multiple types and
   categories.  In this figure, VNFs #3-#5 are assumed to require small
   CPU resources, while VNF#2 requires 4 cores to perform its function.

4.5.  Power Consumption

   Although there is incomplete work to benchmark physical network
   function power consumption in a meaningful way, the desire to
   measure the physical infrastructure supporting the virtual functions
   only adds to the need.  Both maximum power consumption and dynamic
   power consumption (with varying load) would be useful.  The IPMI
   standard [IPMI2.0] has been implemented by many manufacturers and
   supports measurement of instantaneous energy consumption.

   To assess the instantaneous energy consumption of virtual resources,
   it may be possible to estimate the value using an overall metric
   based on utilization readings, according to
   [I-D.krishnan-nfvrg-policy-based-rm-nfviaas].
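   Pending a standardized method, the following minimal sketch (Python)
   illustrates one way such an estimate could be formed: a linear
   interpolation between idle and maximum host power, apportioned to a
   virtual resource by its utilization share.  The model, the example
   wattages, and the apportioning rule are illustrative assumptions of
   this sketch; they are not taken from
   [I-D.krishnan-nfvrg-policy-based-rm-nfviaas] or from any measured
   platform.

      def estimated_host_power_watts(utilization, idle_watts=120.0,
                                     max_watts=350.0):
          """Assumed linear idle-to-max model; utilization is a fraction
          in [0, 1] for the host under test (illustrative values only)."""
          utilization = min(max(utilization, 0.0), 1.0)
          return idle_watts + (max_watts - idle_watts) * utilization

      # Apportion the host estimate to one VNF by its share of the
      # measured utilization (a simplifying assumption for this sketch).
      host_utilization = 0.60
      vnf_share_of_utilization = 0.25
      host_watts = estimated_host_power_watts(host_utilization)
      print("host ~%.0f W, VNF share ~%.0f W"
            % (host_watts, host_watts * vnf_share_of_utilization))

   Any such estimate should be checked against direct platform readings
   (e.g., those available through IPMI) before it is relied upon.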
5.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization of a Device Under Test/System Under Test
   (DUT/SUT) using controlled stimuli in a laboratory environment, with
   dedicated address space and the constraints specified in the
   sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

6.  IANA Considerations

   No IANA Action is requested at this time.

7.  Acknowledgements

   The author acknowledges an encouraging conversation on this topic
   with Mukhtiar Shaikh and Ramki Krishnan in November 2013.  Bhavani
   Parise and Ilya Varlashkin have provided useful suggestions to
   expand these considerations.  Bhuvaneswaran Vengainathan has already
   tried the 3x3 matrix with the SDN controller draft and contributed
   to many discussions.  Scott Bradner quickly pointed out shared
   resource dependencies in an early vSwitch measurement proposal, and
   the topic was included here as a key consideration.  Further
   development was encouraged by Barry Constantine's comments following
   the IETF-92 BMWG session: the session itself was an affirmation for
   this memo.  There have been many interesting contributions from
   Maryam Tahhan, Marius Georgescu, Jacob Rapp, Saurabh Chattopadhyay,
   and others.

8.  Version history

   (This section should be removed by the RFC Editor.)

   Version 05: Addressed IESG and Last Call comments (editorial).

   Versions 03 & 04: Addressed minimal comments and a few WGLC
   comments.

   Version 02:

      New version history section.

      Added memory in Section 3.2, configuration.

      Updated ACKs and references.

   Version 01:

      Addressed Ramki Krishnan's comments on Section 4.5, power, see
      that section (7/27 message to the list).  Addressed Saurabh
      Chattopadhyay's 7/24 comments on VNF resources and other resource
      conditions and their effect on benchmarking, see Section 3.4.
      Addressed Marius Georgescu's 7/17 comments on the list (Sections
      4.3 and 4.4).

      AND, comments from the extended discussion during the IETF-93
      BMWG session:

      Section 4.2: VNF footprint and auxiliary metrics (Maryam Tahhan);
      Section 4.3: Verification effect metrics (Ramki Krishnan);
      Section 4.4: Auxiliary metrics in the matrix (Maryam Tahhan,
      Scott Bradner, others).

9.  References

9.1.  Normative References

   [NFV.PER001]
              "Network Function Virtualization: Performance and
              Portability Best Practices", Group Specification ETSI GS
              NFV-PER 001 V1.1.1 (2014-06), June 2014.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999.

   [RFC4689]  Poretsky, S., Perser, J., Erramilli, S., and S. Khurana,
              "Terminology for Benchmarking Network-layer Traffic
              Control Mechanisms", RFC 4689, DOI 10.17487/RFC4689,
              October 2006.

   [RFC4737]  Morton, A., Ciavattone, L., Ramachandran, G., Shalunov,
              S., and J. Perser, "Packet Reordering Metrics", RFC 4737,
              DOI 10.17487/RFC4737, November 2006.

   [RFC7498]  Quinn, P., Ed. and T. Nadeau, Ed., "Problem Statement for
              Service Function Chaining", RFC 7498,
              DOI 10.17487/RFC7498, April 2015.
9.2.  Informative References

   [BareMetal]
              Popek, G. and R. Goldberg, "Formal Requirements for
              Virtualizable Third Generation Architectures",
              Communications of the ACM, Vol. 17, No. 7, pp. 412-421,
              DOI 10.1145/361011.361073, 1974.

   [I-D.ietf-bmwg-sdn-controller-benchmark-meth]
              Vengainathan, B., Basil, A., Tassinari, M., Manral, V.,
              and S. Banks, "Benchmarking Methodology for SDN
              Controller Performance", draft-ietf-bmwg-sdn-controller-
              benchmark-meth-03 (work in progress), January 2017.

   [I-D.krishnan-nfvrg-policy-based-rm-nfviaas]
              Krishnan, R., Figueira, N., Krishnaswamy, D., Lopez, D.,
              Wright, S., Hinrichs, T., Krishnaswamy, R., and A. Yerra,
              "NFVIaaS Architectural Framework for Policy Based
              Resource Placement and Scheduling", draft-krishnan-nfvrg-
              policy-based-rm-nfviaas-06 (work in progress), March
              2016.

   [I-D.vsperf-bmwg-vswitch-opnfv]
              Tahhan, M., O'Mahony, B., and A. Morton, "Benchmarking
              Virtual Switches in OPNFV", draft-vsperf-bmwg-vswitch-
              opnfv-02 (work in progress), March 2016.

   [IPMI2.0]  "Intelligent Platform Management Interface, v2.0 with
              latest Errata",
              http://www.intel.com/content/www/us/en/servers/ipmi/ipmi-
              intelligent-platform-mgt-interface-spec-2nd-gen-v2-0-
              spec-update.html, April 2015.

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242,
              July 1991.

   [RFC5481]  Morton, A. and B. Claise, "Packet Delay Variation
              Applicability Statement", RFC 5481, DOI 10.17487/RFC5481,
              March 2009.

   [RFC6049]  Morton, A. and E. Stephan, "Spatial Composition of
              Metrics", RFC 6049, DOI 10.17487/RFC6049, January 2011.

   [X3.102]   ANSI, "ANSI Standard on Data Communications, User-
              Oriented Data Communications Framework", ANSI X3.102,
              1983.

Author's Address

   Al Morton
   AT&T Labs
   200 Laurel Avenue South
   Middletown, NJ 07748
   USA

   Phone: +1 732 420 1571
   Fax:   +1 732 368 1192
   Email: acmorton@att.com
   URI:   http://home.comcast.net/~acmacm/