BMWG                                                           S. Kommu
Internet-Draft                                                   VMware
Intended status: Informational                                  J. Rapp
Expires: December 2018                                           VMware
                                                           June 27, 2018

   Considerations for Benchmarking Network Virtualization Platforms
                      draft-skommu-bmwg-nvp-02.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on December 27, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   Current network benchmarking methodologies focus on physical
   networking components and do not consider the actual application
   layer traffic patterns, and hence do not reflect the traffic that
   virtual networking components work with.  The purpose of this
   document is to distinguish and highlight benchmarking considerations
   when testing and evaluating virtual networking components in the
   data center.

Table of Contents

   1. Introduction ................................................. 2
   2. Conventions used in this document ............................ 3
   3. Definitions .................................................. 4
      3.1. System Under Test (SUT) ................................ 4
      3.2. Network Virtualization Platform ........................ 4
      3.3. Micro-services ......................................... 6
   4. Scope ........................................................ 7
      4.1. Virtual Networking for Datacenter Applications ......... 7
      4.2. Interaction with Physical Devices ...................... 8
   5. Interaction with Physical Devices ............................ 8
      5.1. Server Architecture Considerations .................... 11
   6. Security Considerations ..................................... 14
   7. IANA Considerations ......................................... 14
   8. Conclusions ................................................. 14
   9. References .................................................. 14
      9.1. Normative References .................................. 14
      9.2. Informative References ................................ 15
   Appendix A. Partial List of Parameters to Document ............. 16
      A.1. CPU ................................................... 16
      A.2. Memory ................................................ 16
      A.3. NIC ................................................... 16
      A.4. Hypervisor ............................................ 17
      A.5. Guest VM .............................................. 18
      A.6. Overlay Network Physical Fabric ....................... 18
      A.7. Gateway Network Physical Fabric ....................... 18

1. Introduction

   Datacenter virtualization that includes both compute and network
   virtualization is growing rapidly as the industry continues to look
   for ways to improve productivity and flexibility while at the same
   time cutting costs.  Network virtualization is comparatively new
   and is expected to grow tremendously, much as compute
   virtualization has.
   There are multiple vendors and solutions in the market, each with
   their own benchmarks to showcase why a particular solution is
   better than another.  Hence there is a need for a vendor- and
   product-agnostic way to benchmark multivendor solutions, to help
   with comparisons and to support informed decisions when selecting
   the right network virtualization solution.

   Applications have traditionally been segmented using VLANs, with
   ACLs between the VLANs.  This model does not scale because of the
   4K scale limitation of VLANs.  Overlays such as VXLAN were designed
   to address the limitations of VLANs.

   With VXLAN, applications are segmented based on the VXLAN
   encapsulation (specifically the VNI field in the VXLAN header),
   which is similar to the VLAN ID in the 802.1Q VLAN tag, but without
   the 4K scale limitation of VLANs.  For a more detailed discussion
   on this subject please refer to RFC 7364 "Problem Statement:
   Overlays for Network Virtualization" [RFC7364].

   VXLAN is just one of several Network Virtualization Overlays (NVO).
   Some of the others include STT, Geneve and NVGRE.  STT and Geneve
   have expanded on the capabilities of VXLAN.  Please refer to the
   IETF NVO3 working group
   <https://datatracker.ietf.org/wg/nvo3/documents/> for more
   information.

   Modern application architectures, such as Micro-services, are going
   beyond the three-tier application models such as web, app and db.
   Benchmarks MUST consider whether the proposed solution is able to
   scale up to the demands of such applications and not just a three-
   tier architecture.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower case uses of these words are not to
   be interpreted as carrying the significance described in RFC 2119.

3. Definitions

3.1. System Under Test (SUT)

   Traditional hardware based networking devices generally use the
   device under test (DUT) model of testing.  In this model, apart
   from any allowed configuration, the DUT is a black box from a
   testing perspective.  This method works for hardware based
   networking devices since the device itself is not influenced by any
   components outside the DUT.

   Virtual networking components cannot leverage the DUT model of
   testing, because the DUT is not just the virtual device but also
   includes the hardware components that host the virtual device.

   Hence the SUT model MUST be used instead of the traditional device
   under test model.

   With the SUT model, the virtual networking component along with all
   software and hardware components that host the virtual networking
   component MUST be considered part of the SUT.

   Virtual networking components may also work with larger, higher
   level TCP segments, for example when TCP Segmentation Offload (TSO)
   is in use.  In contrast, all physical switches and routers,
   including the ones that act as initiators for NVOs, work with L2/L3
   packets.

   Please refer to Figure 2 in Section 5 for a visual representation
   of the System Under Test in the case of intra-host testing, and to
   Figure 3 for the System Under Test in the case of inter-host
   testing.
3.2. Network Virtualization Platform

   This document does not focus on Network Function Virtualization.

   Network Function Virtualization (NFV) focuses on being independent
   of networking hardware while providing the same functionality.  In
   the case of NFV, traditional benchmarking methodologies recommended
   by the IETF may be used.  The IETF document "Considerations for
   Benchmarking Virtual Network Functions and Their Infrastructure"
   [1] addresses benchmarking NFVs.

   Typical NFV implementations emulate, in software, the
   characteristics and features of physical switches.  They are
   similar to any physical L2/L3 switch from the perspective of packet
   size, which is typically enforced based on the maximum transmission
   unit (MTU) used.

   Network Virtualization Platforms (NVP), on the other hand, are
   closer to the application layer and are able to work not only with
   L2/L3 packets but also with segments that leverage TCP
   optimizations such as Large Segment Offload (LSO).

   NVPs leverage TCP stack optimizations such as TCP Segmentation
   Offload (TSO) and Large Receive Offload (LRO) that enable NVPs to
   work with much larger payloads of up to 64K, unlike their
   counterparts such as NFVs.

   This difference in payload translates into one operation per 64K of
   payload on an NVP versus roughly 40 operations for the same amount
   of payload on an NFV, which has to divide it into MTU sized
   packets.  The result is a considerable difference in performance
   between NFV and NVP.

   Please refer to Figure 1 for a pictorial representation of this
   primary difference between NVP and NFV for a 64K payload
   segment/packet running on a network set to 1500 bytes MTU.

   Note: Payload sizes in Figure 1 are approximate.

   NVP (1 segment)                   NFV (40 packets)

   Segment 1                         Packet 1
   +-------------------------+       +-------------------------+
   | Headers                 |       | Headers                 |
   | +---------------------+ |       | +---------------------+ |
   | | Payload - up to 64K | |       | | Payload < 1500      | |
   | +---------------------+ |       | +---------------------+ |
   +-------------------------+       +-------------------------+

                                     Packet 2
                                     +-------------------------+
                                     | Headers                 |
                                     | +---------------------+ |
                                     | | Payload < 1500      | |
                                     | +---------------------+ |
                                     +-------------------------+
                                     .
                                     .
                                     Packet 40
                                     +-------------------------+
                                     | Headers                 |
                                     | +---------------------+ |
                                     | | Payload < 1500      | |
                                     | +---------------------+ |
                                     +-------------------------+

                      Figure 1 Payload NVP vs NFV

   Hence, normal benchmarking methods are not relevant to NVPs.
   Instead, newer methods that take into account the built in
   advantages of the TCP provided optimizations MUST be used for
   testing Network Virtualization Platforms.
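   The scale of this difference can be illustrated with a short
   calculation.  The following minimal sketch is illustrative only;
   the 40 byte header figure is an assumption made for this example,
   and the exact packet count varies with the header sizes used.

   <CODE BEGINS>
   # Sketch: operations needed to move a 64K application payload
   # through an NVP (one TSO sized segment) versus an NFV (MTU
   # sized packets).  Header size is an assumption for illustration.

   PAYLOAD = 64 * 1024     # 64K application payload, in bytes
   MTU = 1500              # standard Ethernet MTU
   HEADERS = 40            # assumed TCP + IPv4 header bytes

   mss = MTU - HEADERS                # payload bytes per MTU packet
   nfv_ops = -(-PAYLOAD // mss)       # ceiling division
   nvp_ops = 1                        # one segment of up to 64K

   print(f"NVP operations: {nvp_ops}")    # -> 1
   print(f"NFV operations: {nfv_ops}")    # -> ~45, i.e. roughly 40
   <CODE ENDS>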
3.3. Micro-services

   Traditional monolithic application architectures, such as the three
   tier web, app and db architecture, are hitting scale and deployment
   limits for modern use cases.

   Micro-services make use of the classic Unix style of small
   applications, each with a single responsibility.

   These small applications are designed with the following
   characteristics:

   o  Each application does only one thing, like classic Unix tools

   o  Small enough that it could be rewritten instead of maintained

   o  Embedded within a simple web container

   o  Packaged as a single executable

   o  Installed as a daemon

   o  Each of these applications is completely separate

   o  They interact via a uniform interface, REST (over HTTP/HTTPS)
      being the most common

   With the Micro-services architecture, a single web app of the three
   tier application model could now consist of hundreds of smaller
   applications, each dedicated to just one job.

   These hundreds of small, single-responsibility services MUST be
   secured into their own segments, pushing the scale boundaries of
   the overlay both from a simple segmentation perspective and from a
   security perspective.

4. Scope

   This document does not address Network Function Virtualization,
   which has been covered already by previous IETF documents [1].  The
   focus of this document is the Network Virtualization Platform,
   where the network functions are an intrinsic part of the
   hypervisor's TCP stack, work closer to the application layer and
   leverage performance optimizations such as TSO/RSS provided by the
   TCP stack and the underlying hardware.

4.1. Virtual Networking for Datacenter Applications

   While virtualization is growing beyond the datacenter, this
   document focuses on virtual networking for east-west traffic within
   datacenter applications only.  For example, in a three tier app
   such as web, app and db, this document focuses on the east-west
   traffic between web and app.  It does not address north-south web
   traffic accessed from outside the datacenter.  A future document
   would address north-south traffic flows.

   This document addresses scale requirements for modern application
   architectures, such as Micro-services, to consider whether the
   proposed solution is able to scale up to the demands of micro-
   services application models that may have hundreds of small
   services communicating on standard ports such as http/https, using
   protocols such as REST.
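   As an illustration of this traffic pattern, the following minimal
   sketch (all addresses, ports and URL paths are hypothetical
   placeholders) issues many small REST style requests to a large
   number of service endpoints; a benchmark emulating micro-services
   traffic would run many such clients in parallel.

   <CODE BEGINS>
   # Sketch: micro-services style east-west traffic - many small
   # REST calls to many service endpoints.  Endpoint addresses and
   # ports below are hypothetical placeholders.

   import concurrent.futures
   import urllib.request

   # e.g. hundreds of small services inside the datacenter
   ENDPOINTS = [f"http://198.51.100.{host}:{port}/api/v1/status"
                for host in range(1, 5)          # placeholder hosts
                for port in range(8000, 8025)]   # placeholder ports

   def call(url):
       # One small REST request; response bodies are tiny by design
       with urllib.request.urlopen(url, timeout=5) as resp:
           return resp.status

   with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
       results = list(pool.map(call, ENDPOINTS))

   print(f"{len(results)} REST calls completed")
   <CODE ENDS>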
4.2. Interaction with Physical Devices

   Virtual network components cannot be tested independent of other
   components within the system.  For example, unlike a physical
   router or a firewall, where the tests can be focused solely on the
   device, when testing a virtual router or firewall, multiple other
   devices may become part of the system under test.  Hence the
   characteristics of these other traditional networking components,
   such as switches, routers, load balancers and firewalls, MUST be
   considered:

   o  Hashing method used

   o  Over-subscription rate

   o  Throughput available

   o  Latency characteristics

5. Interaction with Physical Devices

   In virtual environments, the System Under Test (SUT) may often
   share resources and reside on the same physical hardware with other
   components involved in the tests.  Hence the SUT MUST be clearly
   defined.  In these tests, a single hypervisor may host multiple
   servers, switches, routers, firewalls, etc.

   Intra-host testing: Intra-host testing helps in reducing the number
   of components involved in a test.  For example, intra-host testing
   would help focus on the System Under Test - the logical switch and
   the hardware that is running the hypervisor that hosts the logical
   switch - and eliminate other components.  Because of the nature of
   virtual infrastructures and multiple elements being hosted on the
   same physical infrastructure, influence from other components
   cannot be completely ruled out.  For example, unlike in physical
   infrastructures, logical routing or distributed firewall MUST NOT
   be benchmarked independent of logical switching.  The System Under
   Test definition MUST include all components involved with that
   particular test.

   +---------------------------------------------------+
   |                 System Under Test                 |
   | +-----------------------------------------------+ |
   | |                  Hyper-Visor                  | |
   | |                                               | |
   | |               +-------------+                 | |
   | |               |     NVP     |                 | |
   | |  +-----+      |   Switch/   |      +-----+    | |
   | |  | VM1 |<---->|   Router/   |<---->| VM2 |    | |
   | |  +-----+  VW  |  Firewall/  |  VW  +-----+    | |
   | |               |    etc.,    |                 | |
   | |               +-------------+                 | |
   | |  Legend                                       | |
   | |   VM: Virtual Machine                         | |
   | |   VW: Virtual Wire                            | |
   | +-----------------------------------------------+ |
   +---------------------------------------------------+
            Figure 2 Intra-Host System Under Test

   Inter-host testing: Inter-host testing helps in profiling the
   underlying network interconnect performance.  For example, when
   testing Logical Switching, inter-host testing would not only test
   the logical switch component but also any other devices that are
   part of the physical data center fabric that connects the two
   hypervisors.  The System Under Test MUST be well defined to help
   with repeatability of tests.  The System Under Test definition, in
   the case of inter-host testing, MUST include all components,
   including the underlying network fabric.

   Figure 3 is a visual representation of the system under test for
   inter-host testing.

   +---------------------------------------------------+
   |                 System Under Test                 |
   | +-----------------------------------------------+ |
   | |                  Hyper-Visor                  | |
   | |               +-------------+                 | |
   | |               |     NVP     |                 | |
   | |  +-----+      |   Switch/   |      +-----+    | |
   | |  | VM1 |<---->|   Router/   |<---->| VM2 |    | |
   | |  +-----+  VW  |  Firewall/  |  VW  +-----+    | |
   | |               |    etc.,    |                 | |
   | |               +-------------+                 | |
   | +-----------------------------------------------+ |
   |                        ^                          |
   |                        | Network Cabling          |
   |                        v                          |
   | +-----------------------------------------------+ |
   | |        Physical Networking Components         | |
   | |       switches, routers, firewalls etc.,      | |
   | +-----------------------------------------------+ |
   |                        ^                          |
   |                        | Network Cabling          |
   |                        v                          |
   | +-----------------------------------------------+ |
   | |                  Hyper-Visor                  | |
   | |               +-------------+                 | |
   | |               |     NVP     |                 | |
   | |  +-----+      |   Switch/   |      +-----+    | |
   | |  | VM1 |<---->|   Router/   |<---->| VM2 |    | |
   | |  +-----+  VW  |  Firewall/  |  VW  +-----+    | |
   | |               |    etc.,    |                 | |
   | |               +-------------+                 | |
   | +-----------------------------------------------+ |
   +---------------------------------------------------+
    Legend
     VM: Virtual Machine
     VW: Virtual Wire

            Figure 3 Inter-Host System Under Test

   Virtual components have a direct dependency on the physical
   infrastructure that is hosting these resources.  Hardware
   characteristics of the physical host impact the performance of the
   virtual components.  The components that are being tested, and the
   impact of the other hardware components within the hypervisor on
   the performance of the SUT, MUST be documented.  Virtual component
   performance is influenced by the physical hardware components
   within the hypervisor.  Access to various offloads, such as TCP
   segmentation offload, may have significant impact on performance.
   Firmware and driver differences may also significantly impact
   results, based on whether the specific driver leverages any
   hardware level offloads offered.  Hence, all physical components of
   the physical server running the hypervisor that hosts the virtual
   components MUST be documented, along with the firmware and driver
   versions of all the components used, to help ensure repeatability
   of test results.  For example, the BIOS configuration of the server
   MUST be documented as some of those settings are designed to
   improve performance.  Please refer to Appendix A for a partial list
   of parameters to document.
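   One way to make this documentation step repeatable is to collect
   the information programmatically on the host that runs the
   hypervisor.  The sketch below shows one possible approach, assuming
   a Linux host where the common lscpu and ethtool utilities are
   available; the interface name "eth0" is a placeholder.

   <CODE BEGINS>
   # Sketch: record host CPU details and NIC driver/firmware
   # versions alongside the test results, so tests can be
   # reproduced later.  Assumes a Linux host with lscpu and
   # ethtool; "eth0" is a placeholder interface name.

   import subprocess

   def capture(cmd):
       return subprocess.run(cmd, capture_output=True,
                             text=True).stdout

   sut_record = {
       # CPU model, sockets, cores, flags, etc.
       "cpu": capture(["lscpu"]),
       # "ethtool -i" reports driver, version and firmware-version
       "nic_driver": capture(["ethtool", "-i", "eth0"]),
   }

   with open("sut-hardware.txt", "w") as f:
       for name, output in sut_record.items():
           f.write(f"=== {name} ===\n{output}\n")
   <CODE ENDS>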
5.1. Server Architecture Considerations

   When testing physical networking components, the approach taken is
   to consider the device as a black box.  With virtual
   infrastructure, this approach no longer helps, as the virtual
   networking components are an intrinsic part of the hypervisor they
   are running on and are directly impacted by the server architecture
   used.  Server hardware components define the capabilities of the
   virtual networking components.  Hence, the server architecture MUST
   be documented in detail to help with repeatability of tests, and
   the entire set of hardware and software components becomes the SUT.

5.1.1. Frame format/sizes within the Hypervisor

   The Maximum Transmission Unit (MTU) limits the frame sizes of
   physical network components.  The most common maximum supported MTU
   for physical devices is 9000, while 1500 MTU is the standard.
   Physical network testing and NFV use these MTU sizes for testing.
   However, the virtual networking components that live inside a
   hypervisor may work with much larger segments because of the
   availability of hardware and software based offloads.  Hence, the
   normal smaller-packet based testing is not relevant for performance
   testing of virtual networking components.  All the TCP related
   configuration, such as TSO size and number of RSS queues, MUST be
   documented along with any other physical NIC related configuration.

   Virtual network components work closer to the application layer
   than the physical networking components.  Hence virtual network
   components work with types and sizes of segments that are often not
   the same types and sizes that the physical network works with.
   Hence, testing virtual network components MUST be done with
   application layer segments instead of physical network layer
   packets.

5.1.2. Baseline testing with Logical Switch

   The logical switch is often an intrinsic component of the test
   system, along with any other hardware and software components used
   for testing.  Also, other logical components cannot be tested
   independent of the Logical Switch.

5.1.3. Repeatability

   To ensure repeatability of the results, in physical network
   component testing much care is taken to ensure the tests are
   conducted with exactly the same parameters, such as the MAC
   addresses used.

   When testing NVP components with an application layer test tool,
   there may be a number of components within the system that are not
   available to tune or that cannot be made to maintain a desired
   state, for example the housekeeping functions of the underlying
   Operating System.

   Hence, tests MUST be repeated a number of times and each test case
   MUST be run for at least 2 minutes if the test tool provides such
   an option.  Results SHOULD be derived from multiple test runs.
   Variance between the tests SHOULD be documented.
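   For example, a test harness might wrap the chosen test tool so that
   each test case runs several times for at least two minutes, and
   then report the spread across runs.  The sketch below assumes the
   netperf tool and its default TCP_STREAM output format; the target
   address and run count are placeholders, and any application layer
   test tool could be substituted.

   <CODE BEGINS>
   # Sketch: repeat a test case several times and document the
   # variance.  Assumes netperf is installed; the target address
   # (a documentation address) and run count are placeholders.

   import statistics
   import subprocess

   RUNS = 5
   DURATION = 120    # seconds; at least 2 minutes per test case

   def run_netperf():
       # Default TCP_STREAM output ends with a line whose last
       # field is the throughput in 10^6 bits/sec
       out = subprocess.run(
           ["netperf", "-H", "192.0.2.10", "-l", str(DURATION),
            "-t", "TCP_STREAM"],
           capture_output=True, text=True, check=True).stdout
       return float(out.strip().splitlines()[-1].split()[-1])

   samples = [run_netperf() for _ in range(RUNS)]
   print(f"mean:  {statistics.mean(samples):.1f} Mbit/s")
   print(f"stdev: {statistics.stdev(samples):.1f} Mbit/s")
   <CODE ENDS>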
5.1.4. Tunnel encap/decap outside the hypervisor

   Logical network components may also have a performance impact based
   on the functionality available within the physical fabric.  A
   physical fabric that supports NVO encap/decap is one such case that
   may have an impact on the performance observed.  Any such
   functionality that exists on the physical fabric MUST be part of
   the test result documentation to ensure repeatability of tests.  In
   this case the SUT MUST include the physical fabric.

5.1.5. SUT Hypervisor Profile

   Physical networking equipment has well defined physical resource
   characteristics, such as the type and number of ASICs/SoCs used,
   the amount of memory, and the type and number of processors.
   Virtual networking component performance is dependent on the
   physical hardware that hosts the hypervisor.  Hence the physical
   hardware usage, which is part of the SUT, for a given test MUST be
   documented - for example, CPU usage when running a logical router.

   CPU usage changes based on the type of hardware available within
   the physical server.  For example, TCP Segmentation Offload greatly
   reduces CPU usage by offloading the segmentation process to the NIC
   on the sender side.  Receive Side Scaling offers a similar benefit
   on the receive side.  Hence, the availability and status of such
   hardware MUST be documented along with the actual CPU/Memory usage
   when the virtual networking components have access to such offload
   capable hardware (the sketch following the list below shows one way
   to capture the offload status).

   Following is a partial list of components that MUST be documented,
   both in terms of what is available and what is used by the SUT:

   *  CPU - type, speed, available instruction sets (e.g. AES-NI)

   *  Memory - type, amount

   *  Storage - type, amount

   *  NIC cards - type, number of ports, offloads available/used,
      drivers, firmware (if applicable), HW revision

   *  Libraries such as DPDK, if available and used

   *  Number and type of VMs used for testing, and for each VM:

      o  vCPUs

      o  RAM

      o  Storage

      o  Network driver

      o  Any prioritization of VM resources

      o  Operating System type, version and kernel if applicable

      o  TCP configuration changes - if any

      o  MTU

   *  Test tool

      o  Workload type

      o  Protocol being tested

      o  Number of threads

      o  Version of tool

   *  For inter-hypervisor tests,

      o  Physical network devices that are part of the test

         !  Note: For inter-hypervisor tests, the system under test
            is no longer only the virtual component that is being
            tested; the entire fabric that connects the virtual
            components becomes part of the system under test.
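   As noted above, the offload status itself can be captured as part
   of the test record.  The following sketch is one possible way to do
   so, assuming a Linux host where the standard "ethtool -k" command
   is available; "eth0" is a placeholder interface name.

   <CODE BEGINS>
   # Sketch: document which NIC offloads (TSO, LRO, etc.) were
   # active during the test.  Assumes a Linux host with ethtool;
   # "eth0" is a placeholder interface name.

   import subprocess

   out = subprocess.run(["ethtool", "-k", "eth0"],
                        capture_output=True, text=True).stdout

   offloads = {}
   for line in out.splitlines()[1:]:      # skip the header line
       if ":" in line:
           name, _, state = line.partition(":")
           # state may be "on", "off" or "off [fixed]"
           offloads[name.strip()] = state.split()[0]

   for feature in ("tcp-segmentation-offload",
                   "generic-receive-offload",
                   "large-receive-offload"):
       print(f"{feature}: {offloads.get(feature, 'unknown')}")
   <CODE ENDS>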
6. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization of a Device Under Test/System Under
   Test (DUT/SUT) using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

7. IANA Considerations

   No IANA Action is requested at this time.

8. Conclusions

   Network Virtualization Platforms, because of their proximity to the
   application layer and since they can take advantage of TCP stack
   optimizations, do not function on a packets-per-second basis.
   Hence, traditional benchmarking methods, while still relevant for
   Network Function Virtualization, are not designed to test Network
   Virtualization Platforms.  Also, advances in application
   architectures such as micro-services bring new challenges and
   require benchmarking not just of throughput and latency but also of
   scale.  New benchmarking methods that are designed to take
   advantage of the TCP optimizations are needed to accurately
   benchmark the performance of Network Virtualization Platforms.

9. References

9.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC7364] Narten, T., Gray, E., Black, D., Fang, L., Kreeger, L.,
             and M. Napierala, "Problem Statement: Overlays for
             Network Virtualization", RFC 7364, October 2014,
             <https://datatracker.ietf.org/doc/rfc7364/>.

   [nv03]    IETF, Network Virtualization Overlays (nvo3) Working
             Group, <https://datatracker.ietf.org/wg/nvo3/documents/>.

9.2. Informative References

   [1]       Morton, A., "Considerations for Benchmarking Virtual
             Network Functions and Their Infrastructure", draft-ietf-
             bmwg-virtual-net-03,
             <https://datatracker.ietf.org/doc/draft-ietf-bmwg-
             virtual-net/?include_text=1>.

Appendix A. Partial List of Parameters to Document

A.1. CPU

   CPU Vendor

   CPU Number

   CPU Architecture

   # of Sockets (CPUs)

   # of Cores

   Clock Speed (GHz)

   Max Turbo Freq. (GHz)

   Cache per CPU (MB)

   # of Memory Channels

   Chipset

   Hyperthreading (BIOS Setting)

   Power Management (BIOS Setting)

   VT-d

A.2. Memory

   Memory Speed (MHz)

   DIMM Capacity (GB)

   # of DIMMs

   DIMM Configuration

   Total DRAM (GB)

A.3. NIC

   Vendor

   Model

   Port Speed (Gbps)

   Ports

   PCIe Version

   PCIe Lanes

   Bonded

   Bonding Driver

   Kernel Module Name

   Driver Version

   VXLAN TSO Capable

   VXLAN RSS Capable

   Ring Buffer Size RX

   Ring Buffer Size TX

A.4. Hypervisor

   Hypervisor Name

   Version/Build

   Based on

   Hotfixes/Patches

   OVS Version/Build

   IRQ balancing

   vCPUs per VM

   Modifications to HV

   Modifications to HV TCP stack

   Number of VMs

   IP MTU

   Flow control TX (send pause)

   Flow control RX (honor pause)

   Encapsulation Type

A.5. Guest VM

   Guest OS & Version

   Modifications to VM

   IP MTU Guest VM (Bytes)

   Test tool used

   Number of NetPerf Instances

   Total Number of Streams

   Guest RAM (GB)

A.6. Overlay Network Physical Fabric

   Vendor

   Model

   # and Type of Ports

   Software Release

   Interface Configuration

   Interface/Ethernet MTU (Bytes)

   Flow control TX (send pause)

   Flow control RX (honor pause)
A.7. Gateway Network Physical Fabric

   Vendor

   Model

   # and Type of Ports

   Software Release

   Interface Configuration

   Interface/Ethernet MTU (Bytes)

   Flow control TX (send pause)

   Flow control RX (honor pause)

Authors' Addresses

   Samuel Kommu
   VMware
   3401 Hillview Ave
   Palo Alto, CA, 94304

   Email: skommu@vmware.com

   Jacob Rapp
   VMware
   3401 Hillview Ave
   Palo Alto, CA, 94304

   Email: jrapp@vmware.com