BMWG                                                          S. Kommu
Internet Draft                                                  VMware
Intended status: Informational                                 J. Rapp
Expires: September 2017                                         VMware
                                                        March 13, 2017

    Considerations for Benchmarking Network Virtualization Platforms
                       draft-skommu-bmwg-nvp-00.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on September 13, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   Current network benchmarking methodologies focus on physical
   networking components and do not consider actual application-layer
   traffic patterns, and hence do not reflect the traffic that virtual
   networking components work with.  The purpose of this document is
   to distinguish and highlight benchmarking considerations when
   testing and evaluating virtual networking components in the data
   center.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Definitions
      3.1. System Under Test (SUT)
      3.2. Network Virtualization Platform
      3.3. Micro-services
   4. Scope
      4.1. Virtual Networking for Datacenter Applications
      4.2. Interaction with Physical Devices
   5. Interaction with Physical Devices
      5.1. Server Architecture Considerations
   6. Security Considerations
   7. IANA Considerations
   8. Conclusions
   9. References
      9.1. Normative References
      9.2. Informative References
   Appendix A. Partial List of Parameters to Document
      A.1. CPU
      A.2. Memory
      A.3. NIC
      A.4. Hypervisor
      A.5. Guest VM
      A.6. Overlay Network Physical Fabric
      A.7. Gateway Network Physical Fabric

1. Introduction

   Datacenter virtualization, which includes both compute and network
   virtualization, is growing rapidly as the industry continues to
   look for ways to improve productivity and flexibility while at the
   same time cutting costs.  Network virtualization is comparatively
   new and is expected to grow tremendously, similar to compute
   virtualization.  There are multiple vendors and solutions in the
   market, each with their own benchmarks to showcase why a particular
   solution is better than another.  Hence there is a need for a
   vendor- and product-agnostic way to benchmark multivendor solutions
   to help with comparison and to make informed decisions when
   selecting the right network virtualization solution.

   Applications have traditionally been segmented using VLANs, with
   ACLs between the VLANs.  This model does not scale because of the
   4K scale limitations of VLANs.
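   The "4K" figure comes from the 12-bit VLAN ID field in the 802.1Q
   tag, while the VXLAN VNI discussed next is 24 bits wide.  A quick
   sketch of the arithmetic (Python used purely for illustration):

```python
# 802.1Q carries a 12-bit VLAN ID; IDs 0 and 4095 are reserved,
# leaving 4094 usable segments -- the "4K" limitation.
VLAN_ID_BITS = 12
usable_vlans = 2**VLAN_ID_BITS - 2

# The VXLAN header carries a 24-bit VNI, giving ~16M segment IDs.
VNI_BITS = 24
vxlan_segments = 2**VNI_BITS

print(usable_vlans)    # 4094
print(vxlan_segments)  # 16777216
```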
   Overlays such as VXLAN were designed to address the limitations of
   VLANs.

   With VXLAN, applications are segmented based on the VXLAN
   encapsulation (specifically the VNI field in the VXLAN header),
   which is similar to the VLAN ID in the 802.1Q VLAN tag, but without
   the 4K scale limitations of VLANs.  For a more detailed discussion
   on this subject, please refer to RFC 7364, "Problem Statement:
   Overlays for Network Virtualization" [RFC7364].

   VXLAN is just one of several Network Virtualization Overlays
   (NVOs).  Some of the others include STT, Geneve and NVGRE.  STT and
   Geneve have expanded on the capabilities of VXLAN.  Please refer to
   the IETF nvo3 working group
   <https://datatracker.ietf.org/wg/nvo3/documents/> for more
   information.

   Modern application architectures, such as Micro-services, are going
   beyond three-tier app models such as web, app and db.  Benchmarks
   MUST consider whether the proposed solution is able to scale up to
   the demands of such applications, and not just a three-tier
   architecture.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower-case uses of these words are not to
   be interpreted as carrying the significance described in RFC 2119.

3. Definitions

3.1. System Under Test (SUT)

   Traditional hardware-based networking devices generally use the
   device under test (DUT) model of testing.  In this model, apart
   from any allowed configuration, the DUT is a black box from a
   testing perspective.  This method works for hardware-based
   networking devices since the device itself is not influenced by any
   components outside the DUT.
   Virtual networking components cannot leverage the DUT model of
   testing, as the DUT is not just the virtual device but also
   includes the hardware components used to host the virtual device.

   Hence, the SUT model MUST be used instead of the traditional
   device-under-test model.

   With the SUT model, the virtual networking component, along with
   all software and hardware components that host the virtual
   networking component, MUST be considered part of the SUT.

   Virtual networking components may also work with higher-level TCP
   segments, such as those produced by TSO.  In contrast, all physical
   switches and routers, including the ones that act as initiators for
   NVOs, work with L2/L3 packets.

   Please refer to Figure 2 in Section 5 for a visual representation
   of the System Under Test in the case of intra-host testing, and
   Figure 3 for the System Under Test in the case of inter-host
   testing.

3.2. Network Virtualization Platform

   This document does not focus on Network Function Virtualization.

   Network Function Virtualization (NFV) focuses on being independent
   of networking hardware while providing the same functionality.  In
   the case of NFV, traditional benchmarking methodologies recommended
   by the IETF may be used.  The IETF document "Considerations for
   Benchmarking Virtual Network Functions and Their Infrastructure"
   [1] addresses benchmarking NFV.

   Typical NFV implementations emulate, in software, the
   characteristics and features of physical switches.  They are
   similar to any physical L2/L3 switch from the perspective of packet
   size, which is typically enforced based on the maximum transmission
   unit used.

   Network Virtualization Platforms, on the other hand, are closer to
   the application layer and are able to work with not only L2/L3
   packets but also segments that leverage TCP optimizations such as
   Large Segment Offload (LSO).
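   To illustrate the gap, the number of MTU-sized packets needed to
   carry a single 64 KB segment can be computed.  The sketch below
   assumes 40 bytes of IPv4 and TCP headers per packet (no options, no
   overlay encapsulation), so real counts will vary:

```python
import math

MTU = 1500                           # standard Ethernet MTU, in bytes
HEADERS = 40                         # assumed: 20 B IPv4 + 20 B TCP, no options
payload_per_packet = MTU - HEADERS   # 1460 bytes of payload per packet

segment = 64 * 1024                  # one 64 KB TSO/LSO segment

# An NVP hands off the whole segment in one operation; a packet-based
# path must emit one MTU-sized packet per 1460 bytes of payload.
packets_needed = math.ceil(segment / payload_per_packet)
print(packets_needed)                # 45
```

   The exact count depends on header and encapsulation overhead; it
   lands in the same ballpark as the roughly 40 operations cited in
   this section.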
   NVPs leverage TCP stack optimizations such as TCP Segmentation
   Offload (TSO) and Large Receive Offload (LRO), which enable NVPs to
   work with much larger payloads of up to 64K, unlike their NFV
   counterparts.

   This difference in payload translates into one operation per 64K of
   payload for an NVP, versus ~40 operations for the same amount of
   payload in an NFV, which must divide it into MTU-sized packets.
   This results in a considerable difference in performance between
   NFV and NVP.

   Please refer to Figure 1 for a pictorial representation of this
   primary difference between NVP and NFV for a 64K payload
   segment/packet running on a network with the MTU set to 1500 bytes.

   Note: Payload sizes in Figure 1 are approximate.

     NVP (1 segment)             NFV (40 packets)

     Segment 1                   Packet 1
   +-------------------------+ +-------------------------+
   | Headers                 | | Headers                 |
   | +---------------------+ | | +---------------------+ |
   | | Payload - up to 64K | | | | Payload < 1500      | |
   | +---------------------+ | | +---------------------+ |
   +-------------------------+ +-------------------------+

                                 Packet 2
                               +-------------------------+
                               | Headers                 |
                               | +---------------------+ |
                               | | Payload < 1500      | |
                               | +---------------------+ |
                               +-------------------------+

                                            .
                                            .
                                            .

                                 Packet 40
                               +-------------------------+
                               | Headers                 |
                               | +---------------------+ |
                               | | Payload < 1500      | |
                               | +---------------------+ |
                               +-------------------------+

                     Figure 1 Payload, NVP vs. NFV

   Hence, normal benchmarking methods are not relevant to NVPs.

   Instead, newer methods that take into account the built-in
   advantages of the TCP-provided optimizations MUST be used for
   testing Network Virtualization Platforms.

3.3. Micro-services

   Traditional monolithic application architectures, such as the
   three-tier web, app and db architecture, are hitting scale and
   deployment limits for modern use cases.

   Micro-services make use of the classic Unix style of small
   applications, each with a single responsibility.

   These small applications are designed with the following
   characteristics:

   o  Each application does only one thing, like Unix tools

   o  Small enough that it could be rewritten instead of maintained

   o  Embedded with a simple web container

   o  Packaged as a single executable

   o  Installed as a daemon

   o  Each of these applications is completely separate

   o  Interaction via a uniform interface

   o  REST (over HTTP/HTTPS) being the most common

   With the Micro-services architecture, a single web app of the
   three-tier application model could now have hundreds of smaller
   apps, each dedicated to doing just one job.

   These hundreds of small, single-responsibility services MUST be
   secured into their own segments, pushing the scale boundaries of
   the overlay both from a simple segmentation perspective and from a
   security perspective.

4. Scope

   This document does not address Network Function Virtualization,
   which has already been covered by a previous IETF document
   (https://datatracker.ietf.org/doc/draft-ietf-bmwg-virtual-
   net/?include_text=1).  The focus of this document is the Network
   Virtualization Platform, where the network functions are an
   intrinsic part of the hypervisor's TCP stack, working closer to the
   application layer and leveraging performance optimizations such as
   TSO/RSS provided by the TCP stack and the underlying hardware.

4.1. Virtual Networking for Datacenter Applications

   While virtualization is growing beyond the datacenter, this
   document focuses on virtual networking for east-west traffic within
   datacenter applications only.
   For example, in a three-tier app such as web, app and db, this
   document focuses on the east-west traffic between web and app.  It
   does not address north-south web traffic accessed from outside the
   datacenter.  A future document would address north-south traffic
   flows.

   This document addresses scale requirements for modern application
   architectures, such as Micro-services, to consider whether the
   proposed solution is able to scale up to the demands of
   micro-services application models, which have hundreds of small
   services communicating on standard ports such as http/https, using
   protocols such as REST.

4.2. Interaction with Physical Devices

   Virtual network components cannot be tested independently of other
   components within the system.  For example, unlike a physical
   router or a firewall, where tests can be focused solely on the
   device, when testing a virtual router or firewall, multiple other
   devices may become part of the system under test.  Hence the
   characteristics of these other traditional networking switches and
   routers, load balancers, firewalls, etc. MUST be considered:

   o  Hashing method used

   o  Over-subscription rate

   o  Throughput available

   o  Latency characteristics

5. Interaction with Physical Devices

   In virtual environments, the System Under Test (SUT) may often
   share resources and reside on the same physical hardware as other
   components involved in the tests.  Hence, the SUT MUST be clearly
   defined.  In these tests, a single hypervisor may host multiple
   servers, switches, routers, firewalls, etc.

   Intra-host testing: Intra-host testing helps in reducing the number
   of components involved in a test.  For example, intra-host testing
   would help focus on the System Under Test, i.e., the logical switch
   and the hardware that is running the hypervisor hosting the logical
   switch, and eliminate other components.
   Because of the nature of virtual infrastructures, with multiple
   elements being hosted on the same physical infrastructure,
   influence from other components cannot be completely ruled out.
   For example, unlike in physical infrastructures, logical routing or
   a distributed firewall MUST NOT be benchmarked independently of
   logical switching.  The System Under Test definition MUST include
   all components involved with that particular test.

   +---------------------------------------------------+
   | System Under Test                                 |
   | +-----------------------------------------------+ |
   | | Hypervisor                                    | |
   | |                                               | |
   | |              +-------------+                  | |
   | |              |    NVP      |                  | |
   | | +-----+      |   Switch/   |      +-----+     | |
   | | | VM1 |<---->|   Router/   |<---->| VM2 |     | |
   | | +-----+  VW  |  Firewall/  |  VW  +-----+     | |
   | |              |    etc.     |                  | |
   | |              +-------------+                  | |
   | |                                               | |
   | | Legend                                        | |
   | |   VM: Virtual Machine                         | |
   | |   VW: Virtual Wire                            | |
   | +-----------------------------------------------+ |
   +---------------------------------------------------+

           Figure 2 Intra-Host System Under Test

   Inter-host testing: Inter-host testing helps in profiling the
   underlying network interconnect performance.  For example, when
   testing logical switching, inter-host testing would not only test
   the logical switch component but also any other devices that are
   part of the physical data center fabric that connects the two
   hypervisors.  The System Under Test MUST be well defined to help
   with repeatability of tests.  The System Under Test definition, in
   the case of inter-host testing, MUST include all components,
   including the underlying network fabric.
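   One way to make such an SUT definition explicit and repeatable is
   to record it as structured data alongside the test results.  The
   sketch below is illustrative only; the field names and values are
   examples, not a schema defined by this document:

```python
import json

# Hypothetical inter-host SUT record; every name and value here is an
# example, not a prescribed format.
sut = {
    "test": "logical-switch-inter-host",
    "hypervisors": [
        {
            "host": "hv1",
            "nic": {"model": "exampleNIC", "driver": "exdrv",
                    "firmware": "1.2.3", "tso": True, "rss_queues": 8},
        },
        {
            "host": "hv2",
            "nic": {"model": "exampleNIC", "driver": "exdrv",
                    "firmware": "1.2.3", "tso": True, "rss_queues": 8},
        },
    ],
    # Inter-host SUTs MUST also capture the fabric between the hosts.
    "fabric": [{"device": "tor1", "mtu": 9000}],
}

# Stored next to the results, this makes reruns directly comparable.
print(json.dumps(sut, indent=2))
```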
   Figure 3 is a visual representation of the System Under Test for
   inter-host testing.

   +---------------------------------------------------+
   | System Under Test                                 |
   | +-----------------------------------------------+ |
   | | Hypervisor                                    | |
   | |              +-------------+                  | |
   | |              |    NVP      |                  | |
   | | +-----+      |   Switch/   |      +-----+     | |
   | | | VM1 |<---->|   Router/   |<---->| VM2 |     | |
   | | +-----+  VW  |  Firewall/  |  VW  +-----+     | |
   | |              |    etc.     |                  | |
   | |              +-------------+                  | |
   | +-----------------------------------------------+ |
   |                        ^                          |
   |                        | Network Cabling          |
   |                        v                          |
   | +-----------------------------------------------+ |
   | | Physical Networking Components                | |
   | | (switches, routers, firewalls, etc.)          | |
   | +-----------------------------------------------+ |
   |                        ^                          |
   |                        | Network Cabling          |
   |                        v                          |
   | +-----------------------------------------------+ |
   | | Hypervisor                                    | |
   | |              +-------------+                  | |
   | |              |    NVP      |                  | |
   | | +-----+      |   Switch/   |      +-----+     | |
   | | | VM1 |<---->|   Router/   |<---->| VM2 |     | |
   | | +-----+  VW  |  Firewall/  |  VW  +-----+     | |
   | |              |    etc.     |                  | |
   | |              +-------------+                  | |
   | +-----------------------------------------------+ |
   +---------------------------------------------------+

   Legend
     VM: Virtual Machine
     VW: Virtual Wire

           Figure 3 Inter-Host System Under Test

   Virtual components have a direct dependency on the physical
   infrastructure that hosts these resources.  Hardware
   characteristics of the physical host impact the performance of the
   virtual components.  The components being tested, and the impact of
   the other hardware components within the hypervisor on the
   performance of the SUT, MUST be documented.  Access to various
   offloads, such as TCP segmentation offload, may have a significant
   impact on performance.
   Firmware and driver differences may also significantly impact
   results, based on whether the specific driver leverages any
   hardware-level offloads offered.  Hence, all physical components of
   the physical server running the hypervisor that hosts the virtual
   components MUST be documented, along with the firmware and driver
   versions of all components used, to help ensure repeatability of
   test results.  For example, the BIOS configuration of the server
   MUST be documented, as some of those settings are designed to
   improve performance.  Please refer to Appendix A for a partial list
   of parameters to document.

5.1. Server Architecture Considerations

   When testing physical networking components, the approach taken is
   to consider the device as a black box.  With virtual
   infrastructure, this approach no longer helps, as the virtual
   networking components are an intrinsic part of the hypervisor they
   are running on and are directly impacted by the server architecture
   used.  Server hardware components define the capabilities of the
   virtual networking components.  Hence, the server architecture MUST
   be documented in detail to help with repeatability of tests, and
   the entire set of hardware and software components becomes the SUT.

5.1.1. Frame format/sizes within the Hypervisor

   The Maximum Transmission Unit (MTU) limits a physical network
   component's frame size.  The most common maximum supported MTU for
   physical devices is 9000 bytes, while 1500 bytes is the standard
   MTU.  Physical network testing and NFV testing use these MTU sizes.
   However, the virtual networking components that live inside a
   hypervisor may work with much larger segments because of the
   availability of hardware- and software-based offloads.  Hence, the
   normal testing based on smaller packets is not relevant for
   performance testing of virtual networking components.
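   On Linux hosts, one way to capture the offload state that needs
   documenting is to parse the output of `ethtool -k <nic>`.  The
   sample text below is a truncated, hypothetical capture, shown only
   to illustrate turning such output into a record:

```python
# Truncated, hypothetical `ethtool -k` output; real output has many
# more lines and may append markers such as "[fixed]".
sample = """\
tcp-segmentation-offload: on
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off
"""

# Parse each "feature: state" line into a name -> enabled mapping.
offloads = {}
for line in sample.splitlines():
    name, _, state = line.partition(": ")
    offloads[name] = state.startswith("on")

print(offloads["tcp-segmentation-offload"])  # True
print(offloads["large-receive-offload"])     # False
```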
   All TCP-related configuration, such as the TSO size and the number
   of RSS queues, MUST be documented, along with any other physical
   NIC-related configuration.

   Virtual network components work closer to the application layer
   than physical networking components do.  Hence, virtual network
   components work with types and sizes of segments that are often not
   the same as those the physical network works with.  Hence, testing
   of virtual network components MUST be done with application-layer
   segments instead of physical network layer packets.

5.1.2. Baseline testing with Logical Switch

   The logical switch is often an intrinsic component of the test
   system, along with any other hardware and software components used
   for testing.  Also, other logical components cannot be tested
   independently of the logical switch.

5.1.3. Repeatability

   To ensure repeatability of results in physical network component
   testing, much care is taken to ensure the tests are conducted with
   exactly the same parameters, such as the MAC addresses used.

   When testing NVP components with an application-layer test tool,
   there may be a number of components within the system that cannot
   be tuned or made to maintain a desired state, for example, the
   housekeeping functions of the underlying operating system.

   Hence, tests MUST be repeated a number of times, and each test case
   MUST be run for at least 2 minutes if the test tool provides such
   an option.  Results SHOULD be derived from multiple test runs.
   Variance between the runs SHOULD be documented.

5.1.4. Tunnel encap/decap outside the hypervisor

   Logical network components may also see a performance impact based
   on the functionality available within the physical fabric.  A
   physical fabric that supports NVO encap/decap is one such case that
   has considerable impact on performance.
   Any such functionality that exists in the physical fabric MUST be
   part of the test result documentation to ensure repeatability of
   tests.  In this case, the SUT MUST include the physical fabric.

5.1.5. SUT Hypervisor Profile

   Physical networking equipment has well-defined physical resource
   characteristics, such as the type and number of ASICs/SoCs used,
   the amount of memory, and the type and number of processors.
   Virtual networking components' performance is dependent on the
   physical hardware that hosts the hypervisor.  Hence, the physical
   hardware usage, which is part of the SUT, for a given test MUST be
   documented, for example, the CPU usage when running a logical
   router.

   CPU usage changes based on the type of hardware available within
   the physical server.  For example, TCP Segmentation Offload greatly
   reduces CPU usage by offloading the segmentation process to the NIC
   on the sender side.  Receive Side Scaling offers a similar benefit
   on the receive side.  Hence, the availability and status of such
   hardware MUST be documented, along with the actual CPU/memory usage
   when the virtual networking components have access to such
   offload-capable hardware.

   The following is a partial list of components that MUST be
   documented, both in terms of what is available and what is used by
   the SUT:

   o  CPU - type, speed, available instruction sets (e.g. AES-NI)

   o  Memory - type, amount

   o  Storage - type, amount

   o  NIC cards - type, number of ports, offloads available/used,
      drivers, firmware (if applicable), HW revision

   o  Libraries such as DPDK, if available and used

   o  Number and type of VMs used for testing, and

      o  vCPUs

      o  RAM

      o  Storage

      o  Network driver

      o  Any prioritization of VM resources

      o  Operating system type, version and kernel, if applicable

      o  TCP configuration changes, if any

      o  MTU

   o  Test tool

      o  Workload type

      o  Protocol being tested

      o  Number of threads

      o  Version of tool

   o  For inter-hypervisor tests,

      o  Physical network devices that are part of the test

         Note: For inter-hypervisor tests, the system under test is
         no longer only the virtual component being tested; the
         entire fabric that connects the virtual components becomes
         part of the system under test.

6. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization of a Device Under Test/System Under
   Test (DUT/SUT) using controlled stimuli in a laboratory
   environment, with dedicated address space and the constraints
   specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

7. IANA Considerations

   No IANA action is requested at this time.

8. Conclusions

   Network Virtualization Platforms, because of their proximity to the
   application layer and because they can take advantage of TCP stack
   optimizations, do not function on a packets-per-second basis.
   Hence, traditional benchmarking methods, while still relevant for
   Network Function Virtualization, are not designed to test Network
   Virtualization Platforms.  Also, advances in application
   architectures, such as micro-services, bring new challenges and
   require benchmarking not just of throughput and latency but also of
   scale.  New benchmarking methods that are designed to take
   advantage of the TCP optimizations are needed to accurately
   benchmark the performance of Network Virtualization Platforms.

9. References

9.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC7364] Narten, T., Gray, E., Black, D., Fang, L., Kreeger, L.,
             and M. Napierala, "Problem Statement: Overlays for
             Network Virtualization", RFC 7364, October 2014,
             <https://datatracker.ietf.org/doc/rfc7364/>.

   [nvo3]    IETF Network Virtualization Overlays (nvo3) Working
             Group, <https://datatracker.ietf.org/wg/nvo3/documents/>.

9.2. Informative References

   [1]       Morton, A., "Considerations for Benchmarking Virtual
             Network Functions and Their Infrastructure",
             draft-ietf-bmwg-virtual-net-03,
             <https://datatracker.ietf.org/doc/draft-ietf-bmwg-
             virtual-net/?include_text=1>.

Appendix A. Partial List of Parameters to Document

A.1. CPU

   CPU Vendor
   CPU Number
   CPU Architecture
   # of Sockets (CPUs)
   # of Cores
   Clock Speed (GHz)
   Max Turbo Freq. (GHz)
   Cache per CPU (MB)
   # of Memory Channels
   Chipset
   Hyperthreading (BIOS Setting)
   Power Management (BIOS Setting)
   VT-d

A.2. Memory

   Memory Speed (MHz)
   DIMM Capacity (GB)
   # of DIMMs
   DIMM Configuration
   Total DRAM (GB)

A.3. NIC

   Vendor
   Model
   Port Speed (Gbps)
   Ports
   PCIe Version
   PCIe Lanes
   Bonded
   Bonding Driver
   Kernel Module Name
   Driver Version
   VXLAN TSO Capable
   VXLAN RSS Capable
   Ring Buffer Size RX
   Ring Buffer Size TX

A.4. Hypervisor

   Hypervisor Name
   Version/Build
   Based on
   Hotfixes/Patches
   OVS Version/Build
   IRQ Balancing
   vCPUs per VM
   Modifications to HV
   Modifications to HV TCP Stack
   Number of VMs
   IP MTU
   Flow Control TX (send pause)
   Flow Control RX (honor pause)
   Encapsulation Type

A.5. Guest VM

   Guest OS & Version
   Modifications to VM
   IP MTU Guest VM (Bytes)
   Test Tool Used
   Number of NetPerf Instances
   Total Number of Streams
   Guest RAM (GB)

A.6. Overlay Network Physical Fabric

   Vendor
   Model
   # and Type of Ports
   Software Release
   Interface Configuration
   Interface/Ethernet MTU (Bytes)
   Flow Control TX (send pause)
   Flow Control RX (honor pause)

A.7. Gateway Network Physical Fabric

   Vendor
   Model
   # and Type of Ports
   Software Release
   Interface Configuration
   Interface/Ethernet MTU (Bytes)
   Flow Control TX (send pause)
   Flow Control RX (honor pause)

Authors' Addresses

   Samuel Kommu
   VMware
   3401 Hillview Ave
   Palo Alto, CA 94304

   Email: skommu@vmware.com

   Jacob Rapp
   VMware
   3401 Hillview Ave
   Palo Alto, CA 94304

   Email: jrapp@vmware.com