BMWG                                                           S. Kommu
Internet-Draft                                                   VMware
Intended status: Informational                                  J. Rapp
Expires: July 2018                                               VMware
                                                         January 2, 2018


  Considerations for Benchmarking Network Virtualization Platforms
                     draft-skommu-bmwg-nvp-01.txt


Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on July 2, 2018.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Abstract

   Current network benchmarking methodologies focus on physical
   networking components and do not consider the actual application-
   layer traffic patterns, and hence do not reflect the traffic that
   virtual networking components work with.  The purpose of this
   document is to distinguish and highlight benchmarking considerations
   when testing and evaluating virtual networking components in the
   data center.
Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Definitions
      3.1. System Under Test (SUT)
      3.2. Network Virtualization Platform
      3.3. Micro-services
   4. Scope
      4.1. Virtual Networking for Datacenter Applications
      4.2. Interaction with Physical Devices
   5. Interaction with Physical Devices
      5.1. Server Architecture Considerations
   6. Security Considerations
   7. IANA Considerations
   8. Conclusions
   9. References
      9.1. Normative References
      9.2. Informative References
   Appendix A. Partial List of Parameters to Document
      A.1. CPU
      A.2. Memory
      A.3. NIC
      A.4. Hypervisor
      A.5. Guest VM
      A.6. Overlay Network Physical Fabric
      A.7. Gateway Network Physical Fabric
1. Introduction

   Datacenter virtualization that includes both compute and network
   virtualization is growing rapidly as the industry continues to look
   for ways to improve productivity and flexibility while at the same
   time cutting costs.  Network virtualization is comparatively new and
   is expected to grow tremendously, similar to compute virtualization.
   There are multiple vendors and solutions in the market, each with
   their own benchmarks to showcase why a particular solution is better
   than another.  Hence the need for a vendor- and product-agnostic way
   to benchmark multi-vendor solutions, to help with comparison and to
   make informed decisions when selecting the right network
   virtualization solution.

   Applications have traditionally been segmented using VLANs, with
   ACLs between the VLANs.  This model does not scale because of the 4K
   scale limitation of VLANs.  Overlays such as VXLAN were designed to
   address the limitations of VLANs.

   With VXLAN, applications are segmented based on the VXLAN
   encapsulation (specifically the VNI field in the VXLAN header),
   which is similar to the VLAN ID in the 802.1Q VLAN tag, however
   without the 4K scale limitation of VLANs.  For a more detailed
   discussion of this subject, please refer to RFC 7364, "Problem
   Statement: Overlays for Network Virtualization".

   VXLAN is just one of several Network Virtualization Overlays (NVO).
   Others include STT, Geneve and NVGRE.  STT and Geneve have expanded
   on the capabilities of VXLAN.  Please refer to the IETF nvo3 working
   group <https://datatracker.ietf.org/wg/nvo3/documents/> for more
   information.

   Modern application architectures, such as micro-services, are going
   beyond the three-tier application model of web, app and db.
   Benchmarks MUST consider whether the proposed solution is able to
   scale up to the demands of such applications and not just a three-
   tier architecture.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 2. Conventions used in this document
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119].
In this document, these words will appear with that interpretation The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
only when in ALL CAPS. Lower case uses of these words are not to be "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
interpreted as carrying significance described in RFC 2119. document are to be interpreted as described in RFC 2119 [RFC2119].
3. Definitions In this document, these words will appear with that interpretation
only when in ALL CAPS. Lower case uses of these words are not to be
interpreted as carrying significance described in RFC 2119.
3. Definitions

3.1. System Under Test (SUT)

   Traditional hardware-based networking devices generally use the
   device under test (DUT) model of testing.  In this model, apart from
   any allowed configuration, the DUT is a black box from a testing
   perspective.  This method works for hardware-based networking
   devices since the device itself is not influenced by any other
   components outside the DUT.

   Virtual networking components cannot leverage the DUT model of
   testing because the DUT is not just the virtual device but also
   includes the hardware components that are used to host the virtual
   device.

   Hence, the SUT model MUST be used instead of the traditional
   device-under-test model.

   With the SUT model, the virtual networking component along with all
   software and hardware components that host the virtual networking
   component MUST be considered part of the SUT.

   Virtual networking components may also work with higher-level TCP
   segments, for example through TSO.  In contrast, all physical
   switches and routers, including the ones that act as initiators for
   NVOs, work with L2/L3 packets.

   Please refer to Figure 2 in Section 5 for a visual representation of
   the System Under Test in the case of intra-host testing and to
   Figure 3 in Section 5 for the System Under Test in the case of
   inter-host testing.
3.2. Network Virtualization Platform

   This document does not focus on Network Function Virtualization.

   Network Function Virtualization (NFV) focuses on being independent
   of networking hardware while providing the same functionality.  In
   the case of NFV, traditional benchmarking methodologies recommended
   by the IETF may be used.  The IETF document "Considerations for
   Benchmarking Virtual Network Functions and Their Infrastructure"
   addresses benchmarking NFVs.

   Typical NFV implementations emulate, in software, the
   characteristics and features of physical switches.  They are similar
   to any physical L2/L3 switch from the perspective of packet size,
   which is typically enforced based on the maximum transmission unit
   used.

   Network Virtualization Platforms (NVPs), on the other hand, are
   closer to the application layer and are able to work not only with
   L2/L3 packets but also with segments that leverage TCP optimizations
   such as Large Segment Offload (LSO).

   NVPs leverage TCP stack optimizations such as TCP Segmentation
   Offload (TSO) and Large Receive Offload (LRO) that enable NVPs to
   work with much larger payloads, of up to 64K, unlike their NFV
   counterparts.

   This difference in payload, which translates into one operation per
   64K of payload in an NVP versus roughly 40 operations for the same
   amount of payload in an NFV (which has to divide it into MTU-sized
   packets), results in a considerable difference in performance
   between NFV and NVP.

   Please refer to Figure 1 for a pictorial representation of this
   primary difference between NVP and NFV for a 64K payload
   segment/packet running on a network with a 1500-byte MTU.

   Note: Payload sizes in Figure 1 are approximate.
      NVP (1 segment)                   NFV (40 packets)

      Segment 1                         Packet 1
   +-------------------------+       +-------------------------+
   | Headers                 |       | Headers                 |
   | +---------------------+ |       | +---------------------+ |
   | | Pay Load - upto 64K | |       | | Pay Load < 1500     | |
   | +---------------------+ |       | +---------------------+ |
   +-------------------------+       +-------------------------+

                                       Packet 2
                                     +-------------------------+
                                     | Headers                 |
                                     | +---------------------+ |
                                     | | Pay Load < 1500     | |
                                     | +---------------------+ |
                                     +-------------------------+
                                                  .
                                                  .
                                                  .
                                       Packet 40
                                     +-------------------------+
                                     | Headers                 |
                                     | +---------------------+ |
                                     | | Pay Load < 1500     | |
                                     | +---------------------+ |
                                     +-------------------------+

                     Figure 1 Payload NVP vs NFV
   Hence, normal benchmarking methods are not relevant for NVPs.
   Instead, newer methods that take into account the built-in
   advantages of TCP-provided optimizations MUST be used for testing
   Network Virtualization Platforms.
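
   As a rough, non-normative illustration of the payload difference
   described above, the following sketch computes the number of
   forwarding operations each model performs for the same amount of
   application payload.  The 64K segment size, the 1500-byte MTU and
   the 40-byte IP/TCP header overhead are assumptions taken from the
   example above, not measured values.

      # Back-of-the-envelope comparison of operations per payload:
      # NVP working on TSO/LRO segments vs. NFV on MTU-sized packets.
      # All constants below are illustrative assumptions.

      MTU = 1500                    # bytes per packet on the wire (assumed)
      IP_TCP_HEADERS = 40           # IPv4 + TCP headers, no options (assumed)
      NVP_MAX_SEGMENT = 64 * 1024   # payload per operation with TSO/LRO

      def operations(payload: int, unit: int) -> int:
          """Operations needed when each operation carries at most
          `unit` bytes of payload."""
          return -(-payload // unit)   # ceiling division

      payload = 64 * 1024              # one 64K application payload

      print("NVP operations:", operations(payload, NVP_MAX_SEGMENT))       # 1
      print("NFV operations:", operations(payload, MTU - IP_TCP_HEADERS))  # ~45

   With 1460 bytes of payload per packet this comes out to roughly 45
   operations, in the same ballpark as the approximately 40 operations
   mentioned above; the exact count depends on the MTU and header
   overhead in use.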
3.3. Micro-services

   Traditional monolithic application architectures, such as the three-
   tier web, app and db architecture, are hitting scale and deployment
   limits for modern use cases.

   Micro-services make use of the classic Unix style of small
   applications, each with a single responsibility.

   These small applications are designed with the following
   characteristics:

   o  Each application only does one thing - like Unix tools

   o  Small enough that you could rewrite it instead of maintaining it

   o  Embedded within a simple web container

   o  Packaged as a single executable

   o  Installed as daemons

   o  Each of these applications is completely separate

   o  Interaction via a uniform interface - REST (over HTTP/HTTPS)
      being the most common

   With a micro-services architecture, a single web application of the
   three-tier application model could now consist of hundreds of
   smaller applications, each dedicated to doing just one job.

   These hundreds of small, single-responsibility services MUST be
   secured into their own segments - hence pushing the scale boundaries
   of the overlay from both a simple segmentation perspective and a
   security perspective.
4. Scope

   This document does not address Network Function Virtualization,
   which has already been covered by previous IETF documents
   (https://datatracker.ietf.org/doc/draft-ietf-bmwg-virtual-
   net/?include_text=1).  The focus of this document is the Network
   Virtualization Platform, where the network functions are an
   intrinsic part of the hypervisor's TCP stack, working closer to the
   application layer and leveraging performance optimizations such as
   TSO/RSS provided by the TCP stack and the underlying hardware.
4.1. Virtual Networking for Datacenter Applications

   While virtualization is growing beyond the datacenter, this document
   focuses only on virtual networking for east-west traffic between
   datacenter applications.  For example, in a three-tier application
   of web, app and db, this document focuses on the east-west traffic
   between web and app.  It does not address north-south web traffic
   accessed from outside the datacenter.  A future document would
   address north-south traffic flows.

   This document addresses scale requirements for modern application
   architectures, such as micro-services, to consider whether the
   proposed solution is able to scale up to the demands of micro-
   services application models that have hundreds of small services
   communicating on standard ports such as http/https using protocols
   such as REST.
4.2. Interaction with Physical Devices

   Virtual network components cannot be tested independently of other
   components within the system.  For example, unlike a physical router
   or a firewall, where the tests can be focused solely on the device,
   when testing a virtual router or firewall, multiple other devices
   may become part of the System Under Test.  Hence the characteristics
   of these other traditional networking devices (switches, routers,
   load balancers, firewalls, etc.) MUST be considered, for example
   (see the illustrative sketch after this list):

   o  Hashing method used

   o  Over-subscription rate

   o  Throughput available

   o  Latency characteristics
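
   The following is one illustrative, entirely optional way to record
   these fabric characteristics as structured metadata alongside the
   benchmark results; the field names and values shown are examples,
   not a required schema of this document.

      # Illustrative only: keep the physical-device characteristics
      # listed above with each result set so runs remain comparable.
      # Every value shown is a placeholder example.
      physical_fabric_profile = {
          "hashing_method": "5-tuple ECMP hash",      # hashing method used
          "oversubscription_rate": "3:1",             # over-subscription rate
          "available_throughput_gbps": 40,            # throughput available
          "latency_us": {"min": 1.2, "avg": 1.8, "max": 4.5},  # latency
      }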
5. Interaction with Physical Devices

   In virtual environments, the System Under Test (SUT) may often share
   resources and reside on the same physical hardware with other
   components involved in the tests.  Hence the SUT MUST be clearly
   defined.  In these tests, a single hypervisor may host multiple
   servers, switches, routers, firewalls, etc.

   Intra-host testing: Intra-host testing helps in reducing the number
   of components involved in a test.  For example, intra-host testing
   would help focus on the System Under Test, the logical switch and
   the hardware that is running the hypervisor that hosts the logical
   switch, and eliminate other components.  Because of the nature of
   virtual infrastructures, with multiple elements being hosted on the
   same physical infrastructure, influence from other components cannot
   be completely ruled out.  For example, unlike in physical
   infrastructures, logical routing or a distributed firewall MUST NOT
   be benchmarked independently of logical switching.  The System Under
   Test definition MUST include all components involved in that
   particular test.
   +---------------------------------------------------+
   | System Under Test                                 |
   | +-----------------------------------------------+ |
   | | Hyper-Visor                                   | |
   | |                                               | |
   | |               +-------------+                 | |
   | |               |     NVP     |                 | |
   | |  +-----+      |   Switch/   |      +-----+    | |
   | |  | VM1 |<---->|   Router/   |<---->| VM2 |    | |
   | |  +-----+  VW  | Fire Wall/  |  VW  +-----+    | |
   | |               |    etc.,    |                 | |
   | |               +-------------+                 | |
   | |                                               | |
   | | Legend                                        | |
   | |   VM: Virtual Machine                         | |
   | |   VW: Virtual Wire                            | |
   | +-----------------------------------------------+ |
   +---------------------------------------------------+

              Figure 2 Intra-Host System Under Test
   Inter-host testing: Inter-host testing helps in profiling the
   underlying network interconnect performance.  For example, when
   testing Logical Switching, inter-host testing would not only test
   the logical switch component but also any other devices that are
   part of the physical data center fabric that connects the two
   hypervisors.  The System Under Test MUST be well defined to help
   with repeatability of tests.  The System Under Test definition in
   the case of inter-host testing MUST include all components,
   including the underlying network fabric.

   Figure 3 is a visual representation of the System Under Test for
   inter-host testing.
   +---------------------------------------------------+
   | System Under Test                                 |
   | +-----------------------------------------------+ |
   | | Hyper-Visor                                   | |
   | |               +-------------+                 | |
   | |               |     NVP     |                 | |
   | |  +-----+      |   Switch/   |      +-----+    | |
   | |  | VM1 |<---->|   Router/   |<---->| VM2 |    | |
   | |  +-----+  VW  | Fire Wall/  |  VW  +-----+    | |
   | |               |    etc.,    |                 | |
   | |               +-------------+                 | |
   | +-----------------------------------------------+ |
   |                        ^                          |
   |                        | Network Cabling          |
   |                        v                          |
   | +-----------------------------------------------+ |
   | | Physical Networking Components                | |
   | | switches, routers, firewalls, etc.            | |
   | +-----------------------------------------------+ |
   |                        ^                          |
   |                        | Network Cabling          |
   |                        v                          |
   | +-----------------------------------------------+ |
   | | Hyper-Visor                                   | |
   | |               +-------------+                 | |
   | |               |     NVP     |                 | |
   | |  +-----+      |   Switch/   |      +-----+    | |
   | |  | VM1 |<---->|   Router/   |<---->| VM2 |    | |
   | |  +-----+  VW  | Fire Wall/  |  VW  +-----+    | |
   | |               |    etc.,    |                 | |
   | |               +-------------+                 | |
   | +-----------------------------------------------+ |
   +---------------------------------------------------+

   Legend
     VM: Virtual Machine
     VW: Virtual Wire

              Figure 3 Inter-Host System Under Test
   Virtual components have a direct dependency on the physical
   infrastructure that is hosting these resources.  Hardware
   characteristics of the physical host impact the performance of the
   virtual components.  The components that are being tested and the
   impact of the other hardware components within the hypervisor on the
   performance of the SUT MUST be documented.  Virtual component
   performance is influenced by the physical hardware components within
   the hypervisor.  Access to various offloads, such as TCP
   segmentation offload, may have a significant impact on performance.
   Firmware and driver differences may also significantly impact
   results, based on whether the specific driver leverages any
   hardware-level offloads offered.  Hence, all physical components of
   the physical server running the hypervisor that hosts the virtual
   components MUST be documented, along with the firmware and driver
   versions of all the components used, to help ensure repeatability of
   test results.  For example, the BIOS configuration of the server
   MUST be documented as some of those settings are designed to improve
   performance.  Please refer to Appendix A for a partial list of
   parameters to document.
5.1. Server Architecture Considerations

   When testing physical networking components, the approach taken is
   to consider the device as a black box.  With virtual infrastructure,
   this approach no longer helps, as the virtual networking components
   are an intrinsic part of the hypervisor they are running on and are
   directly impacted by the server architecture used.  Server hardware
   components define the capabilities of the virtual networking
   components.  Hence, the server architecture MUST be documented in
   detail to help with repeatability of tests, and the entire set of
   hardware and software components becomes the SUT.
5.1.1. Frame format/sizes within the Hypervisor

   The Maximum Transmission Unit (MTU) limits the frame sizes of
   physical network components.  The most common maximum supported MTU
   for physical devices is 9000, while 1500 is the standard MTU.
   Physical network testing and NFV testing use these MTU sizes.
   However, the virtual networking components that live inside a
   hypervisor may work with much larger segments because of the
   availability of hardware- and software-based offloads.  Hence, the
   usual testing based on smaller packets is not relevant for
   performance testing of virtual networking components.  All TCP-
   related configuration, such as the TSO size and the number of RSS
   queues, MUST be documented along with any other physical NIC-related
   configuration.
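
   On a Linux-based hypervisor, one possible way to capture this NIC-
   level state for the test report is to snapshot the output of the
   standard ethtool utility.  The sketch below is illustrative only; it
   assumes ethtool is installed and that the physical NIC under test is
   known (shown here as the placeholder "eth0").

      # Illustrative sketch: record NIC offload and RSS-queue
      # configuration on a Linux hypervisor so it can be attached to
      # the test documentation.  "eth0" is a placeholder NIC name.
      import subprocess

      def nic_snapshot(nic: str = "eth0") -> dict:
          def run(*args: str) -> str:
              return subprocess.run(["ethtool", *args, nic],
                                    capture_output=True, text=True).stdout
          return {
              "offloads": run("-k"),   # TSO/LRO/GSO/GRO and other offloads
              "channels": run("-l"),   # RX/TX/combined queue (RSS) counts
              "driver":   run("-i"),   # driver, version and firmware
          }

      if __name__ == "__main__":
          for section, text in nic_snapshot().items():
              print("--- " + section + " ---")
              print(text)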
   Virtual network components work closer to the application layer than
   the physical networking components.  Hence, virtual network
   components work with types and sizes of segments that are often not
   the same as those the physical network works with.  Hence, testing
   of virtual network components MUST be done with application-layer
   segments instead of physical network-layer packets.
5.1.2. Baseline testing with Logical Switch

   The logical switch is often an intrinsic component of the test
   system, along with any other hardware and software components used
   for testing.  Also, other logical components cannot be tested
   independently of the logical switch.
5.1.3. Repeatability

   To ensure repeatability of results in physical network component
   testing, much care is taken to ensure the tests are conducted with
   exactly the same parameters, such as the MAC addresses used.

   When testing NVP components with an application-layer test tool,
   there may be a number of components within the system that cannot be
   tuned or that cannot be guaranteed to maintain a desired state, for
   example the housekeeping functions of the underlying operating
   system.  Hence, tests MUST be repeated a number of times and each
   test case MUST be run for at least 2 minutes if the test tool
   provides such an option.  Results SHOULD be derived from multiple
   test runs.  Variance between the runs SHOULD be documented.
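
   A minimal sketch of such a procedure is shown below, assuming
   netperf as the application-layer test tool (its -H option selects
   the remote host and -l the test length in seconds); the host name,
   the number of runs and the parsing of the tool's output are
   illustrative assumptions rather than requirements of this document.

      # Illustrative sketch: repeat an application-layer throughput
      # test several times, each run lasting at least 2 minutes, and
      # document the variance between runs.  Tool, host name and run
      # count are assumptions.
      import statistics
      import subprocess

      RUNS = 5
      DURATION_S = 120                   # at least 2 minutes per test case
      TARGET = "sut-vm.example.test"     # placeholder destination in the SUT

      def one_run() -> float:
          out = subprocess.run(
              ["netperf", "-H", TARGET, "-l", str(DURATION_S)],
              capture_output=True, text=True, check=True).stdout
          # Assumes the default netperf output, where throughput is the
          # last field of the final line; adjust for other tools.
          return float(out.strip().splitlines()[-1].split()[-1])

      results = [one_run() for _ in range(RUNS)]
      print("per-run results:", results)
      print("mean           :", round(statistics.mean(results), 2))
      print("variance       :", round(statistics.variance(results), 2))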
5.1.4. Tunnel encap/decap outside the hypervisor

   Logical network components may also have a performance impact based
   on the functionality available within the physical fabric.  A
   physical fabric that supports NVO encap/decap is one such case that
   has a considerable impact on performance.  Any such functionality
   that exists in the physical fabric MUST be part of the test result
   documentation to ensure repeatability of tests.  In this case the
   SUT MUST include the physical fabric.
5.1.5. SUT Hypervisor Profile

   Physical networking equipment has well-defined physical resource
   characteristics, such as the type and number of ASICs/SoCs used, the
   amount of memory, and the type and number of processors.  Virtual
   networking components' performance is dependent on the physical
   hardware that hosts the hypervisor.  Hence the physical hardware
   usage, which is part of the SUT, for a given test MUST be
   documented, for example the CPU usage when running a logical router.

   CPU usage changes based on the type of hardware available within the
   physical server.  For example, TCP Segmentation Offload greatly
   reduces CPU usage by offloading the segmentation process to the NIC
   on the sender side.  Receive Side Scaling offers a similar benefit
   on the receive side.  Hence, the availability and status of such
   hardware MUST be documented along with the actual CPU/memory usage
   when the virtual networking components have access to such offload-
   capable hardware.
   Following is a partial list of components that MUST be documented,
   both in terms of what is available and what is used by the SUT (an
   illustrative, machine-readable rendering of such a profile is
   sketched after this list):

   *  CPU - type, speed, available instruction sets (e.g. AES-NI)

   *  Memory - type, amount

   *  Storage - type, amount

   *  NIC cards - type, number of ports, offloads available/used,
      drivers, firmware (if applicable), HW revision

   *  Libraries such as DPDK, if available and used

   *  Number and type of VMs used for testing, and

      o  vCPUs

      o  RAM

      o  Storage

      o  Network driver

      o  Any prioritization of VM resources

      o  Operating system type, version and kernel (if applicable)

      o  TCP configuration changes, if any

      o  MTU

   *  Test tool

      o  Workload type

      o  Protocol being tested

      o  Number of threads

      o  Version of tool

   *  For inter-hypervisor tests,

      o  Physical network devices that are part of the test

         Note: For inter-hypervisor tests, the system under test is no
         longer only the virtual component that is being tested; the
         entire fabric that connects the virtual components becomes
         part of the system under test.
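
   One possible, machine-readable rendering of this list is sketched
   below.  It simply mirrors the items above; every value is a
   placeholder and the structure is not a required schema of this
   document.

      # Illustrative SUT profile skeleton mirroring the list above.
      # All values are placeholders; record the real values per run.
      sut_profile = {
          "cpu": {"type": "example", "speed_ghz": 2.6,
                  "instruction_sets": ["AES-NI"]},
          "memory": {"type": "DDR4", "amount_gb": 256},
          "storage": {"type": "NVMe SSD", "amount_gb": 960},
          "nic_cards": [{
              "type": "25GbE", "ports": 2,
              "offloads_available": ["TSO", "LRO", "RSS"],
              "offloads_used": ["TSO", "RSS"],
              "driver": "example 1.2.3", "firmware": "4.5.6",
              "hw_revision": "A0",
          }],
          "libraries": {"dpdk": None},       # version string if used
          "vms": {
              "count": 2, "vcpus": 4, "ram_gb": 8, "storage_gb": 40,
              "network_driver": "paravirtual (example)",
              "prioritization": None,
              "os": "example-linux, kernel 4.x",
              "tcp_configuration_changes": None, "mtu": 1500,
          },
          "test_tool": {"workload_type": "TCP_STREAM", "protocol": "TCP",
                        "threads": 8, "version": "x.y.z"},
          "inter_hypervisor_fabric": None,   # physical devices in path
      }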
6. Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization of a Device Under Test/System Under Test
   (DUT/SUT) using controlled stimuli in a laboratory environment, with
   dedicated address space and the constraints specified in the
   sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network, or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.
7. IANA Considerations

   No IANA Action is requested at this time.
8. Conclusions

   Network Virtualization Platforms, because of their proximity to the
   application layer and because they can take advantage of TCP stack
   optimizations, do not function on a packets-per-second basis.
   Hence, traditional benchmarking methods, while still relevant for
   Network Function Virtualization, are not designed to test Network
   Virtualization Platforms.  Also, advances in application
   architectures, such as micro-services, bring new challenges and
   require benchmarking not just of throughput and latency but also of
   scale.  New benchmarking methods that are designed to take advantage
   of the TCP optimizations are needed to accurately benchmark the
   performance of Network Virtualization Platforms.
9. References

9.1. Normative References

   [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
             Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC7364] Narten, T., Gray, E., Black, D., Fang, L., Kreeger, L.,
             and M. Napierala, "Problem Statement: Overlays for Network
             Virtualization", RFC 7364, October 2014,
             <https://datatracker.ietf.org/doc/rfc7364/>

   [nv03]    IETF, Network Virtualization Overlays (nvo3) Working
             Group, <https://datatracker.ietf.org/wg/nvo3/documents/>
9.2. Informative References

   [1]       Morton, A., "Considerations for Benchmarking Virtual
             Network Functions and Their Infrastructure", draft-ietf-
             bmwg-virtual-net-03,
             <https://datatracker.ietf.org/doc/draft-ietf-bmwg-virtual-
             net/?include_text=1>
Appendix A. Partial List of Parameters to Document

A.1. CPU

   CPU Vendor
   CPU Number
   CPU Architecture
   # of Sockets (CPUs)
   # of Cores
   Clock Speed (GHz)
   Max Turbo Freq. (GHz)
   Cache per CPU (MB)
   # of Memory Channels
   Chipset
   Hyperthreading (BIOS Setting)
   Power Management (BIOS Setting)
   VT-d

A.2. Memory

   Memory Speed (MHz)
   DIMM Capacity (GB)
   # of DIMMs
   DIMM Configuration
   Total DRAM (GB)

A.3. NIC

   Vendor
   Model
   Port Speed (Gbps)
   Ports
   PCIe Version
   PCIe Lanes
   Bonded
   Bonding Driver
   Kernel Module Name
   Driver Version
   VXLAN TSO Capable
   VXLAN RSS Capable
   Ring Buffer Size RX
   Ring Buffer Size TX

A.4. Hypervisor

   Hypervisor Name
   Version/Build
   Based on
   Hotfixes/Patches
   OVS Version/Build
   IRQ balancing
   vCPUs per VM
   Modifications to HV
   Modifications to HV TCP stack
   Number of VMs
   IP MTU
   Flow control TX (send pause)
   Flow control RX (honor pause)
   Encapsulation Type

A.5. Guest VM

   Guest OS & Version
   Modifications to VM
   IP MTU Guest VM (Bytes)
   Test tool used
   Number of NetPerf Instances
   Total Number of Streams
   Guest RAM (GB)

A.6. Overlay Network Physical Fabric

   Vendor
   Model
   # and Type of Ports
   Software Release
   Interface Configuration
   Interface/Ethernet MTU (Bytes)
   Flow control TX (send pause)
   Flow control RX (honor pause)

A.7. Gateway Network Physical Fabric

   Vendor
   Model
   # and Type of Ports
   Software Release
   Interface Configuration
   Interface/Ethernet MTU (Bytes)
   Flow control TX (send pause)
   Flow control RX (honor pause)
Authors' Addresses

   Samuel Kommu
   VMware
   3401 Hillview Ave
   Palo Alto, CA, 94304

   Email: skommu@vmware.com


   Jacob Rapp
   VMware
   3401 Hillview Ave
   Palo Alto, CA, 94304

   Email: jrapp@vmware.com