idnits 2.17.1 draft-mkonstan-nf-service-density-00.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to
  https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------

  ** The document seems to lack an Introduction section.

  Miscellaneous warnings:
  ----------------------------------------------------------------------

  == The copyright year in the IETF Trust and authors Copyright Line
     does not match the current year

  -- The document date (March 11, 2019) is 1872 days in the past.  Is
     this intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------

  == Unused Reference: 'RFC8174' is defined on line 997, but no
     explicit reference was found in the text

     Summary: 1 error (**), 0 flaws (~~), 2 warnings (==), 1 comment
     (--).

     Run idnits with the --verbose option for more detailed information
     about the items above.

------------------------------------------------------------------------

Benchmarking Working Group                      M. Konstantynowicz, Ed.
Internet-Draft                                             P. Mikus, Ed.
Intended status: Informational                             Cisco Systems
Expires: September 12, 2019                               March 11, 2019


                    NFV Service Density Benchmarking
                  draft-mkonstan-nf-service-density-00

Abstract

   Network Function Virtualization (NFV) system designers and operators
   continuously grapple with the problem of qualifying the performance
   of network services realised with software Network Functions (NF)
   running on Commercial-Off-The-Shelf (COTS) servers.  One of the main
   challenges is obtaining repeatable and portable benchmarking results
   and using them to derive a deterministic operating range suitable
   for production deployment.

   This document specifies a benchmarking methodology for NFV services
   that addresses this problem space.  It defines a way to measure the
   performance of multiple NFV service instances, each composed of
   multiple software NFs, while running them at varied service
   "packing" densities on a single server.

   The aim is to discover the deterministic usage range of an NFV
   system.  In addition, the specified methodology can be used to
   compare and contrast different NFV virtualization technologies.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 12, 2019.

Copyright Notice

   Copyright (c) 2019 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Terminology
   2.  Motivation
     2.1.  Problem Description
     2.2.  Proposed Solution
   3.  NFV Service
     3.1.  Topology
     3.2.  Configuration
     3.3.  Packet Path(s)
   4.  Virtualization Technology
   5.  Host Networking
   6.  NFV Service Density Matrix
   7.  Compute Resource Allocation
   8.  NFV Service Density Benchmarks
     8.1.  Test Methodology - MRR Throughput
     8.2.  VNF Service Chain
     8.3.  CNF Service Chain
     8.4.  CNF Service Pipeline
     8.5.  Sample Results: FD.io CSIT
     8.6.  Sample Results: CNCF/CNFs
   9.  IANA Considerations
   10. Security Considerations
   11. Acknowledgements
   12. References
     12.1.  Normative References
     12.2.  Informative References
   Authors' Addresses

1.  Terminology

   o  NFV - Network Function Virtualization, a general industry term
      describing network functionality implemented in software.

   o  NFV service - a software based network service realized by a
      topology of interconnected constituent software network function
      applications.

   o  NFV service instance - a single instantiation of an NFV service.

   o  Data-plane optimized software - any software with dedicated
      threads handling data-plane packet processing, e.g. FD.io VPP
      (Vector Packet Processor), OVS-DPDK.

2.  Motivation

2.1.  Problem Description

   Network Function Virtualization (NFV) system designers and operators
   continuously grapple with the problem of qualifying the performance
   of network services realised with software Network Functions (NF)
   running on Commercial-Off-The-Shelf (COTS) servers.  One of the main
   challenges is obtaining repeatable and portable benchmarking results
   and using them to derive a deterministic operating range suitable
   for production deployment.
   The lack of a well defined and standardised NFV-centric performance
   methodology and metrics makes it hard to address the fundamental
   questions that underpin NFV production deployments:

   1.  What NFV service, and how many of its instances, can run on a
       single compute node?

   2.  How to choose the best compute resource allocation scheme to
       maximise service yield per node?

   3.  How do different NF applications compare from the service
       density perspective?

   4.  How do different virtualisation technologies (e.g. Virtual
       Machines, Containers) compare?

   Answering these questions should allow designers to make a data-
   based decision about the NFV technology and service design best
   suited to the requirements of their use cases.  Equally, obtaining
   the benchmarking data underpinning those answers should make it
   easier for operators to work out the expected deterministic
   operating range of the chosen design.

2.2.  Proposed Solution

   The primary goal of the proposed benchmarking methodology is to
   focus on the NFV technologies used to construct NFV services.  More
   specifically, to i) measure the packet data-plane performance of
   multiple NFV service instances while running them at varied service
   "packing" densities on a single server, and ii) quantify the impact
   of using multiple NFs to construct each NFV service instance,
   introducing multiple packet processing hops and links on each
   packet path.

   The overarching aim is to discover a set of deterministic usage
   ranges that are of interest to NFV system designers and operators.
   In addition, the specified methodology can be used to compare and
   contrast different NFV virtualisation technologies.

   In order to ensure wide applicability of the benchmarking
   methodology, the approach is to separate NFV service packet
   processing from the shared virtualisation infrastructure by
   decomposing the software technology stack into three building
   blocks:

      +-------------------------------+
      |          NFV Service          |
      +-------------------------------+
      |   Virtualization Technology   |
      +-------------------------------+
      |        Host Networking        |
      +-------------------------------+

   Figure 1. NFV software technology stack.

   The proposed methodology is complementary to existing industry NFV
   benchmarking efforts focused on vSwitch benchmarking [RFC8204],
   [TST009], and extends the benchmarking scope to NFV services.

   This document does not describe a complete benchmarking
   methodology; instead it focuses on the system under test
   configuration.  Each of the compute node configurations identified
   by (RowIndex, ColumnIndex) is to be evaluated for NFV service data-
   plane performance using existing and/or emerging network
   benchmarking standards.  This may include methodologies specified
   in [RFC2544], [TST009], [draft-vpolak-mkonstan-bmwg-mlrsearch]
   and/or [draft-vpolak-bmwg-plrsearch].

3.  NFV Service

   It is assumed that each NFV service instance is built from one or
   more constituent NFs and is described by its topology,
   configuration and resulting packet path(s).

   Each set of NFs forms an independent NFV service instance, with
   multiple sets present in the host.

3.1.  Topology

   NFV topology describes the number of network functions per service
   instance and their inter-connections over packet interfaces.  It
   includes all point-to-point virtual packet links within the compute
   node, Layer-2 Ethernet or Layer-3 IP, including the ones to the
   host networking data-plane.

   Theoretically, a large set of possible NFV topologies can be
   realised using software virtualisation topologies, e.g. ring,
   partial-/full-mesh, star, line, tree, ladder.  In practice,
   however, only a few topologies are in actual use, as NFV services
   mostly perform either bump-in-a-wire packet operations (e.g.
   security filtering/inspection, monitoring/telemetry) and/or
   inter-site forwarding decisions (e.g. routing, switching).

   Two main NFV topologies have been identified so far for NFV service
   density benchmarking:

   1.  Chain topology: a set of NFs connect to the host data-plane
       with a minimum of two virtual interfaces each, enabling the
       host data-plane to facilitate NF-to-NF service chain forwarding
       and provide connectivity with the external network.

   2.  Pipeline topology: a set of NFs connect to each other in a line
       fashion, with the edge NFs homed to the host data-plane.  The
       host data-plane provides connectivity with the external
       network.

   Both topologies are shown in the figures below.

   NF chain topology:

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1NF1  |  | S1NF2  |              | S1NFn  |           |
   |  |        |  |        |     ....     |        | Service1  |
   |  |        |  |        |              |        |           |
   |  +-+----+-+  +-+----+-+    +    +    +-+----+-+           |
   |    |    |      |    |      |    |      |    |   Virtual   |
   |    |    |<-CS->|    |<-CS->|    |<-CS->|    | Interfaces  |
   |  +-+----+------+----+------+----+------+----+-+           |
   |  |                                            | CS: Chain |
   |  |                                            | Segment   |
   |  |              Host Data-Plane               |           |
   |  +-+--+----------------------------------+--+-+           |
   |    |  |                                  |  |             |
   +-----------------------------------------------------------+
        |  |                                  |  |  Physical
        |  |                                  |  |  Interfaces
   +----+--+----------------------------------+--+-------------+
   |                                                           |
   |                     Traffic Generator                     |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 2. NF chain topology forming a service instance.

   NF pipeline topology:

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1NF1  |  | S1NF2  |              | S1NFn  |           |
   |  |        +--+        +--  ....    --+        | Service1  |
   |  |        |  |        |              |        |           |
   |  +-+------+  +--------+              +------+-+           |
   |    |                                        |   Virtual   |
   |    |<-Pipeline Edge          Pipeline Edge->| Interfaces  |
   |  +-+----------------------------------------+-+           |
   |  |                                            |           |
   |  |                                            |           |
   |  |              Host Data-Plane               |           |
   |  +-+--+----------------------------------+--+-+           |
   |    |  |                                  |  |             |
   +-----------------------------------------------------------+
        |  |                                  |  |  Physical
        |  |                                  |  |  Interfaces
   +----+--+----------------------------------+--+-------------+
   |                                                           |
   |                     Traffic Generator                     |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 3. NF pipeline topology forming a service instance.
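   The difference between the two topologies can also be expressed in
   terms of the virtual packet links they create.  The following non-
   normative Python sketch enumerates those links for one service
   instance of n NFs; endpoint and interface names (e.g. "host-dp",
   "nf1.if1") are illustrative only and not part of the methodology:

      # Non-normative sketch: virtual packet links per service
      # instance implied by the chain and pipeline topologies.

      def chain_virtual_links(n):
          # Every NF attaches to the host data-plane with two virtual
          # interfaces (one per side of the chain).
          return [(f"nf{i}.if{j}", "host-dp")
                  for i in range(1, n + 1) for j in (1, 2)]

      def pipeline_virtual_links(n):
          # Only the edge NFs attach to the host data-plane; the
          # remaining links connect neighbouring NFs directly.
          edge = [("nf1.if1", "host-dp"), (f"nf{n}.if2", "host-dp")]
          inner = [(f"nf{i}.if2", f"nf{i+1}.if1") for i in range(1, n)]
          return edge + inner

      print(len(chain_virtual_links(4)))     # 8: 2*n host-facing links
      print(len(pipeline_virtual_links(4)))  # 5: 2 edge + n-1 NF-to-NF

   For a chain the host data-plane thus terminates 2*n virtual
   interfaces per service instance, whereas for a pipeline it
   terminates only the two edge interfaces.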
3.2.  Configuration

   NFV configuration includes all packet processing functions in the
   NFs, including Layer-2, Layer-3 and/or Layer-4-to-7 processing, as
   appropriate to the specific NF and NFV service design.  L2
   sub-interface encapsulations (e.g. 802.1q, 802.1ad) and IP overlay
   encapsulations (e.g. VXLAN, IPsec, GRE) may be represented here
   too, although in most cases they are used as external encapsulation
   and handled by the host networking data-plane.

   NFV configuration determines the logical network connectivity, that
   is the Layer-2 and/or IPv4/IPv6 switching/routing modes, as well as
   NFV service specific aspects.  In the context of the NFV density
   benchmarking methodology the initial focus is on the former.

   Building on the two identified NFV topologies, two common NFV
   configurations are considered (see the illustrative sketch after
   this list):

   1.  Chain configuration:

       *  Relies on chain topology to form NFV service chains.

       *  NF packet forwarding designs:

          +  IPv4/IPv6 routing.

       *  Requirements for host data-plane:

          +  L2 switching with an L2 forwarding context per NF chain
             segment, or

          +  IPv4/IPv6 routing with an IP forwarding context per NF
             chain segment or per NF chain.

   2.  Pipeline configuration:

       *  Relies on pipeline topology to form NFV service pipelines.

       *  Packet forwarding designs:

          +  IPv4/IPv6 routing.

       *  Requirements for host data-plane:

          +  L2 switching with an L2 forwarding context per NF
             pipeline edge link, or

          +  IPv4/IPv6 routing with an IP forwarding context per NF
             pipeline edge link or per NF pipeline.
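   For illustration only, the non-normative sketch below derives the
   number of host data-plane L2 forwarding contexts implied by the two
   configurations.  It assumes one context per chain segment or per
   pipeline edge link, and that the two edge links towards the
   physical interfaces are also counted as chain segments; both are
   assumptions of this sketch, not requirements of the methodology:

      # Non-normative sketch: host data-plane L2 forwarding contexts
      # (e.g. bridge domains) for the L2 switching variants above.

      def chain_l2_contexts(nfs_per_instance, instances):
          # n-1 NF-to-NF chain segments plus 2 edge segments towards
          # the physical interfaces, per service instance.
          segments = (nfs_per_instance - 1) + 2
          return instances * segments

      def pipeline_l2_contexts(instances):
          # Only the two pipeline edge links attach to the host
          # data-plane, per service instance.
          return instances * 2

      print(chain_l2_contexts(4, 2))   # 10
      print(pipeline_l2_contexts(2))   # 4

   The IPv4/IPv6 routing variants collapse this to one IP forwarding
   context per chain segment, per chain or per pipeline, as listed
   above.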
3.3.  Packet Path(s)

   NFV packet path(s) describe the actual packet forwarding path(s)
   used for benchmarking, resulting from the NFV topology and
   configuration.  They are intended to resemble the actual packet
   forwarding actions during the NFV service lifecycle.

   Based on the specified NFV topologies and configurations, two NFV
   packet paths are taken for benchmarking:

   1.  Snake packet path

       *  Requires chain topology and configuration.

       *  Packets enter the NFV chain through one edge NF and progress
          to the other edge NF of the chain.

       *  Within the chain, packets follow a zigzagging "snake" path,
          entering and leaving the host data-plane as they progress
          through the NF chain.

       *  The host data-plane is involved in packet forwarding
          operations between the NIC interfaces and the edge NFs, as
          well as between the NFs in the chain.

   2.  Pipeline packet path

       *  Requires pipeline topology and configuration.

       *  Packets enter the NFV pipeline through one edge NF and
          progress to the other edge NF of the pipeline.

       *  Within the pipeline, packets follow a straight path,
          entering and leaving subsequent NFs as they progress through
          the NF pipeline.

       *  The host data-plane is involved in packet forwarding
          operations between the NIC interfaces and the edge NFs only.

   Both packet paths are shown in the figures below.

   Snake packet path:

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1NF1  |  | S1NF2  |              | S1NFn  |           |
   |  |        |  |        |     ....     |        | Service1  |
   |  |  XXXX  |  |  XXXX  |              |  XXXX  |           |
   |  +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+           |
   |    |X  X|      |X  X|      |X  X|      |X  X|   Virtual   |
   |    |X  X|      |X  X|      |X  X|      |X  X| Interfaces  |
   |  +-+X--X+------+X--X+------+X--X+------+X--X+-+           |
   |  |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |           |
   |  |  X                                      X  |           |
   |  |  X           Host Data-Plane            X  |           |
   |  +-+X-+----------------------------------+-X+-+           |
   |    |X |                                  | X|             |
   +-----X--------------------------------------X--------------+
        |X |                                  | X|  Physical
        |X |                                  | X|  Interfaces
   +----+X-+----------------------------------+-X+-------------+
   |                                                           |
   |                     Traffic Generator                     |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 4. Snake packet path through NF chain topology.

   Pipeline packet path:

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1NF1  |  | S1NF2  |              | S1NFn  |           |
   |  |        +--+        +--  ....    --+        | Service1  |
   |  |  XXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXXX  |           |
   |  +--X-----+  +--------+              +-----X--+           |
   |    |X                                      X|   Virtual   |
   |    |X                                      X| Interfaces  |
   |  +-+X--------------------------------------X+-+           |
   |  |  X                                      X  |           |
   |  |  X                                      X  |           |
   |  |  X           Host Data-Plane            X  |           |
   |  +-+X-+----------------------------------+-X+-+           |
   |    |X |                                  | X|             |
   +-----X--------------------------------------X--------------+
        |X |                                  | X|  Physical
        |X |                                  | X|  Interfaces
   +----+X-+----------------------------------+-X+-------------+
   |                                                           |
   |                     Traffic Generator                     |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 5. Pipeline packet path through NF pipeline topology.

   In all cases packets enter the NFV system via shared physical NIC
   interfaces controlled by the shared host data-plane, are then
   associated with a specific NFV service (based on a service
   discriminator), and are subsequently cross-connected/switched/
   routed by the host data-plane to and through the NF topologies per
   one of the above listed schemes.
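   The key difference between the two packet paths is the number of
   forwarding operations each packet incurs.  The non-normative sketch
   below counts, for a single traffic direction and one service
   instance of n NFs, the host data-plane forwarding operations and
   virtual interface traversals implied by each path; bi-directional
   traffic doubles these counts:

      # Non-normative sketch: per-packet forwarding hops for the two
      # packet paths, one direction, one service instance of n NFs.

      def snake_path_hops(n):
          # NIC -> NF1 -> host -> NF2 -> ... -> NFn -> NIC: the host
          # data-plane forwards the packet before and after every NF.
          return {"host_dp_hops": n + 1, "virt_if_crossings": 2 * n}

      def pipeline_path_hops(n):
          # NIC -> NF1 -> NF2 -> ... -> NFn -> NIC: the host
          # data-plane forwards the packet only at the two edges.
          return {"host_dp_hops": 2, "virt_if_crossings": 2 + (n - 1)}

      print(snake_path_hops(4))
      # {'host_dp_hops': 5, 'virt_if_crossings': 8}
      print(pipeline_path_hops(4))
      # {'host_dp_hops': 2, 'virt_if_crossings': 5}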
4.  Virtualization Technology

   NFV services are built from composite isolated NFs, with the
   virtualisation technology providing the workload isolation.  The
   following virtualisation technology types are considered for NFV
   service density benchmarking:

   1.  Virtual Machines (VMs)

       *  Relying on host hypervisor technology, e.g. KVM, ESXi, Xen.

       *  NFs running in VMs are referred to as VNFs.

   2.  Containers

       *  Relying on Linux container technology, e.g. LXC, Docker.

       *  NFs running in Containers are referred to as CNFs.

   Different virtual interface types are available to VNFs and CNFs:

   1.  VNF

       *  virtio-vhostuser: fully user-mode based virtual interface.

       *  virtio-vhostnet: involves a kernel-mode based backend.

   2.  CNF

       *  memif: fully user-mode based virtual interface.

       *  af_packet: involves a kernel-mode based backend.

       *  (add more common ones)

5.  Host Networking

   The host networking data-plane is the central shared resource that
   underpins the creation of NFV services.  It handles all of the
   connectivity to external physical network devices through physical
   network connections using NICs, through which the benchmarking is
   done.

   Assuming that NIC interface resources are shared, here is the list
   of widely available host data-plane options for providing packet
   connectivity to/from NICs and constructing NFV chain and pipeline
   topologies and configurations:

   o  Linux Kernel-Mode Networking.

   o  Linux User-Mode vSwitch.

   o  Virtual Machine vSwitch.

   o  Linux Container vSwitch.

   o  SR-IOV NIC Virtual Function - note: restricted support for chain
      and pipeline topologies, as it requires hair-pinning through the
      NIC and oftentimes also through an external physical switch.

   Analysing the properties of each of these options and their pros
   and cons for the specified NFV topologies and configurations is
   outside the scope of this document.

   Of all the listed options, the performance optimised Linux
   user-mode vSwitch deserves special attention.  It decouples NFV
   services from the underlying NIC hardware, and offers rich
   multi-tenant functionality and the most flexibility for supporting
   NFV services.  At the same time it consumes compute resources and
   is harder to benchmark in NFV service density scenarios.

   The following sections therefore focus on the Linux user-mode
   vSwitch and its performance benchmarking at increasing levels of
   NFV service density.

6.  NFV Service Density Matrix

   In order to evaluate the performance of multiple NFV services
   running on a compute node, NFV service instances are benchmarked at
   increasing density, allowing an NFV Service Density Matrix to be
   constructed.  The table below shows an example of such a matrix,
   capturing the number of NFV service instances (row indices), the
   number of NFs per service instance (column indices) and the
   resulting total number of NFs (values).

   NFV Service Density - NF Count View

   SVC   001   002   004   006   008   00N
   001     1     2     4     6     8   1*N
   002     2     4     8    12    16   2*N
   004     4     8    16    24    32   4*N
   006     6    12    24    36    48   6*N
   008     8    16    32    48    64   8*N
   00M   M*1   M*2   M*4   M*6   M*8   M*N

   RowIndex: Number of NFV Service Instances, 1..M.
   ColumnIndex: Number of NFs per NFV Service Instance, 1..N.
   Value: Total number of NFs running in the system.
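   The NF Count View values follow directly from the row and column
   indices.  For illustration, a minimal non-normative Python sketch
   that generates the matrix:

      # Non-normative sketch: NFV Service Density Matrix, NF Count
      # View.  Value at (M, N) = M service instances * N NFs each.

      def nf_count_matrix(rows, cols):
          return [[m * n for n in cols] for m in rows]

      rows = [1, 2, 4, 6, 8]   # number of NFV service instances
      cols = [1, 2, 4, 6, 8]   # number of NFs per service instance
      for m, row in zip(rows, nf_count_matrix(rows, cols)):
          print(m, row)
      # 1 [1, 2, 4, 6, 8]
      # 2 [2, 4, 8, 12, 16]
      # ...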
   In order to deliver good and repeatable network data-plane
   performance, NFs and host data-plane software require direct access
   to critical compute resources.  Due to the shared nature of all
   resources on a compute node, a clear resource allocation scheme is
   defined in the next section to address this.

   In each tested configuration the host data-plane is the gateway
   between the external network and the internal NFV network
   topologies.  The offered packet load is generated and received by
   an external traffic generator, per usual benchmarking practice.

   It is proposed that initial benchmarks are done with the offered
   packet load distributed equally across all configured NFV service
   instances.  This could be followed by various per NFV service
   instance load ratios mimicking expected production deployment
   scenario(s).

   The following sections specify compute resource allocation,
   followed by examples of applying the NFV service density
   methodology to VNF and CNF benchmarking use cases.

7.  Compute Resource Allocation

   Performance optimized NF and host data-plane software threads
   require timely execution of packet processing instructions and are
   very sensitive to any interruptions (or stalls) of this execution,
   e.g. CPU core context switching or CPU jitter.  To that end, the
   NFV service density methodology treats controlled mapping ratios of
   data-plane software threads to physical processor cores, with
   directly allocated cache hierarchies, as the first order
   requirement.

   Other compute resources, including memory bandwidth and PCIe
   bandwidth, have a lesser impact and as such are a subject for
   further study.  A more detailed deep-dive analysis of software
   data-plane performance and of the impact of different shared
   compute resources is available in [BSDP].

   It is assumed that the NFs as well as the host data-plane (e.g.
   vSwitch) are performance optimized, with their tasks executed in
   two types of software threads:

   o  data-plane - handling data-plane packet processing and
      forwarding; time critical, requires dedicated cores.  To scale
      data-plane performance, most NF apps use multiple data-plane
      threads and rely on NIC RSS (Receive Side Scaling), virtual
      interface multi-queue and/or integrated software hashing to
      distribute packets across the data threads.

   o  main-control - handling application management, statistics and
      control-planes; less time critical, allows for core sharing.
      For most NF apps this is a single main thread, but often
      statistics (counters) and various control protocol software are
      run in separate threads.

   The core mapping scheme described below allocates cores for all
   threads of a specified type belonging to each NF app instance, and
   separately lists the thread-to-core mappings (number of threads to
   a number of logical/physical cores) for processor configurations
   with Symmetric Multi-Threading (SMT) enabled or disabled.

   If NFV service density benchmarking is run on server nodes with
   Symmetric Multi-Threading (SMT) (e.g. AMD SMT, Intel
   Hyper-Threading) enabled for higher performance and efficiency,
   logical cores allocated to data-plane threads should be allocated
   as pairs of sibling logical cores corresponding to the
   hyper-threads running on the same physical core.

   Separate core ratios are defined for mapping the threads of the
   vSwitch and of the NFs.  In order to get consistent benchmarking
   results, the mapping ratios are enforced using Linux core pinning.

   +-------------+--------+----------+----------------+----------------+
   | application | thread | app:core | threads/pcores | threads/lcores |
   |             | type   | ratio    | (SMT disabled) | map (SMT       |
   |             |        |          |                | enabled)       |
   +-------------+--------+----------+----------------+----------------+
   | vSwitch-1c  | data   | 1:1      | 1DT/1PC        | 2DT/2LC        |
   |             |        |          |                |                |
   |             | main   | 1:S2     | 1MT/S2PC       | 1MT/1LC        |
   |             |        |          |                |                |
   |             |        |          |                |                |
   | vSwitch-2c  | data   | 1:2      | 2DT/2PC        | 4DT/4LC        |
   |             |        |          |                |                |
   |             | main   | 1:S2     | 1MT/S2PC       | 1MT/1LC        |
   |             |        |          |                |                |
   |             |        |          |                |                |
   | vSwitch-4c  | data   | 1:4      | 4DT/4PC        | 8DT/8LC        |
   |             |        |          |                |                |
   |             | main   | 1:S2     | 1MT/S2PC       | 1MT/1LC        |
   |             |        |          |                |                |
   |             |        |          |                |                |
   | NF-0.5c     | data   | 1:S2     | 1DT/S2PC       | 1DT/1LC        |
   |             |        |          |                |                |
   |             | main   | 1:S2     | 1MT/S2PC       | 1MT/1LC        |
   |             |        |          |                |                |
   |             |        |          |                |                |
   | NF-1c       | data   | 1:1      | 1DT/1PC        | 2DT/2LC        |
   |             |        |          |                |                |
   |             | main   | 1:S2     | 1MT/S2PC       | 1MT/1LC        |
   |             |        |          |                |                |
   |             |        |          |                |                |
   | NF-2c       | data   | 1:2      | 2DT/2PC        | 4DT/4LC        |
   |             |        |          |                |                |
   |             | main   | 1:S2     | 1MT/S2PC       | 1MT/1LC        |
   +-------------+--------+----------+----------------+----------------+

   o  Legend to table

      *  Header row

         +  application - network application with optimized
            data-plane, a vSwitch or Network Function (NF)
            application.

         +  thread type - either "data", short for data-plane; or
            "main", short for all main-control threads.

         +  app:core ratio - ratio of per application instance threads
            of a specific thread type to physical cores.

         +  threads/pcores (SMT disabled) - number of threads of a
            specific type (DT for data-plane thread, MT for main
            thread) running on a number of physical cores, with SMT
            disabled.

         +  threads/lcores map (SMT enabled) - number of threads of a
            specific type (DT, MT) running on a number of logical
            cores, with SMT enabled.  Two logical cores per one
            physical core.

      *  Content rows

         +  vSwitch-(1c|2c|4c) - vSwitch with 1 physical core (or 2,
            or 4) allocated to its data-plane software worker threads.

         +  NF-(0.5c|1c|2c) - NF application with half of a physical
            core (or 1, or 2) allocated to its data-plane software
            worker threads.

         +  Sn - shared core, sharing ratio of (n).

         +  DT - data-plane thread.

         +  MT - main-control thread.

         +  PC - physical core; with SMT/HT enabled it has multiple
            (mostly 2 today) logical cores associated with it.

         +  LC - logical core; if more than one LC is allocated, LCs
            are allocated in sets of two sibling logical cores running
            on the same physical core.

         +  SnPC - shared physical core, sharing ratio of (n).

         +  SnLC - shared logical core, sharing ratio of (n).
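   As an illustration of how such a mapping can be enforced, the non-
   normative sketch below pins a process to an explicit CPU set using
   the Linux scheduler affinity API.  In practice the same effect is
   typically achieved with tools such as taskset, cgroup cpusets or
   the application's own core placement configuration; the CPU ids
   used below are assumptions for the example only:

      # Non-normative sketch: enforce a core mapping by pinning a
      # process (e.g. an NF data-plane process) to a set of CPUs.
      # Linux only; CPU ids are illustrative.

      import os

      def pin(pid, cpus):
          # pid 0 means the calling process.
          os.sched_setaffinity(pid, set(cpus))
          return os.sched_getaffinity(pid)

      # Example: NF-1c on an SMT-enabled node, pinned to two sibling
      # logical cores (here assumed to be 2 and 66).
      print(pin(0, [2, 66]))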
   Maximum benchmarked NFV service densities are limited by the number
   of physical cores on a compute node.

   A sample physical core usage view is shown in the matrix below.

   NFV Service Density - Core Usage View
   vSwitch-1c, NF-1c

   SVC   001   002   004   006   008   010
   001     2     3     6     9    12    15
   002     3     6    12    18    24    30
   004     6    12    24    36    48    60
   006     9    18    36    54    72    90
   008    12    24    48    72    96   120
   010    15    30    60    90   120   150

   RowIndex: Number of NFV Service Instances, 1..10.
   ColumnIndex: Number of NFs per NFV Service Instance, 1..10.
   Value: Total number of physical processor cores used for NFs.
8.  NFV Service Density Benchmarks

   To illustrate the applicability of the defined NFV service density
   methodology, the following sections describe three sets of NFV
   service topologies and configurations that have been benchmarked in
   open source: i) in [LFN-FDio-CSIT], a continuous testing and data-
   plane benchmarking project, and ii) as part of the CNCF CNF Testbed
   initiative [CNCF-CNF-Testbed].

   In both cases each NFV service instance definition is based on the
   same set of NF applications, and varies only by network addressing
   configuration to emulate a multi-tenant operating environment.

8.1.  Test Methodology - MRR Throughput

   Initial NFV density throughput benchmarks have been performed using
   the Maximum Receive Rate (MRR) test methodology defined and used in
   FD.io CSIT.

   MRR tests measure the packet forwarding rate under the maximum load
   offered by the traffic generator over a set trial duration,
   regardless of packet loss.  The maximum load for a specified
   Ethernet frame size is set to the bi-directional link rate (2x
   10GbE in the referred results).

   Tests were conducted with two traffic profiles: i) a continuous
   stream of 64B frames, and ii) a continuous stream of an IMIX
   sequence of (7x 64B, 4x 570B, 1x 1518B); all sizes are L2 untagged
   Ethernet.

   NFV service topologies tested include: VNF service chains, CNF
   service chains and CNF service pipelines.
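   For illustration only, the non-normative sketch below shows how the
   maximum offered load and the MRR value can be derived for this
   setup.  The 2x 10GbE link rate and the 20B per-frame Layer-1
   overhead (preamble, start-of-frame delimiter, inter-frame gap) are
   assumptions of the sketch matching the referred results, not
   requirements of the methodology:

      # Non-normative sketch: maximum offered load and MRR.

      LINK_BPS = 10e9      # 10GbE, per direction
      L1_OVERHEAD = 20     # bytes: preamble + SFD + inter-frame gap

      def max_load_pps(frame_size, directions=2):
          return directions * LINK_BPS / ((frame_size + L1_OVERHEAD) * 8)

      def mrr(received_packets, trial_duration_s):
          # Receive rate at maximum offered load, regardless of loss.
          return received_packets / trial_duration_s

      imix_avg = (7 * 64 + 4 * 570 + 1 * 1518) / 12   # ~353.8B
      print(round(max_load_pps(64) / 1e6, 2))         # ~29.76 Mpps
      print(round(max_load_pps(imix_avg) / 1e6, 2))   # ~6.69 Mpps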
8.2.  VNF Service Chain

   The VNF Service Chain (VSC) topology is tested with the KVM
   hypervisor (Ubuntu 18.04-LTS), with NFV service instances
   consisting of NFs running in VMs (VNFs).  The host data-plane is
   provided by the FD.io VPP vSwitch.  Virtual interfaces are
   virtio-vhostuser.  The snake forwarding packet path is tested using
   the [TRex] traffic generator, see figure.

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1VNF1 |  | S1VNF2 |              | S1VNFn |           |
   |  |        |  |        |     ....     |        | Service1  |
   |  |  XXXX  |  |  XXXX  |              |  XXXX  |           |
   |  +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+           |
   |    |X  X|      |X  X|      |X  X|      |X  X|   Virtual   |
   |    |X  X|      |X  X|      |X  X|      |X  X| Interfaces  |
   |  +-+X--X+------+X--X+------+X--X+------+X--X+-+           |
   |  |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |           |
   |  |  X                                      X  |           |
   |  |  X          FD.io VPP vSwitch           X  |           |
   |  +-+X-+----------------------------------+-X+-+           |
   |    |X |                                  | X|             |
   +-----X--------------------------------------X--------------+
        |X |                                  | X|  Physical
        |X |                                  | X|  Interfaces
   +----+X-+----------------------------------+-X+-------------+
   |                                                           |
   |                 Traffic Generator (TRex)                  |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 6. VNF service chain test setup.

8.3.  CNF Service Chain

   The CNF Service Chain (CSC) topology is tested with Docker
   containers (Ubuntu 18.04-LTS), with NFV service instances
   consisting of NFs running in Containers (CNFs).  The host
   data-plane is provided by the FD.io VPP vSwitch.  Virtual
   interfaces are memif.  The snake forwarding packet path is tested
   using the [TRex] traffic generator, see figure.

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1CNF1 |  | S1CNF2 |              | S1CNFn |           |
   |  |        |  |        |     ....     |        | Service1  |
   |  |  XXXX  |  |  XXXX  |              |  XXXX  |           |
   |  +-+X--X+-+  +-+X--X+-+    +X  X+    +-+X--X+-+           |
   |    |X  X|      |X  X|      |X  X|      |X  X|   Virtual   |
   |    |X  X|      |X  X|      |X  X|      |X  X| Interfaces  |
   |  +-+X--X+------+X--X+------+X--X+------+X--X+-+           |
   |  |  X  XXXXXXXXXX  XXXXXXXXXX  XXXXXXXXXX  X  |           |
   |  |  X                                      X  |           |
   |  |  X          FD.io VPP vSwitch           X  |           |
   |  +-+X-+----------------------------------+-X+-+           |
   |    |X |                                  | X|             |
   +-----X--------------------------------------X--------------+
        |X |                                  | X|  Physical
        |X |                                  | X|  Interfaces
   +----+X-+----------------------------------+-X+-------------+
   |                                                           |
   |                 Traffic Generator (TRex)                  |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 7. CNF service chain test setup.

8.4.  CNF Service Pipeline

   The CNF Service Pipeline (CSP) topology is tested with Docker
   containers (Ubuntu 18.04-LTS), with NFV service instances
   consisting of NFs running in Containers (CNFs).  The host
   data-plane is provided by the FD.io VPP vSwitch.  Virtual
   interfaces are memif.  The pipeline forwarding packet path is
   tested using the [TRex] traffic generator, see figure.

   +-----------------------------------------------------------+
   |                     Host Compute Node                     |
   |                                                           |
   |  +--------+  +--------+              +--------+           |
   |  | S1NF1  |  | S1NF2  |              | S1NFn  |           |
   |  |        +--+        +--  ....    --+        | Service1  |
   |  |  XXXXXXXXXXXXXXXXXXXXXXX    XXXXXXXXXXXXX  |           |
   |  +--X-----+  +--------+              +-----X--+           |
   |    |X                                      X|   Virtual   |
   |    |X                                      X| Interfaces  |
   |  +-+X--------------------------------------X+-+           |
   |  |  X                                      X  |           |
   |  |  X                                      X  |           |
   |  |  X          FD.io VPP vSwitch           X  |           |
   |  +-+X-+----------------------------------+-X+-+           |
   |    |X |                                  | X|             |
   +-----X--------------------------------------X--------------+
        |X |                                  | X|  Physical
        |X |                                  | X|  Interfaces
   +----+X-+----------------------------------+-X+-------------+
   |                                                           |
   |                 Traffic Generator (TRex)                  |
   |                                                           |
   +-----------------------------------------------------------+

   Figure 8. CNF service pipeline test setup.

8.5.  Sample Results: FD.io CSIT

   The FD.io CSIT project introduced NFV density benchmarking in
   release CSIT-1901 and published results for the following NFV
   service topologies and configurations:
   1.  VNF Service Chains

       *  VNF: DPDK-L3FWD v18.10

          +  IPv4 forwarding

          +  NF-1c

       *  vSwitch: VPP v19.01-release

          +  L2 MAC switching

          +  vSwitch-1c, vSwitch-2c

       *  frame sizes: 64B, IMIX

   2.  CNF Service Chains

       *  CNF: VPP v19.01-release

          +  IPv4 routing

          +  NF-1c

       *  vSwitch: VPP v19.01-release

          +  L2 MAC switching

          +  vSwitch-1c, vSwitch-2c

       *  frame sizes: 64B, IMIX

   3.  CNF Service Pipelines

       *  CNF: VPP v19.01-release

          +  IPv4 routing

          +  NF-1c

       *  vSwitch: VPP v19.01-release

          +  L2 MAC switching

          +  vSwitch-1c, vSwitch-2c

       *  frame sizes: 64B, IMIX

   More information is available in the FD.io CSIT-1901 report, with
   specific references listed below:

   o  Testbed: [CSIT-1901-testbed-2n-skx]

   o  Test environment: [CSIT-1901-test-environment]

   o  Methodology: [CSIT-1901-nfv-density-methodology]

   o  Results: [CSIT-1901-nfv-density-results]

8.6.  Sample Results: CNCF/CNFs

   The CNCF CI team introduced a CNF Testbed initiative focusing on
   benchmarking NFV density with open-source network applications
   running as VNFs and CNFs.  The following NFV service topologies and
   configurations have been tested to date:

   1.  VNF Service Chains

       *  VNF: VPP v18.10-release

          +  IPv4 routing

          +  NF-1c

       *  vSwitch: VPP v18.10-release

          +  L2 MAC switching

          +  vSwitch-1c, vSwitch-2c

       *  frame sizes: 64B, IMIX

   2.  CNF Service Chains

       *  CNF: VPP v18.10-release

          +  IPv4 routing

          +  NF-1c

       *  vSwitch: VPP v18.10-release

          +  L2 MAC switching

          +  vSwitch-1c, vSwitch-2c

       *  frame sizes: 64B, IMIX

   3.  CNF Service Pipelines

       *  CNF: VPP v18.10-release

          +  IPv4 routing

          +  NF-1c

       *  vSwitch: VPP v18.10-release

          +  L2 MAC switching

          +  vSwitch-1c, vSwitch-2c

       *  frame sizes: 64B, IMIX

   More information is available in the CNCF CNF Testbed github
   repository, with summary test results presented in a summary
   markdown file; references are listed below:

   o  Results: [CNCF-CNF-Testbed-Results]

9.  IANA Considerations

   This document makes no requests of IANA.

10.  Security Considerations

   ..

11.  Acknowledgements

   Thanks to Vratko Polak of the FD.io CSIT project and Michael
   Pedersen of the CNCF Testbed initiative for their contributions and
   useful suggestions.

12.  References

12.1.  Normative References

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999, .

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, .

12.2.  Informative References

   [BSDP]     "Benchmarking Software Data Planes Intel(R) Xeon(R)
              Skylake vs. Broadwell", March 2019, .

   [CNCF-CNF-Testbed]
              "Cloud native Network Function (CNF) Testbed", March
              2019, .

   [CNCF-CNF-Testbed-Results]
              "CNCF CNF Testbed: NFV Service Density Benchmarking",
              December 2018, .

   [CSIT-1901-nfv-density-methodology]
              "FD.io CSIT Test Methodology: NFV Service Density",
              March 2019, .

   [CSIT-1901-nfv-density-results]
              "FD.io CSIT Test Results: NFV Service Density", March
              2019, .

   [CSIT-1901-test-environment]
              "FD.io CSIT Test Environment", March 2019, .

   [CSIT-1901-testbed-2n-skx]
              "FD.io CSIT Test Bed", March 2019, .

   [draft-vpolak-bmwg-plrsearch]
              "Probabilistic Loss Ratio Search for Packet Throughput
              (PLRsearch)", November 2018, .
   [draft-vpolak-mkonstan-bmwg-mlrsearch]
              "Multiple Loss Ratio Search for Packet Throughput
              (MLRsearch)", November 2018, .

   [LFN-FDio-CSIT]
              "Fast Data io, Continuous System Integration and Testing
              Project", March 2019, .

   [RFC8204]  Tahhan, M., O'Mahony, B., and A. Morton, "Benchmarking
              Virtual Switches in the Open Platform for NFV (OPNFV)",
              RFC 8204, DOI 10.17487/RFC8204, September 2017, .

   [TRex]     "TRex Low-Cost, High-Speed Stateful Traffic Generator",
              March 2019, .

   [TST009]   "ETSI GS NFV-TST 009 V3.1.1 (2018-10), Network Functions
              Virtualisation (NFV) Release 3; Testing; Specification of
              Networking Benchmarks and Measurement Methods for NFVI",
              October 2018, .

Authors' Addresses

   Maciek Konstantynowicz (editor)
   Cisco Systems

   Email: mkonstan@cisco.com


   Peter Mikus (editor)
   Cisco Systems

   Email: pmikus@cisco.com