NFVRG                                                     S. Natarajan
Internet Draft                                                  Google
Category: Informational                                    R. Krishnan
                                                           A. Ghanwani
                                                                  Dell
                                                       D. Krishnaswamy
                                                          IBM Research
                                                             P. Willis
                                                                    BT
                                                          A. Chaudhary
                                                               Verizon
                                                              F. Huici
                                                                   NEC

Expires: January 2017                                     July 8, 2016

     An Analysis of Lightweight Virtualization Technologies for NFV

               draft-natarajan-nfvrg-containers-for-nfv-03

Abstract

   Traditionally, NFV platforms were limited to using standard
   virtualization technologies (e.g., Xen, KVM, VMware, Hyper-V)
   running guests based on general-purpose operating systems such as
   Windows, Linux or FreeBSD. More recently, a number of lightweight
   virtualization technologies, including containers, unikernels
   (specialized VMs) and minimalistic distributions of general-purpose
   OSes, have widened the spectrum of possibilities when constructing
   an NFV platform. This draft describes the challenges in building
   such a platform and discusses to what extent these technologies, as
   well as traditional VMs, are able to address them.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.
   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire in January 2017.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1. Introduction
   2. Lightweight Virtualization Background
   2.1. Containers
   2.2. OS Tinyfication
   2.3. Unikernels
   3. Challenges in Building NFV Platforms
   3.1. Performance (SLA)
   3.1.1. Challenges
   3.2. Continuity, Elasticity and Portability
   3.2.1. Challenges
   3.3. Security
   3.3.1. Challenges
   3.4. Management
   3.4.1. Challenges
   4. Benchmarking Experiments
   4.1. Experimental Setup
   4.2. Instantiation Times
   4.3. Throughput
   4.4. RTT
   4.5. Image Size
   4.6. Memory Usage
   5. Discussion
   6. Conclusion
   7. Future Work
   8. IANA Considerations
   9. Security Considerations
   10. Contributors
   11. Acknowledgements
   12. References
   12.1. Normative References
   12.2. Informative References
   Authors' Addresses

1. Introduction

   This draft describes the challenges in building an NFV platform and
   discusses to what extent different types of lightweight
   virtualization technologies, such as VMs based on minimalistic
   distributions, unikernels and containers, are able to address them.

2. Lightweight Virtualization Background

2.1. Containers

   Containers are a form of operating-system-level virtualization. To
   provide isolation, containers such as Docker rely on features of
   the Linux kernel such as cgroups, namespaces and a union-capable
   file system such as aufs [AUFS]. Because they run within a single
   OS instance, they avoid the overheads typically associated with
   hypervisors and virtual machines.
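   As a concrete illustration of these building blocks, the short
   Python sketch below (Linux-only; it simply reads the standard /proc
   interfaces that container runtimes build upon) lists the namespaces
   and cgroup membership of a process:

      # Sketch: inspect the isolation of a process (Linux). Reads the
      # standard /proc interfaces used by container runtimes such as
      # Docker; "self" means the calling process.
      import os

      def show_isolation(pid="self"):
          ns_dir = "/proc/%s/ns" % pid
          for ns in sorted(os.listdir(ns_dir)):
              # Each entry resolves to e.g. "net:[4026531993]"; two
              # processes in the same namespace share the same inode.
              print(ns, "->", os.readlink(os.path.join(ns_dir, ns)))
          with open("/proc/%s/cgroup" % pid) as f:
              # Lines look like "hierarchy-id:controller:/path".
              print(f.read())

      if __name__ == "__main__":
          show_isolation()

   Running this inside a container versus on the host makes the
   isolation visible: the container's namespace inodes differ from the
   host's, and its cgroup paths point into the runtime's hierarchy.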
2.2. OS Tinyfication

   OS tinyfication consists of creating a minimalistic distribution of
   a general-purpose operating system such as Linux or FreeBSD. This
   involves two parts: (1) configuring the kernel so that only needed
   features and modules are enabled/included (e.g., removing
   extraneous drivers); and (2) including only the user-level
   libraries and applications required for the task at hand, and
   running only the minimum number of processes. The most notable
   example of a tinyfied OS is the work on Linux tinyfication
   [LINUX-TINY].

2.3. Unikernels

   Unikernels are essentially single-application virtual machines
   based on minimalistic OSes. Such minimalistic OSes have minimal
   overhead: they typically use a single address space (so there is no
   user/kernel divide and no expensive system calls) and a
   co-operative scheduler (reducing context-switch costs). Examples of
   such minimalistic OSes are MiniOS [MINIOS], which runs on Xen, and
   OSv [OSV], which runs on KVM, Xen and VMware.

3. Challenges in Building NFV Platforms

   In this section, we outline the main challenges for an NFV platform
   in the context of lightweight virtualization technologies as well
   as traditional VMs.

3.1. Performance (SLA)

   Performance requirements vary with each VNF type and configuration.
   The platform should support the specification, realization and
   runtime adaptation of different performance metrics. Achievable
   performance can vary depending on several factors such as the
   workload type, the size of the workload, the set of virtual
   machines sharing the underlying infrastructure, etc. Here we
   highlight some of the challenges based on potential deployment
   considerations.

3.1.1. Challenges

   o VNF provisioning time (including up/down/update) constitutes the
     time it takes to spin up the VNF process, its application-
     specific dependencies, and additional system dependencies.
     Resource choices such as the hypervisor type, the guest and host
     OS flavor, and the need for hardware and software accelerators
     constitute a significant portion of this processing time
     (instantiation or down time) when compared to just bringing up
     the actual VNF process.

   o The runtime performance (achievable throughput, line rate speed,
     maximum concurrent sessions that can be maintained, number of new
     sessions that can be added per second) of each VNF is directly
     dependent on the amount of resources (e.g., virtual CPUs, RAM)
     allocated to individual VMs. Choosing the right resource setting
     is a tricky task. If VM resources are over-provisioned, we end up
     under-utilizing the physical resources. If, on the contrary, we
     under-provision the VM resources, then meeting the performance
     target might require scaling the resources up or out and
     redirecting traffic to the new VM; scale-up/down operations
     consume time and add to the latency. This overhead stems from the
     need to account for the resources of components other than the
     actual VNF process (e.g., guest OS requirements).

   o If each network function is hosted in an individual VM/container,
     then an efficient inter-VM networking solution is required for
     performance.

3.2. Continuity, Elasticity and Portability

   VNF service continuity can be interrupted by several factors: an
   undesired state of the VNF (e.g., a VNF upgrade in progress),
   underlying hardware failure, unavailability of virtualized
   resources, VNF software failure, etc. Some of the requirements that
   need consideration are:

3.2.1. Challenges

   o VNFs are not completely decoupled from the underlying
     infrastructure. As discussed in the previous section, most VNFs
     have a dependency on the guest OS, hypervisor type, accelerator
     used, and the host OS (this last one applies to containers too).
     Therefore, porting VNFs to a new platform might require
     identifying equivalent resources (e.g., hypervisor support, a new
     hardware model, understanding resource capabilities) and
     repeating the provisioning steps to bring the VNF back to a
     working state.

   o Service continuity requirements can be classified as follows:
     seamless (zero impact) or non-seamless continuity (measurable
     impact to offered services is acceptable). To achieve this, the
     virtualization technology needs to provide an efficient high
     availability solution or a quick restoration mechanism that can
     bring the VNF back to an operational state. For example, an
     anomaly caused by a hardware failure can impact all VNFs hosted
     on that infrastructure resource. To restore a VNF to a working
     state, the user should first provision the VM/container, spin up
     and configure the VNF process inside it, set up the interconnects
     to forward network traffic, manage the VNF-related state, and
     update any dependent runtime agents.

   o Addressing the service elasticity challenges requires a holistic
     view of the underlying resources. The challenges in presenting
     such a holistic view include the following:

      o Performing scalable monitoring: scalable, continuous
        monitoring of each resource's current state is needed to spin
        up additional resources (auto-scale or auto-heal) when the
        system encounters performance degradation, or to spin down
        idle resources to optimize resource usage (a hypothetical
        sketch of such a control loop follows this list).

      o Handling CPU-intensive vs. I/O-intensive VNFs: for CPU-
        intensive VNFs the degradation primarily depends on the VNF
        processing functionality. For I/O-intensive workloads, on the
        other hand, the overhead is significantly impacted by the
        hypervisor/host features, its type, the number of
        VMs/containers it manages, the modules loaded in the guest OS,
        etc.
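   The following Python sketch illustrates the kind of threshold-based
   auto-scale loop described above. It is purely hypothetical: the
   metric source (get_cpu_load) and the spin_up/spin_down actions are
   stand-ins for whatever monitoring and orchestration APIs a real NFV
   platform exposes, and the thresholds are arbitrary:

      # Hypothetical auto-scale loop for a pool of VNF instances.
      import time

      SCALE_OUT_AT = 0.80  # add an instance above 80% average load
      SCALE_IN_AT = 0.20   # remove one below 20%, keep a floor of 1

      def autoscale(instances, get_cpu_load, spin_up, spin_down):
          while True:
              loads = [get_cpu_load(i) for i in instances]
              avg = sum(loads) / len(loads)
              if avg > SCALE_OUT_AT:
                  instances.append(spin_up())
              elif avg < SCALE_IN_AT and len(instances) > 1:
                  spin_down(instances.pop())
              time.sleep(5)  # polling interval; a production system
                             # would stream events instead of polling

   As the benchmarks in Section 4 show, how quickly spin_up() can
   complete depends heavily on the virtualization technology chosen.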
3.3. Security

   Broadly speaking, security can be classified into:

   o Security features provided by the VNFs to manage the state, and

   o Security of the VNFs and their resources.

   Some considerations on the security of the VNF infrastructure are
   listed here.

3.3.1. Challenges

   o The adoption of virtualization techniques (e.g., para-
     virtualization, OS-level virtualization) for hosting network
     functions, together with the need to support multi-tenancy,
     requires secure slicing of the infrastructure resources. In this
     regard, it is critical to provide a solution that can ensure the
     following:

      o Provision the network functions by guaranteeing complete
        isolation across resource entities (hardware units,
        hypervisor, virtual networks, etc.). This includes secure
        access between VM/container and host interfaces, VM-to-VM or
        container-to-container communication, etc. To maximize overall
        resource utilization and improve service agility/elasticity,
        sharing of resources across network functions must remain
        possible.

      o When a resource component is compromised, quarantine the
        compromised entity but ensure service continuity for other
        resources.

      o Securely recover from runtime vulnerabilities or attacks and
        restore the network functions to an operational state.
        Achieving this with minimal or no downtime is important.

     Realizing the above requirements is a complex task in any type of
     virtualization option (virtual machines, containers, etc.).

   o Resource starvation / availability: applications hosted in
     VMs/containers can starve the underlying physical resources such
     that co-hosted entities become unavailable. Ideally,
     countermeasures are required to monitor the usage patterns of
     individual VMs/containers and ensure fair use of individual
     resources.

3.4. Management

   The management and operational aspects are primarily focused on VNF
   lifecycle management and its related functionalities. In addition,
   the solution is required to handle the management of failures,
   resource usage, state processing, smooth rollouts, and security, as
   discussed in the previous sections. Some features of management
   solutions include:

   o Centralized control and visibility: support for web clients,
     multi-hypervisor management, single sign-on, inventory search,
     alerts and notifications.

   o Proactive management: creating host profiles, resource management
     of VMs/containers, dynamic resource allocation, auto-restart in
     an HA model, audit trails, patch management.

   o Extensible platform: defining roles, permissions and licenses
     across resources, and use of APIs to integrate with other
     solutions.

   Thus, the key requirements for a management solution are that it:

   o Is simple to operate, and makes it simple to deploy VNFs.

   o Uses well-defined standard interfaces to integrate seamlessly
     with different vendor implementations.

   o Provides functional automation to handle VNF lifecycle
     requirements.

   o Provides APIs that abstract the complex low-level information
     from external components.

   o Is secure.

3.4.1. Challenges

   The key challenge is addressing the aforementioned requirements for
   a management solution while dealing with the multi-dimensional
   complexity introduced by the hypervisor, guest OS, VNF
   functionality, and the state of the network.

4. Benchmarking Experiments

   Having considered the basic requirements and challenges of building
   an NFV platform, we now provide a benchmark of a number of
   lightweight virtualization technologies to quantify to what extent
   they can be used to build such a platform.

4.1. Experimental Setup

   In terms of hardware, all tests are run on an x86-64 server with an
   Intel Xeon E5-1630 v3 3.7GHz CPU (4 cores) and 32GB of RAM.

   For the hypervisors we use KVM running on Linux 4.4.1 and Xen
   version 4.6.0. The virtual machines running on KVM and Xen are of
   three types:

   (1) Unikernels, on top of the minimalistic operating systems OSv
       and MiniOS for KVM and Xen, respectively. The only application
       built into them is iperf. To denote them we use the shorthand
       unikernel.osv.kvm and unikernel.minios.xen.

   (2) Tinyfied Linux (a.k.a. Tinyx), consisting of a Linux kernel
       version 4.4.1 with only a reduced set of drivers (ext4, and
       netfront/blkfront for Xen), and a distribution containing only
       busybox, an ssh server for convenience, and iperf. We use the
       shorthand tinyx.kvm and tinyx.xen.

   (3) Standard VM, consisting of a Debian distribution including
       iperf and Linux version 4.4.1. We use the shorthand
       standardvm.kvm and standardvm.xen for it.

   For containers, we use Docker version 1.11 running on Linux 4.4.1.

   It is worth noting that the numbers reported here for virtual
   machines (whether standard, Tinyx or unikernels) include the
   following optimizations to the underlying virtualization
   technologies. For Xen, we use the optimized Xenstore, toolstack and
   hotplug scripts reported in [SUPERFLUIDITY], as well as the
   accelerated packet I/O derived from persistent grants (for Tx)
   [PGRANTS]. For KVM, we remove the creation of a tap device from the
   VM's boot process and use a pre-created tap device instead.

4.2. Instantiation Times

   We begin by measuring how long it takes to create and boot a
   container or VM. The beginning time is when we issue the create
   operation. To measure the end time, we carry out a SYN flood from
   an external server and measure the time it takes for the
   container/VM to respond with a RST packet. The reason for using a
   SYN flood is that it guarantees the shortest reply time after the
   unikernel/container has booted; it is used purely to measure boot
   time and has nothing to do with real-world deployments or DoS
   attacks.
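   A minimal sketch of this timing loop is shown below (Python with
   the Scapy packet library; it needs root privileges, and the guest
   address, probed port and create command are placeholders, not part
   of the actual test harness):

      # Time from "create" until the guest's network stack answers.
      # A RST (or SYN-ACK) reply to our SYN means the guest is up.
      import subprocess, time
      from scapy.all import IP, TCP, sr1

      GUEST_IP = "10.0.0.2"                      # placeholder
      CREATE = ["docker", "run", "-d", "image"]  # placeholder

      start = time.monotonic()
      subprocess.Popen(CREATE)
      while True:
          reply = sr1(IP(dst=GUEST_IP)/TCP(dport=9, flags="S"),
                      timeout=0.01, verbose=0)
          if reply is not None:
              break
      elapsed_ms = (time.monotonic() - start) * 1000
      print("boot time: %.0f ms" % elapsed_ms)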
   +-----------------------+--------------+
   | Technology Type       | Time (msecs) |
   +-----------------------+--------------+
   | standardvm.xen        | 6500         |
   | standardvm.kvm        | 2988         |
   | Container             | 1711         |
   | tinyx.kvm             | 1081         |
   | tinyx.xen             | 431          |
   | unikernel.osv.kvm     | 330          |
   | unikernel.minios.xen  | 31           |
   +-----------------------+--------------+

   The table above shows the results. Unsurprisingly, standard VMs
   with a regular distribution (in this case Debian) fare the worst,
   with times in the seconds: 6.5s on Xen and almost 3s on KVM. The
   Docker container with iperf comes next, clocking in at 1.7s. The
   next best times are from Tinyx: about 1.1s on KVM and 431ms on Xen.
   Finally, the best numbers come from unikernels, with 330ms for OSv
   on KVM and 31ms for MiniOS on Xen. These results show that, at
   least when compared to unoptimized containers, minimalistic VMs or
   unikernels can have instantiation times comparable to or better
   than those of containers.
4.3. Throughput

   To measure throughput we use the iperf application that is built
   into the unikernels, included as an application in Tinyx and the
   Debian-based VMs, and containerized for Docker. The experiments in
   this section are for TCP traffic between the guest and the host
   where the guest resides: no NICs are involved, so rates are not
   bound by physical medium limitations.
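   For reference, a sketch of such a measurement from the host side
   follows. It assumes an iperf3 server running in the guest and uses
   iperf3's JSON output for easy parsing (the experiments themselves
   used classic iperf; the guest address is a placeholder):

      # Run an iperf3 client against the guest and report throughput.
      import json, subprocess

      out = subprocess.check_output(
          ["iperf3", "-c", "10.0.0.2", "-t", "10", "-J"])
      result = json.loads(out)
      bps = result["end"]["sum_received"]["bits_per_second"]
      print("throughput: %.1f Gb/s" % (bps / 1e9))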
The experiments in 413 this section are for TCP traffic between the guest and the host 414 where the guest resides: there are no NICs involved so that rates 415 are not bound by physical medium limitations. 417 +-----------------------+-------------------+-------------------+ 418 | Technology | Throughput (Gb/s) | Throughput (Gb/s) | 419 | Type | Tx | Rx | 420 |-----------------------+-------------------+-------------------+ 421 | standardvm.xen | 23.1 | 24.5 | 422 | standardvm.kvm | 20.1 | 38.9 | 423 | Container | 45.1 | 43.8 | 424 | tinyx.kvm | 21.5 | 37.9 | 425 | tinyx.xen | 28.6 | 24.9 | 426 | unikernel.osv.kvm | 47.9 | 47.7 | 427 | unikernels.minios.xen | 49.5 | 32.6 | 428 +-----------------------+-------------------+-------------------+ 429 The table above shows the results for Tx and Rx. The first thing to 430 note is that throughput is not only dependent on the guest's 431 efficiency, but also on the host's packet I/O framework (e.g., see 432 [CLICKOS] for an example of how optimizing Xen's packet I/O 433 subsystem can lead to large performance gains). This is evident from 434 the Xen numbers, where Tx has been optimized and Rx not. Having said 435 that, the guest also matters, which is why, for example, Tinyx 436 scores somewhat higher throughput than standard VMs. Containers and 437 unikernels (at least for Tx and for Tx/Rx for KVM) are fairly 438 equally matched and perform best, with unikernels having a slight 439 edge. 441 4.4. RTT 443 To measure round-trip time (RTT) from an external server to the 444 VM/container we carry out a ping flood and report the average RTT. 446 +-----------------------+--------------+ 447 | Technology Type | Time (msecs) | 448 |--------------------------------------+ 449 | standardvm.xen | 34 | 450 | standardvm.kvm | 18 | 451 | Container | 4 | 452 | tinyx.kvm | 19 | 453 | tinyx.xen | 15 | 454 | unikernel.osv.kvm | 9 | 455 | unikernels.minios.xen | 5 | 456 +-----------------------+--------------+ 458 As shown in the table above, the Docker container comes out on top 459 with 4ms, but unikernels achieve for all practical intents and 460 purposes the same RTT (5ms on MiniOS/Xen and 9ms on OSv/KVM). Tinyx 461 fares slightly better than the standard VMs. 463 4.5. Image Size 465 We measure image size using the standard "ls" tool. 467 +-----------------------+------------+ 468 | Technology Type | Size (MBs) | 469 |------------------------------------+ 470 | standardvm.xen | 913 | 471 | standardvm.kvm | 913 | 472 | Container | 61 | 473 | tinyx.kvm | 3.5 | 474 | tinyx.xen | 3.7 | 475 | unikernel.osv.kvm | 12 | 476 | unikernels.minios.xen | 2 | 477 +-----------------------+------------+ 479 The table shows the standard VMs to be unsurprisingly the largest 480 and, followed by the Docker/iperf container. OSv-based unikernels 481 are next with about 12MB, followed by Tinyx (3.5MB or 3.7MB on KVM 482 and Xen respectively). The smallest image is the one based on 483 MiniOS/Xen with 2MB. 485 4.6. Memory Usage 487 For the final experiment we measure memory usage for the various 488 VMs/container. To do so we use standard tools such as "top" and "xl" 489 (Xen's management tool). 
   +-----------------------+--------------+
   | Technology Type       | Time (msecs) |
   +-----------------------+--------------+
   | standardvm.xen        | 34           |
   | standardvm.kvm        | 18           |
   | Container             | 4            |
   | tinyx.kvm             | 19           |
   | tinyx.xen             | 15           |
   | unikernel.osv.kvm     | 9            |
   | unikernel.minios.xen  | 5            |
   +-----------------------+--------------+

   As shown in the table above, the Docker container comes out on top
   with 4ms, but unikernels achieve, for all practical intents and
   purposes, the same RTT (5ms on MiniOS/Xen and 9ms on OSv/KVM).
   Tinyx fares slightly better than the standard VMs.

4.5. Image Size

   We measure image size using the standard "ls" tool.

   +-----------------------+------------+
   | Technology Type       | Size (MBs) |
   +-----------------------+------------+
   | standardvm.xen        | 913        |
   | standardvm.kvm        | 913        |
   | Container             | 61         |
   | tinyx.kvm             | 3.5        |
   | tinyx.xen             | 3.7        |
   | unikernel.osv.kvm     | 12         |
   | unikernel.minios.xen  | 2          |
   +-----------------------+------------+

   The table shows the standard VMs to be, unsurprisingly, the
   largest, followed by the Docker/iperf container. OSv-based
   unikernels are next at about 12MB, followed by Tinyx (3.5MB and
   3.7MB on KVM and Xen, respectively). The smallest image is the one
   based on MiniOS/Xen, at 2MB.

4.6. Memory Usage

   For the final experiment we measure memory usage for the various
   VMs/containers. To do so we use standard tools such as "top" and
   "xl" (Xen's management tool).

   +-----------------------+-------------+
   | Technology Type       | Usage (MBs) |
   +-----------------------+-------------+
   | standardvm.xen        | 112         |
   | standardvm.kvm        | 82          |
   | Container             | 3.8         |
   | tinyx.kvm             | 30          |
   | tinyx.xen             | 31          |
   | unikernel.osv.kvm     | 52          |
   | unikernel.minios.xen  | 8           |
   +-----------------------+-------------+

   The largest memory consumption, as shown in the table above, comes
   from the standard VMs. The OSv-based unikernel comes next, owing to
   the fact that OSv pre-allocates memory for buffers, among other
   things. Tinyx follows with about 30MB, and from there there is a
   large drop to the MiniOS-based unikernel at 8MB. The best result
   comes from the Docker container, which is expected given that it
   relies on the host and its memory allocations to function.

5. Discussion

   In this section we compare and contrast the various lightweight
   virtualization technologies in view of the reported benchmarks.
   There are a number of issues at stake:

   o Service agility/elasticity: this is largely dependent on the
     ability to quickly spin VMs/containers up and down and to migrate
     them. Clearly, the best numbers in this category come from
     unikernels and containers.

   o Memory consumption: containers share the resources of the common
     host they run on, so each container instance uses less memory
     than a VM, as shown in the previous section (although unikernels
     are not far behind). Note that VMs also have a common host (or
     dom0 in the case of Xen), but each incurs the overhead of its own
     guest OS.

   o Security/isolation: an NFV platform needs to provide good
     isolation for its tenants. Generally speaking, VM-based
     technologies have been around longer and so have had time to iron
     out most of the security issues they had. Type-1 hypervisors
     (e.g., Xen) in addition provide a smaller attack surface than
     Type-2 ones (e.g., KVM), so they should in principle be more
     robust. Containers are relative newcomers and as such still have
     a number of open issues [CONTAINER-SECURITY]. Using kernel
     security modules such as SELinux [SELINUX] or AppArmor [APPARMOR]
     along with containers can provide at least some of the features
     required for a secure VNF deployment. Using resource quota
     techniques such as those in Kubernetes
     [KUBERNETES-RESOURCE-QUOTA] can provide at least some of the
     resource guarantees for a VNF deployment (a sketch follows this
     list).

   o Management frameworks: both virtual machines and containers have
     fully-featured management frameworks with large open source
     communities continually improving them. Unikernels might need a
     bit of "glue" to adapt them to an existing framework (e.g.,
     OpenStack).

   o Compatibility with applications: both containers and standard VMs
     can run any application that is able to run on the general-
     purpose OS those VMs/containers are based on (typically Linux).
     Unikernels, on the other hand, use minimalistic OSes, which might
     present a problem. OSv, for example, is able to build a unikernel
     as long as the application can be recompiled as a shared library.
     MiniOS requires that the application be directly compiled with it
     (C/C++ is the default, but MiniOS unikernels based on OCaml,
     Haskell and other languages exist).
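   The following sketch illustrates the resource-quota point above
   using the official Kubernetes Python client; the namespace name and
   the actual limits are arbitrary placeholders:

      # Cap the aggregate resources a tenant's VNF containers may
      # request in one namespace (Kubernetes Python client).
      from kubernetes import client, config

      config.load_kube_config()  # or load_incluster_config()
      quota = client.V1ResourceQuota(
          metadata=client.V1ObjectMeta(name="vnf-quota"),
          spec=client.V1ResourceQuotaSpec(
              hard={"requests.cpu": "4",
                    "requests.memory": "8Gi",
                    "pods": "10"}))
      client.CoreV1Api().create_namespaced_resource_quota(
          namespace="vnf-tenant", body=quota)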
   Overall, the choice between standard virtual machines, tinyfied
   ones, unikernels and containers is often not a black and white one.
   Rather, these technologies present points in a spectrum where
   criteria such as security/isolation, performance, and compatibility
   with existing applications and frameworks may point NFV operators,
   and their clients, towards a particular solution. For instance, an
   operator for whom excellent isolation and multi-tenancy are a must
   might lean towards hypervisor-based solutions. If that operator
   also values ease of application deployment, it will further choose
   guests based on a general-purpose OS (whether tinyfied or not).
   Another operator might put a premium on performance and so might
   prefer unikernels. Yet another might not need multi-tenancy (e.g.,
   a single-tenant operator such as Google, or edge use cases such as
   CPE) and so would lean towards enjoying the benefits of containers.
   Hybrid solutions, where containers are run within VMs, are also
   possible. In short, choosing a virtualization technology for an NFV
   platform is no longer as simple as choosing between VMs and
   containers.

6. Conclusion

   In this draft we presented the challenges in building an NFV
   platform. We further introduced a set of benchmark results to
   quantify to what extent a number of virtualization technologies
   (standard VMs, tinyfied VMs, unikernels and containers) can meet
   those challenges. We conclude that choosing a solution is nuanced,
   and depends on how much value different NFV operators place on
   criteria such as strong isolation, performance and compatibility
   with applications and management frameworks.

7. Future Work

   Areas for future work include, but are not limited to, developing
   solutions to address the VNF challenges described in Section 3 and
   distributed micro-service network functions.

8. IANA Considerations

   This draft does not have any IANA considerations.

9. Security Considerations

   VM-based VNFs can offer a greater degree of isolation and security
   due to technology maturity as well as hardware support. The
   lightweight virtualization technologies discussed here, such as
   unikernels (specialized VMs) and tinyfied VMs, enjoy the security
   benefits of a standard VM. Since container-based VNFs provide
   abstraction at the OS level, they can introduce potential
   vulnerabilities into the system when deployed without proper
   OS-level security features. This is one of the key implementation
   and deployment challenges that needs to be further investigated.

   In addition, as containerization technologies evolve to leverage
   the virtualization capabilities provided by hardware, they may
   provide isolation and security assurances similar to those of VMs.

10. Contributors

11. Acknowledgements

   The authors would like to thank Vineed Konkoth for the Virtual
   Customer CPE Container Performance white paper
   [VCPE-CONTAINER-PERF], and Louise Krug (BT) for their valuable
   comments.

12. References

12.1. Normative References

12.2. Informative References
   [APPARMOR] "AppArmor Mandatory Access Control Framework,"
   https://wiki.debian.org/AppArmor

   [AUFS] "Advanced Multi-layered Unification Filesystem,"
   https://en.wikipedia.org/wiki/Aufs

   [CLICKOS] J. Martins, M. Ahmed, C. Raiciu, V. Olteanu, M. Honda,
   R. Bifulco, and F. Huici, "ClickOS and the Art of Network Function
   Virtualization," USENIX NSDI 2014.

   [CONTAINER-SECURITY] "Container Security article,"
   http://www.itworld.com/article/2920349/security/for-containers-
   security-is-problem-1.html

   [ETSI-NFV-WHITE] "ETSI NFV White Paper,"
   http://portal.etsi.org/NFV/NFV_White_Paper.pdf

   [ETSI-NFV-USE-CASES] "ETSI NFV Use Cases,"
   http://www.etsi.org/deliver/etsi_gs/NFV/001_099/001/01.01.01_60/
   gs_NFV001v010101p.pdf

   [ETSI-NFV-REQ] "ETSI NFV Virtualization Requirements,"
   http://www.etsi.org/deliver/etsi_gs/NFV/001_099/004/01.01.01_60/
   gs_NFV004v010101p.pdf

   [ETSI-NFV-ARCH] "ETSI NFV Architectural Framework,"
   http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.01.01_60/
   gs_NFV002v010101p.pdf

   [ETSI-NFV-TERM] "Terminology for Main Concepts in NFV,"
   http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.01.01_60/
   gs_nfv003v010101p.pdf

   [KUBERNETES-RESOURCE-QUOTA] "Kubernetes Resource Quota,"
   http://kubernetes.io/v1.0/docs/admin/resource-quota.html

   [KUBERNETES-SELF-HEALING] "Kubernetes Design Overview,"
   http://kubernetes.io/v1.0/docs/design/README.html

   [LINUX-TINY] "Linux Kernel Tinification,"
   https://tiny.wiki.kernel.org/

   [MINIOS] "Mini-OS - Xen," http://wiki.xenproject.org/wiki/Mini-OS

   [OSV] "OSv - The Operating System Designed for the Cloud,"
   http://osv.io/

   [PGRANTS] http://lists.xenproject.org/archives/html/xen-devel/
   2015-05/msg01498.html

   [SELINUX] "Security Enhanced Linux (SELinux) project,"
   http://selinuxproject.org/

   [SUPERFLUIDITY] F. Manco, J. Martins, K. Yasukata, J. Mendes,
   S. Kuenzer, and F. Huici, "The Case for the Superfluid Cloud,"
   USENIX HotCloud 2015.

   [VCPE-CONTAINER-PERF] "Virtual Customer CPE Container Performance
   White Paper," http://info.ixiacom.com/rs/098-FRB-840/images/
   Calsoft-Labs-CaseStudy2015.pdf

Authors' Addresses

   Sriram Natarajan
   Google
   natarajan.sriram@gmail.com

   Ram (Ramki) Krishnan
   Dell
   ramki_krishnan@dell.com

   Anoop Ghanwani
   Dell
   anoop@alumni.duke.edu

   Dilip Krishnaswamy
   IBM Research
   dilikris@in.ibm.com

   Peter Willis
   BT
   peter.j.willis@bt.com

   Ashay Chaudhary
   Verizon
   the.ashay@gmail.com

   Felipe Huici
   NEC
   felipe.huici@neclab.eu