NFV Research Group                                            G. Bernini
Internet-Draft                                               V. Maffione
Intended status: Informational                                 Nextworks
Expires: January 4, 2016                                        D. Lopez
                                                     P. Aranda Gutierrez
                                                          Telefonica I+D
                                                            July 3, 2015

     VNF Orchestration For Automated Resiliency in Service Chains
               draft-bernini-nfvrg-vnf-orchestration-00

Abstract

   Network Function Virtualization (NFV) aims at evolving the way
   network operators design, deploy and provision their networks by
   leveraging standard IT virtualization technologies to move and
   consolidate a wide range of network functions and services onto
   industry standard high volume servers, switches and storage.  The
   primary area of impact for operators is the network edge, stimulated
   by recent developments in NFV and SDN.  Operators are looking at
   their future datacenters and Points of Presence (PoPs) as
   increasingly dynamic infrastructures on which to deploy Virtualized
   Network Functions (VNFs) and on-demand chained services with high
   elasticity.

   This document presents an orchestration framework for the automated
   deployment of highly available VNF chains.  Resiliency of VNFs and
   chained services is a key requirement for operators to improve,
   ease, automate and speed up service lifecycle management.  The
   proposed VNF orchestration framework is also positioned with respect
   to current NFV and Service Function Chaining (SFC) architectures and
   solutions.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on January 4, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  VNF Orchestration for Resilient Virtual Appliances
     3.1.  Problem Statement
     3.2.  Orchestration Framework
       3.2.1.  Orchestrator
       3.2.2.  SDN Controller
       3.2.3.  VNF Chain Configurator
       3.2.4.  Edge Configurator
     3.3.  Resiliency Control Functions for Chained VNFs
   4.  Positioning in Existing NFV and SFC Frameworks
     4.1.  Mapping into NFV Architecture
     4.2.  Mapping into SFC Architecture
   5.  IANA Considerations
   6.  Security Considerations
   7.  Acknowledgements
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

   Current telco infrastructures are facing the rapid development of
   the cloud market, which includes a broad range of emerging
   virtualized services and distributed applications.  Network Function
   Virtualization (NFV) is gaining wide interest across operators as a
   way to evolve how networks are operated and provisioned, by
   executing in virtualized environments those network functions and
   services traditionally integrated in hardware devices.

   A Virtualized Network Function (VNF) provides the same function as
   the equivalent physical network function (e.g. firewall, load
   balancer) but is deployed as a software instance running on general
   purpose servers via a virtualization technology.  The main idea is
   therefore to run network functions in either datacenters or
   commodity network nodes that are, in some cases, close to the end
   user premises.  With NFV, network functions that were traditionally
   implemented on specialized hardware devices are moved to general
   purpose servers, running in self-contained virtual machines.  These
   virtualized functions can be deployed in multiple instances or moved
   to various locations in the network, adapting to traffic dynamics
   and customer demands without the overhead cost and management of
   installing new equipment.  Moreover, operators' networks are
   populated with a large and increasing variety of proprietary
   software and hardware tools and appliances.  The deployment of new
   network services in operational environments is often a complex and
   costly procedure, requiring additional space and power to
   accommodate new boxes.
   Moreover, current hardware-based appliances rapidly reach end of
   life, requiring much of the design, integration and deployment cycle
   to be repeated with little revenue benefit.  In this context, the
   transition of network functions and appliances from hardware to
   software solutions by means of NFV promises to address and overcome
   these hindrances for network operators.

   While the above considerations are valid for stand-alone VNFs
   running independently, additional challenges and requirements arise
   for network operators when services offered to customers are built
   by composing multiple VNFs.  In this case, the deployment and
   provisioning of each VNF composing the customer's service needs to
   be coordinated with the other VNFs, applying control functions to
   steer the traffic through them in the proper order (i.e. according
   to the specific service function path).  An orchestration framework
   capable of coordinating the automated deployment, configuration,
   provisioning and chaining of multiple VNFs would ease the management
   of the whole lifecycle of services offered to customers.
   Additionally, when dealing with virtualized functions, resiliency
   and high availability of chained services pose additional
   requirements on a VNF orchestration framework, in terms of detection
   of software failures at various levels (including hypervisors and
   virtual machines), hardware failures, and virtual appliance
   migration.

   This document presents an orchestration framework for the automated
   deployment of highly available VNF chains, and introduces its
   architecture and building blocks.  Resiliency for both stand-alone
   VNFs and chained services is considered in this document as a key
   control function based on VNF pool concepts.  The proposed VNF
   orchestration framework is also positioned with respect to
   approaches and architectures currently defined for Network Function
   Virtualization and Service Function Chaining (SFC).

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [RFC2119].

   The following acronyms are used in this document:

   NFV: Network Function Virtualization.

   SDN: Software Defined Networking.

   VNF: Virtualized Network Function.

   SFC: Service Function Chaining.

   CPE: Customer Premise Equipment.

   VPN: Virtual Private Network.

   EMS: Element Management System.

   PoP: Point of Presence.

   VM: Virtual Machine.

3.  VNF Orchestration for Resilient Virtual Appliances

   The telco market is rapidly moving towards an Everything as a
   Service model, where the virtualization of traditionally in-the-box
   network functions can benefit from Software Defined Networking (SDN)
   tools and technologies.  From an operator perspective, recent
   developments in NFV and SDN are driving a deep evolution in how the
   network edge is architected, operated and provisioned, since that is
   where VNFs and virtual services can be deployed and provisioned
   close to the customers.  Operators aim to evolve their datacenters
   and PoPs into increasingly dynamic infrastructures where VNFs and
   chained services can be deployed with high availability and high
   elasticity, scaling up and down while optimizing performance and
   resource utilization.

   This section introduces the VNF orchestration framework proposed in
   this document for the deployment, provisioning and chaining of
   resilient virtual appliances and services within operators'
   datacenters.

3.1.  Problem Statement

   The orchestration framework proposed in this document aims at
   solving some of the challenges that operators face when, applying
   the base NFV concepts, they replace hardware devices implementing
   well-known network functions with software-based virtual appliances.
   In particular, this VNF orchestration framework targets an
   automated, flexible and elastic provisioning of service chains
   within operators' datacenters.

   When operators need to compose and chain multiple VNFs to provision
   a given service to a customer, they need to operate network and
   computing resources in a coordinated way, and above all to implement
   control mechanisms and procedures to steer the traffic through the
   different VNFs and the customer sites.  As an example, the
   virtualization of the Customer Premises Equipment (CPE) is emerging
   as one of the first applications of the NFV architecture, and is
   currently being commercialized by several software (and hardware)
   vendors.  It has the potential to generate a significant impact on
   operators' businesses.  The term virtual CPE (vCPE) refers to the
   execution in a virtual environment of the network functions
   traditionally integrated in hardware appliances at customer
   premises, such as BGP speakers, firewalls, NAT, etc.  Different
   scenarios and use cases exist for the vCPE.  Currently, the typical
   scenario is the vCPE in the PoP, which moves to software, in the
   operator's first PoP, those network functions normally deployed at
   customer premises (e.g. NAT, firewall, etc.).  The goal is to manage
   the whole LAN environment of the customer while preserving the QoE
   of its services, providing added value services to users not willing
   to get involved with technology issues, and reducing maintenance
   trouble tickets and the need for in-house problem solving.
   In addition, the vCPE can be used in the operator's datacenter to
   implement in software the chained network functions needed to
   provision automated VPN services for customers [VCPE-T2], in order
   to dynamically and automatically extend existing L3 VPNs (e.g.
   connecting remote customer sites) to incorporate new virtual assets
   (like virtual machines) into a private cloud.

   As an additional requirement for the proposed orchestration
   framework, the use of VNFs opens new challenges concerning the
   reliability of the provided virtual services.  When network
   functions are deployed on monolithic hardware platforms, the
   lifecycle of individual services is strictly bound to the
   availability of the physical device, and management tools may detect
   outages and migrate affected services to new instances deployed on
   backup hardware.  When introducing VNFs, individual network
   functions may still fail, but with more risk factors: software
   failures at various levels (including hypervisors and virtual
   machines), hardware failures, and virtual appliance migration.
   Moreover, when considering chains of VNFs, the management and
   control tools used by operators have to apply reliability mechanisms
   at the service level, including transparent migration to backup VNFs
   and synchronization of state information.  In this context, VNF
   pooling mechanisms and concepts are valid and applicable: VNF
   instances are grouped in pools that provide the same function in a
   reliable way.

3.2.  Orchestration Framework

   The VNF orchestration framework proposed in this document aims to
   provide automated functions for the deployment, provisioning and
   composition of resilient VNFs within operators' datacenters.
   Figure 1 presents the high level architecture, including building
   blocks and functional components.
   This VNF orchestration framework is built around two key components:
   the orchestrator and the SDN controller.  The orchestrator includes
   all the functions related to the management, coordination and
   control of VNF instantiation, configuration and composition.  It is
   the component at the highest level of the architecture and
   represents the operator's access point to the VNF orchestration
   framework.  The SDN controller, in turn, provides dynamic traffic
   steering and flexible network provisioning within the datacenter as
   needed by the VNF chains.  The basic controller functions are
   augmented by a set of enhanced network applications deployed on top,
   which may themselves be control and management VNFs for operator use
   (i.e. not related to customer and user functions).

   Therefore, the architecture depicted in Figure 1 is a practical
   demonstration of how SDN and NFV technologies and concepts can be
   integrated to provide substantial benefits to network operators in
   terms of robustness, ease of management, control and provisioning of
   their network infrastructures and services.  SDN and NFV are clearly
   complementary solutions for enabling virtualization of network
   infrastructures, services and functions while supporting dynamic and
   flexible network traffic engineering.  SDN focuses on network
   programmability, traffic steering and multi-tenancy by means of a
   common, open and dynamic abstraction of network resources.  NFV
   targets a progressive migration of network elements, network
   appliances and fixed function boxes into VMs that can be run on
   commodity hardware, enabling the benefits of cloud and datacenters
   to be applied to network functions.
   +----------------------------------------------+
   |                 Orchestrator                 |
   +-----+----------------+----------------+------+
         |                |                |
         V                V                V
   +------------+  +-------------+  +-------------+
   |            |  |             |  |             |
   |  VNF Pool  |  |  VNF Chain  |  |    Edge     |
   |  Manager   |  |Configurator |  |Configurator |
   |            |  |             |  |             |
   +-----+------+  +------+------+  +------+------+
         |                |                |
         V                V                V
   +----------------------------------------------+
   |                SDN Controller                |
   +--------------------+----------------+--------+
                        |                |
   +--------------------|-------------+  |
   | Virtual Compute    |             |  |
   | Infrastructure     V             |  V
   |                 +------------+   |  +--------+
   | +............+  |            |   |  |        |
   | : +--------+ :  |            +------+  Edge  |
   | : |  VNF1  +----+            |   |  | Router |
   | : +--------+ :  |  Virtual   |   |  |        |
   | : +--------+ :  | Switch(es) |   |  +--------+
   | : |  VNF2  +----+            |   |
   | : +--------+ :  |            |   |
   | : +--------+ :  |            |   |
   | : |  VNF3  +----+            |   |
   | : +--------+ :  +------------+   |
   | :  Service   :                   |
   | : Chain "X"  :                   |
   | +............+                   |
   | +......................+         |
   | : Service Chain "Y"    :         |
   | +......................+         |
   +----------------------------------+

        Figure 1: VNF Orchestration Framework Architecture.

3.2.1.  Orchestrator

   The VNF orchestrator implements a set of functions to seamlessly
   control and manage, in a coordinated way, the instantiation and
   deployment of VNFs on one hand, and their composition and chaining
   to steer the traffic through them on the other.
   It is fully controlled and operated by the network operator, and it
   is essentially the highest control and orchestration layer, sitting
   above all the softwarized and virtualized components in the proposed
   architecture.

   The VNF orchestrator therefore gives the operator a consistent way
   to access the system and provision chains of VNFs.  It exposes a set
   of primitives to instantiate and configure VNFs and to compose them
   according to the requirements of the specific service chain.  In
   practice, it aims at enabling efficient and dynamic management of
   the operator's infrastructure resources, with great flexibility, by
   means of a consistent set of APIs.

   To enable this, the orchestrator can be seen as a composition of
   several internal functionalities, each providing a given
   coordination function needed to orchestrate the lower layer control
   and management functions depicted in Figure 1 (i.e. VNF chain
   configuration, VNF pool provisioning, etc.).  In practice, the
   orchestrator needs to include at least an internal component to
   manage the instantiation and configuration of stand-alone VNFs (e.g.
   implemented by a self-contained VM), which might be directly
   interfaced with the physical servers in the datacenter.  A dedicated
   component for programmatic coordination and provisioning of VNF
   chains is also needed to properly orchestrate the traffic steering
   through VNFs belonging to the same service chain.  This component
   should also provide multi-tenant functionalities and maintain
   isolation across VNF chains deployed for different customers.  The
   VNF orchestrator is thus the overall coordinator of the proposed
   framework, and it drives all the lower layer components that
   implement the actual control logic.

3.2.2.  SDN Controller

   The SDN controller provides the logic for network control,
   provisioning and monitoring.
   It is the component where the SDN abstraction happens: it exposes a
   set of primitives to configure the datacenter network according to
   the requirements of the VNF chains to be provisioned, while hiding
   the technology-specific constraints and capabilities of the software
   switches and edge routers underneath.  Deploying an SDN controller
   makes it possible to implement a software-driven VNF orchestration,
   with flexible and programmable network functions for service
   chaining and resilient virtual appliances.

   At its southbound interface, the SDN controller interfaces with the
   software switches running in the servers, the physical switches
   interconnecting them, and the edge routers connecting the datacenter
   with external networks.  Multiple control protocols can be used at
   this southbound interface to actually provision the datacenter
   network and enable traffic steering through VNFs, including
   OpenFlow, OVSDB, NETCONF and others.

   The SDN controller therefore provides the basic network provisioning
   functions needed by the upper layer coordination functions to
   perform service chain and VNF pool-wide actions.  Indeed, the logic
   and state at the service level are maintained and coordinated only
   by the network applications on top of the SDN controller.

3.2.3.  VNF Chain Configurator

   The VNF chain configurator is deployed as a bridging component
   between the orchestrator and the SDN controller, and it is mostly
   dedicated to implementing the VNF chaining and composition logic.
   For each new VNF chain requested by the orchestrator, it computes a
   suitable path to interconnect the involved VNFs (already
   instantiated and identified by the orchestrator) and forwards the
   corresponding network configuration requests to the SDN controller.
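   As an illustration only, the per-chain behaviour just described can
   be sketched as follows.  The class and method names, and the tuple-
   based representation of steering rules, are assumptions made for
   this sketch; this document does not define a concrete API.

```python
# Hypothetical sketch of the VNF chain configurator logic: for each
# chain request coming from the orchestrator, compute the service
# function path through the already-instantiated VNFs and emit one
# network configuration request per hop towards the SDN controller.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ChainRequest:
    chain_id: str
    vnf_instances: List[str]   # ordered VNFs selected by the orchestrator
    tenant: str                # used to keep per-tenant chains isolated


@dataclass
class VNFChainConfigurator:
    provisioned: List[Tuple[str, str, str]] = field(default_factory=list)

    def provision_chain(self, request: ChainRequest):
        """Compute the path as consecutive VNF pairs and forward one
        traffic steering rule per hop to the SDN controller."""
        hops = []
        path = request.vnf_instances
        for src, dst in zip(path, path[1:]):
            rule = (request.tenant, src, dst)
            self._send_to_sdn_controller(rule)
            hops.append(rule)
        return hops

    def _send_to_sdn_controller(self, rule):
        # Placeholder: a real deployment would translate the rule into
        # OpenFlow/OVSDB/NETCONF operations via the controller's API.
        self.provisioned.append(rule)


configurator = VNFChainConfigurator()
hops = configurator.provision_chain(
    ChainRequest(chain_id="X", vnf_instances=["vnf1", "vnf2", "vnf3"],
                 tenant="customer-a"))
# Two hops steer traffic vnf1 -> vnf2 -> vnf3 for tenant "customer-a".
```

   In a real deployment the steering rules would be expressed in the
   southbound protocol spoken by the SDN controller rather than as
   plain tuples.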
   Following the datacenter service chains and related traffic types
   defined in [I-D.ietf-sfc-dc-use-cases], the VNF chain configurator
   should implement its coordination logic to support both north-south
   and east-west chains.  The former refers to traffic terminating
   within the datacenter but coming from a remote datacenter or a user
   through the edge router connecting to an external network.  In this
   case, the VNF chain configurator should also coordinate with the
   edge configurator to properly provision the datacenter edge router.
   Moreover, this north-south case may also cover VNF chains spanning
   multiple datacenters, requiring further inter-datacenter
   coordination between VNF chain configurators and orchestrators;
   these coordination functions are out of the scope of this document.
   East-west chains, on the other hand, refer to VNFs treating network
   traffic that does not exit the datacenter.  In both cases, the VNF
   chain configurator (in combination with the SDN controller) should
   implement a suitable service chain encapsulation solution
   [I-D.ietf-sfc-nsh] to isolate and segregate the traffic of VNF
   chains belonging to different tenants.

   Different deployment models may exist for the VNF chain
   configurator: a dedicated configurator for each chain, or a single
   configurator for all the VNF chains.  In the first approach, the
   orchestrator needs to implement some coordination logic for the
   dynamic instantiation of configurators when new VNF chains are
   provisioned.

3.2.4.  Edge Configurator

   The edge configurator is a network control application deployed on
   top of the SDN controller.  Its main role is to coordinate the
   provisioning and configuration of the edge router for those north-
   south VNF chains exiting the datacenter.
   In particular, it keeps the binding between the traffic steered
   through the VNFs and the related network service outside the
   datacenter terminated at the edge router (e.g. an L3 VPN, VLAN,
   VXLAN, VRF, etc.), possibly taking into account the service chain
   encapsulation implemented within the VNF chain.  The mediation of
   the SDN controller makes it possible to support a variety of control
   and management protocols for the actual configuration of the
   datacenter edge router.

3.3.  Resiliency Control Functions for Chained VNFs

   In the proposed orchestration architecture, the resiliency control
   functions, identified as a key feature for a flexible and dynamic
   provisioning of chained VNF services, are implemented by the VNF
   pool manager depicted in Figure 1.  It is the entity that manages
   and coordinates VNF reliability, providing high availability and
   resiliency features for both stand-alone and chained VNFs.

   The deployment of VNF based services requires a transition of
   resiliency capabilities and mechanisms from physical network devices
   (typically highly available, and often specialized) to entities,
   like self-contained VMs, running VNFs in the context of pools of
   virtualized resources.

   When moving towards a resilient approach to VNF deployment and
   operation, the generic high availability requirements to be matched
   translate into the following:

   Service continuity:  when a hardware failure or capacity limits
      (memory and CPU) occur on platforms hosting VMs (and therefore
      VNFs), it is necessary to migrate VNFs to other VMs and/or
      hardware platforms to guarantee service continuity with minimum
      impact on the users.

   Topological transparency:  the hand-over between live and backup
      VNFs must be implemented in a way that is transparent to the user
      and to the service chain itself.  The backup VNF instances need
      to replicate the necessary information (configuration,
      addressing, etc.) so that the network function is taken over
      without any topological disruption (i.e. at the VNF chain level).

   Load balancing or scaling:  migration of VNF instances may also
      happen for load-balancing purposes (e.g. for CPU or memory
      overload in virtualized platforms) or for scaling of network
      services (with VNFs moved to new hardware platforms).  In both
      cases the working network function is moved to a new VNF instance
      and service continuity must be maintained.

   Auto-scaling of VNF instances:  when a VNF requires increased
      resource allocation to improve overall service performance, the
      network function could be distributed across multiple VMs; to
      guarantee the performance improvement, dedicated pooling
      mechanisms for consistently scaling resources up or down for each
      VNF are needed.

   Multiple VNF resiliency classes:  each type of end-to-end service
      (e.g. web, financial backend, video streaming, etc.) has its own
      specific resiliency requirements for the related VNFs.  While it
      is not easy for operators to achieve service resiliency SLAs
      without building to peak, a basic set of VNF resiliency classes
      can be defined around metrics such as: whether a VNF needs status
      synchronization; the fault detection and restoration time
      objective (e.g. real-time); service availability metrics; service
      quality metrics; and service latency metrics for VNF chain
      components.
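   A minimal sketch of how the resiliency classes mentioned above could
   be modelled is shown below; the class names and the concrete values
   are assumptions made for illustration only, not classes defined by
   this document.

```python
# Hypothetical model of a VNF resiliency class, bundling the metrics
# listed in the text: status synchronization, restoration time
# objective, availability, and chain latency budget.
from dataclasses import dataclass


@dataclass(frozen=True)
class ResiliencyClass:
    name: str
    needs_state_sync: bool       # does the VNF need status synchronization?
    restoration_time_s: float    # fault detection/restoration objective
    availability: float          # e.g. 0.999 for "three nines"
    max_chain_latency_ms: float  # latency budget for VNF chain components


# Example classes for two of the service types named in the text
# (values are illustrative assumptions).
REALTIME_VIDEO = ResiliencyClass("video-streaming", True, 0.5, 0.9999, 50.0)
WEB = ResiliencyClass("web", False, 5.0, 0.999, 200.0)


def stricter(a: ResiliencyClass, b: ResiliencyClass) -> ResiliencyClass:
    """Pick the class with the tighter restoration objective, e.g. when
    a single chain mixes VNFs belonging to different classes."""
    return a if a.restoration_time_s <= b.restoration_time_s else b
```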
   The VNF orchestration presented in this document aims to address the
   above requirements by introducing a VNF pool manager that follows
   the principles of the IETF VNFPOOL architecture
   [I-D.zong-vnfpool-arch], where a pool manager coordinates the
   reliability of stand-alone VNFs by selecting the active instance and
   interacting with the Service Control Entity for consistent end-to-
   end service chain reliability and provisioning.  In the VNF
   orchestration architecture illustrated in Figure 1, the Service
   Control Entity is implemented by the combination of the orchestrator
   (for overall coordination of service chains) and the VNF chain
   configurator (for actual provisioning and coordination of individual
   service chains).

   Different deployment models may exist for the VNF pool manager: a
   dedicated manager for each VNF chain, or a single one for all the
   chains.

   In terms of offered resiliency functionalities, the VNF pool manager
   provides post-configuration functions to instantiate VNFs (as self-
   contained VMs) with the desired degree of reliability and
   redundancy.  This translates into further actions to create and
   configure additional VMs as backups, thereby building a pool for
   each VNF in the chain.

   The VNF pool manager is conceived to offer several types and degrees
   of reliability functions.  First, it provides specific functions for
   the persistence of VNF configurations, including periodic snapshots
   of the VMs running the VNFs.  Moreover, at runtime (i.e. with the
   service chain in place), it monitors the operational status and
   performance of the VMs running the master VNFs, and collects
   notifications about VM status, e.g. by registering as an observer to
   dedicated services offered by the virtualization platform used
   within the virtual compute infrastructure.
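   The observer-style monitoring and the master/backup swap performed
   by the pool manager might be sketched as follows; all names are
   assumptions made for this illustration, and no specific
   virtualization platform API is implied.

```python
# Hypothetical sketch: the VNF pool manager registers for VM status
# notifications from the virtualization platform and, on a failure
# notification for a master VM, promotes a backup from the pool.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class VNFPool:
    function: str                          # e.g. "firewall"
    master: str                            # VM id currently serving traffic
    backups: List[str] = field(default_factory=list)


class VNFPoolManager:
    def __init__(self):
        self.pools: Dict[str, VNFPool] = {}

    def register_observer(self, platform_events: List[Callable]) -> None:
        # Subscribe to VM status notifications from the virtualization
        # platform (modelled here as a plain callback registry).
        platform_events.append(self.on_vm_status)

    def on_vm_status(self, vm_id: str, status: str) -> None:
        if status != "failed":
            return
        for pool in self.pools.values():
            if pool.master == vm_id and pool.backups:
                # Swap: promote the first backup VM to master.
                pool.master = pool.backups.pop(0)


manager = VNFPoolManager()
manager.pools["fw"] = VNFPool("firewall", master="vm-1", backups=["vm-2"])
events: List[Callable] = []
manager.register_observer(events)
for notify in events:
    notify("vm-1", "failed")   # simulated platform notification
# The pool's master has been swapped from "vm-1" to "vm-2".
```

   A hot-recovery variant would additionally keep the backup VMs
   running and synchronized before the swap, as discussed below.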
   Moreover, the VNF pool manager reacts to any failure condition by
   autonomously replacing the master VNF with one of its backups in the
   pool, essentially implementing a swap of VMs for service chain
   recovery purposes.  The VNF pool manager, in coordination with the
   VNF chain configurator, also takes care of implementing these
   resiliency mechanisms at the chain level.  Two options have been
   identified so far: cold recovery and hot recovery.  In the former,
   backup VNFs, properly configured with the same configuration as the
   master, are kept ready (but switched off) to be started when the
   master fails.  In this case the recovery time depends on the
   specific VNF and its type of function; e.g. it may depend on the
   convergence time of a virtual BGP router.  In hot recovery, active
   backup VNFs are kept synchronized with the master ones, and the
   recovery of the service chain (mostly performed at the VNF chain
   configurator) in case of failure is faster than with cold recovery.

4.  Positioning in Existing NFV and SFC Frameworks

4.1.  Mapping into NFV Architecture

   For the presented solution to be integrated in the ETSI NFV
   reference architecture, some modifications need to be applied to it.
   The VNF pools replace the VNFs in the architecture.  They are then
   controlled by the Element Management System (EMS) on the northbound
   side, so the EMS has to be made VNFPOOL aware.  Additional elements
   that need to support the mechanisms proposed by VNFPOOL are the VNF
   managers, which need to implement the resiliency and VNF scaling
   (up-/downscale) functions.  This also has implications for the
   orchestrator, which has to be aware of the augmented functionality
   offered by the VNF manager.  In fact, the orchestrator also provides
   primitives for VNF chaining, matching the Service Control Entity in
   the VNFPOOL architecture.

4.2.  Mapping into SFC Architecture

   [I-D.ietf-sfc-architecture] describes the Service Function Chaining
   (SFC) architecture.  It introduces the concept of a service function
   (SF) and how to chain SFs, but provides only little detail on the
   SFC control plane, which is responsible for the coordination of the
   SFs and their stitching into SFCs.  The combination of the
   orchestrator, VNF chain configurator and VNF pool manager
   functionalities described in this document covers most of the
   functions expected from the SFC control plane.

5.  IANA Considerations

   This document has no IANA considerations.

6.  Security Considerations

   Security issues related to VNF orchestration and resiliency of
   service chains are left for further study.

7.  Acknowledgements

   This work has been partially supported by the European Commission
   through the FP7 ICT Trilogy2 project (Building the Liquid Net, grant
   agreement no: 317756).

   The views expressed here are those of the authors only.  The
   European Commission is not liable for any use that may be made of
   the information in this document.

   The authors would like to thank G. Carrozzo and G. Landi from
   Nextworks for valuable discussions and contributions to the topics
   addressed in this document.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

8.2.  Informative References

   [I-D.ietf-sfc-dc-use-cases]
              Surendra, S., Tufail, M., Majee, S., Captari, C., and S.
              Homma, "Service Function Chaining Use Cases In Data
              Centers", draft-ietf-sfc-dc-use-cases-02 (work in
              progress), January 2015.

   [I-D.ietf-sfc-architecture]
              Halpern, J. and C. Pignataro, "Service Function Chaining
              (SFC) Architecture", draft-ietf-sfc-architecture-09 (work
              in progress), June 2015.

   [I-D.ietf-sfc-nsh]
              Quinn, P. and U. Elzur, "Network Service Header",
              draft-ietf-sfc-nsh-00 (work in progress), March 2015.

   [I-D.zong-vnfpool-problem-statement]
              Zong, N., Dunbar, L., Shore, M., Lopez, D., and G.
              Karagiannis, "Virtualized Network Function (VNF) Pool
              Problem Statement", draft-zong-vnfpool-problem-
              statement-06 (work in progress), July 2014.

   [VCPE-T2]  Bernini, G., Carrozzo, G., Aranda Gutierrez, P., and D.
              R. Lopez, "Virtualizing the Network Edge: Virtual CPE for
              the datacenter and the PoP", European Conference on
              Networks and Communications, June 2014.

Authors' Addresses

   Giacomo Bernini
   Nextworks
   via Livornese 1027
   San Piero a Grado, Pisa  56122
   Italy

   Phone: +39 050 3871600
   Email: g.bernini@nextworks.it

   Vincenzo Maffione
   Nextworks
   via Livornese 1027
   San Piero a Grado, Pisa  56122
   Italy

   Phone: +39 050 3871600
   Email: v.maffione@nextworks.it

   Diego R. Lopez
   Telefonica I+D
   Calle Zubaran, 12
   Madrid  28010
   Spain

   Email: diego.r.lopez@telefonica.com

   Pedro Andres Aranda Gutierrez
   Telefonica I+D
   Calle Zubaran, 12
   Madrid  28010
   Spain

   Email: pedroa.aranda@telefonica.com