NFV Research Group                                            G. Bernini
Internet-Draft                                               V. Maffione
Intended status: Informational                                 Nextworks
Expires: April 13, 2016                                         D. Lopez
                                                    P. Aranda Gutierrez
                                                         Telefonica I+D
                                                        October 11, 2015

   VNF Pool Orchestration For Automated Resiliency in Service Chains
                draft-bernini-nfvrg-vnf-orchestration-01

Abstract

   Network Function Virtualisation (NFV) aims at evolving the way network operators design, deploy and provision their networks by leveraging standard IT virtualisation technologies to move and consolidate a wide range of network functions and services onto industry standard high volume servers, switches and storage.  The primary area of impact for operators is the network edge, stimulated by the recent advances in NFV and SDN.  In fact, operators are looking at their future datacentres and Points of Presence (PoPs) as increasingly dynamic infrastructures in which to deploy Virtualised Network Functions (VNFs) and on-demand chained services with high elasticity.

   This document presents an orchestration framework for the automated deployment of highly available VNF chains.  Resiliency of VNFs and chained services is a key requirement for operators to improve, ease, automate and speed up service lifecycle management.  The proposed VNF orchestration framework is also positioned with respect to current NFV and Service Function Chaining (SFC) architectures and solutions.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF).  Note that other groups may also distribute working documents as Internet-Drafts.
   The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time.  It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 13, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document.  Please review these documents carefully, as they describe your rights and restrictions with respect to this document.  Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  VNF Pool Orchestration for Resilient Virtual Appliances
     3.1.  Problem Statement
     3.2.  Orchestration Framework
       3.2.1.  Orchestrator
       3.2.2.  SDN Controller
       3.2.3.  VNF Chain Configurator
       3.2.4.  Edge Configurator
     3.3.  Resiliency Control Functions for Chained VNFs
   4.  Positioning in Existing NFV and SFC Frameworks
     4.1.  Mapping into NFV Architecture
     4.2.  Mapping into SFC Architecture
   5.  IANA Considerations
   6.  Security Considerations
   7.  Acknowledgements
   8.  References
     8.1.  Normative References
     8.2.  Informative References
   Authors' Addresses

1.  Introduction

   Current telco infrastructures are facing the rapid development of the cloud market, which includes a broad range of emerging virtualised services and distributed applications.  Network Function Virtualisation (NFV) is gaining wide interest across operators as a means to evolve the way networks are operated and provisioned, with network functions and services traditionally integrated in dedicated hardware devices now executed in virtualised environments.

   A Virtualised Network Function (VNF) provides the same function as its non-virtualised equivalent (e.g. firewall, load balancer) but is deployed as a software instance running on general purpose servers using virtualisation technologies.  The main idea, therefore, is to run network functions in datacentres or commodity network nodes that are, in some cases, close to the end user premises.  With NFV, network functions are moved from specialised hardware devices to self-contained virtual machines running on general purpose servers.
   These virtualised functions can be deployed in multiple instances or moved to various locations in the network, adapting themselves to traffic dynamicity and customer demands without the overhead cost and management burden of installing new equipment.

   Operator networks are populated with a large and increasing variety of proprietary software and hardware tools and appliances.  The deployment of new network services in operational environments is often a complex and costly procedure, where additional physical space and power are required to accommodate new boxes.  Additionally, current hardware-based appliances rapidly reach end of life.  This requires that much of the design, integration and deployment cycle be repeated with little revenue benefit.  In this context, the transition of network functions and appliances from hardware to software solutions by means of NFV promises to address and overcome these hindrances for network operators.

   The considerations above are valid for stand-alone VNFs running independently.  However, additional challenges and requirements arise for network operators when services offered to customers are built by the composition of multiple VNFs.  In this case, the deployment and provisioning of each (virtual) service component for the customer needs to be coordinated with the other VNFs, applying control functions to steer the traffic through them in a predefined order (i.e. according to the specific service function path).  An orchestration framework capable of coordinating the automated deployment, configuration, provisioning and chaining of multiple VNFs would ease the management of the whole lifecycle of services offered to customers.  Additionally, when dealing with virtualised functions, resiliency and high availability of chained services pose additional requirements for a VNF orchestration framework, in terms of detection of software failure at various levels (including hypervisors and virtual machines), hardware failure, and virtual appliance migration.

   This document presents an orchestration framework for the automated deployment of highly available VNF chains, and introduces its architecture and building blocks.  Resiliency for both stand-alone VNFs and chained services is considered in this document as a key control function based on VNF pool concepts.  The proposed VNF pool orchestration framework is also positioned with respect to approaches and architectures currently defined for Network Function Virtualisation and Service Function Chaining (SFC).

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

   The following acronyms are used in this document:

   NFV:  Network Function Virtualisation.

   SDN:  Software Defined Networking.

   VNF:  Virtualised Network Function.

   SFC:  Service Function Chaining.

   CPE:  Customer Premise Equipment.

   VPN:  Virtual Private Network.

   EMS:  Element Management System.

   PoP:  Point of Presence.

   VM:  Virtual Machine.

3.  VNF Pool Orchestration for Resilient Virtual Appliances
   The telco market is rapidly moving towards an "Everything as a Service" model, where the virtualisation of traditionally in-the-box network functions can benefit from Software Defined Networking (SDN) tools and technologies.  As noted above, the recent developments and proposed solutions in NFV and SDN are driving, from an operator perspective, a deep evolution in how the network edge is architected, operated and provisioned, since that is the place where VNFs and virtual services can be deployed and provisioned close to the customers.  Operators aim to evolve their datacentres and PoPs into increasingly dynamic infrastructures where VNFs and chained services can be deployed with high availability and the elasticity to scale up and down, while optimizing performance and resource utilization.

   This section introduces the VNF pool orchestration framework proposed in this document for the deployment, provisioning and chaining of resilient virtual appliances and services within operator datacentres.

3.1.  Problem Statement

   The orchestration framework proposed in this document aims at solving some of the challenges that operators face when, trying to apply the base NFV concepts, they replace hardware devices implementing well-known network functions with software-based virtual appliances.  In particular, this VNF orchestration framework targets an automated, flexible and elastic provisioning of service chains within operators' datacentres.

   When operators need to compose and chain multiple VNFs to provision a given service to the customer, they need to operate network and computing resources in a coordinated way, and above all to implement control mechanisms and procedures to steer the traffic through the different VNFs and the customer sites.  As an example, the virtualisation of the Customer Premises Equipment (CPE) is emerging as one of the first applications of the Network Functions Virtualisation (NFV) architecture, and is currently being commercialized by several software (and hardware) vendors.  It has the potential to generate a significant impact on operators' businesses.  The term virtual CPE (vCPE) refers to the execution in a virtual environment of the network functions that are traditionally integrated in hardware appliances at the customer premises, such as BGP speakers, firewalls, NAT, etc.

   Different scenarios and use cases exist for the vCPE.  Currently, the typical scenario is the vCPE in the PoP, where network functions normally deployed at the customer premises (e.g. NAT, firewall, etc.) are implemented in software and shifted to the first PoP of the operator.  The goal is to manage the whole LAN environment of the customer, while preserving the QoE of its services, providing added-value services to users not willing to get involved with technology issues, and reducing maintenance trouble tickets and the need for in-house problem solving.  In addition, the vCPE can be used in the operator's datacenter to implement in software the chained network functions needed to provision automated VPN services for customers [VCPE-T2], in order to dynamically and automatically extend existing L3 VPNs (e.g. connecting remote customer sites) to incorporate new virtual assets (like virtual machines) into a private cloud.
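   As an informal illustration of the kind of request implied by this scenario, the sketch below shows how a chained vCPE service for a VPN customer might be described to an orchestration system.  The structure, field names and values are hypothetical and serve only to make the chaining concept concrete; they do not define an interface of the framework described in the following sections.

   # Hypothetical descriptor of a vCPE-style service chain for a VPN
   # customer; the structure and field names are illustrative only.
   vcpe_chain_request = {
       "tenant": "customer-42",
       "chain_name": "vcpe-vpn-extension",
       # Ordered list of VNFs that the customer traffic must traverse.
       "vnfs": [
           {"type": "nat",      "flavor": "small"},
           {"type": "firewall", "flavor": "small"},
           {"type": "bgp",      "flavor": "medium"},
       ],
       # Network service terminated at the datacenter edge router.
       "edge_service": {"kind": "l3vpn", "vrf": "cust-42-vrf"},
       # Desired resiliency for every VNF backing the chain.
       "resiliency": {"backups_per_vnf": 1, "recovery": "cold"},
   }

   Such a descriptor captures the three elements that recur throughout this document: the ordered list of VNFs, the edge service the chain attaches to, and the desired resiliency.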
   The use of VNFs also opens new challenges concerning the reliability of the provided virtual services, which translate into additional requirements for the proposed orchestration framework.  When network functions are deployed on monolithic hardware platforms, the lifecycle of individual services is strictly bound to the availability of the physical device, and management tools may detect outages and migrate affected services to new instances deployed on backup hardware.  When introducing VNFs, individual network functions may still fail, but with more risk factors, such as software failure at various levels (including hypervisors and virtual machines), hardware failure, and virtual appliance migration.  Moreover, when considering chains of VNFs, the management and control tools used by the operators have to consider and apply reliability mechanisms at the service level, including transparent migration to backup VNFs and synchronization of state information.  In this context, VNF pooling mechanisms and concepts are valid and applicable: VNF instances providing the same function are grouped into pools so that the function can be delivered in a reliable way.

3.2.  Orchestration Framework

   The VNF pool orchestration framework proposed in this document aims to provide automated functions for the deployment, provisioning and composition of resilient VNFs within operators' datacentres.  Figure 1 presents the high level architecture, including building blocks and functional components.

   This VNF pool orchestration framework is built around two key components: the orchestrator and the SDN controller.  The orchestrator includes all the functions related to the management, coordination and control of VNF instantiation, configuration and composition.  It is the component at the highest level of the architecture and represents the access point to the VNF pool orchestration framework for the operator.  On the other hand, the SDN controller provides dynamic traffic steering and flexible network provisioning within the datacenter as needed by the VNF chains.  The basic controller functions are augmented by a set of enhanced network applications deployed on top of it, which might themselves be control and management VNFs for operator use (i.e. not related to customer and user functions).

   Therefore, the architecture depicted in Figure 1 is a practical demonstration of how SDN and NFV technologies and concepts can be integrated to provide substantial benefits to network operators in terms of robustness, ease of management, control and provisioning of their network infrastructures and services.  SDN and NFV are clearly complementary solutions for enabling virtualisation of network infrastructures, services and functions while supporting dynamic and flexible network traffic engineering.  SDN focuses on network programmability, traffic steering and multi-tenancy by means of a common, open, and dynamic abstraction of network resources.
   +----------------------------------------------+
   |                 Orchestrator                 |
   +------+----------------+----------------+-----+
          |                |                |
          V                V                V
   +-------------+  +-------------+  +-------------+
   |  VNF Pool   |  |  VNF Chain  |  |    Edge     |
   |   Manager   |  | Configurator|  | Configurator|
   +------+------+  +------+------+  +------+------+
          |                |                |
          V                V                V
   +----------------------------------------------+
   |                SDN Controller                |
   +----------+----------------------+------------+
              |                      |
              V                      V
   +-------------------------+   +------------+
   | Virtual Compute         |   |    Edge    |
   | Infrastructure          +---+   Router   |
   |                         |   +------------+
   |  VNF1   VNF2   VNF3     |
   |  (grouped into Service  |
   |   Chains "X" and "Y")   |
   |                         |
   |  Virtual Switch(es)     |
   +-------------------------+

      Figure 1: VNF Pool Orchestration Framework Architecture.

   NFV targets a progressive migration of network elements, network appliances and fixed function boxes into VMs that can be run on commodity hardware, enabling the benefits of cloud and datacentres to be applied to network functions.

3.2.1.  Orchestrator

   The VNF orchestrator implements a set of functions to seamlessly control and manage, in a coordinated way, the instantiation and deployment of VNFs on one hand, and their composition and chaining to steer the traffic through them on the other.  It is fully controlled and operated by the network operator, and is basically the highest control and orchestration layer, sitting above all the softwarized and virtualised components in the proposed architecture.

   The VNF orchestrator therefore provides the operator with a consistent way to access the system and provision chains of VNFs.  It exposes a set of primitives to instantiate and configure VNFs and to compose them according to the specific service chain requirements.  In practice, it aims at enabling efficient, dynamic and flexible management of the operator's infrastructure resources by means of a consistent set of APIs.

   To enable this, the VNF orchestrator can be seen as a composition of several internal functionalities, each providing a given coordination function needed to orchestrate the lower layer control and management functions depicted in Figure 1 (i.e. VNF chain configuration, VNF pool provisioning, etc.).  In practice, the VNF orchestrator needs to include at least an internal component to manage the instantiation and configuration of stand-alone VNFs (e.g. implemented by self-contained VMs) that might be directly interfaced with the physical servers in the datacenter.
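   As an illustration of the northbound primitives mentioned above, the following sketch outlines a possible orchestrator interface.  The class, the method names and the parameters are assumptions made purely for this example; they are not defined by this document nor by any existing orchestrator implementation.

   from dataclasses import dataclass
   from typing import Dict, List


   @dataclass
   class VnfHandle:
       """Opaque reference to an instantiated VNF (illustrative only)."""
       vnf_id: str
       vnf_type: str


   class VnfOrchestrator:
       """Hypothetical northbound API of the VNF orchestrator."""

       def __init__(self) -> None:
           self._vnfs: Dict[str, VnfHandle] = {}
           self._chains: Dict[str, List[str]] = {}

       def instantiate_vnf(self, vnf_type: str, flavor: str) -> VnfHandle:
           # Would ask the virtual compute infrastructure to boot a
           # self-contained VM hosting the requested network function.
           vnf = VnfHandle(vnf_id="%s-%d" % (vnf_type, len(self._vnfs)),
                           vnf_type=vnf_type)
           self._vnfs[vnf.vnf_id] = vnf
           return vnf

       def configure_vnf(self, vnf: VnfHandle, config: Dict) -> None:
           # Would push function-specific configuration to the VNF
           # (e.g. firewall rules, BGP peers); omitted in this sketch.
           pass

       def create_chain(self, name: str, vnfs: List[VnfHandle]) -> str:
           # Would delegate traffic steering to the VNF chain configurator
           # and, through it, to the SDN controller.
           self._chains[name] = [v.vnf_id for v in vnfs]
           return name

   A client could, for instance, instantiate the VNFs of the vCPE chain sketched in Section 3.1 and then pass the resulting handles to create_chain().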
   In addition, a dedicated component for the programmatic coordination and provisioning of VNF chains is needed to properly orchestrate the traffic steering through VNFs belonging to the same service chain.  This component should also provide multi-tenant functionalities and maintain isolation across VNF chains deployed for different customers.  It is then clear that the VNF orchestrator is the overall coordinator of the proposed framework, and that it drives all the lower layer components that implement the actual control logic.

3.2.2.  SDN Controller

   The SDN controller provides the logic for network control, provisioning and monitoring.  It is the component where the SDN abstraction happens.  This means it exposes a set of primitives to configure the datacenter network according to the requirements of the VNF chains to be provisioned, while hiding the specific technology constraints and capabilities of the software switches and edge routers underneath.  The deployment of an SDN controller allows the implementation of software-driven VNF orchestration, with flexible and programmable network functions for service chaining and resilient virtual appliances.

   At its southbound interface, the SDN controller interfaces with the software switches running in servers, the physical switches interconnecting them and the edge routers connecting the datacenter with external networks.  Multiple control protocols can be used at this southbound interface to actually provision the datacenter network and enable traffic steering through VNFs, including OpenFlow, OVSDB, NETCONF and others.

   The SDN controller therefore provides the basic network provisioning functions needed by the upper layer coordination functions to perform service chain and VNF pool-wide actions.  Indeed, the logic and the state at the service level are maintained and coordinated only by the network applications on top of the SDN controller.

3.2.3.  VNF Chain Configurator

   The VNF chain configurator is deployed as a bridging component between the orchestrator and the SDN controller, and it is mostly dedicated to the implementation of the VNF chaining and composition logic.  For each new VNF chain requested by the orchestrator, it computes a suitable path to interconnect the involved VNFs (already instantiated and identified by the orchestrator) and forwards the corresponding network configuration request to the SDN controller.

   Following the datacenter service chains and related traffic types defined in [I-D.ietf-sfc-dc-use-cases], the VNF chain configurator should implement its coordination logic to support both north-south and east-west chains.  The former refer to network traffic that is processed within the datacenter but comes from a remote datacenter or a user, entering through the edge router connected to an external network.  In this case, the VNF chain configurator should also coordinate with the edge configurator to properly provision the datacenter edge router.  Moreover, this north-south case may also cover VNF chains spanning multiple datacentres, thus requiring further inter-datacenter coordination between VNF chain configurators and orchestrators; these coordination functions are out of the scope of this document.  The east-west chains, on the other hand, refer to VNFs treating network traffic that does not exit the datacenter.
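   To make the role of the VNF chain configurator more concrete, the following sketch shows one possible way of turning an ordered list of VNF attachment points into per-hop traffic steering rules to be handed to the SDN controller.  The rule format, the attachment point representation and the controller interface are assumptions made for this example and do not correspond to any specific controller API.

   from dataclasses import dataclass
   from typing import List


   @dataclass
   class VnfPort:
       """Switch attachment point of a VNF instance (illustrative)."""
       switch_id: str
       port: int


   def build_steering_rules(chain_id: int, path: List[VnfPort],
                            ingress: VnfPort, egress: VnfPort) -> List[dict]:
       """Produce per-hop forwarding rules that steer a tenant's traffic
       through the VNFs of a chain, in order."""
       hops = [ingress] + path + [egress]
       rules = []
       for current, nxt in zip(hops, hops[1:]):
           rules.append({
               "switch": current.switch_id,
               # A real deployment would match on a service chain
               # encapsulation (e.g. an NSH service path identifier)
               # or on a tenant tag rather than on a bare chain_id.
               "match": {"chain_id": chain_id, "in_port": current.port},
               "action": {"output_switch": nxt.switch_id,
                          "output_port": nxt.port},
           })
       return rules


   # Example: steer chain 7 through two VNFs between ingress and egress.
   rules = build_steering_rules(
       chain_id=7,
       path=[VnfPort("vswitch-1", 3), VnfPort("vswitch-2", 5)],
       ingress=VnfPort("vswitch-1", 1),
       egress=VnfPort("vswitch-2", 9),
   )

   Each generated rule would then be pushed through the SDN controller's northbound interface and provisioned on the target switch with whatever southbound protocol (e.g. OpenFlow or OVSDB) is appropriate.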
   For both cases, the VNF chain configurator (in combination with the SDN controller) should implement and support proper service chain encapsulation solutions [I-D.ietf-sfc-nsh] to isolate and segregate the traffic related to VNF chains belonging to different tenants.

   Different deployment models may exist for the VNF chain configurator: a dedicated configurator for each chain, or a single configurator for all the VNF chains.  In the first approach, the orchestrator needs to implement some coordination logic related to the dynamic instantiation of configurators when new VNF chains are provisioned.

3.2.4.  Edge Configurator

   The edge configurator is a network control application deployed on top of the SDN controller.  Its main role is to coordinate the provisioning and configuration of the edge router for those north-south VNF chains exiting the datacenter.  In particular, it keeps the binding between the traffic steered through the VNFs and the related network service outside the datacenter terminated at the edge router (e.g. an L3 VPN, VLAN, VXLAN, VRF, etc.), possibly considering the service chain encapsulation implemented within the VNF chain.  The mediation of the SDN controller makes it possible to support a variety of control and management protocols for the actual configuration of the datacenter edge router.

3.3.  Resiliency Control Functions for Chained VNFs

   In the proposed orchestration architecture, the resiliency control functions that have been identified as a key feature for a flexible and dynamic provisioning of chained VNF services are implemented by the VNF pool manager depicted in Figure 1.  It is the entity that manages and coordinates VNF reliability, providing high availability and resiliency features at both the stand-alone and the chained VNF level.

   The deployment of VNF based services requires moving the resiliency capabilities and mechanisms from physical network devices (which are typically highly available and often specialized) to entities (like self-contained VMs) running VNFs in the context of pools of virtualised resources.  When moving towards a resilient approach for VNF deployment and operation, the generic high availability requirements to be matched translate into the following:

   Service continuity:  when a hardware failure or capacity limits (memory and CPU) occur on platforms hosting VMs (and therefore VNFs), it is necessary to migrate VNFs to other VMs and/or hardware platforms to guarantee service continuity with minimum impact on the users.

   Topological transparency:  the hand-over between live and backup VNFs must be implemented in a way that is transparent to the user and also to the service chain itself.  The backup VNF instances need to replicate the necessary information (configuration, addressing, etc.) so that the network function is taken over without any topological disruption (i.e. at the VNF chain level).

   Load balancing or scaling:  migration of VNF instances may also happen for load-balancing purposes (e.g. CPU or memory overload in virtualised platforms) or for scaling of network services (with VNFs moved to new hardware platforms).  In both cases the working network function is moved to a new VNF instance and service continuity must be maintained.

   Auto-scaling of VNF instances:  when a VNF requires increased resource allocation to improve overall service performance, the network function could be distributed across multiple VMs; to guarantee the performance improvement, dedicated pooling mechanisms are needed to scale resources up or down for each VNF in a consistent way.

   Multiple VNF resiliency classes:  each type of end-to-end service (e.g. web, financial backend, video streaming, etc.) has its own specific resiliency requirements for the related VNFs.  While it is not easy for operators to achieve service resiliency SLAs without building to peak, a basic set of VNF resiliency classes can be defined to identify some metrics, such as: whether a VNF needs status synchronization; the fault detection and restoration time objective (e.g. real-time); service availability metrics; service quality metrics; and service latency metrics for VNF chain components.
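   As an illustration of how such resiliency classes might be captured in a machine-readable form, the following sketch defines a small set of hypothetical class descriptors.  The class names, metrics and values are assumptions made for the example only; they are not specified by this document.

   from dataclasses import dataclass
   from enum import Enum


   class Recovery(Enum):
       COLD = "cold"   # backups pre-configured but switched off
       HOT = "hot"     # backups active and state-synchronized


   @dataclass(frozen=True)
   class ResiliencyClass:
       """Hypothetical VNF resiliency class descriptor."""
       name: str
       needs_state_sync: bool
       restoration_time_objective_s: float  # target recovery time
       availability_target: float           # e.g. 0.999 = "three nines"
       recovery: Recovery


   # Purely illustrative example classes.
   RESILIENCY_CLASSES = {
       "best-effort": ResiliencyClass("best-effort", False, 60.0, 0.99,
                                      Recovery.COLD),
       "business":    ResiliencyClass("business", True, 5.0, 0.999,
                                      Recovery.HOT),
       "real-time":   ResiliencyClass("real-time", True, 0.5, 0.9999,
                                      Recovery.HOT),
   }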
   The aim of the VNF pool orchestration presented in this document is to address the above requirements by introducing the VNF pool manager, which follows the principles of the IETF VNFPOOL architecture [I-D.zong-vnfpool-arch], where a pool manager coordinates the reliability of stand-alone VNFs by selecting the active instance and interacting with the Service Control Entity for consistent end-to-end service chain reliability and provisioning.  In the VNF pool orchestration architecture illustrated in Figure 1, the Service Control Entity is implemented by the combination of the orchestrator (for overall coordination of service chains) and the VNF chain configurator (for actual provisioning and coordination of individual service chains).

   Different deployment models may exist for the VNF pool manager: a dedicated manager for each VNF chain, or a single one for all the chains.

   In terms of offered resiliency functionalities, the VNF pool manager provides post-configuration functions to instantiate VNFs (as self-contained VMs) with the desired degree of reliability and redundancy.  This translates into further actions to create and configure additional VMs as backups, therefore building a pool for each VNF in the chain.

   The VNF pool manager is conceived to offer several types and degrees of reliability functions.  First, it provides specific functions for the persistence of VNF configuration, including making periodic snapshots of the VMs running the VNF.  Moreover, at runtime (i.e. with the service chain in place), it monitors the operational status and performance of the VMs running the master VNFs, and collects notifications about VM status, e.g. by registering as an observer of dedicated services offered by the virtualisation platform used within the virtual compute infrastructure.  In addition, the VNF pool manager reacts to any failure condition by autonomously replacing the master VNF with one of its backups in the pool, basically implementing a swap of VMs for service chain recovery purposes.  Thus, the VNF pool manager, in coordination with the VNF chain configurator, also takes care of implementing these resiliency mechanisms at the chain level.  Two options have been identified so far: cold recovery and hot recovery.  In the former, backup VNFs, properly configured with the same configuration as the master, are kept ready (but switched off) to be started when the master fails.  In this case the recovery time depends on the specific VNF and its type of function, e.g. it may depend on the convergence time for a virtual BGP router.  In hot recovery, active backup VNFs are kept synchronized with the master ones, and the recovery of the service chain in case of failure (mostly performed at the VNF chain configurator) is faster than with cold recovery.
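   The following sketch illustrates the kind of monitor-and-swap logic the VNF pool manager could implement for cold recovery.  The health check, the VM start function and the chain update callback are placeholders standing in for interfaces towards the virtualisation platform and the VNF chain configurator; they are assumptions made for the example and are not APIs defined by this document.

   import time
   from dataclasses import dataclass, field
   from typing import Callable, List


   @dataclass
   class VnfPool:
       """Pool of VM instances providing the same network function."""
       function: str
       master: str                      # VM identifier of the active VNF
       backups: List[str] = field(default_factory=list)


   def is_alive(vm_id: str) -> bool:
       """Placeholder health check; a real pool manager would rather
       subscribe to notifications from the virtualisation platform."""
       return True


   def cold_recover(pool: VnfPool, start_vm: Callable[[str], None],
                    update_chain: Callable[[str, str, str], None]) -> None:
       """Swap a failed master with a pre-configured (switched off) backup."""
       if is_alive(pool.master) or not pool.backups:
           return
       new_master = pool.backups.pop(0)
       start_vm(new_master)                                  # boot the backup VM
       update_chain(pool.function, pool.master, new_master)  # re-steer traffic
       pool.master = new_master


   def monitor(pools: List[VnfPool], start_vm, update_chain,
               period_s: float = 5.0, rounds: int = 1) -> None:
       """Small polling loop standing in for event-driven monitoring."""
       for _ in range(rounds):
           for pool in pools:
               cold_recover(pool, start_vm, update_chain)
           time.sleep(period_s)

   An event-driven implementation would typically subscribe to notifications from the virtualisation platform instead of polling, and a hot recovery variant would keep the backup VMs running and state-synchronized so that only the traffic steering needs to be updated.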
4.  Positioning in Existing NFV and SFC Frameworks

4.1.  Mapping into NFV Architecture

   For the presented solution to be integrated in the ETSI NFV reference architecture, some modifications need to be applied to it.  The VNF pools replace the VNFs in the architecture.  They are then controlled by the Element Management System (EMS) on the northbound interface, so the EMS has to be made VNFPOOL-aware.  Additional elements that need to support the mechanisms proposed by VNFPOOL are the VNF managers, which need to implement the resiliency and VNF scaling (up-/downscale) functions.  This also has implications for the orchestrator, which has to be aware of the augmented functionality offered by the VNF Manager.  In fact, the orchestrator also provides primitives for VNF chaining, matching the Service Control Entity in the VNFPOOL architecture.

4.2.  Mapping into SFC Architecture

   [I-D.ietf-sfc-architecture] describes the Service Function Chaining (SFC) architecture.  It introduces the concept of a service function (SF) and how to chain SFs, and provides only little detail on the SFC control plane, which is responsible for the coordination of the SFs and their stitching into SFCs.  The combination of the orchestrator, VNF chain configurator and VNF pool manager functionalities described in this document covers most of the functions expected from the SFC control plane.

5.  IANA Considerations

   This document has no IANA considerations.

6.  Security Considerations

   Security issues related to VNF pool orchestration and resiliency of service chains are left for further study.

7.  Acknowledgements

   This work has been partially supported by the European Commission through the FP7 ICT Trilogy2 project (Building the Liquid Net, grant agreement no. 317756).  The views expressed here are those of the authors only.  The European Commission is not liable for any use that may be made of the information in this document.

   The authors would like to thank G. Carrozzo and G. Landi from Nextworks for valuable discussions and contributions to the topics addressed in this document.

8.  References

8.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997, <https://www.rfc-editor.org/info/rfc2119>.

8.2.  Informative References

   [I-D.ietf-sfc-dc-use-cases]
              Surendra, S., Tufail, M., Majee, S., Captari, C., and S. Homma, "Service Function Chaining Use Cases In Data Centers", draft-ietf-sfc-dc-use-cases-03 (work in progress), July 2015.

   [I-D.ietf-sfc-architecture]
              Halpern, J. and C. Pignataro, "Service Function Chaining (SFC) Architecture", draft-ietf-sfc-architecture-11 (work in progress), July 2015.

   [I-D.ietf-sfc-nsh]
              Quinn, P. and U. Elzur, "Network Service Header", draft-ietf-sfc-nsh-01 (work in progress), July 2015.

   [I-D.zong-vnfpool-problem-statement]
              Zong, N., Dunbar, L., Shore, M., Lopez, D., and G. Karagiannis, "Virtualized Network Function (VNF) Pool Problem Statement", draft-zong-vnfpool-problem-statement-06 (work in progress), July 2014.
   [VCPE-T2]  Bernini, G., Carrozzo, G., Aranda Gutierrez, P., and D. R. Lopez, "Virtualising the Network Edge: Virtual CPE for the datacenter and the PoP", European Conference on Networks and Communications, June 2014.

Authors' Addresses

   Giacomo Bernini
   Nextworks
   Via Livornese 1027
   San Piero a Grado, Pisa 56122
   Italy

   Phone: +39 050 3871600
   Email: g.bernini@nextworks.it

   Vincenzo Maffione
   Nextworks
   Via Livornese 1027
   San Piero a Grado, Pisa 56122
   Italy

   Phone: +39 050 3871600
   Email: v.maffione@nextworks.it

   Diego R. Lopez
   Telefonica I+D
   C. Zurbaran, 12
   Madrid 28010
   Spain

   Email: diego.r.lopez@telefonica.com

   Pedro Andres Aranda Gutierrez
   Telefonica I+D
   C. Zurbaran, 12
   Madrid 28010
   Spain

   Email: pedroa.aranda@telefonica.com