NFVRG                                                          R. Szabo
Internet-Draft                                                A. Csaszar
Intended status: Informational                                  Ericsson
Expires: September 10, 2015                              K. Pentikousis
                                                                   EICT
                                                                M. Kind
                                                    Deutsche Telekom AG
                                                               D. Daino
                                                         Telecom Italia
                                                               Z. Qiang
                                                               Ericsson
                                                             H. Woesner
                                                                  BISDN
                                                          March 9, 2015

 Unifying Carrier and Cloud Networks: Problem Statement and Challenges
                    draft-unify-nfvrg-challenges-01

Abstract

   The introduction of network and service functionality virtualization
   in carrier-grade networks promises improved operations in terms of
   flexibility, efficiency, and manageability.  In current practice,
   virtualization is controlled through orchestrator entities that
   expose programmable interfaces according to the underlying resource
   types.  Typically this means the adoption of established data center
   compute/storage APIs on the one hand and, on the other, network
   control APIs, which were originally developed in isolation.
   Arguably, the possibility for innovation highly depends on the
   capabilities and openness of the aforementioned interfaces.  This
   document introduces in simple terms the problems arising when one
   follows this approach and motivates the need for a high level of
   programmability beyond policy and service descriptions.  This
   document also summarizes the challenges related to orchestration
   programming in this unified cloud and carrier network production
   environment.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."
   This Internet-Draft will expire on September 10, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terms and Definitions
   3.  Motivations
   4.  Problem Statement
   5.  Challenges
     5.1.  Orchestration
     5.2.  Resource description
     5.3.  Dependencies (de-composition)
     5.4.  Elastic VNF
     5.5.  Measurement and analytics
   6.  IANA Considerations
   7.  Security Considerations
   8.  Acknowledgement
   9.  Informative References
   Authors' Addresses

1.  Introduction

   To a large degree there is agreement in the network research,
   practitioner, and standardization communities that rigid network
   control limits the flexibility and manageability of service
   creation, as discussed in [NSC] and the references therein.  For
   instance, it is not unusual today for the average service creation
   cycle to exceed 90 hours, whereas, given the recent advances in
   virtualization and cloudification, one would expect service
   creation times on the order of minutes [EU-5GPPP-Contract], if not
   seconds.

   Flexible service definition and creation start by formalizing the
   service into the concept of network function forwarding graphs,
   such as the ETSI VNF Forwarding Graph [ETSI-NFV-Arch] or the
   ongoing work in the IETF [I-D.ietf-sfc-problem-statement].  These
   graphs represent the way in which service end-points (e.g.,
   customer access) are interconnected with a set of selected network
   functions, such as firewalls and load balancers, to deliver a
   network service.  Service graph representations form the input for
   management and orchestration systems to instantiate and configure
   the requested service.  For example, ETSI defined a Management and
   Orchestration (MANO) framework in [ETSI-NFV-MANO].  We note that
   throughout such a management and orchestration framework different
   abstractions may appear for separation of concerns, roles or
   functionality, or for information hiding.

   Compute virtualization is central to the concept of Network
   Function Virtualization (NFV).  However, carrier-grade services
   demand that all components of the data path, such as Network
   Functions (NFs), virtual NFs (VNFs) and virtual links, meet key
   performance requirements.  In this context, the inclusion of Data
   Center (DC) platforms, such as OpenStack [OpenStack], into the SDN
   infrastructure is far from trivial.
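As a toy illustration of such a forwarding graph (a hypothetical Python sketch; all node names and fields below are ours for illustration, not an ETSI or IETF data model), a service interconnecting two end-points through a firewall and a load balancer could be captured as:

```python
# Hypothetical sketch: a network function forwarding graph as service
# end-points, NFs, and directed links carrying traffic classifiers.
service_graph = {
    "endpoints": ["customer-access", "internet"],
    "nfs": {"fw1": "firewall", "lb1": "load-balancer"},
    "links": [
        # (from, to, classifier): customer traffic enters the firewall;
        # permitted web traffic is load balanced toward the Internet.
        ("customer-access", "fw1", "any"),
        ("fw1", "lb1", "tcp dst port 80"),
        ("lb1", "internet", "any"),
    ],
}

# Sanity check: every link must reference a declared node.
nodes = set(service_graph["endpoints"]) | set(service_graph["nfs"])
assert all(src in nodes and dst in nodes
           for src, dst, _ in service_graph["links"])
```

The point of the sketch is only that the graph, rather than the individual network function, is the unit of service definition handed to management and orchestration.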
   In this document we examine the problems arising as one combines
   these two formerly isolated environments in an effort to create a
   unified production environment, and we discuss the associated
   emerging challenges.  Our goal is the definition of a production
   environment that allows multi-vendor and multi-domain operation
   based on open and interoperable implementations of the key entities
   described in the remainder of this document.

2.  Terms and Definitions

   We use the terms "compute" and "compute and storage"
   interchangeably throughout the document.  Moreover, we use the
   following definitions, as established in [ETSI-NFV-Arch]:

   NFV:  Network Function Virtualization - the principle of separating
      network functions from the hardware they run on by using virtual
      hardware abstraction.

   NFVI PoP:  NFV Infrastructure Point of Presence - any combination
      of virtualized compute, storage and network resources.

   NFVI:  NFV Infrastructure - the collection of NFVI PoPs under one
      orchestrator.

   VNF:  Virtualized Network Function - a software-based network
      function.

   VNF FG:  Virtualized Network Function Forwarding Graph - an ordered
      list of VNFs creating a service chain.

   MANO:  Management and Orchestration - in the ETSI NFV framework
      [ETSI-NFV-MANO], the global entity responsible for management
      and orchestration of the NFV lifecycle.

   Further, we make use of the following terms:

   NF:  a network function, either software-based (VNF) or appliance-
      based.

   SW:  a (routing/switching) network element with a programmable
      control plane interface.

   DC:  a data center network element which, in addition to a
      programmable control plane interface, offers a DC control
      interface.

   LSI:  Logical Switch Instance - a software switch instance.

3.  Motivations

   Figure 1 illustrates a simple service graph comprising three
   network functions (NFs).  For the sake of simplicity, we will
   assume only two types of infrastructure resources, namely SWs and
   DCs as per the terminology introduced above, and ignore appliance-
   based NFs for the time being.  The goal is to implement the given
   service based on the available infrastructure resources.

               fr2  +---+ fr3
               +->o-|NF2|-o-+
               |  4 +---+ 5 |
         +---+ |            V +---+
   1-->o-|NF1|-o----------->o-|NF3|-o-->8
       2 +---+ 3    fr1     6 +---+ 7

                    Figure 1: Service graph

   The service graph definition contains the NF types (NF1, NF2, NF3)
   along with

   o  the corresponding ports (NF1:{2,3}; NF2:{4,5}; NF3:{6,7}),

   o  the service access points ({1,8}) corresponding to
      infrastructure resources, and

   o  the definition of the forwarding behavior (fr1, fr2, fr3).

   The forwarding behavior contains classifiers for matching traffic
   flows and the corresponding outbound forwarding actions.

   Assume now that we would like to use the infrastructure (topology,
   network and software resources) depicted in Figure 2 and Figure 3
   to implement the aforementioned service graph.  That is, we have
   three SWs and two Points of Presence (PoPs) with DC software
   resources at our disposal.

                    +---+
                 +--|SW3|--+
                 |  +---+  |
      +---+      |         |     +---+
    1 |PoP|    +---+     +---+   |PoP| 8
   o--|DC1|----|SW2|-----|SW4|---|DC2|--o
      +---+    +---+     +---+   +---+

   [---SP1---][--------SP2-------][---SP3----]

              Figure 2: Infrastructure resources

      +----------+
      |  +----+  | PoP DC (== NFVI PoP)
      |  | CN |  |
      |  +----+  |
      |    |     |
      |  +----+  |
    o-+--| SW |--+-o
      |  +----+  |
      +----------+

     Figure 3: A virtualized Point of Presence (PoP) with software
     resources (Compute Node - CN)

   In the simplest case, all resources would be part of the same
   service provider (SP) domain.  We need to ensure that each entity
   in Figure 2 can be procured from a different vendor; therefore,
   interoperability is key for multi-vendor NFVI deployment.
   Without such interoperability, different technologies for data
   center and network operation result in distinct technology domains
   within a single carrier.  Multi-technology barriers start to
   emerge, hindering the full programmability of the NFVI and limiting
   the potential for rapid service deployment.

   We are also interested in a multi-operation environment, where
   roles and responsibilities are distributed according to some
   organizational structure within the organization.  Finally, we are
   interested in a multi-provider environment, where different
   infrastructure resources are available from different service
   providers (SPs).  As an example, the lower part of Figure 2
   indicates a multi-provider environment.  We expect that deployments
   of this type will become more common in the future, as they are
   well suited to the elasticity and flexibility requirements [NSC].

   Figure 2 also shows the service access points corresponding to the
   overarching domain view, i.e., {1,8}.  In order to deploy the
   service graph of Figure 1 on the infrastructure resources of
   Figure 2, we need an appropriate mapping which can be implemented
   in practice.  In Figure 4 we illustrate a resource orchestrator
   (RO) as a functional entity whose task is to map the service graph
   to the infrastructure resources under some service constraints,
   taking into account the NF resource descriptions.

               fr2  +---+ fr3
               +->o-|NF2|-o-+
               |  4 +---+ 5 |
         +---+ |            V +---+
   1-->o-|NF1|-o----------->o-|NF3|-o-->8
       2 +---+ 3    fr1     6 +---+ 7

                           ||
                           ||
   +--------+              \/              SP0
   |   NF   |   +---------------------+
   |Resource|==>|Resource Orchestrator|==> MAPPING
   | Descr. |   |         (RO)        |
   +--------+   +---------------------+
                           /\
                           ||
                           ||

                    +---+
                 +--|SW3|--+
                 |  +---+  |
      +---+      |         |     +---+
    1 |PoP|    +---+     +---+   |PoP| 8
   o--|DC1|----|SW2|-----|SW4|---|DC2|--o
      +---+    +---+     +---+   +---+

   [---SP1---][--------SP2-------][---SP3----]
   [-------------------SP0-------------------]

    Figure 4: Resource Orchestrator: information base, inputs and
    output

   NF resource descriptions are assumed to contain the information
   necessary to map NF types to a choice of instantiable VNF flavors,
   or to the selection of an already deployed NF appliance, together
   with the networking demands for different operational policies.
   For example, if energy efficiency is to be considered during the
   decision process, then information related to the energy
   consumption of different NF flavors under different conditions
   (e.g., network load) should be included in the resource
   description.

   Note that we also introduce a new service provider (SP0) which
   effectively operates on top of the virtualized infrastructure
   offered by SP1, SP2 and SP3.

   In order for the RO to execute the resource mapping (which in
   general is a hard problem), it needs to operate on the combined
   control plane illustrated in Figure 5.  In this figure we clearly
   mark that the interfaces to the compute (DC) control plane and the
   SDN (SW) control plane are distinct and are implemented through
   different interfaces/APIs.  For example, Ic1 could be the Apache
   CloudStack API, while Ic2 could be a control plane protocol such as
   ForCES or OpenFlow [I-D.irtf-sdnrg-layer-terminology].  In this
   case, the orchestrator at SP0 (top part of the figure) needs to
   maintain tight coordination across this range of interfaces.
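To make this coordination burden concrete, the following hypothetical Python sketch shows an RO driving two disjoint control interfaces for a single request.  The client classes merely stand in for, e.g., a CloudStack-style compute API (Ic1/Ic3) and an OpenFlow-style control channel (Ic2); none of the names are real library interfaces:

```python
# Hypothetical sketch: a resource orchestrator that must coordinate
# separate compute and network control interfaces for one service.
class ComputeClient:            # stands in for an Ic1/Ic3-style API
    def boot_vnf(self, pop, vnf_type):
        print(f"compute: boot {vnf_type} in {pop}")
        return f"{pop}/{vnf_type}"          # illustrative instance id

class NetworkClient:            # stands in for an Ic2-style control API
    def install_path(self, src, dst, classifier):
        print(f"network: steer '{classifier}' {src} -> {dst}")

class ResourceOrchestrator:
    def __init__(self):
        self.compute = {"SP1": ComputeClient(), "SP3": ComputeClient()}
        self.network = NetworkClient()      # SP2's SDN controller

    def map_service(self, placements, links):
        # Two decisions that logically belong together are made over
        # two unrelated APIs: instantiation via the compute interface,
        # traffic steering via the network interface.
        instances = {nf: self.compute[pop].boot_vnf(pop, nf)
                     for nf, pop in placements.items()}
        for src, dst, cls in links:
            self.network.install_path(instances.get(src, src),
                                      instances.get(dst, dst), cls)
        return instances

ro = ResourceOrchestrator()
ro.map_service({"NF1": "SP1", "NF3": "SP3"},
               [("1", "NF1", "any"), ("NF1", "NF3", "fr1")])
```

Because instantiation and steering happen over unrelated APIs, partial failures leave the RO itself responsible for reconciliation, which is one motivation for a joint programmatic interface.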
                  +---------+
                  |Orchestr.|
                  |   SP0   |
             _____+---------+_____
            /          |          \
           /           V Ic2       \
          |       +---------+       |
      Ic1 V       |SDN Ctrl |       V Ic3
   +---------+    |   SP2   |     +---------+
   |Comp Ctrl|    +---------+     |Comp Ctrl|
   |   SP1   |     /   |   \      |   SP3   |
   +---------+    /    |    \     +---------+
        |        |   +----+   |        |
        |        |   |SW3 |   |        |
        V        |   +----+   |        V
      +----+     V  /      \  V     +----+
   1  |PoP |   +----+      +----+   |PoP | 8
   o--|DC1 |---|SW2 |------|SW4 |---|DC2 |--o
      +----+   +----+      +----+   +----+

   [----SP1---][---------SP2--------][---SP3----]
   [---------------------SP0--------------------]

    Figure 5: The RO control plane view.  Control plane interfaces
    are indicated with (line) arrows.  Data plane connections are
    indicated with simple lines.

   In the real world, however, orchestration operations do not stop,
   for example, at the DC1 level as depicted in Figure 5.  If we (so
   to speak) "zoom into" DC1, we see a similar pattern and the need to
   coordinate SW and DC resources within DC1, as illustrated in
   Figure 6.  As depicted, this edge PoP includes compute nodes (CNs)
   and SWs, which in most cases will also form an internal topology.

   In Figure 6, IcA is an interface similar to Ic2 in Figure 5, while
   IcB could be, for example, OpenStack Nova or similar.  The
   Northbound Interface (NBI) to the Compute Controller can use Ic1 or
   Ic3 as shown in Figure 5.

                             NBI
                              |
                         +---------+
                         |Comp Ctrl|
                         +---------+
                  IcA   /     |
            +----------+      |  IcB: to CNs
            |                 V
            V                                ext port
      +---------+                               |
      |SDN Ctrl |         +----+            +----+
      +---------+         | SW |------------| SW |
           | to SWs       +----+            +----+
           +---------->   /    \            /    \
                     +----+    +----+  +----+    +----+
                     | SW |    | SW |  | SW |    | SW |
                     +----+    +----+  +----+    +----+
                      |  |      |  |    |  |      |  |
                    +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+
                    |CN| |CN| |CN| |CN| |CN| |CN| |CN| |CN|
                    +--+ +--+ +--+ +--+ +--+ +--+ +--+ +--+

          Figure 6: PoP DC network with Compute Nodes (CNs)

   In turn, each single Compute Node (CN) may also have internal
   switching resources (see Figure 7).  In a carrier environment,
   allocation of a compute node's internal distributed resources
   (blades, CPU cores, etc.) may become equally important in order to
   meet data path requirements.

   +-+   +-+   +-+   +-+
   |V|   |V|   |V|   |V|
   |N|   |N|   |N|   |N|
   |F|   |F|   |F|   |F|
   +-+   +-+   +-+   +-+
    |    /     /      |
   +---+   +---+   +---+
   |LSI|   |LSI|   |LSI|
   +---+   +---+   +---+
     |      /        |
   +---+           +---+
   |NIC|           |NIC|
   +---+           +---+
     |               |

      Figure 7: Compute Node with internal switching resources

4.  Problem Statement

   The motivational examples of Section 3 illustrate that compute
   virtualization implicitly involves network virtualization.
   Conversely, if one starts with an SDN network and adds compute
   resources to network elements, then compute resources must be
   assigned to some virtualized network resources if they are offered
   to clients.  That is, we observe that compute virtualization is
   implicitly associated with network virtualization.  Furthermore,
   virtualization leads to recursion, with clients (redefining and)
   reselling resources and services
   [I-D.huang-sfc-use-case-recursive-service].

   We argue that, given the multi-level virtualization of the compute,
   storage and network domains, automation of the corresponding
   resource provisioning needs a recursive programmatic interface.
   The current separate compute and network programming interfaces
   cannot provide such recursion and cannot satisfy key requirements
   of multi-vendor, multi-technology and multi-provider
   interoperability environments.
   Therefore, we foresee the need for a recursive programmatic
   interface for joint compute, storage and network provisioning.

5.  Challenges

   We summarize in this section the key questions and challenges,
   which we hope will initiate further discussion in the NFVRG
   community.

5.1.  Orchestration

   Firstly, as motivated in Section 3, orchestrating networking
   resources appears to have a recursive nature at different levels of
   the hierarchy.  Would a programmatic interface at a combined
   compute and network abstraction better support this recursive,
   constraint-based resource allocation?

   Secondly, can such a joint compute, storage and network
   programmatic interface allow automated resource orchestration
   similar to a recursive SDN architecture [ONF-SDN-ARCH]?

5.2.  Resource description

   A prerequisite for joint placement decisions over compute, storage
   and network is an adequate description of the available resources.
   This means that the interfaces (IcA, IcB, etc. in Figure 5 and
   Figure 6) are bidirectional in nature, exposing resources as well
   as reserving them.  There have been manifold attempts to create
   frameworks for resource description, most prominently the W3C's
   RDF, NDL, the GENI RSpec with its concept of Aggregate Managers,
   ONF's TTP, and many more.

   Quite naturally, all attempts to standardize "arbitrary" resource
   descriptions lead to the creation of ontologies: complex graphs
   describing the relations of terms to each other.

   Practical descriptions of compute resources currently focus on the
   number of logical CPU cores and the available RAM and storage,
   allowing, e.g., the OpenStack Nova scheduler to make placement
   decisions.  In heterogeneous network and compute environments,
   hardware may have different acceleration capabilities (e.g., AES-NI
   or hardware random number generators), so the notion of logical
   compute cores alone is not expressive enough.
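As a toy illustration (hypothetical Python; capability names such as "aes-ni" and "dpdk" are examples, not a standardized taxonomy), placement could filter candidate nodes on capability sets rather than core counts alone:

```python
# Hypothetical sketch: compute nodes described by free cores plus a
# capability set; a VNF demand is matched on both dimensions.
nodes = [
    {"name": "cn1", "cores": 16, "caps": {"aes-ni"}},
    {"name": "cn2", "cores": 8,  "caps": {"aes-ni", "dpdk"}},
    {"name": "cn3", "cores": 32, "caps": set()},
]

def candidates(demand):
    # A node qualifies only if it has enough free cores AND every
    # capability the VNF relies on; core count alone would favor cn3.
    return [n["name"] for n in nodes
            if n["cores"] >= demand["cores"] and demand["caps"] <= n["caps"]]

print(candidates({"cores": 4, "caps": {"aes-ni", "dpdk"}}))  # ['cn2']
```

The interesting (and open) part is not the filter itself but agreeing on what the capability vocabulary and its semantics should be across vendors.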
   In addition, the network interfaces (and link load) provide
   important information on how fast a certain VNF can be executed on
   a given node.

   This may lead to a description of resources as VNF-FGs themselves.
   A networking resource (SW) may expose its capability to forward and
   process frames in, e.g., an OpenFlow TableFeatures reply.  Compute
   nodes in the VNF-FG would expose lists of capabilities, such as the
   presence of AES hardware acceleration or Intel DPDK support, or
   complex functions such as a running web server.  An essential part
   of a compute node's capabilities would be its ability to run a
   certain VNF of type X within a certain QoS specification.  As the
   QoS is service specific, it can only be exposed by a control
   function within the instantiated VNF-FG.

5.3.  Dependencies (de-composition)

   Salt [SALT], Puppet [PUPPET], Chef [CHEF] and Ansible [ANSIBLE] are
   tools to manage large-scale installations of virtual machines in DC
   environments.  Essentially, the decomposition of a complex function
   into its dependencies is encoded in "recipes" (in Chef
   terminology).

   The OASIS TOSCA [TOSCA] specification aims at describing
   application layer services to automate interoperable deployment in
   alternative cloud environments.  The TOSCA specification "provides
   a language to describe service components and their relationships
   using a service topology".

   Is there a dependency (decomposition) abstraction suitable to drive
   resource orchestration between application layer descriptions (like
   TOSCA) and cloud-specific installations (like Chef recipes)?

5.4.  Elastic VNF

   In many use cases, a VNF may not be designed for scaling up or
   down, as scaling up/down may require a restart of the VNF, through
   which state data may be lost.  Typically, such a VNF is capable of
   scaling in/out only: it is designed to run on top of a small VM,
   and its instances are grouped into a pool behind one VNF function.
   VNF scaling may cross multiple NFVI PoPs (or data centers) in order
   to avoid the capacity limitations of a single NFVI.  With cross-DC
   scaling, a new VNF instance may be placed at a remote cloud
   location.  When a VNF is scaled, it is a strict requirement to
   provide the same level of Service Level Agreement (SLA), including
   performance, reliability and security.

   In general, a VNF is part of a VNF Forwarding Graph (VNF FG),
   meaning that data traffic may traverse multiple stateful and
   stateless VNFs in sequence.  When some VNF instances of a given
   service function chain are placed or scaled out in a distant cloud,
   the service traffic may have to traverse multiple VNF instances
   located at multiple physical locations.  In the worst case, the
   data traffic may ping-pong between multiple physical locations.
   Therefore, it is important to take the whole service function
   chain's performance into consideration when placing and scaling any
   one of its VNF instances.  Network and cloud resources need to be
   considered jointly; see [I-D.zu-nfvrg-elasticity-vnf].

5.5.  Measurement and analytics

   Programmable, dynamic, and elastic VNF deployment requires that
   Resource Orchestrator (RO) entities obtain timely information about
   the actual operational conditions between the different locations
   where VNFs can be placed.  Scaling VNFs in/out/up/down, VNF
   execution migration and VNF mobility, as well as right-sizing NFVI
   resource allocations, is a research area that is expected to grow
   in the coming years as mechanisms, heuristics, and measurement and
   analytics frameworks are developed.

   For example, Veitch et al. [IAF] point out that NFV deployment will
   "present network operators with significant implementation
   challenges".
   They look into the problems arising from the lack of proper tools
   for testing and diagnostics and explore the use of embedded
   instrumentation.  They find that, in certain scenarios, fine-tuning
   resource allocation based on instrumentation can lead to at least a
   50% reduction in compute provisioning.  In this context, three
   categories emerge where more research is needed.

   First, in the compute domain, performance analysis will need to
   evolve significantly from the current "safety factor" mentality,
   which served carriers well in the era of dedicated, hardware-based
   appliances.  In the emerging softwarized deployments, the NFVI will
   require new tools for planning, testing, and reliability assurance.

   Second, in the network domain, performance measurement and analysis
   will play a key role in determining the scope and range of VNF
   distribution across the available resources.  For example, the IETF
   has worked on the standardization of IP performance metrics for
   years.

   The Two-Way Active Measurement Protocol (TWAMP) could be employed,
   for instance, to capture the actual operational state of the
   network prior to making RO decisions.  TWAMP, however, still lacks
   a standardized and programmable management and configuration data
   model.  We expect that, as NFVI programmability gathers interest
   from network carriers, several IETF protocols will be revisited in
   order to bring them up to date with current operational
   requirements.  To this end, NFVRG can play an active role in
   identifying future IETF standardization directions.

   Third, non-technical considerations which relate to business
   aspects or priorities need to be modeled and codified so that ROs
   can make intelligent decisions.  Energy efficiency and cost, for
   example, can steer NFV placement.
   In NFVI deployments, operational practices such as follow-the-sun
   may also be considered, as earlier research in the data center
   context implies.

6.  IANA Considerations

   This memo includes no request to IANA.

7.  Security Considerations

   TBD

8.  Acknowledgement

   The authors would like to thank the UNIFY team for inspiring
   discussions and, in particular, Fritz-Joachim Westphal for his
   comments and suggestions on how to refine this draft.

   This work is supported by FP7 UNIFY, a research project partially
   funded by the European Community under the Seventh Framework
   Program (grant agreement no. 619609).  The views expressed here are
   those of the authors only.  The European Commission is not liable
   for any use that may be made of the information in this document.

9.  Informative References

   [ANSIBLE]  Ansible Inc., "Ansible Documentation", 2015.

   [CHEF]     Chef Software Inc., "An Overview of Chef", 2015.

   [ETSI-NFV-Arch]
              ETSI, "Architectural Framework v1.1.1", October 2013.

   [ETSI-NFV-MANO]
              ETSI, "Network Function Virtualization (NFV) Management
              and Orchestration V0.6.1 (draft)", July 2014.

   [EU-5GPPP-Contract]
              5G-PPP Association, "Contractual Arrangement: Setting up
              a Public-Private Partnership in the Area of Advanced 5G
              Network Infrastructure for the Future Internet between
              the European Union and the 5G Infrastructure
              Association", December 2013.

   [I-D.huang-sfc-use-case-recursive-service]
              Huang, C., Zhu, J., and P. He, "SFC Use Cases on
              Recursive Services", draft-huang-sfc-use-case-recursive-
              service-01 (work in progress), January 2015.

   [I-D.ietf-sfc-problem-statement]
              Quinn, P. and T. Nadeau, "Service Function Chaining
              Problem Statement", draft-ietf-sfc-problem-statement-13
              (work in progress), February 2015.

   [I-D.irtf-sdnrg-layer-terminology]
              Haleplidis, E., Pentikousis, K., Denazis, S., Salim, J.,
              Meyer, D., and O. Koufopavlou, "SDN Layers and
              Architecture Terminology", draft-irtf-sdnrg-layer-
              terminology-04 (work in progress), October 2014.

   [I-D.zu-nfvrg-elasticity-vnf]
              Qiang, Z. and R. Szabo, "Elasticity VNF", draft-zu-
              nfvrg-elasticity-vnf-01 (work in progress), March 2015.

   [IAF]      Veitch, P., McGrath, M.J., and V. Bayon, "An
              Instrumentation and Analytics Framework for Optimal and
              Robust NFV Deployment", IEEE Communications Magazine,
              vol. 53, no. 2, February 2015.

   [NSC]      John, W., Pentikousis, K., et al., "Research Directions
              in Network Service Chaining", Proc. IEEE SDN for Future
              Networks and Services (SDN4FNS), Trento, Italy, November
              2013.

   [ONF-SDN-ARCH]
              ONF, "SDN architecture", June 2014.

   [OpenStack]
              The OpenStack project, "OpenStack cloud software", 2014.

   [PUPPET]   Puppet Labs, "Puppet 3.7 Reference Manual", 2015.

   [SALT]     SaltStack, "Salt (Documentation)", 2015.

   [TOSCA]    OASIS Standard, "Topology and Orchestration
              Specification for Cloud Applications Version 1.0",
              November 2013.

Authors' Addresses

   Robert Szabo
   Ericsson Research, Hungary
   Irinyi Jozsef u. 4-20
   Budapest  1117
   Hungary

   Email: robert.szabo@ericsson.com
   URI:   http://www.ericsson.com/

   Andras Csaszar
   Ericsson Research, Hungary
   Irinyi Jozsef u. 4-20
   Budapest  1117
   Hungary

   Email: andras.csaszar@ericsson.com
   URI:   http://www.ericsson.com/

   Kostas Pentikousis
   EICT GmbH
   EUREF-Campus Haus 13
   Torgauer Strasse 12-15
   10829 Berlin
   Germany

   Email: k.pentikousis@eict.de

   Mario Kind
   Deutsche Telekom AG
   Winterfeldtstr. 21
   10781 Berlin
   Germany

   Email: mario.kind@telekom.de

   Diego Daino
   Telecom Italia
   Via Guglielmo Reiss Romoli 274
   10148 Turin
   Italy

   Email: diego.daino@telecomitalia.it

   Zu Qiang
   Ericsson
   8400, boul. Decarie
   Ville Mont-Royal, QC  8400
   Canada

   Email: zu.qiang@ericsson.com
   URI:   http://www.ericsson.com/

   Hagen Woesner
   BISDN
   Koernerstr. 7-10
   Berlin  10785
   Germany

   Email: hagen.woesner@bisdn.de
   URI:   http://www.bisdn.de