NFVRG                                                     CJ. Bernardos
Internet-Draft                                                      UC3M
Intended status: Informational                                 A. Rahman
Expires: September 22, 2016                                   JC. Zuniga
                                                            InterDigital
                                                           LM. Contreras
                                                               P. Aranda
                                                                     TID
                                                          March 21, 2016


          Gap Analysis on Network Virtualization Activities
         draft-bernardos-nfvrg-gaps-network-virtualization-04

Abstract

   The main goal of this document is to survey the different efforts
   that have taken place and are currently under way at the IETF and
   IRTF with regard to network virtualization, automation and
   orchestration, to put them into context with respect to the work of
   other SDOs, and to identify current gaps and challenges that can be
   tackled at the IETF or researched at the IRTF.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at http://datatracker.ietf.org/drafts/current/.
   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 22, 2016.

Copyright Notice

   Copyright (c) 2016 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction  . . . . . . . . . . . . . . . . . . . . . . . .   2
   2.  Terminology . . . . . . . . . . . . . . . . . . . . . . . . .   4
   3.  Background  . . . . . . . . . . . . . . . . . . . . . . . . .   5
     3.1.  Network Function Virtualization . . . . . . . . . . . . .   5
     3.2.  Software Defined Networking . . . . . . . . . . . . . . .   7
     3.3.  Mobile Edge Computing . . . . . . . . . . . . . . . . . .  11
     3.4.  IEEE 802.1CF (OmniRAN)  . . . . . . . . . . . . . . . . .  12
     3.5.  Distributed Management Task Force . . . . . . . . . . . .  12
     3.6.  Open Source initiatives . . . . . . . . . . . . . . . . .  12
   4.  Network Virtualization at IETF/IRTF . . . . . . . . . . . . .  14
     4.1.  SDN RG  . . . . . . . . . . . . . . . . . . . . . . . . .  14
     4.2.  SFC WG  . . . . . . . . . . . . . . . . . . . . . . . . .  14
     4.3.  NVO3 WG . . . . . . . . . . . . . . . . . . . . . . . . .  15
     4.4.  DMM WG  . . . . . . . . . . . . . . . . . . . . . . . . .  16
     4.5.  I2RS WG . . . . . . . . . . . . . . . . . . . . . . . . .  17
     4.6.  BESS WG . . . . . . . . . . . . . . . . . . . . . . . . .  18
     4.7.  BM WG . . . . . . . . . . . . . . . . . . . . . . . . . .  19
     4.8.  TEAS WG . . . . . . . . . . . . . . . . . . . . . . . . .  20
     4.9.  I2NSF WG  . . . . . . . . . . . . . . . . . . . . . . . .  20
     4.10. IPPM WG . . . . . . . . . . . . . . . . . . . . . . . . .  21
     4.11. NFV RG  . . . . . . . . . . . . . . . . . . . . . . . . .  22
     4.12. VNFpool BoF . . . . . . . . . . . . . . . . . . . . . . .  22
   5.  Summary of Gaps . . . . . . . . . . . . . . . . . . . . . . .  23
   6.  IANA Considerations . . . . . . . . . . . . . . . . . . . . .  24
   7.  Security Considerations  . . . . . . . . . . . . . . . . . .  25
   8.  Acknowledgments . . . . . . . . . . . . . . . . . . . . . . .  25
   9.  Informative References  . . . . . . . . . . . . . . . . . . .  25
   Appendix A.  The mobile network use case . . . . . . . . . . . .  28
     A.1.  The 3GPP Evolved Packet System  . . . . . . . . . . . . .  28
     A.2.  Virtualizing the 3GPP EPS . . . . . . . . . . . . . . . .  30
   Authors' Addresses  . . . . . . . . . . . . . . . . . . . . . . .  30

1.  Introduction

   The telecommunications sector is experiencing a major revolution
   that will shape the way networks and services are designed and
   deployed for the next decade.  We are witnessing an explosion in the
   number of applications and services demanded by users, who are now
   able to access them while on the move.
   In order to cope with such a demand, some network operators are
   looking at the cloud computing paradigm, which enables a potential
   reduction of the overall costs by outsourcing communication services
   from specific hardware in the operator's core to server farms
   scattered in datacenters.  These services have characteristics that
   differ from those of conventional IT services and that have to be
   taken into account in this cloudification process.  The transport
   network is also affected, in that it is evolving to a more
   sophisticated form of IP architecture with trends like the
   separation of control and data plane traffic, and more fine-grained
   forwarding of packets (beyond looking at the destination IP address)
   in the network to fulfill new business and service goals.

   Virtualization of functions also provides operators with tools to
   deploy new services much faster, as compared to the traditional use
   of monolithic and tightly integrated dedicated machinery.  As a
   natural next step, mobile network operators need to re-think how to
   evolve their existing network infrastructures and how to deploy new
   ones to address the challenges posed by increasing customer demands,
   as well as by the huge competition among operators.  All these
   changes are triggering the need for a modification in the way
   operators and infrastructure providers operate their networks, as
   they need to significantly reduce the costs incurred in deploying
   and operating a new service.  Some of the mechanisms that are being
   considered and already adopted by operators include: sharing of
   network infrastructure to reduce costs, virtualization of core
   servers running in data centers as a way of supporting their load-
   aware elastic dimensioning, and dynamic energy policies to reduce
   the monthly electricity bill.  However, this has proved to be tough
   to put into practice and is not enough by itself.  Indeed, it is not
   easy to deploy new mechanisms in a running operational network, due
   to the high dependency on proprietary (and sometimes obscure)
   protocols and interfaces, which are complex to manage and often
   require configuring multiple devices in a decentralized way.

   Network Function Virtualization (NFV) and Software Defined
   Networking (SDN) are changing the way the telecommunications sector
   will deploy, extend and operate its networks.  This document
   provides a survey of the different efforts that have taken place and
   are currently under way at the IETF and IRTF with regard to network
   virtualization, looking at how they relate to the ETSI NFV ISG, ETSI
   MEC ISG and ONF architectural frameworks.  Based on this analysis,
   we also go a step further, identifying the potential areas where the
   IETF/IRTF can work to complement the complex map of network
   virtualization technologies being standardized today.

2.  Terminology

   The following terms used in this document are defined by the ETSI
   NFV ISG, the ONF and the IETF:

   Application Plane - The collection of applications and services that
   program network behavior.

   Control Plane (CP) - The collection of functions responsible for
   controlling one or more network devices.  The CP instructs network
   devices with respect to how to process and forward packets.  The
   control plane interacts primarily with the forwarding plane and, to
   a lesser extent, with the operational plane.
   Forwarding Plane (FP) - The collection of resources across all
   network devices responsible for forwarding traffic.

   Management Plane (MP) - The collection of functions responsible for
   monitoring, configuring, and maintaining one or more network devices
   or parts of network devices.  The management plane is mostly related
   to the operational plane (it is related less to the forwarding
   plane).

   NFV Infrastructure (NFVI): the totality of all hardware and software
   components which build up the environment in which VNFs are
   deployed.

   NFV Management and Orchestration (NFV-MANO): functions collectively
   provided by NFVO, VNFM, and VIM.

   NFV Orchestrator (NFVO): functional block that manages the Network
   Service (NS) lifecycle and coordinates the management of the NS
   lifecycle, the VNF lifecycle (supported by the VNFM) and the NFVI
   resources (supported by the VIM) to ensure an optimized allocation
   of the necessary resources and connectivity.

   OpenFlow protocol (OFP): a protocol that allows vendor-independent
   programming of control functions in network nodes.

   Operational Plane (OP) - The collection of resources responsible for
   managing the overall operation of individual network devices.

   Service Function Chain (SFC): for a given service, the abstracted
   view of the required service functions and the order in which they
   are to be applied.  This is roughly equivalent to the Network
   Function Forwarding Graph (NF-FG) defined by ETSI.

   Service Function Path (SFP): the selection of specific service
   function instances on specific network nodes to form a service graph
   through which an SFC is instantiated.

   virtual EPC (vEPC): the control plane of the 3GPP EPC operated on an
   NFV framework (as defined by
   [I-D.matsushima-stateless-uplane-vepc]).

   Virtualized Infrastructure Manager (VIM): functional block that is
   responsible for controlling and managing the NFVI compute, storage
   and network resources, usually within one operator's Infrastructure
   Domain.

   Virtualized Network Function (VNF): implementation of a Network
   Function that can be deployed on a Network Function Virtualisation
   Infrastructure (NFVI).

   Virtualized Network Function Manager (VNFM): functional block that
   is responsible for the lifecycle management of VNFs.

3.  Background

3.1.  Network Function Virtualization

   The ETSI ISG NFV is a working group which, since 2012, aims to
   evolve quasi-standard IT virtualization technology to consolidate
   many network equipment types into industry standard high volume
   servers, switches, and storage.  It enables implementing network
   functions in software that can run on a range of industry standard
   server hardware and can be moved to, or loaded in, various locations
   in the network as required, without the need to install new
   equipment.  To date, ETSI NFV is by far the most accepted NFV
   reference framework and architectural footprint
   [etsi_nvf_whitepaper].  The ETSI NFV architectural framework is
   composed of three domains (Figure 1):

   o  Virtualized Network Functions (VNFs), running over the NFVI.

   o  NFV Infrastructure (NFVI), including the diversity of physical
      resources and how these can be virtualized.  The NFVI supports
      the execution of the VNFs.
230 o NFV Management and Orchestration, which covers the orchestration 231 and life-cycle management of physical and/or software resources 232 that support the infrastructure virtualization, and the life-cycle 233 management of VNFs. NFV Management and Orchestration focuses on 234 all virtualization specific management tasks necessary in the NFV 235 framework. 237 +-------------------------------------------+ +---------------+ 238 | Virtualized Network Functions (VNFs) | | | 239 | ------- ------- ------- ------- | | | 240 | | | | | | | | | | | | 241 | | VNF | | VNF | | VNF | | VNF | | | | 242 | | | | | | | | | | | | 243 | ------- ------- ------- ------- | | | 244 +-------------------------------------------+ | | 245 | | 246 +-------------------------------------------+ | | 247 | NFV Infrastructure (NFVI) | | NFV | 248 | ----------- ----------- ----------- | | Management | 249 | | Virtual | | Virtual | | Virtual | | | and | 250 | | Compute | | Storage | | Network | | | Orchestration | 251 | ----------- ----------- ----------- | | | 252 | +---------------------------------------+ | | | 253 | | Virtualization Layer | | | | 254 | +---------------------------------------+ | | | 255 | +---------------------------------------+ | | | 256 | | ----------- ----------- ----------- | | | | 257 | | | Compute | | Storage | | Network | | | | | 258 | | ----------- ----------- ----------- | | | | 259 | | Hardware resources | | | | 260 | +---------------------------------------+ | | | 261 +-------------------------------------------+ +---------------+ 263 Figure 1: ETSI NFV framework 265 The NFV architectural framework identifies functional blocks and the 266 main reference points between such blocks. Some of these are already 267 present in current deployments, whilst others might be necessary 268 additions in order to support the virtualization process and 269 consequent operation. The functional blocks are (Figure 2): 271 o Virtualized Network Function (VNF). 273 o Element Management (EM). 275 o NFV Infrastructure, including: Hardware and virtualized resources, 276 and Virtualization Layer. 278 o Virtualized Infrastructure Manager(s) (VIM). 280 o NFV Orchestrator. 282 o VNF Manager(s). 284 o Service, VNF and Infrastructure Description. 286 o Operations and Business Support Systems (OSS/BSS). 
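   Before turning to the reference architecture diagram, the following
   minimal Python sketch (purely illustrative, not part of any ETSI NFV
   specification; all class and method names are hypothetical) shows
   one way to read the split of responsibilities among the NFVO, VNFM
   and VIM blocks defined in Section 2: the NFVO drives the Network
   Service lifecycle and delegates VNF lifecycle operations to a VNFM,
   which in turn obtains NFVI resources through a VIM.

      # Illustrative sketch of the NFV-MANO split; names are hypothetical.

      class VIM:
          """Controls and manages NFVI compute, storage and network resources."""
          def allocate(self, requirements):
              # Placeholder: return a handle to the virtualized resources.
              return {"allocated": requirements}

      class VNFM:
          """Handles the lifecycle management of individual VNFs."""
          def __init__(self, vim):
              self.vim = vim
          def instantiate_vnf(self, vnfd):
              resources = self.vim.allocate(vnfd["requirements"])
              return {"vnf": vnfd["name"], "resources": resources}

      class NFVO:
          """Manages the Network Service (NS) lifecycle."""
          def __init__(self, vnfm):
              self.vnfm = vnfm
          def instantiate_ns(self, nsd):
              # An NS is an ordered composition of VNFs (cf. the NF-FG/SFC).
              return [self.vnfm.instantiate_vnf(vnfd) for vnfd in nsd["vnfs"]]

      nfvo = NFVO(VNFM(VIM()))
      ns = nfvo.instantiate_ns({"vnfs": [
          {"name": "firewall", "requirements": {"vcpu": 2, "ram_gb": 4}},
          {"name": "dpi", "requirements": {"vcpu": 4, "ram_gb": 8}},
      ]})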
288 +--------------------+ 289 +-------------------------------------------+ | ---------------- | 290 | OSS/BSS | | | NFV | | 291 +-------------------------------------------+ | | Orchestrator +-- | 292 | ---+------------ | | 293 +-------------------------------------------+ | | | | 294 | --------- --------- --------- | | | | | 295 | | EM 1 | | EM 2 | | EM 3 | | | | | | 296 | ----+---- ----+---- ----+---- | | ---+---------- | | 297 | | | | |--|-| VNF | | | 298 | ----+---- ----+---- ----+---- | | | manager(s) | | | 299 | | VNF 1 | | VNF 2 | | VNF 3 | | | ---+---------- | | 300 | ----+---- ----+---- ----+---- | | | | | 301 +------|-------------|-------------|--------+ | | | | 302 | | | | | | | 303 +------+-------------+-------------+--------+ | | | | 304 | NFV Infrastructure (NFVI) | | | | | 305 | ----------- ----------- ----------- | | | | | 306 | | Virtual | | Virtual | | Virtual | | | | | | 307 | | Compute | | Storage | | Network | | | | | | 308 | ----------- ----------- ----------- | | ---+------ | | 309 | +---------------------------------------+ | | | | | | 310 | | Virtualization Layer | |--|-| VIM(s) +-------- | 311 | +---------------------------------------+ | | | | | 312 | +---------------------------------------+ | | ---------- | 313 | | ----------- ----------- ----------- | | | | 314 | | | Compute | | Storage | | Network | | | | | 315 | | | hardware| | hardware| | hardware| | | | | 316 | | ----------- ----------- ----------- | | | | 317 | | Hardware resources | | | NFV Management | 318 | +---------------------------------------+ | | and Orchestration | 319 +-------------------------------------------+ +--------------------+ 321 Figure 2: ETSI NFV reference architecture 323 3.2. Software Defined Networking 325 The Software Defined Networking (SDN) paradigm pushes the 326 intelligence currently residing in the network elements to a central 327 controller implementing the network functionality through software. 328 In contrast to traditional approaches, in which the network's control 329 plane is distributed throughout all network devices, with SDN the 330 control plane is logically centralized. In this way, the deployment 331 of new characteristics in the network no longer requires of complex 332 and costly changes in equipment or firmware updates, but only a 333 change in the software running in the controller. The main advantage 334 of this approach is the flexibility it provides operators with to 335 manage their network, i.e., an operator can easily change its 336 policies on how traffic is distributed throughout the network. 338 The most visible of the SDN protocol stacks is the OpenFlow protocol 339 (OFP), which is maintained and extended by the Open Network 340 Foundation (ONF: https://www.opennetworking.org/). Originally this 341 protocol was developed specifically for IEEE 802.1 switches 342 conforming to the ONF OpenFlow Switch specification. As the benefits 343 of the SDN paradigm have reached a wider audience, its application 344 has been extended to more complex scenarios such as Wireless and 345 Mobile networks. Within this area of work, the ONF is actively 346 developing new OFP extensions addressing three key scenarios: (i) 347 Wireless backhaul, (ii) Cellular Evolved Packet Core (EPC), and (iii) 348 Unified access and management across enterprise wireless and fixed 349 networks. 
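   As a purely illustrative aid (not tied to any particular OpenFlow
   version or controller API; all field and function names are
   hypothetical), the following Python sketch shows a controller-side
   view of the match/action style of flow programming discussed above,
   where forwarding decisions can be based on more than the destination
   IP address:

      # Hypothetical controller-side view of match/action flow rules.
      # Real OpenFlow rules are carried in binary FlowMod messages and
      # support features (masks, prefixes, groups) omitted here.

      flow_table = []

      def install_flow(match, actions, priority=100):
          """Record a rule; a real controller would encode and send it
          to the switch over the southbound interface."""
          flow_table.append({"match": match, "actions": actions,
                             "priority": priority})

      # Steer web traffic arriving on port 1 to a middlebox on port 3,
      # rewriting the destination address on the way.
      install_flow(match={"in_port": 1, "ip_proto": "tcp", "tcp_dst": 80},
                   actions=[{"set_field": {"ip_dst": "192.0.2.10"}},
                            {"output": 3}],
                   priority=200)

      # Default rule: plain destination-based forwarding.
      install_flow(match={"eth_type": "ipv4"}, actions=[{"output": 2}])

      def lookup(packet):
          """Return the actions of the highest-priority matching rule."""
          hits = [f for f in flow_table
                  if all(packet.get(k) == v for k, v in f["match"].items())]
          return max(hits, key=lambda f: f["priority"])["actions"] if hits else []

      print(lookup({"in_port": 1, "eth_type": "ipv4",
                    "ip_proto": "tcp", "tcp_dst": 80}))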
351 +----------+ 352 | ------- | 353 | |Oper.| | O 354 | |Mgmt.| |<........> -+- Network Operator 355 | |Iface| | ^ 356 | ------- | +----------------------------------------+ 357 | | | +------------------------------------+ | 358 | | | | --------- --------- --------- | | 359 |--------- | | | | App 1 | | App 2 | ... | App n | | | 360 ||Plugins| |<....>| | --------- --------- --------- | | 361 |--------- | | | Plugins | | 362 | | | +------------------------------------+ | 363 | | | Application Plane | 364 | | +----------------------------------------+ 365 | | A 366 | | | 367 | | V 368 | | +----------------------------------------+ 369 | | | +------------------------------------+ | 370 |--------- | | | ------------ ------------ | | 371 || Netw. | | | | | Module 1 | | Module 2 | | | 372 ||Engine | |<....>| | ------------ ------------ | | 373 |--------- | | | Network Engine | | 374 | | | +------------------------------------+ | 375 | | | Controller Plane | 376 | | +----------------------------------------+ 377 | | A 378 | | | 379 | | V 380 | | +----------------------------------------+ 381 | | | +--------------+ +--------------+ | 382 | | | | ------------ | | ------------ | | 383 |----------| | | | OpenFlow | | | | OpenFlow | | | 384 ||OpenFlow||<....>| | ------------ | | ------------ | | 385 |----------| | | NE | | NE | | 386 | | | +--------------+ +--------------+ | 387 | | | Data Plane | 388 |Management| +----------------------------------------+ 389 +----------+ 391 Figure 3: High level SDN ONF architecture 393 Figure 3 shows the blocks and the functional interfaces of the ONF 394 architecture, which comprises three planes: Data, Controller, and 395 Application. The Data plane comprehends several Network Entities 396 (NE), which expose their capabilities toward the Controller plane via 397 a Southbound API. The Controller plane includes several cooperating 398 modules devoted to the creation and maintenance of an abstracted 399 resource model of the underneath network. Such model is exposed to 400 the applications via a Northbound API where the Application plane 401 comprises several applications/services, each of which has exclusive 402 control of a set of exposed resources. 404 The Management plane spans its functionality across all planes 405 performing the initial configuration of the network elements in the 406 Data plane, the assignment of the SDN controller and the resources 407 under its responsibility. In the Controller plane, the Management 408 needs to configure the policies defining the scope of the control 409 given to the SDN applications, to monitor the performance of the 410 system, and to configure the parameters required by the SDN 411 controller modules. In the Application plane, Management configures 412 the parameters of the applications and the service level agreements. 413 In addition to the these interactions, the Management plane exposes 414 several functions to network operators which can easily and quickly 415 configure and tune the network at each layer. 417 The SDNRG has documented a reference layer model in RFC7426 418 [RFC7426], which is reproduced in Figure 4. This model structures 419 SDN in planes and layers which are glued together by different 420 abstraction layers. This architecture differentiates between the 421 control and the management planes and provides for differentiated 422 southbound interfaces (SBIs). 
424 o--------------------------------o 425 | | 426 | +-------------+ +----------+ | 427 | | Application | | Service | | 428 | +-------------+ +----------+ | 429 | Application Plane | 430 o---------------Y----------------o 431 | 432 *-----------------------------Y---------------------------------* 433 | Network Services Abstraction Layer (NSAL) | 434 *------Y------------------------------------------------Y-------* 435 | | 436 | Service Interface | 437 | | 438 o------Y------------------o o---------------------Y------o 439 | | Control Plane | | Management Plane | | 440 | +----Y----+ +-----+ | | +-----+ +----Y----+ | 441 | | Service | | App | | | | App | | Service | | 442 | +----Y----+ +--Y--+ | | +--Y--+ +----Y----+ | 443 | | | | | | | | 444 | *----Y-----------Y----* | | *---Y---------------Y----* | 445 | | Control Abstraction | | | | Management Abstraction | | 446 | | Layer (CAL) | | | | Layer (MAL) | | 447 | *----------Y----------* | | *----------Y-------------* | 448 | | | | | | 449 o------------|------------o o------------|---------------o 450 | | 451 | CP | MP 452 | Southbound | Southbound 453 | Interface | Interface 454 | | 455 *------------Y---------------------------------Y----------------* 456 | Device and resource Abstraction Layer (DAL) | 457 *------------Y---------------------------------Y----------------* 458 | | | | 459 | o-------Y----------o +-----+ o--------Y----------o | 460 | | Forwarding Plane | | App | | Operational Plane | | 461 | o------------------o +-----+ o-------------------o | 462 | Network Device | 463 +---------------------------------------------------------------+ 465 Figure 4: SDN Layer Architecture 467 3.3. Mobile Edge Computing 469 Mobile Edge Computing capabilities deployed in the edge of the mobile 470 network can facilitate the efficient and dynamic provision of 471 services to mobile users. The ETSI ISG MEC working group, operative 472 from end of 2014, intends to specify an open environment for 473 integrating MEC capabilities with service providers networks, 474 including also applications from 3rd parties. These distributed 475 computing capabilities will make available IT infrastructure as in a 476 cloud environment for the deployment of functions in mobile access 477 networks. It can be seen then as a complement to both NFV and SDN. 479 3.4. IEEE 802.1CF (OmniRAN) 481 The IEEE 802.1CF Recommended Practice specifies an access network, 482 which connects terminals to their access routers, utilizing 483 technologies based on the family of IEEE 802 Standards (e.g., 802.3 484 Ethernet, 802.11 Wi-Fi, etc.). The specification defines an access 485 network reference model, including entities and reference points 486 along with behavioral and functional descriptions of communications 487 among those entities. 489 The goal of this project is to help unifying the support of different 490 interfaces, enabling shared network control and use of software 491 defined network (SDN) principles, thereby lowering the barriers to 492 new network technologies, to new network operators, and to new 493 service providers. 495 3.5. Distributed Management Task Force 497 The DMTF is an industry standards organization working to simplify 498 the manageability of network-accessible technologies through open and 499 collaborative efforts by some technology companies. 
   The DMTF is involved in the creation and adoption of interoperable
   management standards, supporting implementations that enable the
   management of diverse traditional and emerging technologies
   including cloud, virtualization, network and infrastructure.

   There are several DMTF initiatives that are relevant to the network
   virtualization area, such as the Open Virtualization Format (OVF),
   for VNF packaging; the Cloud Infrastructure Management Interface
   (CIMI), for cloud infrastructure management; the Network Management
   (NETMAN) initiative, for VNF management; and the Virtualization
   Management (VMAN) initiative, for virtualization infrastructure
   management.

3.6.  Open Source initiatives

   The Open Source community is especially active in the area of
   network virtualization.  We next summarize some of the active
   efforts:

   o  OpenStack.  OpenStack is a free and open-source cloud-computing
      software platform.  OpenStack software controls large pools of
      compute, storage, and networking resources throughout a
      datacenter, managed through a dashboard or via the OpenStack API.

   o  OpenDaylight.  OpenDaylight (ODL) is a highly available, modular,
      extensible, scalable and multi-protocol controller infrastructure
      built for SDN deployments on modern heterogeneous multi-vendor
      networks.  It provides a model-driven service abstraction
      platform that allows users to write apps that easily work across
      a wide variety of hardware and southbound protocols.

   o  ONOS.  The ONOS (Open Network Operating System) project is an
      open source community hosted by The Linux Foundation.  The goal
      of the project is to create a software-defined networking (SDN)
      operating system for communications service providers that is
      designed for scalability, high performance and high availability.

   o  OpenContrail.  OpenContrail is an Apache 2.0-licensed project
      that is built using standards-based protocols and provides all
      the necessary components for network virtualization: an SDN
      controller, a virtual router, an analytics engine, and published
      northbound APIs.  It has an extensive REST API to configure and
      gather operational and analytics data from the system.

   o  OPNFV.  OPNFV is a carrier-grade, integrated, open source
      platform to accelerate the introduction of new NFV products and
      services.  By integrating components from upstream projects, the
      OPNFV community aims at conducting performance and use case-based
      testing to ensure the platform's suitability for NFV use cases.
      The scope of OPNFV's initial release is focused on building NFV
      Infrastructure (NFVI) and Virtualized Infrastructure Management
      (VIM) by integrating components from upstream projects such as
      OpenDaylight, OpenStack, Ceph Storage, KVM, Open vSwitch, and
      Linux.  These components, along with application programmable
      interfaces (APIs) to other NFV elements, form the basic
      infrastructure required for Virtualized Network Functions (VNF)
      and Management and Network Orchestration (MANO) components.
      OPNFV's goal is to increase performance and power efficiency;
      improve reliability, availability, and serviceability; and
      deliver comprehensive platform instrumentation.

   o  OSM.  Open Source MANO (OSM) is an ETSI-hosted project to develop
      an Open Source NFV Management and Orchestration (MANO) software
      stack aligned with ETSI NFV.
      OSM is based on components from previous projects, such as
      Telefonica's OpenMANO or Canonical's Juju, among others.

   o  OpenBaton.  OpenBaton is an ETSI NFV compliant Network Function
      Virtualization Orchestrator (NFVO).  OpenBaton was part of the
      OpenSDNCore project, started with the objective of providing a
      compliant implementation of the ETSI NFV specification.

   Among the main areas being developed by the aforementioned open
   source activities that relate to network virtualization research, we
   can highlight: policy-based resource management, analytics for
   visibility and orchestration, and service verification with regard
   to security and resiliency.

4.  Network Virtualization at IETF/IRTF

4.1.  SDN RG

   The SDNRG provides the grounds for an open-minded investigation of
   Software Defined Networking.  It aims at identifying approaches that
   can be defined and used in the near term, as well as the research
   challenges in the field.  As such, the SDNRG will not define
   standards, but provides input to standards defining and standards
   producing organizations.

   The SDNRG is working on classifying SDN models, including
   definitions and taxonomies.  It is also studying the complexity,
   scalability and applicability of the SDN model.  Additionally, the
   SDNRG is working on network description languages (and associated
   tools), abstractions and interfaces.  It also investigates the
   verification of correct operation of network or node functions.

   The SDNRG has produced a reference layer model in RFC7426 [RFC7426],
   which structures SDN in planes and layers that are glued together by
   different abstraction layers.  This architecture differentiates
   between the control and the management planes and provides for
   differentiated southbound interfaces (SBIs).

4.2.  SFC WG

   Current network services deployed by operators often involve the
   composition of several individual functions (such as packet
   filtering, deep packet inspection, or load balancing).  These
   services are typically implemented by the ordered combination of a
   number of service functions that are deployed at different points
   within a network, not necessarily on the direct data path.  This
   requires traffic to be steered through the required service
   functions, wherever they are deployed.

   For a given service, the abstracted view of the required service
   functions and the order in which they are to be applied is called a
   Service Function Chain (SFC), which is called a Network Function
   Forwarding Graph (NF-FG) by ETSI.  An SFC is instantiated through
   the selection of specific service function instances on specific
   network nodes to form a service graph: this is called a Service
   Function Path (SFP).  The service functions may be applied at any
   layer within the network protocol stack (network layer, transport
   layer, application layer, etc.).

   The SFC working group is working on an architecture for service
   function chaining that includes the necessary protocols or protocol
   extensions to convey the Service Function Chain and Service Function
   Path information to nodes that are involved in the implementation of
   service functions and Service Function Chains, as well as mechanisms
   for steering traffic through service functions.
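   As a purely illustrative sketch (not the SFC WG's data plane
   encapsulation, and using hypothetical names only), the following
   Python fragment captures the distinction drawn above between an SFC,
   i.e. an abstract ordered set of service functions, and an SFP, i.e.
   a concrete binding of that chain to service function instances on
   specific network nodes:

      # Abstract chain (SFC): ordered service function *types* for a service.
      sfc = ["packet-filter", "deep-packet-inspection", "load-balancer"]

      # Deployed instances: which nodes host which service function types.
      instances = {
          "packet-filter":          ["node-a", "node-d"],
          "deep-packet-inspection": ["node-b"],
          "load-balancer":          ["node-c", "node-e"],
      }

      def instantiate_sfp(sfc, instances, pick=lambda nodes: nodes[0]):
          """Bind each function of the chain to a concrete instance (an SFP)."""
          return [(sf, pick(instances[sf])) for sf in sfc]

      # One possible Service Function Path for the chain above:
      sfp = instantiate_sfp(sfc, instances)
      # [('packet-filter', 'node-a'), ('deep-packet-inspection', 'node-b'),
      #  ('load-balancer', 'node-c')]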
   In terms of actual work items, the SFC WG is chartered to deliver:
   (i) a problem statement document [RFC7498], (ii) an architecture
   document [RFC7665], (iii) a service-level data plane encapsulation
   format (the encapsulation should indicate the sequence of service
   functions that make up the Service Function Chain, specify the
   Service Function Path, and communicate context information between
   nodes that implement service functions and Service Function Chains),
   and (iv) a document describing requirements for conveying
   information between control or management elements and SFC
   implementation points.

   Potential gap: as stated in the SFC charter, any work on the
   management and configuration of SFC components related to the
   support of Service Function Chaining is deferred until it is better
   understood and scoped.  This part is of special interest for
   operators and would be required in order to actually put SFC
   mechanisms into operation.

   Potential gap: redundancy and reliability mechanisms are currently
   not dealt with by any WG in the IETF.  While this has been the main
   goal of the VNFpool BoF efforts, it still remains unaddressed.

4.3.  NVO3 WG

   The Network Virtualization Overlays (NVO3) WG is developing
   protocols that enable network virtualization overlays within large
   Data Center (DC) environments.  Specifically, NVO3 assumes an
   underlying physical Layer 3 (IP) fabric on top of which multiple
   tenant networks are virtualized (i.e., overlays).  With overlays,
   data traffic between tenants is tunneled across the underlying DC's
   IP network.  The use of tunnels provides a number of benefits by
   decoupling the network as viewed by tenants from the underlying
   physical network across which they communicate [I-D.ietf-nvo3-arch].

   Potential gap: it would be worthwhile to see if some of the specific
   approaches developed in this WG (e.g., overlays, traffic isolation,
   VM migration) can be applied outside the DC, and specifically if
   they can be applicable to network virtualization (NFV).  These
   approaches would be most relevant to the ETSI Network Function
   Virtualization Infrastructure (NFVI), and the Virtualized
   Infrastructure Manager part of the MANO.

4.4.  DMM WG

   The Distributed Mobility Management (DMM) WG is looking at solutions
   for IP networks that enable traffic between mobile and correspondent
   nodes to take an optimal route, preventing some of the issues caused
   by the use of centralized mobility solutions, which anchor all the
   traffic at a given node (or a very limited set of nodes).  The DMM
   WG is considering the latest developments in mobile networking
   research and operational practices (i.e., flattening network
   architectures, the impact of virtualization, new deployment needs as
   wireless access technologies evolve in the coming years) and aims at
   describing how distributed mobility management addresses the new
   needs in this area better than previously standardized solutions.

   Although network virtualization is not the main area of the DMM
   work, the impact of SDN and NFV mechanisms is clear on the work that
   is currently being done in the WG.  One example is the architecture
   defined for the virtual Evolved Packet Core (vEPC) in
   [I-D.matsushima-stateless-uplane-vepc].  Here, the authors describe
   a particular realization of the vEPC concept, which is designed to
   support NFV.
In the defined architecture, the user plane of EPC is 690 decoupled from the control-plane and uses routing information to 691 forward packets of mobile nodes. This proposal does not modify the 692 signaling of the EPC control plane, although the EPC control plane 693 runs on an hypervisor. 695 Potential gap: in a vEPC/DMM context, how to run the EPC control 696 plane on NFV. 698 The DMM WG is also looking at ways to supporting the separation of 699 the Control-Plane for mobility- and session management from the 700 actual Data-Plane [I-D.ietf-dmm-fpc-cpdp]. The protocol semantics 701 being defined abstract from the actual details for the configuration 702 of Data-Plane nodes and apply between a Client function, which is 703 used by an application of the mobility Control-Plane, and an Agent 704 function, which is associated with the configuration of Data-Plane 705 nodes according to the policies issued by the mobility Control-Plane. 707 Potential gap: the actual mappings between these generic protocol 708 semantics and the configuration commands required on the data plane 709 network elements are not in the scope of this document, and are 710 therefore a potential gap that will need to be addressed (e.g., for 711 OpenFlow switches). 713 4.5. I2RS WG 715 The Interface to the Routing System (I2RS) WG is developing a high- 716 level architecture that describes the basic building-blocks to access 717 the routing system through a set of protocol-based control or 718 management interfaces. This architecture, as described in 719 [I-D.ietf-i2rs-architecture], comprises an I2RS Agent as a unified 720 interface that is accessed by I2RS clients using the I2RS protocol. 721 The client is controlled by one or more network applications and 722 accesses one or more agents, as shown in the following figure: 724 ****************** ***************** ***************** 725 * Application C * * Application D * * Application E * 726 ****************** ***************** ***************** 727 | | | 728 +--------------+ | +-------------+ 729 | | | 730 *************** 731 * Client P *----------------------+ 732 *************** | 733 *********************** | | 734 * Application A * | | 735 * * | *********************** | 736 * +----------------+ * | * Application B * | 737 * | Client A | * | * * | 738 * +----------------+ * | * +----------------+ * | 739 *********************** | * | Client B | * | 740 | | * +----------------+ * | 741 | +----------------+ *********************** | 742 | | | | | 743 | | +------------------------+ | +-----+ 744 | | | | | 745 ******************************* ******************************* 746 * * * * 747 * Routing Element 1 * * Routing Element 2 * 748 * * * * 749 ******************************* ******************************* 751 Figure 5: High level I2RS architecture 753 Routing elements consist of an agent that communicates with the 754 client or clients driven by the applications and accesses the 755 different subsystems in the element as shown in the following figure: 757 | 758 *****************v************** 759 * +---------------------+ * 760 * | Agent | * 761 * +---------------------+ * 762 * ^ ^ ^ ^ * 763 * | | | | * 764 * | | | +--+ * 765 * | | | | * 766 * v | | v * 767 * +---+-----+ | | +----+---+ * 768 * | Routing | | | | Local | * 769 * | and | | | | Config | * 770 * |Signaling| | | +--------+ * 771 * +---------+ | | ^ * 772 * ^ | | | * 773 * | +----+ | | * 774 * v v v v * 775 * +----------+ +------------+ * 776 * | Dynamic | | Static | * 777 * | System | | System | * 778 
* | State | | State | * 779 * +----------+ +------------+ * 780 * * 781 * Routing Element * 782 ******************************** 784 Figure 6: Architecture of a routing element

   The I2RS architecture proposes to use model-driven APIs.  Services
   can correspond to different data models and agents can indicate
   which model they support.

   Potential gap: network virtualization is not the main aim of the
   I2RS WG.  However, it provides an infrastructure that can be part of
   an SDN deployment.

4.6.  BESS WG

   BGP is already used as a protocol for provisioning and operating
   Layer-3 (routed) Virtual Private Networks (L3VPNs).  The BGP Enabled
   Services (BESS) working group is responsible for defining,
   specifying, and extending network services based on BGP.  In
   particular, the working group will work on the following services:

   o  BGP-enabled VPN solutions for use in data center networking.
      This work includes consideration of VPN scaling issues and
      mechanisms applicable to such environments.

   o  Extensions to BGP-enabled VPN solutions for the construction of
      virtual topologies in support of services such as Service
      Function Chaining.

   Potential gap: the most relevant activity in BESS that would be
   worthwhile to investigate for relevance to network virtualization
   (NFV) is the extension of BGP-enabled VPN solutions to support
   Service Function Chaining [I-D.rfernando-bess-service-chaining].

4.7.  BM WG

   The Benchmarking Methodology Working Group (BMWG) provides
   recommendations concerning the key performance characteristics of
   internetworking technologies, or benchmarks, for network devices,
   systems, and services.  The scope of the BMWG includes benchmarks
   for the management, control, and forwarding planes.

   The main distinguishing characteristic of the BMWG from other IETF
   measurement initiatives like the IPPM WG is that BMWG work is
   limited to the characterization of implementations using controlled
   stimuli in a lab environment.  The BMWG does not attempt to produce
   benchmarks for live, operational networks.

   The BMWG is explicitly tasked to develop benchmarks and
   methodologies for VNF and related infrastructure benchmarking.
   Benchmarking methodologies have reliably characterized many physical
   devices, and this work item extends and enhances those methods to
   virtual network functions (VNFs) and their unique supporting
   infrastructure.  The first deliverable from this activity mentioned
   in the charter of the WG is a document [I-D.ietf-bmwg-virtual-net]
   that considers the new benchmarking space to ensure that common
   issues are recognized from the start, using background materials
   from industry and SDOs (e.g., IETF, ETSI NFV).  This document
   investigates the additional methodological considerations necessary
   when benchmarking VNFs instantiated and hosted in general-purpose
   hardware.  The approach is to benchmark physical and virtual network
   functions in the same way when possible, thereby allowing direct
   comparison, and to also define benchmarks for combinations of
   physical and virtual devices in a System Under Test.

   Benchmarks for platform capacity and performance characteristics of
   virtual routers, switches, and related components will also be
   addressed, including comparisons between physical and virtual
   network functions.
   In many cases, the traditional benchmarks should be applicable to
   VNFs, but the lab set-ups, configurations, and measurement methods
   will likely need to be revised or enhanced.

   There are additional documents of the BMWG relevant to the
   virtualization area, such as:
   [I-D.ietf-bmwg-sdn-controller-benchmark-term],
   [I-D.ietf-bmwg-sdn-controller-benchmark-meth],
   [I-D.kim-bmwg-ha-nfvi] and [I-D.vsperf-bmwg-vswitch-opnfv].

4.8.  TEAS WG

   Transport network infrastructure provides end-to-end connectivity
   for networked applications and services.  Network virtualization
   facilitates effective sharing (or 'slicing') of physical
   infrastructure by representing resources and topologies via
   abstractions, even in a multi-administration, multi-vendor, multi-
   technology environment.  In this way, it becomes possible to
   operate, control and manage multiple physical network elements as a
   single virtualized network.  The users of such a virtualized network
   can control the allocated resources in an optimal and flexible way,
   better adapting to the specific circumstances of higher layer
   applications.

   Abstraction and Control of Transport Networks (ACTN) intends to
   define methods and capabilities for the deployment and operation of
   transport network resources [I-D.ceccarelli-teas-actn-framework].
   This activity is currently being carried out within the Traffic
   Engineering Architecture and Signaling (TEAS) WG.

   Several use cases are being proposed for both fixed and mobile
   scenarios [I-D.leeking-teas-actn-problem-statement].

   Potential gap: several use cases in ACTN are relevant to network
   virtualization (NFV) in mobile environments.  Control of multi-
   tenant mobile backhaul transport networks, mobile virtual network
   operation, etc., can be influenced by the location of the network
   functions.  A control architecture allowing for inter-operation of
   NFV and the transport network (e.g., for combined optimization) is
   one relevant area for research.

4.9.  I2NSF WG

   The I2NSF WG aims at defining interfaces to flow-based network
   security functions (NSFs) hosted by service providers at different
   premises.  The purpose of a Network Security Function (NSF) is to
   ensure the integrity, confidentiality and availability of network
   communications, to detect unwanted activity, and to block it or at
   least mitigate its effects.  NSFs are provided and consumed in
   increasingly diverse environments.  Users of NSFs could consume
   network security services hosted by one or more providers, which may
   be their own enterprise, service providers, or a combination of
   both.  The NSFs may be provided by physical and/or virtualized
   infrastructure.

   Without standard interfaces to express, monitor, and control
   security policies that govern the behavior of NSFs, it becomes
   virtually impossible for security service providers to automate
   service offerings that utilize different security functions from
   multiple vendors.  Based on this, the main goal of I2NSF is to
   define an information model, a set of software interfaces and data
   models for controlling and monitoring aspects of NSFs (both physical
   and virtual) [I-D.jeong-i2nsf-sdn-security-services].

   Since different security vendors may support different features and
   functions on their devices, I2NSF focuses on flow-based NSFs that
   provide treatment to packets/flows.
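   As an informal illustration of what flow-based treatment means in
   this context (this is not an I2NSF information or data model, which
   the WG has yet to develop; all field and action names are
   hypothetical), the following Python sketch expresses security policy
   rules keyed on flow attributes:

      from ipaddress import ip_address, ip_network

      # Hypothetical flow-based security rules: match on flow attributes
      # and apply a treatment such as allow, block or rate-limit.
      rules = [
          {"match": {"ip_proto": "tcp", "dst_port": 23}, "action": "block"},
          {"match": {"src_prefix": "198.51.100.0/24"}, "action": "rate-limit"},
          {"match": {}, "action": "allow"},
      ]

      def matches(flow, match):
          for key, value in match.items():
              if key == "src_prefix":
                  if ip_address(flow["src_ip"]) not in ip_network(value):
                      return False
              elif flow.get(key) != value:
                  return False
          return True

      def treat(flow):
          """Return the action of the first matching rule (default: allow)."""
          for rule in rules:
              if matches(flow, rule["match"]):
                  return rule["action"]
          return "allow"

      print(treat({"src_ip": "203.0.113.7", "ip_proto": "tcp",
                   "dst_port": 23}))
      # -> block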
   The I2NSF WG's target deliverables include: (i) a use cases, problem
   statement and gap analysis document, (ii) a framework document,
   presenting an overview of the use of NSFs and the purpose of the
   models developed by the WG, (iii) a single, unified Information
   Model for controlling and monitoring flow-based NSFs, (iv) the
   corresponding YANG Data Models derived from the Information Model,
   (v) a vendor-neutral vocabulary to enable the characteristics and
   behavior of NSFs to be specified without requiring the NSFs
   themselves to be standardized, and (vi) an examination of existing
   secure communication mechanisms to identify the appropriate ones for
   carrying the controlling and monitoring information between the NSFs
   and their management entities.  The WG is also expected to work
   closely with the I2RS, NETCONF and NETMOD WGs, as well as to
   communicate with external SDOs like ETSI NFV.

   Potential gap: aspects of NSFs such as device or network
   provisioning and configuration are out of scope.

   Potential gap: the use of SDN tools to interact with security
   functions is not explicitly considered, but seems a potential
   approach, as for example described for the particular case of IPsec
   flow protection in [I-D.abad-sdnrg-sdn-ipsec-flow-protection].

4.10.  IPPM WG

   The IP Performance Metrics (IPPM) WG defines metrics that can be
   used to measure the quality and performance of Internet services and
   applications running over transport layer protocols (e.g., TCP, UDP)
   over IP.  It also develops and maintains protocols for the
   measurement of these metrics.  The IPPM WG is a long-running WG that
   started in 1997.  The architecture (framework) for IPPM WG metrics
   and associated protocols is defined in RFC 2330 [RFC2330].  Some
   examples of recent output by the IPPM WG include "A Reference Path
   and Measurement Points for Large-Scale Measurement of Broadband
   Performance" (RFC 7398 [RFC7398]) and "Framework for TCP Throughput
   Testing" (RFC 6349 [RFC6349]).

   The IPPM WG currently does not have a charter item or active drafts
   related to the topic of network virtualization.  On the automation
   and orchestration side, there is an ongoing effort
   [I-D.cmzrjp-ippm-twamp-yang] to define a YANG data model for the
   Two-Way Active Measurement Protocol (TWAMP).

   Potential gap: there is a pressing need to define metrics and
   associated protocols to measure the performance of NFV.
   Specifically, since NFV is based on the concept of taking
   centralized functions and evolving them into highly distributed
   software functions, there is a commensurate need to fully understand
   and measure the baseline performance of such systems.  A potential
   topic for the IPPM WG is defining packet delay and throughput
   metrics, as well as a test framework, for the application traffic
   flowing through the NFVI.

4.11.  NFV RG

   The NFVRG focuses on research problems associated with the
   virtualization of fixed and mobile network infrastructures, new
   network architectures based on virtualized network functions,
   virtualization of the home and enterprise network environments, co-
   existence with non-virtualized infrastructure and services, and
   application to growing areas of concern such as the Internet of
   Things (IoT) and next generation content distribution.
   Another goal of the NFVRG is to bring together a research community
   that can jointly address such problems, concentrating on problems
   that relate not just to networking but also to computing and storage
   constraints in such environments.

   Since the NFVRG is a research group, it has a wide scope.  In order
   to keep the focus, the group has identified some near term work
   items: (i) Policy based Resource Management, (ii) Analytics for
   Visibility and Orchestration, (iii) Virtual Network Function (VNF)
   Performance Modelling to facilitate the transition to NFV, and (iv)
   Security and Service Verification.

4.12.  VNFpool BoF

   The VNFPOOL BoF proposed to work on ways to group Virtual Network
   Functions (VNFs) into pools to improve resilience, provide better
   scale-out and scale-in characteristics, implement stateful failover
   among VNF members of a pool, etc.  Additionally, it proposed to
   create VNF sets from VNF pools.  For this, the BoF proposed to study
   signaling (both between members of a pool and across pools), state
   sharing mechanisms between members of a VNFPOOL, the exchange of
   reliability information between VNF sets, their users and the
   underlying network, and the reliability and security of the control
   plane needed to transport the exchanged information.

   The use cases initially considered by VNFPOOL include Content
   Delivery Networks (CDNs), the LTE mobile core network and reliable
   server pooling.  The VNFPOOL work has been dropped in the IETF.

   Potential gap: VNFPOOL tried to introduce and manage resilience in
   virtualized networking environments and therefore addressed a
   desirable feature for any software defined network.  VNFPOOL has
   also been integrated into the NFV architecture
   [I-D.bernini-nfvrg-vnf-orchestration].

5.  Summary of Gaps

   Potential Gap-1: as stated in the SFC charter, any work on the
   management and configuration of SFC components related to the
   support of Service Function Chaining is deferred until it is better
   understood and scoped.  This part is of special interest for
   operators and would be required in order to actually put SFC
   mechanisms into operation.

   Potential Gap-2: redundancy and reliability mechanisms are currently
   not dealt with by SFC or any other WG in the IETF.  While this has
   been the main goal of the VNFpool BoF efforts, since the VNFPOOL
   work has been dropped for the time being without any WG being
   chartered, the technical topics it aimed at targeting still remain
   unaddressed.

   Potential Gap-3: it would be worthwhile to see if some of the
   specific approaches developed in the NVO3 WG (e.g., overlays,
   traffic isolation, VM migration) can be applied outside the DC, and
   specifically if they can be applicable to network virtualization
   (NFV).  These approaches would be most relevant to the ETSI Network
   Function Virtualization Infrastructure (NFVI), and the Virtualized
   Infrastructure Manager part of the MANO.

   Potential Gap-4: the most relevant activity in BESS that would be
   worthwhile to investigate for relevance to network virtualization
   (NFV) is the extension of BGP-enabled VPN solutions to support
   Service Function Chaining.

   Potential Gap-5: in a vEPC/DMM context, how to run the EPC control
   plane on NFV.
   Potential Gap-6: in DMM, on the work item addressing the separation
   of the Control-Plane for mobility and session management from the
   actual Data-Plane, the actual mappings between these generic
   protocol semantics and the configuration commands required on the
   data plane network elements (e.g., OpenFlow switches) are not
   currently in the scope of the DMM WG.

   Potential Gap-7: network virtualization is not the main aim of the
   I2RS WG.  However, it provides an infrastructure that can be part of
   an SDN deployment.

   Potential Gap-8: VNFPOOL tried to introduce and manage resilience in
   virtualized networking environments and therefore addressed a
   desirable feature for any software defined network.  VNFPOOL has
   also been integrated into the NFV architecture
   [I-D.bernini-nfvrg-vnf-orchestration].

   Potential Gap-9: within the Traffic Engineering Architecture and
   Signaling (TEAS) WG, several use cases in ACTN are relevant to
   network virtualization (NFV) in mobile environments.  Control of
   multi-tenant mobile backhaul transport networks, mobile virtual
   network operation, etc., can be influenced by the location of the
   network functions.  A control architecture allowing for inter-
   operation of NFV and the transport network (e.g., for combined
   optimization) is one relevant area for research.

   Potential Gap-10: within I2NSF, aspects of NSFs such as device or
   network provisioning and configuration are out of scope.

   Potential Gap-11: the use of SDN tools to interact with security
   functions is not explicitly considered in I2NSF, but seems a
   potential approach, as for example described for the particular case
   of IPsec flow protection in
   [I-D.abad-sdnrg-sdn-ipsec-flow-protection].

   Potential Gap-12: there is a pressing need to define metrics and
   associated protocols to measure the performance of NFV.
   Specifically, since NFV is based on the concept of taking
   centralized functions and evolving them into highly distributed
   software functions, there is a commensurate need to fully understand
   and measure the baseline performance of such systems.  A potential
   topic for the IPPM WG is defining packet delay and throughput
   metrics, as well as a test framework, for the application traffic
   flowing through the NFVI.

6.  IANA Considerations

   N/A.

7.  Security Considerations

   TBD.

8.  Acknowledgments

   The authors want to thank Dirk von Hugo, Rafa Marin, Diego Lopez,
   Ramki Krishnan, Kostas Pentikousis, Rana Pratap Sircar and Alfred
   Morton for their very useful reviews of and comments on the
   document.

   The work of Pedro Aranda is supported by the European FP7 Project
   Trilogy2 under grant agreement 317756.

9.  Informative References

   [etsi_nvf_whitepaper]
              "Network Functions Virtualisation (NFV).  White Paper 2",
              October 2014.

   [I-D.abad-sdnrg-sdn-ipsec-flow-protection]
              Abad-Carrascosa, A., Lopez, R., and G. Lopez-Millan,
              "Software-Defined Networking (SDN)-based IPsec Flow
              Protection",
              draft-abad-sdnrg-sdn-ipsec-flow-protection-01 (work in
              progress), October 2015.

   [I-D.bernini-nfvrg-vnf-orchestration]
              Bernini, G., Maffione, V., Lopez, D., and P. Aranda, "VNF
              Pool Orchestration For Automated Resiliency in Service
              Chains", draft-bernini-nfvrg-vnf-orchestration-01 (work
              in progress), October 2015.

   [I-D.ceccarelli-teas-actn-framework]
              Ceccarelli, D. and Y.
Lee, "Framework for Abstraction and 1123 Control of Traffic Engineered Networks", draft-ceccarelli- 1124 teas-actn-framework-01 (work in progress), March 2016. 1126 [I-D.cmzrjp-ippm-twamp-yang] 1127 Civil, R., Morton, A., Zheng, L., Rahman, R., 1128 Jethanandani, M., and K. Pentikousis, "Two-Way Active 1129 Measurement Protocol (TWAMP) Data Model", draft-cmzrjp- 1130 ippm-twamp-yang-02 (work in progress), October 2015. 1132 [I-D.ietf-bmwg-sdn-controller-benchmark-meth] 1133 Vengainathan, B., Basil, A., Tassinari, M., Manral, V., 1134 and S. Banks, "Benchmarking Methodology for SDN Controller 1135 Performance", draft-ietf-bmwg-sdn-controller-benchmark- 1136 meth-01 (work in progress), March 2016. 1138 [I-D.ietf-bmwg-sdn-controller-benchmark-term] 1139 Vengainathan, B., Basil, A., Tassinari, M., Manral, V., 1140 and S. Banks, "Terminology for Benchmarking SDN Controller 1141 Performance", draft-ietf-bmwg-sdn-controller-benchmark- 1142 term-01 (work in progress), March 2016. 1144 [I-D.ietf-bmwg-virtual-net] 1145 Morton, A., "Considerations for Benchmarking Virtual 1146 Network Functions and Their Infrastructure", draft-ietf- 1147 bmwg-virtual-net-01 (work in progress), September 2015. 1149 [I-D.ietf-dmm-fpc-cpdp] 1150 Liebsch, M., Matsushima, S., Gundavelli, S., and D. Moses, 1151 "Protocol for Forwarding Policy Configuration (FPC) in 1152 DMM", draft-ietf-dmm-fpc-cpdp-01 (work in progress), July 1153 2015. 1155 [I-D.ietf-i2rs-architecture] 1156 Atlas, A., Halpern, J., Hares, S., Ward, D., and T. 1157 Nadeau, "An Architecture for the Interface to the Routing 1158 System", draft-ietf-i2rs-architecture-13 (work in 1159 progress), February 2016. 1161 [I-D.ietf-nvo3-arch] 1162 Black, D., Hudson, J., Kreeger, L., Lasserre, M., and T. 1163 Narten, "An Architecture for Overlay Networks (NVO3)", 1164 draft-ietf-nvo3-arch-04 (work in progress), October 2015. 1166 [I-D.jeong-i2nsf-sdn-security-services] 1167 Jeong, J., Kim, H., Jung-Soo, P., Ahn, T., and s. 1168 sehuilee@kt.com, "Software-Defined Networking Based 1169 Security Services using Interface to Network Security 1170 Functions", draft-jeong-i2nsf-sdn-security-services-04 1171 (work in progress), March 2016. 1173 [I-D.kim-bmwg-ha-nfvi] 1174 Kim, T. and E. Paik, "Considerations for Benchmarking High 1175 Availability of NFV Infrastructure", draft-kim-bmwg-ha- 1176 nfvi-00 (work in progress), October 2015. 1178 [I-D.leeking-teas-actn-problem-statement] 1179 Lee, Y., King, D., Boucadair, M., Jing, R., and L. 1180 Contreras, "Problem Statement for Abstraction and Control 1181 of Transport Networks", draft-leeking-teas-actn-problem- 1182 statement-00 (work in progress), June 2015. 1184 [I-D.matsushima-stateless-uplane-vepc] 1185 Matsushima, S. and R. Wakikawa, "Stateless user-plane 1186 architecture for virtualized EPC (vEPC)", draft- 1187 matsushima-stateless-uplane-vepc-05 (work in progress), 1188 September 2015. 1190 [I-D.rfernando-bess-service-chaining] 1191 Fernando, R., Rao, D., Fang, L., Napierala, M., So, N., 1192 and A. Farrel, "Virtual Topologies for Service Chaining in 1193 BGP/IP MPLS VPNs", draft-rfernando-bess-service- 1194 chaining-01 (work in progress), April 2015. 1196 [I-D.vsperf-bmwg-vswitch-opnfv] 1197 Tahhan, M., O'Mahony, B., and A. Morton, "Benchmarking 1198 Virtual Switches in OPNFV", draft-vsperf-bmwg-vswitch- 1199 opnfv-01 (work in progress), October 2015. 1201 [RFC2330] Paxson, V., Almes, G., Mahdavi, J., and M. Mathis, 1202 "Framework for IP Performance Metrics", RFC 2330, 1203 DOI 10.17487/RFC2330, May 1998, 1204 . 
1206 [RFC6349] Constantine, B., Forget, G., Geib, R., and R. Schrage, 1207 "Framework for TCP Throughput Testing", RFC 6349, 1208 DOI 10.17487/RFC6349, August 2011, 1209 <http://www.rfc-editor.org/info/rfc6349>. 1211 [RFC7398] Bagnulo, M., Burbridge, T., Crawford, S., Eardley, P., and 1212 A. Morton, "A Reference Path and Measurement Points for 1213 Large-Scale Measurement of Broadband Performance", 1214 RFC 7398, DOI 10.17487/RFC7398, February 2015, 1215 <http://www.rfc-editor.org/info/rfc7398>. 1217 [RFC7426] Haleplidis, E., Ed., Pentikousis, K., Ed., Denazis, S., 1218 Hadi Salim, J., Meyer, D., and O. Koufopavlou, "Software- 1219 Defined Networking (SDN): Layers and Architecture 1220 Terminology", RFC 7426, DOI 10.17487/RFC7426, January 1221 2015, <http://www.rfc-editor.org/info/rfc7426>. 1223 [RFC7498] Quinn, P., Ed. and T. Nadeau, Ed., "Problem Statement for 1224 Service Function Chaining", RFC 7498, 1225 DOI 10.17487/RFC7498, April 2015, 1226 <http://www.rfc-editor.org/info/rfc7498>. 1228 [RFC7665] Halpern, J., Ed. and C. Pignataro, Ed., "Service Function 1229 Chaining (SFC) Architecture", RFC 7665, 1230 DOI 10.17487/RFC7665, October 2015, 1231 <http://www.rfc-editor.org/info/rfc7665>. 1233 Appendix A. The mobile network use case 1235 A.1. The 3GPP Evolved Packet System 1237 TBD. This will include a high-level summary of the 3GPP EPS 1238 architecture, detailing both the EPC (core) and the RAN (access) 1239 parts. A link with the two related ETSI NFV use cases 1240 (Virtualisation of Mobile Core Network and IMS, and Virtualisation of 1241 Mobile base station) will be included. 1243 The EPS architecture and some of its standardized interfaces are 1244 depicted in Figure 7. The EPS provides IP connectivity to user 1245 equipment (UE) (i.e., mobile nodes) and access to operator services, 1246 such as global Internet access and voice communications. The EPS 1247 comprises the core network -- called Evolved Packet Core (EPC) -- and 1248 different radio access networks: the 3GPP Access Network (AN), the 1249 Untrusted non-3GPP AN and the Trusted non-3GPP AN. There are 1250 different types of 3GPP ANs, with the Evolved UMTS Terrestrial Radio 1251 Access Network (E-UTRAN) as the most advanced one. QoS is supported 1252 through the EPS bearer concept, providing bindings to resource 1253 reservation within the network. 1255 The evolved NodeB (eNB), the Long Term Evolution (LTE) base station, 1256 is part of the access network and provides radio resource 1257 management, header compression, security and connectivity to the core 1258 network through the S1 interface. In an LTE network, the control 1259 plane signaling traffic and the data traffic are handled separately. 1260 The eNBs transmit the control traffic and the data traffic via 1261 two logically separate interfaces. 1263 The Home Subscriber Server, HSS, is a database that contains user 1264 subscriptions and QoS profiles. The Mobility Management Entity, MME, 1265 is responsible for mobility management, user authentication, bearer 1266 establishment and modification, and maintenance of the UE context. 1268 The Serving Gateway, S-GW, is the mobility anchor and manages the 1269 user plane data tunnels during inter-eNB handovers. It tunnels 1270 all user data packets and buffers downlink IP packets destined for 1271 UEs that happen to be in idle mode. 1273 The Packet Data Network (PDN) Gateway, P-GW, is responsible for IP 1274 address allocation to the UE and is a tunnel endpoint for user and 1275 control plane protocols. It is also responsible for charging, packet 1276 filtering, and policy-based control of flows. It interconnects the 1277 mobile network to external IP networks, e.g., the Internet.
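   To make the division of responsibilities more concrete, the toy
   model below (an illustrative sketch only; it does not reproduce the
   actual 3GPP procedures or message formats, and all names and fields
   are invented) shows how the entities described above cooperate when
   a UE attaches: the MME checks the subscription against the HSS, the
   P-GW allocates the UE's IP address, and the S-GW anchors the
   resulting user-plane tunnel state.

      # Illustrative toy model of the EPS entities described above.
      # It is NOT the 3GPP attach procedure; names and fields are
      # invented for clarity.

      class HSS:
          """Subscription database: subscriber IDs -> QoS profiles."""
          def __init__(self, subscriptions):
              self.subscriptions = subscriptions

          def lookup(self, subscriber_id):
              return self.subscriptions.get(subscriber_id)

      class PGW:
          """Allocates UE IP addresses, connects to external networks."""
          def __init__(self, ip_pool):
              self.ip_pool = list(ip_pool)

          def allocate_ip(self):
              return self.ip_pool.pop(0)

      class SGW:
          """User-plane anchor: per-UE tunnel state towards the eNB."""
          def __init__(self):
              self.tunnels = {}

          def create_tunnel(self, subscriber_id, enb_id, ue_ip):
              self.tunnels[subscriber_id] = {"enb": enb_id, "ue_ip": ue_ip}
              return self.tunnels[subscriber_id]

      class MME:
          """Control plane: authentication and bearer establishment."""
          def __init__(self, hss, sgw, pgw):
              self.hss, self.sgw, self.pgw = hss, sgw, pgw

          def attach(self, subscriber_id, enb_id):
              qos = self.hss.lookup(subscriber_id)
              if qos is None:
                  return None                 # unknown subscriber, reject
              ue_ip = self.pgw.allocate_ip()  # P-GW assigns the address
              bearer = self.sgw.create_tunnel(subscriber_id, enb_id, ue_ip)
              return {"qos": qos, "bearer": bearer}

      if __name__ == "__main__":
          mme = MME(HSS({"imsi-1": "default-qos"}), SGW(), PGW(["10.0.0.1"]))
          print(mme.attach("imsi-1", enb_id="enb-7"))

   In the virtualized EPS discussed in Appendix A.2, each of these
   entities would correspond to one or more VNF instances, and the
   S-GW/P-GW tunnel state becomes user-plane state that the NFV
   orchestration has to place, scale and migrate.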
1279 In this architecture, data packets are not sent directly on an IP 1280 network between the eNB and the gateways. Instead, every packet is 1281 tunneled using the GPRS Tunneling Protocol (GTP) 1282 over UDP/IP. A GTP path is identified in each node with the IP 1283 address and a UDP port number on the eNB/gateways. The GTP protocol 1284 carries both the data traffic (GTP-U tunnels) and the control traffic 1285 (GTP-C tunnels). Alternatively, Proxy Mobile IPv6 (PMIPv6) can be used on 1286 the S5 interface between the S-GW and the P-GW. 1288 In addition to the above basic functions and entities, there are also 1289 additional features being discussed by the 3GPP that are relevant 1290 from a network virtualization viewpoint. One example is the Traffic 1291 Detection Function (TDF), which can be used by the P-GW, and in 1292 general by the whole transport network, to decide how to forward the 1293 traffic. In a virtualized infrastructure, this kind of information 1294 can be used to elastically and dynamically adapt the network capabilities 1295 to the traffic nature and volume. 1297 +---------------------------------------------------------+ 1298 | PCRF | 1299 +-----------+--------------------------+----------------+-+ 1300 | | | 1301 +----+ +-----------+------------+ +--------+-----------+ +-+-+ 1302 | | | +-+ | | Core Network | | | 1303 | | | +------+ |S|__ | | +--------+ +---+ | | | 1304 | | | |GERAN/|_|G| \ | | | HSS | | | | | | 1305 | +-----+ UTRAN| |S| \ | | +---+----+ | | | | E | 1306 | | | +------+ |N| +-+-+ | | | | | | | x | 1307 | | | +-+ /|MME| | | +---+----+ | | | | t | 1308 | | | +---------+ / +---+ | | | 3GPP | | | | | e | 1309 | +-----+ E-UTRAN |/ | | | AAA | | | | | r | 1310 | | | +---------+\ | | | SERVER | | | | | n | 1311 | | | \ +---+ | | +--------+ | | | | a | 1312 | | | 3GPP AN \|SGW+----- S5---------------+ P | | | l | 1313 | | | +---+ | | | G | | | | 1314 | | +------------------------+ | | W | | | I | 1315 | UE | | | | | | P | 1316 | | +------------------------+ | | +-----+ | 1317 | | |+-------------+ +------+| | | | | | n | 1318 | | || Untrusted +-+ ePDG +-S2b---------------+ | | | e | 1319 | +---+| non-3GPP AN | +------+| | | | | | t | 1320 | | |+-------------+ | | | | | | w | 1321 | | +------------------------+ | | | | | o | 1322 | | | | | | | r | 1323 | | +------------------------+ | | | | | k | 1324 | +---+ Trusted non-3GPP AN +-S2a--------------+ | | | s | 1325 | | +------------------------+ | | | | | | 1326 | | | +-+-+ | | | 1327 | +--------------------------S2c--------------------| | | | 1328 | | | | | | 1329 +----+ +--------------------+ +---+ 1331 Figure 7: EPS (non-roaming) architecture overview 1333 A.2. Virtualizing the 3GPP EPS 1335 TBD. We will describe what a "virtual EPS" (vEPS) would look like and 1336 the gaps that exist from the point of view of network 1337 virtualization. 1339 Authors' Addresses 1340 Carlos J. Bernardos 1341 Universidad Carlos III de Madrid 1342 Av.
Universidad, 30 1343 Leganes, Madrid 28911 1344 Spain 1346 Phone: +34 91624 6236 1347 Email: cjbc@it.uc3m.es 1348 URI: http://www.it.uc3m.es/cjbc/ 1350 Akbar Rahman 1351 InterDigital Communications, LLC 1352 1000 Sherbrooke Street West, 10th floor 1353 Montreal, Quebec H3A 3G4 1354 Canada 1356 Email: Akbar.Rahman@InterDigital.com 1357 URI: http://www.InterDigital.com/ 1359 Juan Carlos Zuniga 1360 InterDigital Communications, LLC 1361 1000 Sherbrooke Street West, 10th floor 1362 Montreal, Quebec H3A 3G4 1363 Canada 1365 Email: JuanCarlos.Zuniga@InterDigital.com 1366 URI: http://www.InterDigital.com/ 1368 Luis M. Contreras 1369 Telefonica I+D 1370 Ronda de la Comunicacion, S/N 1371 Madrid 28050 1372 Spain 1374 Email: luismiguel.conterasmurillo@telefonica.com 1376 Pedro Aranda 1377 Telefonica I+D 1378 Ronda de la Comunicacion, S/N 1379 Madrid 28050 1380 Spain 1382 Email: pedroa.aranda@telefonica.com