NFV RG                                                 CJ. Bernardos, Ed.
Internet-Draft                                                       UC3M
Intended status: Informational                              LM. Contreras
Expires: September 6, 2018                                            TID
                                                             I. Vaishnavi
                                                                   Huawei
                                                                 R. Szabo
                                                                 Ericsson
                                                               J. Mangues
                                                                     CTTC
                                                                    X. Li
                                                                      NEC
                                                              F. Paolucci
                                                           A. Sgambelluri
                                                               B. Martini
                                                           L. Valcarenghi
                                                                     SSSA
                                                                 G. Landi
                                                                Nextworks
                                                             D. Andrushko
                                                                 MIRANTIS
                                                                A. Mourad
                                                             InterDigital
                                                            March 5, 2018


                   Multi-domain Network Virtualization
                   draft-bernardos-nfvrg-multidomain-04

Abstract

   This document analyzes the problem of multi-provider multi-domain
   orchestration, by first scoping the problem, then looking into
   potential architectural approaches, and finally describing the
   solutions being developed by the European 5GEx and 5G-TRANSFORMER
   projects.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 6, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction . . . . . . . . . . . . . . . . . . . . . . . .   3
   2.  Terminology  . . . . . . . . . . . . . . . . . . . . . . . .   4
   3.  Background: the ETSI NFV architecture  . . . . . . . . . . .   5
   4.  Multi-domain problem statement . . . . . . . . . . . . . . .   8
   5.  Multi-domain architectural approaches  . . . . . . . . . . .   9
     5.1.  ETSI NFV approaches  . . . . . . . . . . . . . . . . . .   9
     5.2.  Hierarchical . . . . . . . . . . . . . . . . . . . . . .  17
     5.3.  Cascading  . . . . . . . . . . . . . . . . . . . . . . .  20
   6.  Virtualization and Control for Multi-Provider Multi-Domain .  20
     6.1.  Interworking interfaces  . . . . . . . . . . . . . . . .  22
     6.2.  5GEx Multi Architecture  . . . . . . . . . . . . . . . .  23
     6.3.  5G-TRANSFORMER Architecture  . . . . . . . . . . . . . .  26
       6.3.1.  So-Mtp Interface (IF3) . . . . . . . . . . . . . . .  28
       6.3.2.  So-So Interface (IF2)  . . . . . . . . . . . . . . .  29
       6.3.3.  Vs-So Interface (IF1)  . . . . . . . . . . . . . . .  30
   7.  Multi-domain orchestration and Open Source . . . . . . . . .  31
   8.  IANA Considerations  . . . . . . . . . . . . . . . . . . . .  32
   9.  Security Considerations . . . . . . . . . . . . . . . . . .   32
   10. Acknowledgments  . . . . . . . . . . . . . . . . . . . . . .  32
   11. Informative References . . . . . . . . . . . . . . . . . . .  33
   Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . .  33

1.  Introduction

   The telecommunications sector is experiencing a major revolution
   that will shape the way networks and services are designed and
   deployed for the next decade.  We are witnessing an explosion in the
   number of applications and services demanded by users, who are now
   able to access them while on the move.  In order to cope with such a
   demand, some network operators are looking at the cloud computing
   paradigm, which enables a potential reduction of the overall costs
   by outsourcing communication services from specific hardware in the
   operator's core to server farms scattered in datacenters.  These
   services have different characteristics compared with conventional
   IT services, which have to be taken into account in this
   cloudification process.  The transport network is also affected, as
   it is evolving to a more sophisticated form of IP architecture, with
   trends like the separation of control and data plane traffic, and
   more fine-grained forwarding of packets (beyond looking at the
   destination IP address) in the network to fulfill new business and
   service goals.

   Virtualization of functions also provides operators with tools to
   deploy new services much faster, as compared to the traditional use
   of monolithic and tightly integrated dedicated machinery.  As a
   natural next step, mobile network operators need to re-think how to
   evolve their existing network infrastructures and how to deploy new
   ones to address the challenges posed by increasing customer demands,
   as well as by the huge competition among operators.
   All these changes are triggering the need for a modification in the
   way operators and infrastructure providers operate their networks,
   as they need to significantly reduce the costs incurred in deploying
   a new service and operating it.  Some of the mechanisms that are
   being considered and already adopted by operators include: sharing
   of network infrastructure to reduce costs, virtualization of core
   servers running in data centers as a way of supporting their load-
   aware elastic dimensioning, and dynamic energy policies to reduce
   the monthly electricity bill.  However, this has proved tough to put
   into practice, and not enough.  Indeed, it is not easy to deploy new
   mechanisms in a running operational network due to the high
   dependency on proprietary (and sometimes obscure) protocols and
   interfaces, which are complex to manage and often require
   configuring multiple devices in a decentralized way.

   Furthermore, 5G networks are being designed to be capable of
   fulfilling the needs of a plethora of vertical industries (e.g.,
   automotive, eHealth, media), which have a wide variety of
   requirements [ngmn_5g_whitepaper].  The slicing concept tries to
   make the network of the provider aware of the business needs of
   tenants (e.g., vertical industries) by customizing the share of the
   network assigned to them.  The term network slice was coined to
   refer to a complete logical network composed of network functions
   and the resources to run them [ngmn_slicing].  These resources
   include network, storage, and computing.  The way in which services
   requested by customers of the provider are assigned to slices
   depends on customer needs and provider policies.  The system must be
   flexible to accommodate a variety of options.

   Another characteristic of current and future telecommunication
   networks is complexity.  It comes from three main aspects.  First,
   heterogeneous technologies are often separated into multiple domains
   under the supervision of different network managers, which exchange
   provisioning orders that are handled manually.  This happens not
   only between different operators, but also inside the network of the
   same operator.  Second, the different regional scope of each
   operator requires peering with others to extend their reach.  And
   third, there is an increasing variety of interactions among
   specialized providers (e.g., mobile operators, cloud service
   providers, transport network providers) that complement each other
   to satisfy the service requests from customers.  In conclusion,
   realizing the slicing vision to adapt the network to the needs of
   verticals will require handling multi-provider and multi-domain
   aspects.

   Additionally, Network Function Virtualization (NFV) and Software
   Defined Networking (SDN) are changing the way the telecommunications
   sector will deploy, extend and operate its networks.  Together, they
   bring the required programmability and flexibility.  Moreover, these
   concepts and network slicing are tightly related.  In fact, slices
   may be implemented as NFV network services.  However, building a
   complete end-to-end logical network will likely require stitching
   together services offered by multiple domains from multiple
   providers.  This is why multi-domain network virtualization is
   crucial in 5G networks.

2.  Terminology
   The following terms used in this document are defined by the ETSI
   NFV ISG, the ONF, and the IETF:

   NFV Infrastructure (NFVI): totality of all hardware and software
   components which build up the environment in which VNFs are
   deployed.

   NFV Management and Orchestration (NFV-MANO): functions collectively
   provided by NFVO, VNFM, and VIM.

   NFV Orchestrator (NFVO): functional block that manages the Network
   Service (NS) lifecycle and coordinates the management of NS
   lifecycle, VNF lifecycle (supported by the VNFM) and NFVI resources
   (supported by the VIM) to ensure an optimized allocation of the
   necessary resources and connectivity.

   Network Service Orchestration (NSO): function responsible for the
   Network Service lifecycle management, including operations such as:
   On-board Network Service, Instantiate Network Service, Scale Network
   Service, Update Network Service, etc.

   OpenFlow protocol (OFP): protocol allowing vendor-independent
   programming of control functions in network nodes.

   Resource Orchestration (RO): subset of NFV Orchestrator functions
   that are responsible for global resource management governance.

   Service Function Chain (SFC): for a given service, the abstracted
   view of the required service functions and the order in which they
   are to be applied.  This is somewhat equivalent to the Network
   Function Forwarding Graph (NF-FG) at ETSI.

   Service Function Path (SFP): the selection of specific service
   function instances on specific network nodes to form a service graph
   through which an SFC is instantiated.

   Virtualized Infrastructure Manager (VIM): functional block that is
   responsible for controlling and managing the NFVI compute, storage
   and network resources, usually within one operator's Infrastructure
   Domain.

   Virtualized Network Function (VNF): implementation of a Network
   Function that can be deployed on a Network Function Virtualization
   Infrastructure (NFVI).

   Virtualized Network Function Manager (VNFM): functional block that
   is responsible for the lifecycle management of VNFs.

3.  Background: the ETSI NFV architecture

   The ETSI ISG NFV is a working group which, since 2012, aims to
   evolve quasi-standard IT virtualization technology to consolidate
   many network equipment types into industry standard high volume
   servers, switches, and storage.  It enables implementing network
   functions in software that can run on a range of industry standard
   server hardware and can be moved to, or loaded in, various locations
   in the network as required, without the need to install new
   equipment.  To date, ETSI NFV is by far the most accepted NFV
   reference framework and architectural footprint
   [etsi_nvf_whitepaper].  The ETSI NFV architectural framework is
   composed of three domains (Figure 1):

   o  Virtualized Network Functions, running over the NFVI.

   o  NFV Infrastructure (NFVI), including the diversity of physical
      resources and how these can be virtualized.  The NFVI supports
      the execution of the VNFs.

   o  NFV Management and Orchestration, which covers the orchestration
      and life-cycle management of physical and/or software resources
      that support the infrastructure virtualization, and the life-
      cycle management of VNFs.  NFV Management and Orchestration
      focuses on all virtualization-specific management tasks necessary
      in the NFV framework.
252 +-------------------------------------------+ +---------------+ 253 | Virtualized Network Functions (VNFs) | | | 254 | ------- ------- ------- ------- | | | 255 | | | | | | | | | | | | 256 | | VNF | | VNF | | VNF | | VNF | | | | 257 | | | | | | | | | | | | 258 | ------- ------- ------- ------- | | | 259 +-------------------------------------------+ | | 260 | | 261 +-------------------------------------------+ | | 262 | NFV Infrastructure (NFVI) | | NFV | 263 | ----------- ----------- ----------- | | Management | 264 | | Virtual | | Virtual | | Virtual | | | and | 265 | | Compute | | Storage | | Network | | | Orchestration | 266 | ----------- ----------- ----------- | | | 267 | +---------------------------------------+ | | | 268 | | Virtualization Layer | | | | 269 | +---------------------------------------+ | | | 270 | +---------------------------------------+ | | | 271 | | ----------- ----------- ----------- | | | | 272 | | | Compute | | Storage | | Network | | | | | 273 | | ----------- ----------- ----------- | | | | 274 | | Hardware resources | | | | 275 | +---------------------------------------+ | | | 276 +-------------------------------------------+ +---------------+ 278 Figure 1: ETSI NFV framework 280 The NFV architectural framework identifies functional blocks and the 281 main reference points between such blocks. Some of these are already 282 present in current deployments, whilst others might be necessary 283 additions in order to support the virtualization process and 284 consequent operation. The functional blocks are (Figure 2): 286 o Virtualized Network Function (VNF). 288 o Element Management (EM). 290 o NFV Infrastructure, including: Hardware and virtualized resources, 291 and Virtualization Layer. 293 o Virtualized Infrastructure Manager(s) (VIM). 295 o NFV Orchestrator. 297 o VNF Manager(s). 299 o Service, VNF and Infrastructure Description. 301 o Operations and Business Support Systems (OSS/BSS). 
303 +--------------------+ 304 +-------------------------------------------+ | ---------------- | 305 | OSS/BSS | | | NFV | | 306 +-------------------------------------------+ | | Orchestrator +-- | 307 | ---+------------ | | 308 +-------------------------------------------+ | | | | 309 | --------- --------- --------- | | | | | 310 | | EM 1 | | EM 2 | | EM 3 | | | | | | 311 | ----+---- ----+---- ----+---- | | ---+---------- | | 312 | | | | |--|-| VNF | | | 313 | ----+---- ----+---- ----+---- | | | manager(s) | | | 314 | | VNF 1 | | VNF 2 | | VNF 3 | | | ---+---------- | | 315 | ----+---- ----+---- ----+---- | | | | | 316 +------|-------------|-------------|--------+ | | | | 317 | | | | | | | 318 +------+-------------+-------------+--------+ | | | | 319 | NFV Infrastructure (NFVI) | | | | | 320 | ----------- ----------- ----------- | | | | | 321 | | Virtual | | Virtual | | Virtual | | | | | | 322 | | Compute | | Storage | | Network | | | | | | 323 | ----------- ----------- ----------- | | ---+------ | | 324 | +---------------------------------------+ | | | | | | 325 | | Virtualization Layer | |--|-| VIM(s) +-------- | 326 | +---------------------------------------+ | | | | | 327 | +---------------------------------------+ | | ---------- | 328 | | ----------- ----------- ----------- | | | | 329 | | | Compute | | Storage | | Network | | | | | 330 | | | hardware| | hardware| | hardware| | | | | 331 | | ----------- ----------- ----------- | | | | 332 | | Hardware resources | | | NFV Management | 333 | +---------------------------------------+ | | and Orchestration | 334 +-------------------------------------------+ +--------------------+ 336 Figure 2: ETSI NFV reference architecture 338 4. Multi-domain problem statement 340 Market fragmentation results from having a multitude of 341 telecommunications network and cloud operators each with a footprint 342 focused to a specific region. This makes it difficult to deploy cost 343 effective infrastructure services, such as virtual connectivity or 344 compute resources, spanning multiple countries as no single operator 345 has a big enough footprint. Even if operators largely aim to provide 346 the same infrastructure services (VPN connectivity, compute resources 347 based on virtual machines and block storage), inter-operator 348 collaboration tools for providing a service spanning several 349 administrative boundaries are very limited and cumbersome. This 350 makes service development and provisioning very time consuming. For 351 example, having a VPN with end-points in several countries, in order 352 to connect multiple sites of a business (such as a hotel chain), 353 requires contacting several network operators. Such an approach is 354 possible only with significant effort and integration work from the 355 side of the business. This is not only slow, but also inefficient 356 and expensive, since the business also needs to employ networking 357 specialists to do the integration instead of focusing on its core 358 business 360 Technology fragmentation also represents a major bottleneck 361 internally for an operator. Different networks and different parts 362 of a network may be built as different domains using separate 363 technologies, such as optical or packet switched (with different 364 packet switching paradigms included); having equipment from different 365 vendors; having different control paradigms, etc. Managing and 366 integrating these separate technology domains requires substantial 367 amount of effort, expertise, and time. 
   The associated costs are paid by both network operators and vendors
   alike, who need to design equipment and develop complex integration
   features.  In addition to technology domains, there are other
   reasons for having multiple domains within an operator, such as
   different geographies, different performance characteristics,
   scalability, policy, or simply historical reasons (e.g., the result
   of a merger or an acquisition).  Multiple domains in a network are a
   necessary and permanent feature; however, they should not be a
   roadblock to service development and provisioning, which should be
   fast and efficient.

   A solution is needed that both deals with the multi-operator
   collaboration issue and addresses the multi-domain problem within a
   single network operator.  While these two problems are quite
   different, they also share a lot of common aspects and can benefit
   from having a number of common tools to solve them.

5.  Multi-domain architectural approaches

   This section summarizes different architectural options that can be
   considered to tackle the multi-domain orchestration problem.

5.1.  ETSI NFV approaches

   Recently, the ETSI NFV ISG has started to look into viable
   architectural options supporting the placement of functions in
   different administrative domains.  In the document
   [etsi_nvf_ifa009], different approaches are considered, which we
   summarize next.

   The first option (shown in Figure 3) is based on a split of the NFVO
   into a Network Service Orchestrator (NSO) and a Resource
   Orchestrator (RO).  A use case that this separation could enable is
   the following: a network operator offering its infrastructure to
   different departments within the same operator, as well as to a
   different network operator, as in the case of network sharing
   agreements.  In this scenario, an administrative domain can be
   defined as one or more data centers and VIMs, providing an
   abstracted view of the resources hosted in it.

   A service is orchestrated out of VNFs that can run on infrastructure
   provided and managed by another Service Provider.  The NSO manages
   the lifecycle of network services, while the RO provides an overall
   view of the resources present in the administrative domain to which
   it provides access and hides the interfaces of the VIMs present
   below it.

                                -------
                                | NSO |
                               /-------\
                              /         \
                 --------    / --------  \    --------
                 | VNFM |   |  | VNFM |   |   | VNFM |
                 --------   /  --------    \  --------
                    /  ____/ /     \   \____  \
                   / /  ________/   \________ \ \
                  / / /                      \ \ \
     +-----------/-/-/---------+  +----------\-\-\----------+
     |        ---------        |  |        ---------        |
     |        |  RO   |        |  |        |  RO   |        |
     |        ---------        |  |        ---------        |
     |        /   |   \        |  |        /   |   \        |
     |       /    |    \       |  |       /    |    \       |
     |      /     |     \      |  |      /     |     \      |
     | ------- ------- ------- |  | ------- ------- ------- |
     | |VIM 1| |VIM 2| |VIM 3| |  | |VIM 1| |VIM 2| |VIM 3| |
     | ------- ------- ------- |  | ------- ------- ------- |
     | Administrative domain A |  | Administrative domain B |
     +-------------------------+  +-------------------------+

     Figure 3: Infrastructure provided using multiple administrative
               domains (from ETSI GS NFV-IFA 009 V1.1.1)

   The second option (shown in Figure 4) is based on having an umbrella
   NFVO.
A use case enabled by this is the following: a Network 442 Operator offers Network Services to different departments within the 443 same operator, as well as to a different network operator like in 444 cases of network sharing agreements. In this scenario, an 445 administrative domain is compose of one or more Datacentres, VIMs, 446 VNFMs (together with their related VNFs) and NFVO, allowing distinct 447 specific sets of network services to be hosted and offered on each. 449 A top Network Service can include another Network Service. A Network 450 Service containing other Network Services might also contain VNFs. 451 The NFVO in each admin domain provides visibility of the Network 452 Services specific to this admin domain. The umbrella NFVO is 453 providing the lifecycle management of umbrella network services 454 defined in this NFVO. In each admin domain, the NFVO is providing 455 standard NFVO functionalities, with a scope limited to the network 456 services, VNFs and resources that are part of its admin domain. 458 ------------ 459 | Umbrella | 460 | NFVO | 461 ------------ 462 / | \ 463 / | \ 464 / -------- \ 465 / | VNFM | \ 466 / -------- \ 467 / | \ 468 / ------- \ 469 / |VIM 1| \ 470 / ------- \ 471 --------------/------------ -------------\------------- 472 | -------- | | -------- | 473 | | NFVO | | | | NFVO | | 474 | -------- | | -------- | 475 | | | | | | | | | | 476 | -------- | | | -------- | | -------- | | | -------- | 477 | | VNFM | | | | | VNFM | | | | VNFM | | | | | VNFM | | 478 | -------- | | | -------- | | -------- | | | -------- | 479 | | \__/__|__\_/_ | | | | \__/__|__\_/_ | | 480 | | __/___|___/\ \ | | | | __/___|___/\ \ | | 481 | | / / | \ \ | | | | / / | \ \ | | 482 | ------- ------- ------- | | ------- ------- ------- | 483 | |VIM 1| |VIM 2| |VIM 3| | | |VIM 1| |VIM 2| |VIM 3| | 484 | ------- ------- ------- | | ------- ------- ------- | 485 | Administrative domain A | | Administrative domain B | 486 +-------------------------+ +-------------------------+ 488 Figure 4: Network services provided using multiple administrative 489 domains (from ETSI GS NFV-IFA 009 V1.1.1) 491 More recently, ETSI NFV has released a new whitepaper, titled 492 "Network Operator Perspectives on NFV priorities for 5G" 493 [etsi_nvf_whitepaper_5g], which provides network operator 494 perspectives on NFV priorities for 5G and identifies common technical 495 features in terms of NFV. This whitepaper identifies multi-site/ 496 multi-tenant orchestration as one key priority. ETSI highlights the 497 support of Infrastructure as a Service (IaaS), NFV as a Service 498 (NFVaaS) and Network Service (NS) composition in different 499 administrative domains (for example roaming scenarios in wireless 500 networks) as critical for the 5G work. 502 In January 2018 ETSI NFV released a report about NFV MANO 503 architectural options to support multiple administrative domains 504 [etsi_nvf_ifa028]. This report presents two use cases: the NFVI as a 505 Service (NFVIaaS) case, where a service provider runs VNFs inside an 506 NFVI operated by a different service provider, and the case of 507 Network Services (NS) offered by multiple administrative domains, 508 where an organization uses NS(s) offered by another organization. 510 In the NFVIaaS use case, the NFVIaaS consumer runs VNF instances 511 inside an NFVI provided by a different service provider, called 512 NFVIaaS provider, that offers computing, storage, and networking 513 resources to the NFVIaaS consumer. 
   Therefore, the NFVIaaS consumer has control over the applications
   that run on the virtual resources, but not over the underlying
   infrastructure, which is instead managed by the NFVIaaS provider.
   In this scenario, the NFVIaaS provider's domain is composed of one
   or more NFVI-PoPs and VIMs, while the NFVIaaS consumer's domain
   includes one or more NSs and VNFs managed by its own NFVO and VNFMs,
   as depicted in Figure 5.

   +------------------------------------------------+
   |    NFVIaaS consumer's administrative domain    |
   |                                                |
   |                  +----------+                  |
   |                  |  NS(s)   |                  |
   |                  +----------+                  |
   |                                                |
   |  +----------+    +----------+    +----------+  |
   |  |  VNF(s)  |    | VNFM(s)  |    |   NFVO   |  |
   |  +----------+    +----------+    +----------+  |
   |                                                |
   +-------------------------+----------------------+
                             +
     Administrative domain   +
   ++++++++++++++++++++++++++++++++++++++++++++++++++
     boundary                +   NFVIaaS
                             +
   +-------------------------+----------------------+
   |                                                |
   |     +----------+            +-----------+      |
   |     |   NFVI   |            |  VIM(s)   |      |
   |     +----------+            +-----------+      |
   |                                                |
   +------------------------------------------------+

                  Figure 5: NFVIaaS use case

   ETSI IFA 028 defines two main options to model the interfaces
   between the NFVIaaS provider and consumer for NFVIaaS service
   requests, as follows:

   1.  Access to Multiple Logical Points of Contact (MLPOC) in the
       NFVIaaS provider's administrative domain.  In this case the
       NFVIaaS consumer has visibility of the NFVIaaS provider's VIMs
       and interacts with each of them to issue NFVIaaS service
       requests, through the Or-Vi (IFA 005) or Vi-Vnfm (IFA 006)
       reference points.

   2.  Access to a Single Logical Point of Contact (SLPOC) in the
       NFVIaaS provider's administrative domain.  In this case the
       NFVIaaS provider's VIMs are hidden from the NFVIaaS consumer and
       a single unified interface is exposed by the SLPOC to the
       NFVIaaS consumer.  The SLPOC manages the information about the
       organization, the availability and the utilization of the
       infrastructure resources, forwarding the requests from the
       NFVIaaS consumer to the VIMs.  The interaction between the SLPOC
       and the NFVIaaS consumer is based on the IFA 005 or IFA 006
       interfaces, while the interface between the SLPOC and the
       underlying VIMs is based on IFA 005.

   The two options are shown in Figure 6 and Figure 7, respectively,
   where we assume the direct mode for the management of VNF resources.
   In addition, ETSI IFA 028 includes the possibility of an indirect
   management mode of the VNF resources through the NFVIaaS consumer's
   NFVO and the IFA 007 interface.  In this latter case, only the IFA
   005 interface is used between the NFVIaaS consumer's NFVO and the
   NFVIaaS provider's NFVO.
580 +------------------------------------------------+ 581 | NFVIaaS consumer's administrative domain | 582 | | 583 | +-----------+ +------------+ | 584 | | VNFM |---+ | NFVO | | 585 | +-+---------+ |---+ +-+--+-------+ | 586 | + +-+---------+ | + + | 587 | + + +----+----+-+ + + | 588 | + + + + + + | 589 +----+---+----+------+-------+----+--------------+ 590 + + + + + + 591 IFA 006 --+---+---+-------+-- --+----+-- IFA 005 592 + + + + + + 593 +----+---+--+--------+----+----+-----------------+ 594 | + + + ++++++++++ + | 595 | +-+---+--+-+ + + + | 596 | | VIM |-+-+ + + | 597 | +----------+ |--+-+ + | 598 | +----------+ ++++ | 599 | | VIM | | 600 | +-----------+ | 601 | NFVIaaS provider's administrative domain | 602 +------------------------------------------------+ 604 Figure 6: NFVIaaS architecture: MLPOC option 606 +------------------------------------------------+ 607 | NFVIaaS consumer's administrative domain | 608 | | 609 | +-----------+ +------------+ | 610 | | VNFM |---+ | NFVO | | 611 | +-+---------+ |---+ +-+----------+ | 612 | + +-----------+ | + | 613 | + | VNFM | + | 614 | + +---------+-+ + | 615 | + + + | 616 +-------+------------+-------+-------------------+ 617 + + + 618 IFA 006 -------+----------+-- --+-- IFA 005 619 + + + 620 +-----------+--------+----+----------------------+ 621 | + + + | 622 | +---+------+--+--+ | 623 | | SLPOC function | | 624 | +-+---+---+------+ | 625 | + + + | 626 | ---+-----+---+--- IFA 005 | 627 | + + + | 628 | +----+-----+ + + | 629 | | VIM |-+-+ + | 630 | +----------+ |-+-+ | 631 | +----------+ | | 632 | | VIM | | 633 | +----------+ | 634 | NFVIaaS provider's administrative domain | 635 +------------------------------------------------+ 637 Figure 7: NFVIaaS architecture: SLPOC option 639 In the use case related to Network Services provided using multiple 640 administrative domains, each domain includes an NFVO and one or more 641 NFVI PoPs, VIMs and VNFMs. The NFVO in each domain offers a 642 catalogue of Network Services that can be used to deploy nested NSs, 643 which in turn can be composed into composite NSs, as shown in 644 Figure 8. Nested NSs can be also shared among different composite 645 NSs. 647 | 648 ***********+*************************************** 649 * | * 650 * +-------+------+ * 651 * | | * 652 --+--+ Nested NS A +-------+ * 653 * | | +--------+ * 654 * +-------+------+ | * 655 * | | * 656 * | +-----+--------+ * 657 * | | | * 658 * +-----------------+ Nested NS B +-----+--- 659 * | | * 660 * +-------+------+ * 661 * Composite NS C | * 662 * | * 663 *************************************+************* 664 | 666 Figure 8: Composite and nested NSs 668 The management of the NS hierarchy is handled through a hierarchy of 669 NFVOs, with one of them responsible for the instantiation and 670 lifecycle management of the composite NS, coordinating the actions of 671 the other NFVOs that manage the nested NSs. These two different 672 kinds of NFVOs interact through a new reference point, named Or-Or, 673 as shown in Figure 9, where NFVO-1 manages composite NSs and NFVO-2 674 manages nested NSs. To build the composite NSs, the responsible NFVO 675 consult its own catalogue and may subscribe to the NSD notifications 676 sent by other NFVOs. 
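   The relationship between composite and nested NSs can be illustrated
   with a small data-model sketch.  The snippet below is only an
   illustration loosely inspired by the ETSI NFV NSD information model;
   the class and field names are hypothetical and are not taken from
   IFA 014.

   <CODE BEGINS>
   # Minimal sketch of how a composite NS could reference nested NSs
   # offered by other administrative domains.  Class and field names
   # are hypothetical.
   from dataclasses import dataclass, field
   from typing import List


   @dataclass
   class NsDescriptor:
       nsd_id: str
       provider_domain: str       # domain whose NFVO offers this NS
       vnfd_ids: List[str] = field(default_factory=list)
       nested_nsd_ids: List[str] = field(default_factory=list)


   # Nested NSs offered by administrative domains A and B.
   nested_a = NsDescriptor("nested-ns-a", provider_domain="domain-A",
                           vnfd_ids=["vnf-1", "vnf-2"])
   nested_b = NsDescriptor("nested-ns-b", provider_domain="domain-B",
                           vnfd_ids=["vnf-3"])

   # Composite NS C references the nested NSs; lifecycle requests for
   # them would flow over the Or-Or reference point towards the NFVOs
   # managing the nested NSs.
   composite_c = NsDescriptor("composite-ns-c", provider_domain="domain-C",
                              nested_nsd_ids=[nested_a.nsd_id,
                                              nested_b.nsd_id])
   <CODE ENDS>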
678 +---------------------------------------------+ 679 | | 680 | +-------------+ | 681 | +++++++ VNFM1-1 | | 682 | +-----------+ + +-------------+ | 683 | | NFVO-1 ++++++ | 684 | +---+-------+ + +-------------+ | 685 | + +++++++ VNFM1-2 | | 686 | + +-------------+ | 687 | + Administrative domain C | 688 +--------+------------------------------------+ 689 + 690 + 691 + Or-Or 692 + 693 +--------+------------------------------------+ 694 | + | 695 | + +-------------+ | 696 | + +++++++ VNFM2-1 | | 697 | +---+-------+ + +-------------+ | 698 | | NFVO-2 ++++++ | 699 | +-----------+ + +-------------+ | 700 | +++++++ VNFM2-2 | | 701 | +-------------+ | 702 | Administrative domain A | 703 +---------------------------------------------+ 705 Figure 9: Architecture for management of composite and nested NS 707 5.2. Hierarchical 709 Considering the potential split of the NFVO into a Network Service 710 Orchestrator (NSO) and a Resource Orchestrator (RO), multi-provider 711 hierarchical interfaces may exist at their northbound APIs. 712 Figure 10 illustrates the various interconnection options, namely: 714 E/NSO (External NSO): an evolved NFVO northbound API based on 715 Network Service (NS). 717 E/RO (External RO): VNF-FG oriented resource embedding service. A 718 received VNF-FG that is mapped to the northbound resource view is 719 embedded into the distributed resources collected from southbound, 720 i.e., VNF-FG_in = VNF-FG_out_1 + VNF-FG_out_2 + ... + VNF- 721 FG_out_N, where VNF-FG_out_j corresponds to a spatial embedding to 722 subordinate domain "j". For example, Provider 3's MP-NFVO/RO 723 creates VNF-FG corresponding to its E/RO and E/VIM sub-domains. 725 E/VIM (External VIM): a generic VIM interface offered to an 726 external consumer. In this case the NFVI-PoP may be shared for 727 multiple consumers, each seeing a dedicated NFVI-PoP. This 728 corresponds to IaaS interface. 730 I/NSO (Internal NSO): if a Multi-provider NSO (MP-NSO) is 731 separated from the provider's operational NSO, e.g., due to 732 different operational policies, the MP-NSO may need this interface 733 to realize its northbound E/NSO requests. Provider 1 illustrates 734 a scenario the MP-NSO and the NSO are logically separated. 735 Observe that Provider 1's tenants connect to the NSO and MP-NSO 736 corresponds to "wholesale" services. 738 I/RO (Internal RO): VNF-FG oriented resource embedding service. A 739 received VNF-FG that is mapped to the northbound resource view is 740 embedded into the distributed resources collected from southbound, 741 i.e., VNF-FG_in = VNF-FG_out_1 + VNF-FG_out_2 + ... + VNF- 742 FG_out_N, where VNF-FG_out_j corresponds to a spatial embedding to 743 subordinate domain "j". For example, Provider 1's MP-NFVO/RO 744 creates VNF-FG corresponding to its I/RO and I/VIM sub-domains. 746 I/VIM (Internal VIM): a generic VIM interface at an NFVI-PoP. 748 Nfvo-Vim: a generic VIM interface between a (monolithic) NFVO and 749 a VIM. 751 Some questions arise from this. It would be good to explore use- 752 cases and potential benefits for the above multi-provider interfaces 753 as well as to learn how much they may differ from their existing 754 counterparts. For example, are (E/RO, I/RO), (E/NSO, I/NSO), (E/VIM, 755 I/VIM) pairs different? 756 Tenants 757 * Provider | 758 * * Domain 4 +--+-----------+ 759 * * |MP-NFVO/NSO: | 760 * * |Network Serv. 
| 761 * Provider * |Orchestrator | 762 * Domain 3 * +--+-----------+ 763 * Tenants * |E/RO 764 * | ************|************* 765 * ++-------------+ | 766 * |MP-NFVO/NSO: | | 767 Provider * |Network Serv. | | 768 Domain 1 * |Orchestrator | | 769 * +-+-----+------+ | 770 * E/NSO| | I/RO / 771 *.---------' +-+---------+--+ 772 /* |MP-NFVO/RO: | 773 / * |Resource | 774 Tenants / * |Orchestrator | 775 | | * +--+---+-------+ 776 | +-----------+--+ *************|***|******************** 777 | |MP-NFVO/NSO: | | * \ Provider 778 | |Network Serv. | E/RO / * \ E/VIM Domain 2 779 | |Orchestrator | .-----------' * `-------. 780 | +-+------+-----+ | * | 781 | |I/NSO |I/RO | * | 782 | | +--+--------+--+ * | 783 | | |MP-NFVO/RO: | * | 784 | | |Resource | * | 785 \ | |Orchestrator | * +------+-------+ 786 \ | +----+---- --+-+ * |VIM: | 787 +--+-----+ |I/RO |I/VIM * |Virtualized | 788 |NFVO/NSO| | | * |Pys mapping | 789 +------+-+ | | * +--------------+ 790 I/RO| | | * 791 +------+----+---+ | * 792 | NFVO/RO | | * 793 ++-------------++ | * 794 |Nfvo-Vim | | * 795 ++-------+ ++----+--+ * 796 |WIM|VIM || |VIM|WIM | * 797 +--------+| +--------+ * 798 +--------+ * 800 Figure 10: NSO-RO Split: possible multi-provider APIs - an 801 illustration 803 5.3. Cascading 805 Cascading is an alternative way of relationship among providers, from 806 the network service point of view. In this case, service 807 decomposition is implemented in a paired basis. This can be extended 808 in a recursive manner, then allowing for a concatenation of cascaded 809 relations between providers. 811 As a complement to this, from a service perspective, the cascading of 812 two remote providers (i.e., providers not directly interconnected) 813 could require the participation of a third provider (or more) 814 facilitating the necessary communication among the other two. In 815 that sense, the final service involves two providers while the 816 connectivity imposes the participation of more parties at resource 817 level. 819 6. Virtualization and Control for Multi-Provider Multi-Domain 821 Orchestration operation in multi-domain is somewhat different from 822 that in a single domain as the assumption in single domain single 823 provider orchestration is that the orchestrator is aware of the 824 entire topology and resource availability within its domain as well 825 as has complete control over those resources. This assumption of 826 technical control cannot be made in a multi domain scenario, 827 furthermore the assumption of the knowledge of the resources and 828 topologies cannot be made across providers. In such a scenario 829 solutions are required that enable the exchange of relevant 830 information across these orchestrators. This exchange needs to be 831 standardized as shown in Figure 11. 833 | | 834 + IF1 + 835 _____|____ ____|_____ 836 | Multi | IF2 | Multi | 837 | Provider |<--------+---------->| Provider | 838 |___Orch___| |___Orch___| 839 /\ /\ 840 / \ / \ 841 / \ IF3 / \ 842 _______/__ _\_________ ________/_ _\________ 843 | Domain | | Domain | | Domain | | Domain | 844 |___Orch___| |___Orch___| |___Orch___| |___Orch___| 846 Figure 11: Multi Domain Multi Provider reference architecture 848 The figure shows the Multi Provider orchestrator exposing an 849 interface 1 (IF1) to the tenant, interface 2 (IF2) to other Multi 850 Provider Orchestrator (MPO) and an interface 3 (IF3) to individual 851 domain orchestratrators. Each one of these interfaces could be a 852 possible standardization candidate. 
   Interface 1 is exposed to the tenant, who can request specific
   services and/or slices to be deployed.  Interface 2 connects MPOs of
   different providers and is a key interface to enable multi-provider
   operation.  Interface 3 focuses on abstracting the technology- or
   vendor-dependent implementation details to support orchestration.

   The proposed operation of the MPO follows three main technical
   steps.  First, over interface 2, information such as the abstracted
   topology, pricing and service details is discovered.  Second, once a
   request for deploying a service is received over interface 1, the
   Multi-Provider Orchestrator selects the best orchestrators to
   implement parts of this request.  The requests to deploy these parts
   are sent to the different domain orchestrators over IF2 and IF3, and
   the acknowledgements that these parts are deployed in the different
   domains are received back over those interfaces.  Third, on receipt
   of the acknowledgements, slice-specific assurance management is
   started within the MPO.  This assurance function collects the
   appropriate information over IF2 and IF3 and reports the performance
   back to the tenant over IF1.  The assurance function is also
   responsible for detecting any failures in the service and violations
   of the SLA, and for recommending to the orchestration engine the
   reconfiguration of the service or slice, which again needs to be
   performed over IF2 and IF3.

   Each of the three steps is assigned to a specific block in our high-
   level architecture shown in Figure 12.

                  |                                     |
                  + IF1                                 +
     _____________|_____________                    ____|_____
    |    Multi Provider Orch    |                  |  Multi   |
    | ______   ______   _______ |<-------+-------->| Provider |
    ||Assur-| |      | | Catal-||        IF2       |___Orch___|
    || ance | | NFVO | | logue ||
    || Mgmt.| |      | | Topo. ||
    ||______| |______| |_Mgmt._||
    |___________________________|
                  /\
                 /  \  IF3

           Figure 12: Detailed MPO reference architecture

   The catalogue and topology management system is responsible for step
   1.  It discovers the services as well as the resources exposed by
   the other domains, both over IF2 and IF3.  The combination of these
   services, with coverage over the discovered topology, is provided to
   the user over IF1.  In turn, the catalogue and topology management
   system is also responsible for exposing the topology and service
   deployment capabilities to the other domains.  The exposure over
   interface 2 to other MPOs may be abstracted, with the mapping
   between this abstracted view and the real view performed by the NFVO
   when requested.

   The NFVO (Network Function Virtualization Orchestrator) is
   responsible for the second step.  It deploys the service or slice,
   as received from the tenant, over IF2 and IF3.  It then hands over
   the deployment decisions to the assurance management subsystem,
   which uses this information to collect the periodic monitoring
   tickets in step 3.  On the other end, it is responsible for
   receiving requests over IF2 to deploy a part of a service, for
   consulting the catalogue and topology management system on the
   translation of the abstraction in the received request, and then for
   the actual deployment over the domains using IF3.  The result of
   this deployment, together with the management and control handles to
   access the deployed slice or service, is then returned to the
   requesting MPO.
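   As an illustration of step 2 and of the hand-over to step 3, the
   following minimal sketch shows how an MPO might split a tenant
   request and dispatch the parts over IF2 and IF3.  All class and
   method names are hypothetical assumptions; a real MPO would use the
   IF1/IF2/IF3 protocols rather than local Python calls.

   <CODE BEGINS>
   # Minimal, illustrative sketch of the MPO deployment step.
   from dataclasses import dataclass
   from typing import Dict, List


   class OrchestratorClient:
       """Stub for a domain orchestrator reached over IF3 or a peer MPO
       reached over IF2."""

       def instantiate(self, descriptor: dict) -> str:
           """Deploy the given part and return a management handle."""
           raise NotImplementedError


   @dataclass
   class ServicePart:
       part_id: str
       descriptor: dict       # portion of the service/slice to deploy
       target_domain: str     # local domain orchestrator or peer MPO


   @dataclass
   class ServiceRequest:
       service_id: str
       descriptor: dict       # request received from the tenant over IF1


   class MultiProviderOrchestrator:
       def __init__(self, local_domains: Dict[str, OrchestratorClient],
                    peer_mpos: Dict[str, OrchestratorClient]):
           self.local_domains = local_domains   # reachable over IF3
           self.peer_mpos = peer_mpos           # reachable over IF2

       def deploy(self, request: ServiceRequest) -> List[str]:
           """Step 2: split the request and dispatch the parts."""
           handles = []
           for part in self.decompose(request):
               if part.target_domain in self.local_domains:
                   # IF3: ask a local domain orchestrator to deploy it.
                   client = self.local_domains[part.target_domain]
               else:
                   # IF2: delegate the part to another provider's MPO.
                   client = self.peer_mpos[part.target_domain]
               handles.append(client.instantiate(part.descriptor))
           # Step 3: assurance management would now start monitoring
           # the deployed parts using these handles (over IF2/IF3).
           return handles

       def decompose(self, request: ServiceRequest) -> List[ServicePart]:
           # Naive placeholder: a real MPO would consult its catalogue
           # and topology management system (step 1) to select the best
           # domain(s) for each part of the request.
           target = next(iter({**self.local_domains, **self.peer_mpos}))
           return [ServicePart(part_id=request.service_id + "-part-1",
                               descriptor=request.descriptor,
                               target_domain=target)]
   <CODE ENDS>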
   The assurance management component periodically studies the
   collected results to report the overall service performance to the
   tenant or the requesting MPO, as well as to ensure that the service
   is functioning within the specified parameters.  In case of failures
   or violations, the assurance management system recommends
   reconfigurations to the NFVO.

6.1.  Interworking interfaces

   In this section we provide more details on the interworking
   interfaces of the MPO reference architecture.  Each interface IF1,
   IF2 and IF3 is broken down into several sub-interfaces.  Each of
   them has a clear scope and functionality.

   For multi-provider Network Service orchestration, the Multi-domain
   Orchestrator (MdO), i.e., the MPO, offers Network Services by
   exposing an OSS/BSS - NFVO interface to other MPOs belonging to
   other providers.  For multi-provider resource orchestration, the MPO
   presents a VIM-like view and exposes an extended NFVO - VIM
   interface to other MPOs.  The MPO exposes a northbound sub-interface
   (IF1-S) through which an MPO customer sends the initial request for
   services.  It handles command and control functions to instantiate
   network services.  Such functions include requesting the
   instantiation and interconnection of Network Functions (NFs).  A
   sub-interface IF2-S is defined to perform similar operations between
   MPOs of different administrative domains.  A set of sub-interfaces
   -- IF3-R and IF2-R -- is used to keep an updated global view of the
   underlying infrastructure topology exposed by domain orchestrators.
   The service catalogue exposes available services to customers on a
   sub-interface IF1-C and to other MPO service operators on sub-
   interface IF2-C.  Resource orchestration related interfaces are
   broken up into IF2-RC, IF2-RT and IF2-RMon, reflecting resource
   control, resource topology and resource monitoring, respectively.
   Furthermore, the sub-interfaces introduced before are generalised
   and also used for interfaces IF3 and IF1.

6.2.  5GEx Multi Architecture

   The 5G-PPP H2020 5GEx project addresses the proposal and deployment
   of a complete Multi-Provider Orchestrator providing, besides network
   and service orchestration, service exposure to other providers.  The
   main assumptions of the 5GEx functional architecture are a) a multi-
   operator wholesale relationship, b) full multi-vendor
   interoperability, and c) a technology-agnostic approach to physical
   resources.  The proposed functional architecture of the 5GEx MPO is
   depicted in Figure 13.
963 ^ ^ 964 I1-S | | 965 I1-F | I1-C | 966 I1-RM| | 967 +----------------------------------------------------+ 968 | +-------------------------------------|--+| 969 | | | | || I2-S 970 | | +--------------------+ | || I2-F 971 |+---+ | | +-----+ +---+ IP- | | || I2-RC 972 ||OSS|<----|-| | NSO | |RO | NFVO +<-------------|--+|--------------> 973 |+---+ | | +-----+ +---+ |<-------------+ || 974 | ^ | +---^----------------+ | || 975 | | | | ^ ^ ^^ ^ | || 976 | | | +---+---+ | | || | | || 977 | +---------| VNF | | | || | Multi- | || 978 | | |Manager| | | || | Provider | || 979 | | ++------+ | | || | Orchestrator | || 980 | | ^ | | || | (MPO) | || 981 | | +--------+ | || | | || 982 | | | +-------+ || | | || I2-Mon 983 | | | |SLA |<-|-|---------------|--+|--------------> 984 | | | |Manager| || | | || 985 | | | +-------+ || | | || 986 | | | ^ || +------------+ | ||I2-RT-advertise 987 | | | | || |Topology | | ||I2-RT-bilateral 988 | | | | || |Distribution|<-|--+|--------------> 989 | | | | || |Repository | | || 990 | | | | || +--------^+--+ | || 991 | | | | || ^ || | || 992 | | | | || | +---+v-+ | ||I2-RC-network 993 | | | | |+---|--+MD-PCE|<--|--+|--------------> 994 | | | | | | +------+ | || 995 | | | | | | ^ +-------+-+||I2-C-advertise 996 | | | | | | | |Service |||I2-C-bilateral 997 | | | | | | | |Catalogue+<|--------------> 998 | | | | | | | +---------+|| 999 | | | | | | | ^ || 1000 | +--|----- -|-------|----|---|------|-----+| 1001 | | | | | | | 1002 | |I3-RC | I3-S| | |I3-RC-network| 1003 | +--+--+ | +----+ | +---+ | | 1004 | | VIM | | |NFVO| | |PCE| | | 1005 | +-----+ | +----+ | +---+ | | 1006 | | | | | 1007 | | I3-RT| |I3-C | 1008 | I3-Mon | +------+----+ +---+-----+| 1009 | +---------+-+ |Topology | |Service || 1010 |Operator | Monitoring| |Abstraction| |Catalogue|| 1011 |Domain +-----------+ +-----------+ +---------+| 1012 +----------------------------------------------------+ 1014 Figure 13: 5GEx MPO functional architecture 1016 Providers expose MPOs service specification API allowing OSS/BSS or 1017 external business customers to perform and select their requirements 1018 for a service. Interface I1-x is exploited as a northbound API for 1019 business client requests. Peer MPO-MPO communications implementing 1020 multi-operator orchestration operate with specific interfaces 1021 referred to as I2-x interfaces. A number of I2-based interfaces are 1022 provided for communication between specific MPO modules: I2-S for 1023 service orchestration, I2-RC for network resource control, I2-F for 1024 management lifecycle, I2-Mon for inter-operator monitoring messages, 1025 I2-RT for resource advertisement, I2-C for service catalogue 1026 exchange, I2-RC-network for the QoS connectivity resource control. 1027 Some I2 interfaces are bilateral, involving direct relationship 1028 between two operators, and utilized to exchange business/SLA 1029 agreements before entering the federation of inter-operator 1030 orchestrators. Each MPO communicates through a set of southbound 1031 interface, I3-x, with local orchestrators/controllers/VIM, in order 1032 to set/modify/release resources identified by the MPO or during 1033 inter-MPO orchestration phase. A number of I3 interfaces are 1034 defined: I3-S for service orchestration towards local NFVO, I3-RC for 1035 resource orchestration towards local VIM, I3-C towards local service 1036 catalogue, I3-RT towards local abstraction topology module, I3-RC- 1037 network towards local PCE or network controller, I3-Mon towards local 1038 Resource Monitoring agent. 
   All the considered interfaces are provided to cover either flat
   orchestration or layered/hierarchical orchestration.  The
   possibility of hierarchical inter-provider MPO interaction is
   enabled at a functional level, e.g., in the case of operators
   managing a high number of large administrative domains.  The main
   MPO modules are the following:

      the Inter-provider NFVO, including the RO and the NSO,
      implementing the multi-provider service decomposition;

      the VNF/Element Manager, managing the VNF lifecycle and scaling,
      and responsible for FCAPS (Fault, Configuration, Accounting,
      Performance and Security management);

      the SLA Manager, in charge of reporting monitoring and
      performance alerts on the service graph;

      the Service Catalogue, exposing available services to external
      clients and operators;

      the Topology and Resource Distribution module and Repository,
      exchanging operators' topologies (both IT and network resources)
      and providing an abstracted view of the operator's own topology;

      the Multi-domain Path Computation Element (MD-PCE), implementing
      inter-operator path computation to allow QoS-based connectivity
      serving VNF-to-VNF links.

   The Inter-provider NFVO selects the providers to be involved in the
   requested service chain, according to policy-based decisions and
   resorting to inter-provider topologies and service catalogues
   advertised through the I2-RT-advertise and I2-C-advertise
   interfaces, respectively.  Network and service requests are sent to
   other providers using the I2-RC and I2-S interfaces, respectively.
   Policy enforcement for authorized providers running resource
   orchestration and lifecycle management is performed through the
   I2-RC and I2-F interfaces, respectively.  The VNF/Element Manager is
   in charge of managing the lifecycle of the VNFs that are part of the
   services.  More specifically, it is in charge of performing the
   configuration of the VNFs (also in terms of security aspects), fault
   recovery, and scaling according to their performance.  The SLA
   Manager collects and aggregates quality measurement reports from
   probes deployed by the Inter-provider NFVO as part of the service
   setup.  Measurement results at the SLA Manager represent aggregated
   results and are computed and stored using the I2-Mon interface
   between inter-provider MPOs sharing the same service.  Faults and
   alarms are moreover correlated to raise SLA violations towards
   remote inter-provider MPOs and, optionally, to detect the source and
   the location of the violation, triggering service re-computation/
   rerouting procedures.  The Service Catalogue stores information on
   network services and available VNFs and uses the I2-C interfaces
   (either bilateral or advertised) to advertise and update such
   offered services to other operators.  To enable inter-provider
   service decomposition, multi-operator topology and peering
   relationships need to be advertised.  Providers advertise basic
   inter-provider topologies using the I2-RT-advertise interface
   including, optionally, abstracted network resources, overall IT
   resource capabilities, the MPO entry-point and the MD-PCE IP
   address.  Basic advertisement takes place between adjacent
   operators.  This information is collected, filtered by policy rules
   and propagated hop-by-hop.  In 5GEx, the I2-RT-advertise interface
   uses the BGP-LS protocol.
   Moreover, providers establish point-to-point bilateral (i.e., direct
   and exclusive) communications to exchange additional topology and
   business information, using the I2-RT-bilateral interface.  Service
   decomposition may imply the instantiation of traffic-engineered
   multi-provider connectivity, subject to constraints such as
   guaranteed bandwidth, latency or minimum TE metric.  The multi-
   domain PCE (MD-PCE) receives the connectivity request from the
   inter-provider NFVO and performs inter-operator path computation to
   instantiate QoS-based connectivity between two VNFs (e.g., Label
   Switched Paths).  Two procedures are run sequentially:

      operator/domain sequence computation, based on the topology
      database provided by the Topology Distribution module and on
      specific policies (e.g., business, bilateral);

      per-operator connectivity computation and instantiation.

   In 5GEx, the MD-PCE is stateful (i.e., current connectivity
   information is stored inside the PCE) and detailed inter-operator
   computation is performed resorting to the stateful Backward
   Recursive PCE-based Computation (BRPC) [draft-stateful-BRPC],
   deploying a chain of PCEP sessions among adjacent operators, each
   one responsible for computing and deploying its segment.  The
   backward recursive procedure allows optimal end-to-end constrained
   path computation.

6.3.  5G-TRANSFORMER Architecture

   The 5G-TRANSFORMER project proposes a flexible and adaptable SDN/
   NFV-based design of the next generation mobile transport networks,
   capable of simultaneously supporting the needs of various vertical
   industries with a diverse range of requirements by offering
   customized slices.  In this design, multi-domain orchestration and
   federation are considered the key concepts to enable end-to-end
   orchestration of services and resources across multiple
   administrative domains.

   The 5G-TRANSFORMER solution consists of three novel building blocks,
   namely:

   1.  Vertical Slicer (VS), as the common entry point for all
       verticals into the system.  The VS dynamically creates and maps
       the vertical services onto network slices according to their
       requirements, and manages their lifecycle.  It also translates
       the vertical and slicing requests into ETSI-defined NFV network
       services (NFV-NS) sent towards the SO.  Here a network slice is
       deployed as an NFV-NS instance (a minimal sketch of this
       translation is given after this list).

   2.  Service Orchestrator (SO).  It offers service or resource
       orchestration and federation, depending on the request coming
       from the VS.  This includes all tasks related to coordinating
       and offering to the vertical an integrated view of services and
       resources from multiple administrative domains.  Orchestration
       entails managing end-to-end services or resources that were
       split into multiple administrative domains based on requirements
       and availability.  Federation entails managing administrative
       relations at the interface between SOs belonging to different
       domains and handling abstraction of services and resources.

   3.  Mobile Transport and Computing Platform (MTP), as the underlying
       unified transport stratum, responsible for providing the
       resources required by the NFV-NS orchestrated by the SO.  This
       includes their instantiation over the underlying physical
       transport network, computing and storage infrastructure.  It may
       also (de)abstract the MTP resources offered to the SO.
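   The translation performed by the VS (building block 1 above) can be
   sketched as follows.  This is a minimal illustration under assumed,
   hypothetical class and method names (VerticalSlicer, instantiate_ns,
   etc.); it is not 5G-TRANSFORMER code and does not follow the exact
   IFA 013 message structure.

   <CODE BEGINS>
   # Illustrative sketch of the VS mapping a vertical service onto a
   # network slice deployed as an NFV-NS instance.
   from dataclasses import dataclass


   @dataclass
   class VerticalServiceRequest:
       vertical_id: str
       service_type: str    # e.g. "automotive", "eHealth", "media"
       requirements: dict   # latency, bandwidth, coverage, isolation...


   @dataclass
   class NfvNsRequest:
       nsd_id: str              # NS descriptor chosen from the catalogue
       deployment_flavour: str  # flavour matching the slice requirements
       instantiation_params: dict


   class VerticalSlicer:
       def __init__(self, service_orchestrator):
           self.so = service_orchestrator  # reached over Vs-So (IF1)
           self.slices = {}                # vertical service -> NFV-NS

       def handle_request(self, req: VerticalServiceRequest) -> str:
           ns_req = self.translate(req)
           # Ask the SO to instantiate the NFV-NS backing the network
           # slice assigned to this vertical service.
           ns_instance_id = self.so.instantiate_ns(ns_req)
           self.slices[req.vertical_id] = ns_instance_id
           return ns_instance_id

       def translate(self, req: VerticalServiceRequest) -> NfvNsRequest:
           # Placeholder translation of vertical requirements into an
           # NSD and deployment flavour; a real VS would also apply
           # arbitration and slice-sharing policies here.
           low_latency = req.requirements.get("latency_ms", 100) < 10
           return NfvNsRequest(
               nsd_id="nsd-" + req.service_type,
               deployment_flavour="low-latency" if low_latency
                                  else "default",
               instantiation_params=req.requirements,
           )
   <CODE ENDS>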
   The 5G-TRANSFORMER architecture is quite in line with the general
   Multi Domain Multi Provider reference architecture depicted in
   Figure 11.  Its mapping to the reference architecture is illustrated
   in the figure below.

        _________                                _________
       |         |                              |         |
       |   VS    |                              |   VS    |
       |_________|                              |_________|
            |                                        |
            + IF1                                    +
        ____|____                                ____|____
       |         |              IF2             |         |
       |   SO    |<--------------+------------->|   SO    |
       |_________|                              |_________|
           /\                                       /\
          /  \                                     /  \
         /    \  IF3                              /    \
    ____/____   _\_______                    ____/____   _\_______
   |   MTP   | |   MTP   |                  |   MTP   | |   MTP   |
   |_________| |_________|                  |_________| |_________|

     Figure 14: 5G-TRANSFORMER architecture mapped to the reference
                              architecture

   The MTP would be mapped to the individual domain orchestrators,
   which only provide resource orchestration for their local
   administrative domain.  The role of the SO is that of the Multi-
   Provider Orchestrator (MPO), responsible for multi-domain service or
   resource orchestration and federation.  The operation of the SO
   follows three main technical steps, handled by the three functional
   components of the MPO shown in Figure 12, namely (i) the catalogue
   and topology management system; (ii) the NFVO (Network Function
   Virtualization Orchestrator); and (iii) the assurance management
   component.

   Correspondingly, the interface between the SO and the VS (So-Vs) is
   interface 1 (IF1), through which the VS requests the instantiation
   and deployment of various network services to support individual
   vertical service slices.  The interface between the SOs (So-So) of
   different domains is interface 2 (IF2), enabling multi-domain
   orchestration and federation operations.  The interface between the
   SO and the MTP (So-Mtp) is interface 3 (IF3).  On the one hand, it
   provides the SO with an updated global view of the underlying
   infrastructure topology abstraction exposed by the MTP domain
   orchestrators; on the other hand, it handles command and control
   functions that allow the SO to request virtual resource allocation
   from each MTP domain.

   In 5G-TRANSFORMER, a set of sub-interfaces has been defined for the
   So-Mtp, So-So and Vs-So interfaces.

6.3.1.  So-Mtp Interface (IF3)

   This interface is based on ETSI GS NFV-IFA 005 and ETSI GS NFV-IFA
   006 for the request of virtual resource allocation, management and
   monitoring.  Accordingly, the 5G-TRANSFORMER project identified the
   following sub-interfaces at the level of So-Mtp interactions (i.e.,
   IF3-x interfaces regulating MPO-DO interactions); a schematic sketch
   of these operations is given after the list.

   So-Mtp(-RAM).  It provides the Resource Advertisement Management
   (RAM) functions to allow updates or reporting about the virtualized
   resources and network topologies in the MTP that will accommodate
   the network services requested by the NFVO component.

   So-Mtp(-RM).  It provides the Resource Management (RM) operations
   for reserving, allocating, updating (in terms of scaling up or down)
   and terminating (i.e., releasing) the virtualized resources handled
   by each MTP, triggered by the NFVO component (in Figure 12) to
   accommodate network services.

   So-Mtp(-RMM).  It provides the required primitives and parameters
   for supporting the SO resource monitoring management (RMM)
   capability for the purpose of fault management and SLA assurance,
   handled by the assurance management component in Figure 12.
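   The following sketch summarizes the three So-Mtp sub-interfaces as
   abstract operations.  The method names are illustrative assumptions
   that only loosely follow the style of ETSI GS NFV-IFA 005/006; they
   are not the actual interface definitions.

   <CODE BEGINS>
   # Hedged sketch of the So-Mtp sub-interfaces as abstract classes.
   from abc import ABC, abstractmethod


   class SoMtpRam(ABC):
       """So-Mtp(-RAM): resource advertisement from the MTP to the SO."""

       @abstractmethod
       def query_resource_topology(self) -> dict:
           """Return the abstracted view of NFVI resources/topology."""


   class SoMtpRm(ABC):
       """So-Mtp(-RM): virtualized resource management."""

       @abstractmethod
       def allocate(self, resource_request: dict) -> str:
           """Reserve/allocate compute, storage or network resources."""

       @abstractmethod
       def update(self, resource_id: str, changes: dict) -> None:
           """Scale the allocated resources up or down."""

       @abstractmethod
       def terminate(self, resource_id: str) -> None:
           """Release the allocated resources."""


   class SoMtpRmm(ABC):
       """So-Mtp(-RMM): resource monitoring for fault management and
       SLA assurance."""

       @abstractmethod
       def subscribe(self, resource_id: str, metrics: list,
                     callback) -> str:
           """Subscribe to performance metrics and fault notifications."""
   <CODE ENDS>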
In the reference architecture (Figure 11), the IF3-RC, IF3-RT and
IF3-RMon sub-interfaces are defined for resource control, resource
topology and resource monitoring, respectively.  The IF3-RT, IF3-RC
and IF3-RMon sub-interfaces map to the So-Mtp(-RAM), So-Mtp(-RM) and
So-Mtp(-RMM) sub-interfaces of 5G-TRANSFORMER.

6.3.2.  So-So Interface (IF2)

This interface is based on ETSI GS NFV-IFA 013 and ETSI GS NFV-IFA
005 for service and resource federation between domains.
5G-TRANSFORMER identified the following sub-interfaces at the level
of So-So interactions (i.e., IF2-x interfaces regulating MPO
interactions) to provide service and resource federation and to
enable NSaaS and NFVIaaS provision, respectively, across different
administrative domains; an illustrative sketch follows these lists.

So-So(-LCM), for the operation of NFV network services.  This
   reference point is used to instantiate, terminate, query, update
   or re-configure network services, or to receive notifications for
   federated NFV network services.  The SO NFVO-NSO uses this
   reference point.

So-So(-MON), for the monitoring of network services through queries
   or subscriptions/notifications about performance metrics, VNF
   indicators and network service failures.  The SO NFVO-NSO uses
   this reference point.

So-So(-CAT), for the management of Network Service Descriptor (NSD)
   flavors together with VNF/VA and MEC Application Packages,
   including their Application Descriptors (AppDs).  This reference
   point offers primitives for on-boarding, removal, update, query
   and enabling/disabling of descriptors and packages.  The SO
   NFVO-NSO uses this reference point.

Furthermore, resource orchestration related operations are broken
down into the following sub-interfaces, reflecting resource control,
resource topology and resource monitoring, respectively.

So-So(-RM), for allocating, configuring, updating and releasing
   resources.  This Resource Management reference point offers
   operations such as the configuration of resources and the
   configuration of the network paths for connectivity of VNFs.
   These operations mainly depend on the level of abstraction applied
   to the actual resources.  The SO NFVO-RO uses this reference
   point.

So-So(-RAM), for advertising available resource abstractions to/from
   other SOs.  It broadcasts available resources or resource
   abstractions upon capability calculation, with periodic updates
   for near real-time availability of resources.  The SO-SO Resource
   Advertisement function uses this reference point.

So-So(-RMM), for the monitoring of different resources (computing
   power, network bandwidth or latency, storage capacity, VMs, MEC
   hosts) provided by the peering administrative domain.  The level
   of detail depends on the agreed abstraction level.  The SO NFVO-RO
   uses this reference point.
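The Python structures below are a purely illustrative sketch of the
kind of information that could be exchanged over some of these So-So
sub-interfaces.  All message names and fields are hypothetical and do
not reproduce the actual ETSI GS NFV-IFA 013 / IFA 005 data models.

<CODE BEGINS>
# So-So(-RAM): a consumer SO learns what a peer domain exposes; the
# abstraction level is assumed to be agreed per federation contract.
resource_advertisement = {
    "provider_domain": "operator-B",
    "abstraction_level": "aggregated",
    "compute": {"vcpu": 2000, "ram_gb": 8000},
    "connectivity": [{"endpoints": ["gw-1", "gw-2"], "bw_mbps": 10000}],
}

# So-So(-CAT): on-boarding of a federated Network Service Descriptor.
catalogue_onboard = {
    "operation": "onboard_nsd",
    "nsd_id": "ns-vEPC",
    "version": "1.0",
}

# So-So(-LCM): instantiation of a federated NFV network service in the
# peer administrative domain.
lcm_instantiate = {
    "operation": "instantiate_ns",
    "nsd_id": "ns-vEPC",
    "deployment_flavour": "gold",
    "constraints": {"latency_ms": 10},
}

# So-So(-MON) / So-So(-RMM): subscription to service- and
# resource-level metrics exposed by the peer domain.
monitoring_subscribe = {
    "operation": "subscribe",
    "scope": "ns-instance-42",
    "metrics": ["availability", "latency_ms", "cpu_utilization"],
}
<CODE ENDS>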
In the reference architecture (Figure 11), the IF2-S and IF2-C
sub-interfaces are defined to perform network service-related
operations between MPOs of different administrative domains.  The
IF2-RC, IF2-RT and IF2-RMon sub-interfaces are defined to regulate
interactions between the Catalogue and Topology Management
components.  Their mapping to the sub-interfaces defined in
5G-TRANSFORMER is summarized as follows:

   The IF2-S sub-interface maps to So-So(-LCM) and So-So(-MON).

   The IF2-C sub-interface maps to So-So(-CAT).

   The IF2-RC, IF2-RT and IF2-RMon sub-interfaces map to So-So(-RM),
   So-So(-RAM) and So-So(-RMM), respectively.

6.3.3.  Vs-So Interface (IF1)

This interface is based on ETSI GS NFV-IFA 013 and is used by the VS
to request network services from the SO.  Accordingly,
5G-TRANSFORMER identified the following sub-interfaces at the level
of Vs-So interactions (i.e., IF1-x interfaces regulating tenant-MPO
interactions); an illustrative sketch is provided at the end of this
section.

Vs-So(-LCM).  It deals with the NFV network service lifecycle
   management (LCM) and is based on the IFA 013 NS Lifecycle
   Management interface.  It offers primitives to instantiate,
   terminate, query, update or re-configure network services, or to
   receive notifications about their lifecycle.

Vs-So(-MON).  It deals with the monitoring (MON) of network services
   and VNFs through queries or subscriptions and notifications about
   performance metrics, VNF indicators, and network service or VNF
   failures.  It maps to the IF1-S sub-interface of the reference
   architecture.

Vs-So(-CAT).  It deals with the catalogue (CAT) management of Network
   Service Descriptors (NSDs), VNF packages, including their VNF
   Descriptors (VNFDs), and Application Packages, including their
   Application Descriptors (AppDs).  It offers primitives for
   on-boarding, removal, update, query and enabling/disabling of
   descriptors and packages.  It maps to the IF1-C sub-interface of
   the reference architecture.

In the reference architecture (Figure 11), the IF1-S and IF1-C
sub-interfaces are defined to build requests to perform network
service-related operations, including requesting the instantiation,
update and termination of network services.  The IF1-S sub-interface
maps to Vs-So(-LCM) and Vs-So(-MON), while the IF1-C sub-interface
maps to Vs-So(-CAT) as defined in the 5G-TRANSFORMER architecture.
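Purely as an illustration, the sketch below groups the Vs-So
sub-interfaces into a hypothetical client used by the VS, followed by
a typical IF1 sequence.  Class and method names are assumptions made
for this sketch; the actual interface follows ETSI GS NFV-IFA 013.

<CODE BEGINS>
class VsSoClient:
    """Hypothetical client used by the VS towards the SO over IF1."""

    # Vs-So(-CAT): descriptor and package management.
    def onboard_nsd(self, nsd):
        return {"nsd_id": nsd["id"], "state": "ONBOARDED"}

    # Vs-So(-LCM): NFV network service lifecycle management.
    def instantiate_ns(self, nsd_id, flavour):
        return {"ns_instance_id": "ns-1", "nsd_id": nsd_id,
                "flavour": flavour, "state": "INSTANTIATED"}

    def terminate_ns(self, ns_instance_id):
        return {"ns_instance_id": ns_instance_id, "state": "TERMINATED"}

    # Vs-So(-MON): performance metrics and failure notifications.
    def subscribe(self, ns_instance_id, metrics):
        return {"subscription_id": "sub-1", "metrics": metrics}


def deploy_vertical_slice(client, nsd):
    """Typical IF1 sequence: on-board, instantiate, then monitor."""
    client.onboard_nsd(nsd)                        # Vs-So(-CAT)
    ns = client.instantiate_ns(nsd["id"], "gold")  # Vs-So(-LCM)
    client.subscribe(ns["ns_instance_id"],         # Vs-So(-MON)
                     ["latency_ms", "throughput_mbps"])
    return ns


deploy_vertical_slice(VsSoClient(), {"id": "ns-automotive-slice"})
<CODE ENDS>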
7.  Multi-domain orchestration and Open Source

Before reviewing the current state of open source projects, it should
be explicitly mentioned that the term "federation" is ambiguous and
is used in multiple contexts across the industry.  For example,
federation is the approach used in certain software projects to
achieve high availability and to enable reliable, non-interrupted
operation and service delivery.  One of the distinguishing features
of this type of federation is that all federated instances manage the
same piece of infrastructure or set of resources.  This document,
however, focuses on another type of federation, where multiple
independent instances of the orchestration/management software
establish certain relationships and expose the resources and
capabilities available in a particular domain to consumers in another
domain.  Besides sharing resource details, multi-domain federation
requires the synchronization of various management information, such
as authentication/authorization data, run-time policies, connectivity
details and so on.  This kind of functionality, and the corresponding
implementation approaches in the relevant open source projects, are
the scope of this section.

Several open source industry projects have been formed to develop
integrated NFV orchestration platforms.  The best known of them are
ONAP [onap], OSM [osm] and Cloudify [cloudify].  While these projects
have different drivers, motivations, implementation approaches and
technology stacks under the hood, all of them consider the multi-VIM
deployment scenario, i.e., all these software platforms are capable
of deploying an NFV service over different virtualized
infrastructures, such as public or private cloud providers.
Additionally, the OSM and Cloudify orchestration platforms have
capabilities to manage the interconnection among the managed VIMs
using appropriate plugins or drivers.  However, despite the fact that
a typical Telco/Carrier infrastructure comprises multiple domains
(both technological and administrative), none of these orchestration
projects is focused on developing the service federation use case.

In the meantime, as an acknowledgement of the challenges that emerged
while exploring federation use cases, the Multisite project was
started under the OPNFV umbrella [opnfv].  Taking OpenStack-based VIM
deployments spanning multiple regions as the general use case, this
project initially focused on identifying gaps in the key OpenStack
projects that lack capabilities for multi-site deployment.  During
the several development phases of this OPNFV project, a number of
gaps were identified and submitted as blueprints for development in
the appropriate OpenStack projects.  Furthermore, several demo
scenarios were delivered to trial OpenStack as an open source VIM
capable of supporting multi-site NFV clouds.  While the OPNFV
Multisite project focused on the resource and VIM layer only, there
are multiple viable outputs that might be considered when
implementing federation use cases at the upper layers.

In summary, it is still early days for the technology implemented in
the referenced NFV orchestration projects, and the federation use
case is not currently on the radar of these projects.  However, it is
expected that, as federation matures into a viable market use case,
an appropriate feature set will be developed in the reviewed
projects.

8.  IANA Considerations

N/A.

9.  Security Considerations

TBD.

10.  Acknowledgments

This work is supported by 5G-PPP 5GEx, an innovation action project
partially funded by the European Community under the H2020 Program
(grant agreement no. 671636).  This work is also supported by 5G-PPP
5G-TRANSFORMER, a research and innovation action project partially
funded by the European Community under the H2020 Program (grant
agreement no. 761536).  The views expressed here are those of the
authors only.  The European Commission is not liable for any use
that may be made of the information in this presentation.

11.  Informative References

   [cloudify]
              "Cloudify", .

   [etsi_nvf_ifa009]
              "Report on Architectural Options, ETSI GS NFV-IFA 009
              V1.1.1", July 2016.

   [etsi_nvf_ifa028]
              "Report on architecture options to support multiple
              administrative domains, ETSI GR NFV-IFA 028 V3.1.1",
              January 2018.
   [etsi_nvf_whitepaper]
              "Network Functions Virtualisation (NFV). White Paper 2",
              October 2014.

   [etsi_nvf_whitepaper_5g]
              "Network Functions Virtualisation (NFV). White Paper on
              "Network Operator Perspectives on NFV priorities for
              5G"", February 2017.

   [ngmn_5g_whitepaper]
              "5G White Paper", February 2015.

   [ngmn_slicing]
              "Description of Network Slicing Concept", January 2016.

   [onap]     "ONAP project", .

   [opnfv]    "OPNFV Multisite project", .

   [osm]      "Open Source MANO project", .

Authors' Addresses

   Carlos J. Bernardos (editor)
   Universidad Carlos III de Madrid
   Av. Universidad, 30
   Leganes, Madrid  28911
   Spain

   Phone: +34 91624 6236
   Email: cjbc@it.uc3m.es
   URI:   http://www.it.uc3m.es/cjbc/

   Luis M. Contreras
   Telefonica I+D
   Ronda de la Comunicacion, S/N
   Madrid  28050
   Spain

   Email: luismiguel.conterasmurillo@telefonica.com

   Ishan Vaishnavi
   Huawei Technologies Dusseldorf GmbH
   Riesstrasse 25,
   Munich  80992
   Germany

   Email: Ishan.vaishnavi@huawei.com

   Robert Szabo
   Ericsson
   Konyves Kalman krt. 11
   Budapest, EMEA  1097
   Hungary

   Phone: +36703135738
   Email: robert.szabo@ericsson.com

   Josep Mangues-Bafalluy
   CTTC
   Av. Carl Friedrich Gauss, 7
   Castelldefels, EMEA  08860
   Spain

   Email: josep.mangues@cttc.cat

   Xi Li
   NEC
   Kurfuersten-Anlage 36
   Heidelberg  69115
   Germany

   Email: Xi.Li@neclab.eu

   Francesco Paolucci
   SSSA
   Via Giuseppe Moruzzi, 1
   Pisa  56121
   Italy

   Phone: +395492124
   Email: fr.paolucci@santannapisa.it

   Andrea Sgambelluri
   SSSA
   Via Giuseppe Moruzzi, 1
   Pisa  56121
   Italy

   Phone: +395492132
   Email: a.sgambelluri@santannapisa.it

   Barbara Martini
   SSSA
   Via Giuseppe Moruzzi, 1
   Pisa  56121
   Italy

   Email: barbara.martini@cnit.it

   Luca Valcarenghi
   SSSA
   Via Giuseppe Moruzzi, 1
   Pisa  56121
   Italy

   Email: luca.valcarenghi@santannapisa.it

   Giada Landi
   Nextworks
   Via Livornese, 1027
   Pisa  56122
   Italy

   Email: g.landi@nextworks.it

   Dmitriy Andrushko
   MIRANTIS

   Email: dandrushko@mirantis.com

   Alain Mourad
   InterDigital Europe

   Email: Alain.Mourad@InterDigital.com
   URI:   http://www.InterDigital.com/