idnits 2.17.1 draft-rosa-bmwg-vnfbench-06.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The document seems to lack a both a reference to RFC 2119 and the recommended RFC 2119 boilerplate, even if it appears to use RFC 2119 keywords. RFC 2119 keyword, line 315: '... measurements. A Test MUST always run...' RFC 2119 keyword, line 991: '... and MUST NOT be connected to device...' RFC 2119 keyword, line 995: '...ial capabilities SHOULD NOT exist in t...' RFC 2119 keyword, line 998: '...loyment scenario SHOULD be identical i...' Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (October 20, 2020) is 1277 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- No issues found here. Summary: 1 error (**), 0 flaws (~~), 1 warning (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 BMWG R. Rosa, Ed. 3 Internet-Draft C. Rothenberg 4 Intended status: Informational UNICAMP 5 Expires: April 23, 2021 M. Peuster 6 H. Karl 7 UPB 8 October 20, 2020 10 Methodology for VNF Benchmarking Automation 11 draft-rosa-bmwg-vnfbench-06 13 Abstract 15 This document describes a common methodology for the automated 16 benchmarking of Virtualized Network Functions (VNFs) executed on 17 general-purpose hardware. Specific cases of automated benchmarking 18 methodologies for particular VNFs can be derived from this document. 19 An open source reference implementation is reported as running code 20 embodiment of the proposed, automated benchmarking methodology. 22 Status of This Memo 24 This Internet-Draft is submitted in full conformance with the 25 provisions of BCP 78 and BCP 79. 27 Internet-Drafts are working documents of the Internet Engineering 28 Task Force (IETF). Note that other groups may also distribute 29 working documents as Internet-Drafts. The list of current Internet- 30 Drafts is at https://datatracker.ietf.org/drafts/current/. 32 Internet-Drafts are draft documents valid for a maximum of six months 33 and may be updated, replaced, or obsoleted by other documents at any 34 time. It is inappropriate to use Internet-Drafts as reference 35 material or to cite them other than as "work in progress." 37 This Internet-Draft will expire on April 23, 2021. 39 Copyright Notice 41 Copyright (c) 2020 IETF Trust and the persons identified as the 42 document authors. All rights reserved. 44 This document is subject to BCP 78 and the IETF Trust's Legal 45 Provisions Relating to IETF Documents 46 (https://trustee.ietf.org/license-info) in effect on the date of 47 publication of this document. Please review these documents 48 carefully, as they describe your rights and restrictions with respect 49 to this document. 
Code Components extracted from this document must 50 include Simplified BSD License text as described in Section 4.e of 51 the Trust Legal Provisions and are provided without warranty as 52 described in the Simplified BSD License. 54 Table of Contents 56 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 3 57 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 58 3. Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 59 4. Considerations . . . . . . . . . . . . . . . . . . . . . . . 6 60 4.1. VNF Assessment Methods . . . . . . . . . . . . . . . . . . 7 61 4.2. Benchmarking Stages . . . . . . . . . . . . . . . . . . . . 7 62 4.3. Architectural Framework . . . . . . . . . . . . . . . . . . 8 63 4.4. Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . 10 64 4.5. Phases of a Benchmarking Test . . . . . . . . . . . . . . . 11 65 4.5.1. Phase I: Deployment . . . . . . . . . . . . . . . . . . . 11 66 4.5.2. Phase II: Configuration . . . . . . . . . . . . . . . . . 11 67 4.5.3. Phase III: Execution . . . . . . . . . . . . . . . . . . 12 68 4.5.4. Phase IV: Result . . . . . . . . . . . . . . . . . . . . 12 69 5. Methodology . . . . . . . . . . . . . . . . . . . . . . . . . 12 70 5.1. VNF Benchmarking Descriptor (VNF-BD) . . . . . . . . . . . 13 71 5.2. VNF Performance Profile (VNF-PP) . . . . . . . . . . . . . 13 72 5.3. VNF Benchmarking Report (VNF-BR) . . . . . . . . . . . . . 14 73 5.4. Procedures . . . . . . . . . . . . . . . . . . . . . . . . 14 74 5.4.1. Plan . . . . . . . . . . . . . . . . . . . . . . . . . . 15 75 5.4.2. Realization . . . . . . . . . . . . . . . . . . . . . . . 16 76 5.4.3. Summary . . . . . . . . . . . . . . . . . . . . . . . . . 17 77 6. Particular Cases . . . . . . . . . . . . . . . . . . . . . . 18 78 6.1. Capacity . . . . . . . . . . . . . . . . . . . . . . . . . 18 79 6.2. Redundancy . . . . . . . . . . . . . . . . . . . . . . . . 18 80 6.3. Isolation . . . . . . . . . . . . . . . . . . . . . . . . . 18 81 6.4. Failure Handling . . . . . . . . . . . . . . . . . . . . . 18 82 6.5. Elasticity and Flexibility . . . . . . . . . . . . . . . . 19 83 6.6. Handling Configurations . . . . . . . . . . . . . . . . . . 19 84 6.7. White Box VNF . . . . . . . . . . . . . . . . . . . . . . . 19 85 7. Open Source Reference Implementation . . . . . . . . . . . . 19 86 7.1. Gym . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 87 7.2. Related work: tng-bench . . . . . . . . . . . . . . . . . . 20 88 8. Security Considerations . . . . . . . . . . . . . . . . . . . 21 89 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 22 90 10. YANG Modules . . . . . . . . . . . . . . . . . . . . . . . . 23 91 10.1. VNF-Benchmarking Descriptor . . . . . . . . . . . . . . . 23 92 10.2. VNF Performance Profile . . . . . . . . . . . . . . . . . 34 93 10.3. VNF Benchmarking Report . . . . . . . . . . . . . . . . . 41 94 11. Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . 46 95 12. References . . . . . . . . . . . . . . . . . . . . . . . . . 46 96 12.1. Normative References . . . . . . . . . . . . . . . . . . . 46 97 12.2. Informative References . . . . . . . . . . . . . . . . . . 47 98 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 49 100 1. 
Introduction 102 In [RFC8172] the Benchmarking Methodology Working Group (BMWG) 103 presented considerations for the benchmarking of VNFs and their 104 infrastructure. Similar to the motivation given there, the following 105 aspects reinforce and justify the need for VNF benchmarking: (i) pre- 106 deployment infrastructure dimensioning to realize associated VNF 107 performance profiles; (ii) comparison with physical network 108 functions; and (iii) output results for analytical VNF development. 110 Even though many methodologies the BMWG already describes, e.g., self- 111 contained black-box benchmarking, can be applied to VNF benchmarking 112 scenarios, further considerations have to be made. This is because 113 VNFs, which are software components, might not have strict and clear 114 execution boundaries and depend on underlying virtualization 115 environment parameters as well as management and orchestration 116 decisions [ETS14a]. 118 Enabling technologies that arrived with the advent of Software Defined Networking 119 (SDN) and Network Functions Virtualization (NFV) have fostered the 120 disaggregation of VNFs and benchmarking tools, turning their 121 Application Programming Interfaces (APIs) open and programmable. 122 This process has occurred mostly through: (i) the decoupling of a network 123 function's control and data planes; (ii) the development of VNFs as 124 multi-layer and distributed software components; and (iii) the 125 existence of multiple underlying hardware abstractions to be utilized 126 by VNFs. 128 Building on SDN and NFV enabling technologies, a diversity of 129 benchmarking tools has been created to facilitate the active 130 stimulus and the passive monitoring of a VNF via diverse software 131 abstraction layers, providing a wide variety of abstractions for 132 benchmarking mechanisms in the formulation of a VNF benchmarking 133 methodology. With the VNF benchmarking setup disaggregated in this way, 134 the abstracted VNF benchmarking mechanisms 135 can be made programmable, enabling the execution of their underlying 136 technologies by means of well-defined parameters and producing a 137 report with standardized metrics. 139 Making the execution of a VNF benchmarking methodology programmable 140 enables a richer apparatus for the benchmarking of a VNF and 141 consequently facilitates the high-fidelity assessment of VNF 142 behaviour. Estimating the behaviour of a VNF depends on three 143 correlated factors: 145 Internal configuration: Each use case of a VNF might define 146 specific settings for it to work properly, and each VNF might 147 expose its own specific settings to be configured. 149 Hardware and software execution environment: A myriad of 150 capabilities offered by execution environments might match, in a 151 large diversity of manners, the possible internal software 152 arrangements with which each VNF might be programmed. 154 Network workload specificities: Depending on the use case, a VNF 155 might be placed in different settings, operating under varied 156 traffic profiles and in demand of a specific performance behavior. 158 The role of a VNF benchmarking methodology consists in defining how 159 to tackle the diversity of settings imposed by the factors listed above 160 in order to extract performance metrics associated with 161 particular VNF packet processing behaviors. The sample space of 162 testing such a diversity of settings can be extremely large, making 163 manual benchmarking experiments prohibitively expensive.
Indeed, 164 portability as an intrinsic characteristic of VNFs allows them to be 165 deployed in multiple execution environments, enabling benchmarking 166 setups in a myriad of settings. Thus, the establishment of a 167 methodology for VNF benchmarking automation is of utmost importance. 169 Accordingly, the flexible, software-based nature of 170 VNFs can and should be exploited to fully automate the entire benchmarking 171 methodology end-to-end. This is an inherent need to align VNF 172 benchmarking with the agile methods enabled by the concept of Network 173 Functions Virtualization (NFV) [ETS14e]. More specifically, it 174 allows: (i) the development of agile performance-focused DevOps 175 methodologies for Continuous Integration and Delivery (CI/CD) of 176 VNFs; (ii) the creation of on-demand VNF test descriptors for 177 upcoming execution environments; (iii) the path to precise analytics 178 of automated catalogues of VNF performance profiles; and (iv) run- 179 time mechanisms to assist VNF lifecycle orchestration/management 180 workflows, e.g., automated resource dimensioning based on 181 benchmarking insights. 183 2. Terminology 185 Common benchmarking terminology contained in this document is derived 186 from [RFC1242]. The reader is assumed to be familiar with the 187 terminology as defined in the European Telecommunications Standards 188 Institute (ETSI) NFV document [ETS14b]. Some of these terms, and 189 others commonly used in this document, are defined below. 191 NFV: Network Function Virtualization - the principle of separating 192 network functions from the hardware they run on by using virtual 193 hardware abstraction. 195 VNF: Virtualized Network Function - a software-based network 196 function. A VNF can either be represented by a single entity or 197 be composed of a set of smaller, interconnected software 198 components, called VNF components (VNFCs) [ETS14d]. Those VNFs 199 are also called composed VNFs. 201 VNFC: Virtualized Network Function Component - a software component 202 that implements (parts of) the VNF functionality. A VNF can 203 consist of a single VNFC or multiple, interconnected VNFCs 204 [ETS14d]. 206 VNFD: Virtualised Network Function Descriptor - a configuration 207 template that describes a VNF in terms of its deployment and 208 operational behaviour, and is used in the process of VNF on- 209 boarding and managing the life cycle of a VNF instance. 211 NS: Network Service - a collection of interconnected VNFs forming an 212 end-to-end service. The interconnection is often done using 213 chaining of functions. 215 VNF Benchmarking Descriptor (VNF-BD) -- contains all the 216 definitions and requirements to deploy, configure, execute, and 217 reproduce VNF benchmarking tests. A VNF-BD is defined by the 218 developer of a VNF benchmarking methodology and serves as input to 219 the execution of an automated benchmarking methodology. 221 VNF Performance Profile (VNF-PP) -- contains, in a well-defined structure, 222 all the measured metrics resulting from the execution of 223 automated VNF benchmarking tests defined by a specific VNF-BD. 224 It might also contain additional recordings of 225 configuration parameters used during the execution of the 226 benchmarking setup. 228 VNF Benchmarking Report (VNF-BR) -- contains all the definitions of 229 the inputs and outputs of an automated VNF benchmarking 230 methodology.
The inputs define the necessary VNF-BD and a 231 respective list of variables referencing the VNF-BD fields that 232 must be utilized to define the sample space of the VNF 233 benchmarking settings. The outputs consist of a list of entries, 234 each one containing one of the combinations of the sampled variables 235 from the inputs, the input VNF-BD parsed with such a combination of 236 variables, and the obtained VNF-PP resulting from the automated 237 realization of the parsed VNF-BD. A VNF-BR might contain the 238 settings definitions of the orchestrator platform that realizes 239 the instantiation of the benchmarking setup to enable the VNF-BD 240 fulfillment. 242 3. Scope 244 This document assumes VNFs as black boxes when defining their 245 benchmarking methodologies. White box approaches are addressed 246 as a particular case under the proper considerations of 247 internal VNF instrumentation, discussed later in this document. 249 This document outlines a methodology for VNF benchmarking, 250 specifically addressing its automation, without limiting the 251 automated process to a specific benchmarking case or infrastructure. 252 The document addresses state-of-the-art work on VNF benchmarking from 253 scientific publications and current developments in other 254 standardization bodies (e.g., [ETS14c], [ETS19f] and [RFC8204]) 255 wherever possible. 257 Whenever utilizing the specifications of this document, a particular 258 automated VNF benchmarking methodology must be described in a clear 259 and objective manner following four basic principles: 261 o Comparability: The output of a benchmarking test shall be simple 262 to understand and process, in a human-readable format, coherent, 263 and easily reusable (e.g., as inputs for analytic applications). 265 o Repeatability: A benchmarking setup shall be comprehensively 266 defined through a flexible design model that can be interpreted 267 and executed by the testing platform repeatedly while supporting 268 customization. 270 o Configurability: Open interfaces and extensible messaging models 271 shall be available between benchmarking components for flexible 272 composition of a benchmarking test descriptor and environment 273 configurations. 275 o Interoperability: A benchmarking test shall be portable to different 276 environments, using lightweight components whenever possible. 278 4. Considerations 280 VNF benchmarking considerations are defined in [RFC8172]. 281 Additionally, VNF pre-deployment testing considerations are well 282 explored in [ETS14c]. Further, ETSI provides test specifications for 283 networking benchmarks and measurement methods for NFV infrastructure 284 in [ETS19f], which complements the presented work on VNF benchmarking 285 methodologies. 287 4.1. VNF Assessment Methods 289 Following ETSI's model in [ETS14c], we distinguish three methods for 290 a VNF evaluation: 292 Benchmarking: Where parameters (e.g., CPU, memory, storage) are 293 provided and the corresponding performance metrics (e.g., latency, 294 throughput) are obtained. Note, such evaluations might create 295 multiple reports, for example, with minimal latency or maximum 296 throughput results. 298 Verification: Both parameters and performance metrics are provided 299 and a stimulus verifies if the given association is correct or 300 not. 302 Dimensioning: Performance metrics are provided and the corresponding 303 parameters obtained.
Note, multiple deployments may be required, 304 or if possible, underlying allocated resources need to be 305 dynamically altered. 307 Note: Verification and Dimensioning can be reduced to Benchmarking. 309 4.2. Benchmarking Stages 311 The realization of an automated benchmarking methodology can be 312 divided into three stages: 314 Trial: Is a single process or iteration to obtain VNF performance 315 metrics from benchmarking measurements. A Test MUST always run 316 multiple Trials to get statistical confidence about the obtained 317 measurements. 319 Test: Defines unique structural and functional parameters (e.g., 320 configurations, resource assignment) for benchmarked components to 321 perform one or multiple Trials. Each Test must be executed 322 following a particular benchmarking scenario composed by a Method. 323 Proper measures must be taken to ensure statistical validity 324 (e.g., independence across Trials of generated load patterns). 326 Method: Consists of one or more Tests to benchmark a VNF. A Method 327 can explicitly list ranges of parameter values for the 328 configuration of a benchmarking scenario and its components. Each 329 value of such a range is to be realized in a Test. I.e., Methods 330 can define parameter studies. 332 4.3. Architectural Framework 334 A VNF benchmarking architectural framework, shown in Figure 1, 335 establishes the disposal of essential components and control 336 interfaces, explained below, that realize the automation of a VNF 337 benchmarking methodology. 339 +---------------+ 340 | Manager | 341 Control | (Coordinator) | 342 Interfaces +---+-------+---+ 343 +---------+-----------+ +-------------------+ 344 | | | 345 | | +--------------------+ | 346 | | | System Under Test | | 347 | | | | | 348 | | | +-----------------+| | 349 | +--+--------+ | | VNF || | 350 | | | | | || | 351 | | | | | +----+ +----+ || | 352 | | <===> |VNFC|...|VNFC| || | 353 | | | | | +----+ +----+ || | 354 | | Monitor(s)| | +----.---------.--+| | 355 +-----+---+ |{listeners}| | : : | +-----+----+ 356 | Agent(s)| | | | +----^---------V--+| | Agent(s)| 357 |(Sender) | | <===> Execution || |(Receiver)| 358 | | | | | | Environment || | | 359 |{Probers}| +-----------+ | | || |{Probers} | 360 +-----.---+ | +----.---------.--+| +-----.----+ 361 : +------^---------V---+ : 362 V : : : 363 :.................>.........: :........>..: 364 Stimulus Traffic Flow 366 Figure 1: A VNF Benchmarking Architectural Framework 368 Virtualized Network Function (VNF) -- consists of one or more 369 software components, so called VNF components (VNFC), adequate for 370 performing a network function according to allocated virtual 371 resources and satisfied requirements in an execution environment. 372 A VNF can demand particular settings for benchmarking 373 specifications, demonstrating variable performance based on 374 available virtual resource parameters and configured enhancements 375 targeting specific technologies (e.g., NUMA, SR-IOV, CPU-Pinning). 377 Execution Environment -- defines a virtualized and controlled 378 composition of capabilities necessary for the execution of a VNF. 379 An execution environment stands as a general purpose level of 380 virtualization with abstracted resources available for one or more 381 VNFs. It can also define specific technology qualifications, 382 incurring in viable settings for enhancing the performance of 383 VNFs, satisfying their particular enhancement requirements. 
An 384 execution environment must be defined with the proper 385 virtualization technologies feasible for the allocation of a VNF. 386 The means to programmatically control the execution environment 387 capabilities must be well defined for its life cycle management. 389 Agent (Active Prospection) -- executes active stimulus using 390 probers, to benchmark and collect network and system performance 391 metrics. A single Agent can perform localized benchmarks in 392 execution environments (e.g., stress tests on CPU, memory, storage 393 Input/Output) or can generate stimulus traffic whose other end 394 is the VNF itself where, for example, one-way latency is 395 evaluated. The interaction among two or more Agents enables the 396 generation and collection of end-to-end metrics (e.g., frame loss 397 rate, latency) measured from stimulus traffic flowing through a 398 VNF. An Agent can be defined by a physical or virtual network 399 function, and it must provide programmable interfaces for its life 400 cycle management. 402 Prober -- defines an abstraction layer for a software or hardware 403 tool able to generate stimulus traffic to a VNF or perform 404 stress tests on execution environments. Probers might be 405 specific or generic to an execution environment or a VNF. For 406 an Agent, a Prober must provide programmable interfaces for its 407 life cycle management, e.g., configuration of operational 408 parameters, execution of stimulus, parsing of extracted 409 metrics, and debugging options. Specific Probers might be 410 developed to abstract and to realize the description of 411 particular VNF benchmarking methodologies. 413 Monitor (Passive Prospection) -- when possible, is instantiated 414 inside the System Under Test, VNF and/or execution environment, to 415 perform the passive monitoring, using Listeners, for the 416 extraction of metrics while the Agents' stimuli take place. Monitors 417 observe particular properties according to the execution 418 environment and VNF capabilities, i.e., exposed passive monitoring 419 interfaces. Multiple Listeners can be executed at once in 420 synchrony with a Prober's stimulus on a SUT. A Monitor can be 421 defined as a virtualized network function, and it must provide 422 programmable interfaces for its life cycle management. 424 Listener -- defines one or more software interfaces for the 425 extraction of metrics monitored in a target VNF and/or 426 execution environment. A Listener must provide programmable 427 interfaces for its life cycle management workflows, e.g., 428 configuration of operational parameters, execution of passive 429 monitoring captures, parsing of extracted metrics, and 430 debugging options (also see [ETS19g]). Varied methods of 431 passive performance monitoring might be implemented as a 432 Listener, depending on the interfaces exposed by the VNF and/or 433 the execution environment. 435 Manager -- performs (i) the discovery of available Agents and 436 Monitors and their respective features (i.e., available Probers/ 437 Listeners and their execution environment capabilities), (ii) the 438 coordination and synchronization of activities of Agents and 439 Monitors to perform a benchmarking Test, and (iii) the collection, 440 processing and aggregation of all VNF benchmarking (active and 441 passive) metrics, correlating the characteristics of the VNF 442 traffic stimuli and, possibly, the SUT monitoring. A Manager 443 executes the main configuration, operation, and management actions 444 to deliver the VNF benchmarking metrics.
Hence, it provides 445 open interfaces for users to interact with the whole benchmarking 446 framework, realizing, for instance, the retrieval of the framework 447 characteristics (e.g., available benchmarking components and their 448 probers/listeners), the coordination of benchmarking tests, the 449 processing and the retrieval of benchmarking metrics, among other 450 operational and management functionalities. A Manager can be 451 defined as a physical or virtualized network function, and it must 452 provide programmable interfaces for its life cycle management. 454 4.4. Scenarios 456 A scenario, also referred to as a benchmarking setup, consists of the 457 actual instantiation of physical and/or virtual components of a "VNF 458 Benchmarking Architectural Framework" needed to enable the 459 execution of an automated VNF benchmarking methodology. The 460 following considerations hold for a scenario: 462 o Not all components are mandatory for a Test; they can be 463 arranged in varied setups. 465 o Components can be aggregated in a single entity and be defined as 466 black or white boxes. For instance, Manager and Agents could 467 jointly define one hardware or software entity to perform a VNF 468 benchmarking Test. 470 o A Monitor can be defined by multiple instances of distributed 471 software components, each one addressing one or more VNF or 472 execution environment monitoring interfaces. 474 o Agents can be arranged in varied topology setups, including the 475 possibility of multiple input and output ports of a VNF being 476 directly connected, each to one Agent. 478 o All benchmarking components defined in a scenario must perform the 479 synchronization of clocks. 481 4.5. Phases of a Benchmarking Test 483 In general, an automated benchmarking methodology must execute Tests 484 repeatedly so that it captures the relevant causes of the performance 485 variability of a VNF. To dissect a VNF benchmarking Test, the 486 sections that follow categorize a set of benchmarking phases, 487 defining generic operations that may be automated. When executing an 488 automated VNF benchmarking methodology, all the aspects influencing 489 the performance of a VNF must be carefully analyzed and 490 comprehensively reported in each automated phase of a benchmarking 491 Test. 493 4.5.1. Phase I: Deployment 495 The placement (i.e., assignment and allocation of resources) and the 496 interconnection, physical and/or virtual, of network function(s) and 497 benchmarking components can be realized by orchestration platforms 498 (e.g., OpenStack, Kubernetes, Open Source MANO). In automated 499 manners, the realization of a benchmarking scenario through those 500 means usually relies on network service templates (e.g., TOSCA, YANG, 501 Heat, and Helm Charts). Such descriptors have to capture all 502 relevant details of the execution environment to allow the 503 benchmarking framework to correctly instantiate the SUT as well as 504 helper functions required for a Test. 506 4.5.2. Phase II: Configuration 508 The configuration of benchmarking components and VNFs (e.g., populating 509 routing tables, loading PCAP source files into the source of traffic stimulus) 510 to execute the Test settings can be realized by programming 511 interfaces in an automated way. In the scope of NFV, there might 512 exist management interfaces to control a VNF during a benchmarking 513 Test.
Likewise, infrastructure or orchestration components can 514 establish the proper configuration of an execution environment to 515 realize all the capabilities enabling the description of the 516 benchmarking Test. Each configuration registry, its deployment 517 timestamp and target, must all be contained in the report of a VNF 518 benchmarking Test. 520 4.5.3. Phase III: Execution 522 In the execution of a benchmarking Test, the VNF configuration can be 523 programmed to change by itself or to be changed by a VNF management platform. 524 This means that during a Trial execution, particular behaviors of a VNF 525 can be automatically triggered, e.g., auto-scaling of its internal 526 components. Those must be captured in the detailed procedures of the 527 VNF execution and its performance report. I.e., the execution of a 528 Trial can determine arrangements of internal states inside a VNF, 529 which can interfere with observed benchmarking metrics. For instance, 530 in a particular benchmarking case where the monitoring measurements 531 of the VNF and/or execution environment are available for extraction, 532 comparison Tests must be run to verify if the monitoring of the VNF 533 and/or execution environment can impact the VNF performance metrics. 535 4.5.4. Phase IV: Result 537 The result of a VNF benchmarking Test might contain generic metrics 538 (e.g., CPU and memory consumption) and VNF-specific traffic 539 processing metrics (e.g., transactions or throughput), which can be 540 stored and processed in generic or specific ways (e.g., by statistics 541 or machine learning algorithms). More details about possible metrics 542 and the corresponding capturing methods can be found in [ETS19g]. If 543 automated procedures are applied to the generation of a 544 benchmarking Test result, those must be explained in the result 545 itself, jointly with their input raw measurements and output 546 processed data. For instance, any algorithm used in the generation 547 of processed metrics must be disclosed in the Test result. 549 5. Methodology 551 The execution of an automated benchmarking methodology consists of 552 elaborating a VNF Benchmarking Report, i.e., its inputs and outputs. The 553 inputs part of a VNF-BR must be written by a VNF benchmarking tester. 554 When the VNF-BR, with its inputs fulfilled, is requested from the 555 Manager component of an implementation of the "VNF Benchmarking 556 Architectural Framework", the Manager must utilize the inputs part to 557 obtain the outputs part of the VNF-BR, addressing the execution of 558 the automated benchmarking methodology as defined in Section 5.4. 560 The flow of information in the execution of an automated benchmarking 561 methodology can be represented by the YANG modules defined by this 562 document. The sections that follow present an overview of such 563 modules. 565 5.1. VNF Benchmarking Descriptor (VNF-BD) 567 VNF Benchmarking Descriptor (VNF-BD) -- an artifact that specifies 568 how to realize the Test(s) and Trial(s) of an automated VNF 569 benchmarking methodology in order to obtain a VNF Performance 570 Profile. The specification includes structural and functional 571 instructions and variable parameters at different abstraction levels, 572 such as the topology of the benchmarking scenario, and the execution 573 parameters of prober(s)/listener(s) in the required 574 Agent(s)/Monitor(s). A VNF-BD may be specific to a VNF or applicable 575 to several VNF types. 577 More specifically, a VNF-BD is defined by a scenario and its 578 proceedings.
The scenario defines nodes (i.e., benchmarking 579 components) and links interconnecting them, a topology that must be 580 instantiated in order to execute the VNF-BD proceedings. The 581 proceedings contain the specification of the required Agent(s) and 582 Monitor(s) needed in the scenario nodes. Each Agent/ 583 Monitor entry details the specification of the Prober(s)/Listener(s) 584 required for the execution of the Tests, and each 585 Prober/Listener entry details the specification of its execution 586 parameters. The header of a VNF-BD specifies the number of 587 Tests and Trials that a Manager must run. Each Test realizes a 588 unique instantiation of the scenario, while each Trial realizes a 589 unique execution of the proceedings in the instantiated scenario of a 590 Test. The VNF-BD YANG module is presented in Section 10.1. 592 5.2. VNF Performance Profile (VNF-PP) 594 VNF Performance Profile (VNF-PP) -- an output artifact of a VNF-BD 595 execution performed by a Manager component. It contains all the 596 metrics from Monitor(s) and/or Agent(s) components after realizing 597 the execution of the Prober(s) and/or the Listener(s) proceedings, 598 specified in its corresponding VNF-BD. Metrics are logically grouped 599 according to the execution of the Trial(s) and Test(s) defined by a 600 VNF-BD. A VNF-PP is specifically associated with a unique VNF-BD. 602 More specifically, a VNF-PP is defined by a structure that allows 603 benchmarking results to be presented in a logical and unified format. 604 A VNF-PP report is the result of a unique Test, while its contents, 605 the so-called snapshot(s), each contain the results of the 606 execution of a single Trial. Each snapshot is built by a single 607 Agent or Monitor. A snapshot contains evaluation(s), each one being 608 the output of the execution of a single Prober or Listener. An 609 evaluation contains one or more metrics. In summary, a VNF-PP 610 aggregates the results from reports (i.e., the Test(s)); a report 611 aggregates Agent(s) and Monitor(s) results (i.e., the Trial(s)); a 612 snapshot aggregates Prober(s) or Listener(s) results; and an 613 evaluation aggregates metrics. The VNF-PP YANG module is presented 614 in Section 10.2. 616 5.3. VNF Benchmarking Report (VNF-BR) 618 VNF Benchmarking Report (VNF-BR) -- the core artifact of an automated 619 VNF benchmarking methodology, consisting of three parts: a header, 620 inputs, and outputs. The header refers to the VNF-BR description items 621 (e.g., author, version, name), the description of the target SUT 622 (e.g., the VNF version, release, name), and the environment settings 623 specifying the parameters needed to instantiate the benchmarking 624 scenario via an orchestration platform. The inputs contain the 625 definitions needed to execute the automated benchmarking methodology 626 of the target SUT, a VNF-BD and its variables settings. The outputs 627 contain the results of the execution of the inputs, a list of 628 entries, each one containing a VNF-BD filled with one of the 629 combinations of the input variables settings, and the obtained VNF-PP 630 reported after the execution of the Test(s) and Trial(s) of the 631 parsed VNF-BD. The process of utilizing the VNF-BR inputs to 632 generate its outputs concerns the realization of an automated VNF 633 benchmarking methodology, explained in detail in Section 5.4.2. The 634 VNF-BR YANG module is presented in Section 10.3.
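As a non-normative illustration, the sketch below shows the three parts of
a VNF-BR as plain data, written in Python for illustration only; all field
names and values are hypothetical examples, and the authoritative structure
is the VNF-BR YANG module in Section 10.3.

   # Non-normative sketch of a VNF-BR skeleton before its execution.
   # Field names and values are hypothetical illustrations only.
   vnf_br = {
       "header": {
           "id": "vnf-br-001",
           "name": "example-throughput-benchmark",
           "version": "0.1",
           "author": "tester@example.net",
           "sut": {"name": "example-vnf", "version": "1.0"},
           "environment": {"orchestrator": "example-orchestrator",
                           "parameters": {"api": "https://orch.example.net"}},
       },
       "inputs": {
           "vnf_bd": {},     # the VNF-BD template (Section 10.1)
           "variables": [],  # variables referencing VNF-BD fields (see below)
       },
       "outputs": [],        # one entry per Test after the execution
   }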
636 In detail, each one of the variables in the inputs part of a VNF-BR 637 is defined by: a name (the actual name of the variable); a path (the 638 YANG path of the variable in the input VNF-BD); a type (the type of 639 the values, such as string, int, float, etc.); a class (one of: 640 stimulus, resource, configuration); and values (a list of the 641 variable's actual values). The values of all the variables must be 642 combined all-by-all, generating a list containing the whole sample 643 space of variables settings that must be used to create the VNF-BD 644 instances. A VNF-BD instance is defined as the result of the parsing 645 of one of those combinations of input variables into the VNF-BD of 646 the VNF-BR inputs. The parsing takes place when the variable path is 647 utilized to set its value in the VNF-BD. Iteratively, all the 648 VNF-BD instances must have their Test(s) and Trial(s) executed to generate 649 their corresponding VNF-PPs. After all the VNF-BD instances have had their 650 VNF-PPs accomplished, the realization of the whole automated VNF 651 benchmarking methodology is complete, fulfilling the outputs part of 652 the VNF-BR as shown in Figure 2. 654 5.4. Procedures 655 +------------+ +-----------------+ +------------+ 656 | | | | | | 657 | VNF-BR | | Execution of | | VNF-BR | 658 | (Inputs) +----------->+ the Automated +--------->+ (Inputs) | 659 | | | Benchmarking | | (Outputs) | 660 +------------+ | Methodology | | | 661 | | +------------+ 662 +-----------------+ 664 Figure 2: VNF benchmarking process inputs and outputs 666 The methodology for VNF benchmarking automation encompasses the 667 process defined in Figure 2, i.e., the procedures that utilize the 668 inputs part to obtain the outputs part of a VNF-BR. This section 669 details the procedures that realize such a process. 671 5.4.1. Plan 673 The plan of an automated VNF benchmarking methodology consists of the 674 definition of the header and the inputs part of a VNF-BR, the 675 artifacts to be utilized by the realization of the methodology, and 676 the establishment of the execution environment where the methodology 677 takes place. The topics below contain the details of such planning. 679 1. The writing of a VNF-BD must be done utilizing the VNF-BD YANG 680 module (Section 10.1). A VNF-BD composition must determine the 681 scenario and the proceedings. The VNF-BD must be added to the 682 inputs part of an instance of the VNF-BR YANG model. 684 2. All the variables in the inputs part of a VNF-BR must be 685 defined. Each variable must contain all its fields fulfilled 686 according to the VNF-BR YANG module (Section 10.3). A non-normative example of such variables is sketched after this list. 688 3. All the software artifacts needed for the instantiation of the 689 VNF-BD scenario must be built and made available for the execution 690 of the Test(s) and Trial(s). The artifacts include the definition 691 of software components that realize the role of the functional 692 components of the Benchmarking Architectural Framework, i.e., the 693 Manager, the Agent and the Monitor and their respective Probers 694 and Listeners. 696 4. The header of the VNF-BR instance must be written, stating the 697 VNF-BR description items, the specification of the SUT settings, 698 and the definition of the environment parameters, feasible for the 699 instantiation of the VNF-BD scenario when executing the automated 700 VNF benchmarking methodology. 702 5. The execution environment needed for a VNF-BD scenario must be 703 prepared to be utilized by an orchestration platform to automate the 704 instantiation of the scenario nodes and links needed for the 705 execution of a Test. The orchestration platform interface 706 parameters must be referenced in the VNF-BR header. The 707 orchestration platform must have access to the software artifacts 708 that are referenced in the VNF-BD scenario to be able to manage 709 their life cycle. 711 6. The Manager component must be instantiated, the execution 712 environment must be made available, and the orchestration 713 platform must have access to the execution environment and the 714 software artifacts that are referenced in the scenario of the 715 VNF-BD in the inputs part of the VNF-BR.
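The following non-normative sketch (in Python, for illustration only)
shows two hypothetical input variables and how their values are combined
all-by-all into the sample space of Tests; the names, paths, and values
are assumptions made for the example and are not prescribed by this
document.

   from itertools import product

   # Hypothetical input variables of a VNF-BR (names/paths/values are
   # examples only; the authoritative fields are in Section 10.3).
   variables = [
       {"name": "vcpus",
        "path": "vnf-bd/scenario/nodes/resources/cpu/vcpus",
        "type": "uint32", "class": "resource", "values": [1, 2, 4]},
       {"name": "rate",
        "path": "vnf-bd/proceedings/agents/probers/parameters/value",
        "type": "string", "class": "stimulus", "values": ["1Gbps", "10Gbps"]},
   ]

   # All-by-all combination: each element of the sample space defines the
   # variables settings of one VNF-BD instance (i.e., one Test).
   names = [v["name"] for v in variables]
   sample_space = [dict(zip(names, combo))
                   for combo in product(*(v["values"] for v in variables))]
   # In this example, 3 x 2 = 6 combinations, hence 6 VNF-BD instances.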
717 5.4.2. Realization 719 Once all the planning procedures are accomplished, the 720 realization of the automated benchmarking methodology must 721 proceed as the following topics describe (a non-normative sketch of this process follows the list). 723 1. The realization of the benchmarking procedures starts when the 724 VNF-BR composed in the planning procedures is submitted to the 725 Manager component. It triggers the automated execution of the 726 benchmarking methodology defined by the inputs part of the VNF-BR. 728 2. The Manager computes all the combinations of values from the lists 729 of inputs in the VNF-BD, part of the submitted VNF-BR. Each 730 combination of variables is used to define a Test. The VNF-BD 731 submitted serves as a template for each combination of variables. 732 The parsing of each combination of variables into the VNF-BD 733 template creates a so-called VNF-BD instance. The Manager must 734 iterate through all the VNF-BD instances to finish the whole set 735 of Tests defined by all the combinations of variables and their 736 respective parsed VNF-BDs. The Manager iterates through the 737 following steps until all the Tests are accomplished. 739 3. The Manager must interface with an orchestration platform to realize 740 the automated instantiation of the deployment scenario defined by 741 a VNF-BD instance (i.e., a Test). To perform such a step, the 742 Manager might interface with a management function responsible for 743 properly parsing the deployment scenario specifications into the 744 orchestration platform interface format. The environment 745 specifications of the VNF-BR header provide the guidelines to 746 interface with the orchestration platform. The orchestration platform 747 must deploy the scenario requested by the Manager, assuring the 748 requirements and policies specified in it. In addition, the 749 orchestration platform must acknowledge the deployed scenario to 750 the Manager, specifying the management interfaces of the VNF SUT 751 and the other components in the running instances of the 752 benchmarking scenario. Only when the scenario is correctly 753 deployed must the execution of the VNF-BD instance Test(s) and Trial(s) 754 occur; otherwise, the whole execution of the VNF-BR must be 755 aborted and an error message must be added to the VNF-BR outputs 756 describing the problems that occurred in the instantiation of the 757 VNF-BD scenario. If the scenario is successfully deployed, the 758 VNF-BD Test proceedings can be executed. 760 4. The Manager must interface with Agent(s) and Monitor(s) via their 761 management interfaces to require the execution of the VNF-BD 762 proceedings, which consists of running the specified Probers and 763 Listeners using the defined parameters, and retrieving their output 764 metrics captured at the end of each Trial. Thus, a Trial 765 comprises the execution of the proceedings of the VNF-BD instance. 766 The number of Trials is defined in each VNF-BD instance. After 767 the execution of all defined Trials, the execution of a Test ends. 769 5. Output measurements from each benchmarking Trial that 770 composes a Test result must be collected by the Manager, until all 771 the Tests are finished. Each set of collected measurements from 772 each VNF-BD instance's Trials and Tests must be used to elaborate a 773 VNF-PP by the Manager component. The respective VNF-PP, its 774 associated VNF-BD instance and its input variables compose one of 775 the entries of the list of outputs of the VNF-BR. After the whole 776 list of combinations of input variables is explored to obtain the 777 complete list of VNF-BD instances and elaborated VNF-PPs, the 778 Manager component returns the original VNF-BR submitted to it, 779 including the outputs part properly filled.
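The realization procedure above can be summarized by the following
non-normative sketch (in Python, for illustration only); the callables
parse_vnf_bd, deploy_scenario, run_trial, and build_vnf_pp are hypothetical
placeholders for the Manager's interactions with the orchestration platform
and with the Agent(s)/Monitor(s), not interfaces defined by this document.

   # Non-normative sketch of the Manager realization loop.
   def realize(vnf_br, sample_space, parse_vnf_bd, deploy_scenario,
               run_trial, build_vnf_pp, trials=1):
       outputs = []
       for combination in sample_space:       # each combination defines a Test
           vnf_bd = parse_vnf_bd(vnf_br["inputs"]["vnf_bd"], combination)
           scenario = deploy_scenario(vnf_bd, vnf_br["header"]["environment"])
           if not scenario:                   # abort the VNF-BR on failure
               raise RuntimeError("scenario instantiation failed")
           snapshots = [run_trial(vnf_bd, scenario) for _ in range(trials)]
           outputs.append({"variables": combination,
                           "vnf_bd": vnf_bd,
                           "vnf_pp": build_vnf_pp(snapshots)})
       vnf_br["outputs"] = outputs            # VNF-BR returned with outputs
       return vnf_br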
781 5.4.3. Summary 783 After the realization of an automated benchmarking methodology, some 784 automated procedures can be performed to improve the quality and the 785 utility of the obtained VNF-BR, as described in the following topics. 787 1. Archive the raw outputs contained in the VNF-BR, perform 788 statistical analysis on them, or train machine learning models with 789 the collected data. 791 2. Evaluate the analysis output to detect any possible 792 cause-effect factors and/or intrinsic correlations in the VNF-BR 793 outputs (e.g., outliers). 795 3. Review the inputs of a VNF-BR, VNF-BD and variables, and modify 796 them to realize the proper extraction of the target VNF metrics 797 based on the intended goal of the VNF benchmarking methodology 798 (e.g., throughput). Iterate over the previous steps until composing 799 a stable and representative VNF-BR. 801 6. Particular Cases 803 As described in [RFC8172], VNF benchmarking might require changing 804 and adapting existing benchmarking methodologies. More specifically, 805 the following cases need to be considered. 807 6.1. Capacity 809 VNFs are usually deployed inside containers or VMs to build an 810 abstraction layer between physical resources and the resources 811 available to the VNF. According to [RFC8172], it may be more 812 representative to design experiments in a way that the VMs hosting 813 the VNFs are operating at a maximum of 50% utilization and to split the 814 workload among several VMs, to mitigate side effects of overloaded 815 VMs. Those cases are supported by the presented automation 816 methodology through VNF-BDs that enable direct control over the 817 resource assignments and topology layouts used for a benchmarking 818 experiment. 820 6.2. Redundancy 822 As a VNF might be composed of multiple components (VNFCs), there 823 exist different schemes of redundancy where particular VNFCs would be 824 in active or standby mode. For such cases, particular monitoring 825 endpoints should be specified in the VNF-BD so listeners can capture the 826 relevant aspects of benchmarking when VNFCs would be in active/ 827 standby modes. In this particular case, capturing the relevant 828 aspects of internal functionalities of a VNF and its internal 829 components provides important measurements to characterize the 830 dynamics of a VNF; those must be reflected in its VNF-PP. 832 6.3. Isolation 834 One of the main challenges of NFV is to create isolation between 835 VNFs.
Benchmarking the quality of this isolation behavior can be 836 achieved by Agents that take the role of a noisy neighbor, generating 837 a particular workload in synchrony with a benchmarking procedure over 838 a VNF. Adjustments of the Agent's noisy workload, frequency, 839 virtualization level, among others, must be detailed in the VNF-BD. 841 6.4. Failure Handling 843 Hardware and software components will fail or have errors and thus 844 trigger healing actions of the benchmarked VNFs (self-healing). 845 Benchmarking procedures must also capture the dynamics of this VNF 846 behavior, e.g., if a container or VM restarts because the VNF 847 software crashed. This results in offline periods that must be 848 captured in the benchmarking reports, introducing additional metrics, 849 e.g., max. time-to-heal. The presented concept, with a flexible VNF-PP 850 structure to record arbitrary metrics, enables automation of this 851 case. 853 6.5. Elasticity and Flexibility 855 With software-based network functions and the possibility of a VNF 856 being composed of multiple components (VNFCs), internal events of the 857 VNF might trigger changes in VNF behavior, e.g., activating 858 functionalities associated with elasticity, such as automated scaling. 859 These state changes and triggers (e.g., the VNF's scaling state) must 860 be captured in the benchmarking results (VNF-PP) to provide a 861 detailed characterization of the VNF's performance behavior in 862 different states. 864 6.6. Handling Configurations 866 As described in [RFC8172], the sheer number of test conditions 867 and configuration combinations creates a challenge for VNF 868 benchmarking. As suggested, machine-readable output formats, as 869 presented in this document, will allow automated benchmarking 870 procedures to optimize the tested configurations. Approaches for 871 this are, e.g., machine learning-based configuration space 872 sub-sampling methods, such as [Peu-c]. 874 6.7. White Box VNF 876 A benchmarking setup must be able to define scenarios with and 877 without monitoring components inside the VNFs and/or the hosting 878 container or VM. If no monitoring solution is available from within 879 the VNFs, the benchmark follows the black-box concept. If, in 880 contrast, those additional sources of information from within the VNF 881 are available, VNF-PPs must be able to handle these additional VNF 882 performance metrics. 884 7. Open Source Reference Implementation 886 Currently, technical motivating factors in favor of the automation of 887 VNF benchmarking methodologies comprise: (i) the facility of running 888 high-fidelity and commodity traffic generators by software; (ii) the 889 existing means to construct synthetic traffic workloads purely by 890 software (e.g., handcrafted pcap files); (iii) the increasing 891 availability of datasets containing actual sources of production 892 traffic able to be reproduced in benchmarking tests; (iv) the 893 existence of a myriad of automation tools and open interfaces to 894 programmatically manage VNFs; (v) the varied set of orchestration 895 platforms enabling the allocation of resources and instantiation of 896 VNFs through automated machinery based on well-defined templates; 897 (vi) the ability to utilize a large tool set of software components 898 to compose pipelines that mathematically analyze benchmarking metrics 899 in automated ways.
901 In simple terms, the factors listed above show that network 902 softwarization enables the automation of VNF benchmarking 903 methodologies. There exists an open source reference implementation 904 that is built to demonstrate the concepts and methodology of this 905 document in order to automate the benchmarking of Virtualized Network 906 Functions. 908 7.1. Gym 910 The software, named Gym, is a framework for automated benchmarking of 911 Virtualized Network Functions (VNFs). It was coded following the 912 initial ideas presented in a 2015 scientific paper entitled "VBaaS: 913 VNF Benchmark-as-a-Service" [Rosa-a]. Later, the evolved design and 914 prototyping ideas were presented at IETF/IRTF meetings seeking impact 915 in the NFVRG and BMWG. 917 Gym was built to receive high-level test descriptors and execute them 918 to extract VNF profiles, containing measurements of performance 919 metrics - especially to associate resource allocation (e.g., vCPU) 920 with packet processing metrics (e.g., throughput) of VNFs. From the 921 original research ideas [Rosa-a], such output profiles might be used 922 by orchestrator functions to perform VNF lifecycle tasks (e.g., 923 deployment, maintenance, tear-down). 925 In [Rosa-b], Gym was utilized to benchmark a decomposed IP Multimedia 926 Subsystem VNF, and in [Rosa-c], a virtual switch (Open vSwitch - 927 OVS) was the target VNF of Gym for the analysis of VNF benchmarking 928 automation. Such articles validated Gym as a prominent open source 929 reference implementation for VNF benchmarking tests, and they 930 set important contributions such as the discussion of the lessons learned and 931 the overall NFV performance testing landscape, including automation. 933 Gym stands as one open source reference implementation that realizes 934 the VNF benchmarking methodologies presented in this document. Gym 935 is released as an open source tool under the Apache 2.0 license [gym]. 937 7.2. Related work: tng-bench 939 Another software project that focuses on implementing a framework to 940 benchmark VNFs is the "5GTANGO VNF/NS Benchmarking Framework", also 941 called "tng-bench" (previously "son-profile"), which was developed as 942 part of the two European Union H2020 projects SONATA NFV and 5GTANGO 943 [tango]. Its initial ideas were presented in [Peu-a] and the system 944 design of the end-to-end prototype was presented in [Peu-b]. 946 Tng-bench aims to be a framework for the end-to-end automation of VNF 947 benchmarking processes. Its goal is to automate the benchmarking 948 process in such a way that VNF-PPs can be generated without further 949 human interaction. This enables the integration of VNF benchmarking 950 into continuous integration and continuous delivery (CI/CD) pipelines 951 so that new VNF-PPs are generated on-the-fly for every new software 952 version of a VNF. Those automatically generated VNF-PPs can then be 953 bundled with the VNFs and serve as inputs for orchestration systems, 954 fitting the original research ideas presented in [Rosa-a] and 955 [Peu-a]. 957 Following the same high-level VNF testing purposes as Gym, namely 958 comparability, repeatability, configurability, and interoperability, 959 tng-bench specifically aims to explore description approaches for 960 VNF benchmarking experiments.
In [Peu-b] a prototype specification 961 for VNF-BDs is presented that not only allows specifying generic, 962 abstract VNF benchmarking experiments, but also allows describing 963 sets of parameter configurations to be tested during the benchmarking 964 process, allowing the system to automatically execute complex 965 parameter studies on the SUT, e.g., testing a VNF's performance under 966 different CPU, memory, or software configurations. 968 Tng-bench was used to perform a set of initial benchmarking 969 experiments using different VNFs, like a Squid proxy, an Nginx load 970 balancer, and a Socat TCP relay in [Peu-b]. Those VNFs have not only 971 been benchmarked in isolation, but also in combined setups in which 972 up to three VNFs were chained one after another. These 973 experiments were used to test tng-bench for scenarios in which 974 composed VNFs, consisting of multiple VNF components (VNFCs), have to 975 be benchmarked. The presented results highlight the need to 976 benchmark composed VNFs in end-to-end scenarios rather than only 977 benchmark each individual component in isolation, to produce 978 meaningful VNF-PPs for the complete VNF. 980 Tng-bench is actively developed and released as an open source tool 981 under the Apache 2.0 license [tng-bench]. A larger set of example 982 benchmarking results of various VNFs is available in [Peu-d]. 984 8. Security Considerations 986 Benchmarking tests described in this document are limited to the 987 performance characterization of VNFs in a lab environment with an 988 isolated network. 990 The benchmarking network topology will be an independent test setup 991 and MUST NOT be connected to devices that may forward the test 992 traffic into a production network, or misroute traffic to the test 993 management network. 995 Special capabilities SHOULD NOT exist in the VNF benchmarking 996 deployment scenario specifically for benchmarking purposes. Any 997 implications for network security arising from the VNF benchmarking 998 deployment scenario SHOULD be identical in the lab and in production 999 networks. 1001 9. IANA Considerations 1003 This document registers three URIs in the "ns" subregistry of the IETF 1004 XML Registry [RFC3688]. Following the format in [RFC3688], the 1005 following registrations are requested: 1007 URI: urn:ietf:params:xml:ns:yang:ietf-vnf-bd 1008 Registrant Contact: The BMWG of the IETF. 1009 XML: N/A, the requested URI is an XML namespace. 1011 URI: urn:ietf:params:xml:ns:yang:ietf-vnf-pp 1012 Registrant Contact: The BMWG of the IETF. 1013 XML: N/A, the requested URI is an XML namespace. 1015 URI: urn:ietf:params:xml:ns:yang:ietf-vnf-br 1016 Registrant Contact: The BMWG of the IETF. 1017 XML: N/A, the requested URI is an XML namespace. 1019 Figure 3 1021 This document registers three YANG modules in the YANG Module Names 1022 registry [RFC6020]. Following the format in [RFC6020], the following 1023 registrations are requested: 1025 name: ietf-vnf-bd 1026 namespace: urn:ietf:params:xml:ns:yang:ietf-vnf-bd 1027 prefix: vnf-bd 1028 reference: RFC CCCC 1030 name: ietf-vnf-pp 1031 namespace: urn:ietf:params:xml:ns:yang:ietf-vnf-pp 1032 prefix: vnf-pp 1033 reference: RFC CCCC 1035 name: ietf-vnf-br 1036 namespace: urn:ietf:params:xml:ns:yang:ietf-vnf-br 1037 prefix: vnf-br 1038 reference: RFC CCCC 1040 Figure 4 1042 10. YANG Modules 1044 The following sections contain the YANG modules defined by this 1045 document. 1047 10.1.
VNF-Benchmarking Descriptor 1049 module vnf-bd { 1050 namespace "urn:ietf:params:xml:ns:yang:vnf-bd"; 1051 prefix "vnf-bd"; 1053 organization "IETF/BMWG"; 1054 contact "Raphael Vicente Rosa , 1055 Manuel Peuster "; 1057 description "Yang module for a VNF Benchmarking 1058 Descriptor (VNF-BD)."; 1060 revision "2019-08-13" { 1061 description "V0.3: Reviewed proceedings, 1062 tool - not VNF specific"; 1063 reference ""; 1064 } 1066 revision "2019-03-13" { 1067 description "V0.2: Reviewed role, policies, connection-points, 1068 lifecycle workflows, resources"; 1069 reference ""; 1070 } 1072 revision "2019-02-28" { 1073 description "V0.1: First release"; 1074 reference ""; 1075 } 1077 typedef workflows { 1078 type enumeration { 1079 enum create { 1080 description "When calling the create workflow."; 1081 } 1082 enum configure { 1083 description "When calling the configure workflow."; 1084 } 1085 enum start { 1086 description "When calling the start workflow."; 1087 } 1088 enum stop { 1089 description "When calling the stop workflow."; 1090 } 1091 enum delete { 1092 description "When calling the delete workflow."; 1093 } 1094 enum custom { 1095 description "When calling a custom workflow."; 1096 } 1097 } 1098 description "Defines basic life cycle workflows for a 1099 node in a scenario."; 1100 } 1102 grouping node_requirements { 1103 container resources { 1104 container cpu { 1105 leaf vcpus { 1106 type uint32; 1107 description "The number of cores to be allocated 1108 for a node."; 1109 } 1110 leaf cpu_bw { 1111 type string; 1112 description "The CPU bandwidth (CFS limit in 0.01-1.0)"; 1113 } 1114 leaf pinning { 1115 type string; 1116 description "The list of CPU cores, separated by comma, 1117 that a node must be pinned to."; 1118 } 1119 description "The node CPU resources that must 1120 be allocated for a benchmarking Test."; 1122 } 1123 container memory { 1124 leaf size { 1125 type uint32; 1126 description "The memory allocation size."; 1127 } 1128 leaf unit { 1129 type string; 1130 description "The memory unit."; 1131 } 1132 description "The node memory resources 1133 that must be allocated for a benchmarking 1134 Test."; 1135 } 1136 container storage { 1137 leaf size { 1138 type uint32; 1139 description "The storage allocation size."; 1140 } 1141 leaf unit { 1142 type string; 1143 description "The storage unit."; 1144 } 1145 leaf volumes { 1146 type string; 1147 description "Volumes to be allocated by 1148 a node storage. 1149 A volume defines a mapping of an outside storage 1150 partition inside the node storage system. 
1151 Volumes must be separated by commas and be defined 1152 using a colon to separate the node internal and external 1153 references of storage system paths."; 1154 } 1156 description "The node storage resources 1157 that must be allocated for a benchmarking Test."; 1158 } 1160 description "The set of resources that must be allocated 1161 for a node in a benchmarking Test."; 1162 } 1164 description "The grouping determining the 1165 resource requirements for a node in a scenario."; 1166 } 1168 grouping connection_points { 1169 leaf id { 1170 type string; 1171 description "The connection-point 1172 unique identifier."; 1173 } 1174 leaf interface { 1175 type string; 1176 description "The name of the node interface 1177 associated with the connection-point."; 1178 } 1179 leaf type { 1180 type string; 1181 description "The type of the network the 1182 connection-point interface is attached to."; 1183 } 1184 leaf address { 1185 type string; 1186 description "The network address of the 1187 connection-point. It can be specified as an 1188 Ethernet MAC address, an IPv4 address, or an IPv6 address."; 1189 } 1190 description "A connection-point of a node."; 1191 } 1193 grouping nodes { 1194 leaf id { 1195 type string; 1196 description "The unique identifier of a node 1197 in a scenario."; 1198 } 1199 leaf type { 1200 type string; 1201 description "The type of a node."; 1202 } 1203 leaf image { 1204 type string; 1205 description "The name of the image to be used to instantiate 1206 a node."; 1207 } 1208 leaf format { 1209 type string; 1210 description "The node format (e.g., container, process, VM)."; 1211 } 1212 leaf role { 1213 type string; 1214 description "The role of the node in the Test scenario. 1215 The role must be one of: manager, agent, monitor, sut."; 1216 } 1217 uses node_requirements; 1219 list connection_points { 1220 key "id"; 1221 uses connection_points; 1222 description "The list of connection points of a node."; 1223 } 1225 list relationships { 1226 key "name"; 1227 leaf name { 1228 type string; 1229 description "Name of the relationship."; 1230 } 1231 leaf type { 1232 type string; 1233 description "Type of the relationship."; 1234 } 1235 leaf target { 1236 type string; 1237 description "Target of the relationship."; 1238 } 1240 description "Relationship of a node with the other 1241 scenario components."; 1242 } 1244 list lifecycle { 1245 key "workflow"; 1246 leaf workflow { 1247 type workflows; 1248 description "The type of the workflow."; 1249 } 1250 leaf name { 1251 type string; 1252 description "The workflow name."; 1253 } 1255 list parameters { 1256 key "input"; 1257 leaf input { 1258 type string; 1259 description "The name of the parameter."; 1260 } 1261 leaf value { 1262 type string; 1263 description "The value of the parameter."; 1265 } 1267 description "The list of parameters to be 1268 applied to the node workflow."; 1269 } 1271 leaf-list implementation { 1272 type string; 1273 description "The workflow implementation."; 1274 } 1276 description "The life cycle workflows to be 1277 applied to this node."; 1278 } 1280 description "The specification of a node to be used 1281 in a scenario for a benchmarking Test."; 1282 } 1284 grouping link { 1285 leaf id { 1286 type string; 1287 description "The link unique identifier."; 1288 } 1289 leaf name { 1290 type string; 1291 description "The name of the link."; 1292 } 1293 leaf type { 1294 type string; 1295 description "The type of the link."; 1296 } 1297 leaf network { 1298 type string; 1299 description "The network the link
belongs to."; 1300 } 1301 leaf-list connection_points { 1302 type leafref { 1303 path "../../nodes/connection_points/id"; 1304 } 1305 description "Reference to the connection points of nodes 1306 the link is adjacent to."; 1307 } 1308 description "A link between nodes in a scenario."; 1309 } 1311 grouping scenario { 1312 list nodes { 1313 key "id"; 1314 uses nodes; 1315 description "The list of nodes that must be 1316 instantiated in a scenario in order to enable 1317 a benchmarking Test."; 1318 } 1320 list links { 1321 key "id"; 1322 uses link; 1323 description "The list of links among nodes that must be 1324 instantiated in a scenario in order to enable 1325 a benchmarking Test."; 1326 } 1328 list policies { 1329 key "name"; 1330 leaf name { 1331 type string; 1332 description "The name of the policy."; 1333 } 1334 leaf type { 1335 type string; 1336 description "The type of the policy."; 1337 } 1338 leaf targets { 1339 type string; 1340 description "The targets of the policy. 1341 UUIDs of nodes and/or links, separated by commas."; 1342 } 1343 leaf action { 1344 type string; 1345 description "The action of the policy."; 1346 } 1348 description "Definition of policies to be 1349 utilized in the instantiation of the scenario. 1350 A policy is defined by a name, its type, 1351 the targets (nodes and/or links) to which it must 1352 be applied, and the proper action that 1353 realizes the policy."; 1354 } 1356 description "Describes the deployment of all 1357 involved functional components mandatory for 1358 the execution of a benchmarking Test."; 1359 } 1360 grouping tool { 1361 leaf id { 1362 type uint32; 1363 description "The unique identifier of a tool. 1364 This information specifies how a tool can be 1365 identified in a list of probers/listeners of an 1366 Agent/Monitor."; 1367 } 1368 leaf instances { 1369 type uint32; 1370 description "The number of tool instances that 1371 must be executed in parallel."; 1372 } 1373 leaf name { 1374 type string; 1375 description "The name of a tool."; 1376 } 1377 list parameters { 1378 key "input"; 1379 leaf input { 1380 type string; 1381 description "The input key of a parameter."; 1382 } 1383 leaf value { 1384 type string; 1385 description "The value of a parameter."; 1386 } 1387 description "List of parameters for the execution 1388 of the tool. Each tool defines the proper set of running 1389 parameters that must be utilized to realize a benchmarking 1390 test."; 1391 } 1393 container sched { 1394 leaf from { 1395 type uint32; 1396 default 0; 1397 description "The initial time (in seconds) 1398 of the execution of the tool."; 1399 } 1401 leaf until { 1402 type uint32; 1403 description "The final/maximum time (in seconds) 1404 of the execution of the tool, summing all its instances' 1405 repeat, duration, and interval parameters."; 1406 } 1407 leaf duration { 1408 type uint32; 1409 description "The total duration (in seconds) of the execution 1410 of each instance of the tool."; 1411 } 1413 leaf interval { 1414 type uint32; 1415 description "The interval (in seconds) to wait 1416 between each one of the instances of the 1417 execution of the tool."; 1418 } 1420 leaf repeat { 1421 type uint32; 1422 description "The number of times the tool must be executed."; 1423 } 1425 description "The scheduling parameters of a tool. 1426 Each Agent/Monitor must utilize the scheduling parameters 1427 to perform the execution of its tools (probers/listeners) 1428 accordingly."; 1429 } 1431 description "A tool to be used in a benchmarking test.
1432 A tool can be either a prober or a listener."; 1433 } 1435 grouping component { 1436 leaf uuid { 1437 type string; 1438 description "A unique identifier."; 1439 } 1440 leaf name { 1441 type string; 1442 description "The name of the component."; 1443 } 1445 description "A generic component."; 1446 } 1448 grouping agent { 1449 uses component; 1451 list probers { 1452 key "id"; 1453 uses tool; 1454 description "Defines a list of the Prober(s) 1455 that must be used in a benchmarking test."; 1456 } 1457 description "An Agent defined by its uuid, 1458 name and the mandatory list of probers to be used 1459 by a benchmarking test."; 1460 } 1462 grouping monitor { 1463 uses component; 1465 list listeners { 1466 key "id"; 1467 uses tool; 1468 description "Defines a list of the Listener(s) 1469 that must be used in a benchmarking test."; 1470 } 1471 description "A Monitor defined by its uuid, 1472 name and the mandatory list of listeners to be used 1473 by a benchmarking test."; 1474 } 1476 grouping proceedings { 1477 list agents { 1478 key "uuid"; 1479 uses agent; 1480 description "Defines a list containing the 1481 Agent(s) needed for a VNF-BD test."; 1482 } 1484 list monitors { 1485 key "uuid"; 1486 uses monitor; 1487 description "Defines a list containing the 1488 Monitor(s) needed for a VNF-BD test."; 1489 } 1490 description "Information utilized by a Manager 1491 component to execute a benchmarking test."; 1492 } 1494 grouping vnf-bd { 1496 container experiments { 1497 leaf trials { 1498 type uint32; 1499 default 1; 1500 description "Number of trials. 1501 A trial is a single process or iteration 1502 to obtain VNF performance metrics from 1503 benchmarking the VNF-BD proceedings."; 1504 } 1505 leaf tests { 1506 type uint32; 1507 default 1; 1508 description "Number of tests. 1509 Each test defines unique structural 1510 and functional parameters (e.g., configurations, 1511 resource assignment) for benchmarked components 1512 to perform one or multiple Trials. 1513 Each Test must be executed following a 1514 particular scenario."; 1515 } 1516 description "Defines the number of trials and tests 1517 the VNF-BD must execute."; 1518 } 1520 container scenario { 1521 uses scenario; 1522 description "The scenario defined by this VNF-BD. 1523 A scenario contains all information needed to describe 1524 the deployment of all involved functional components 1525 mandatory for the execution of a benchmarking Test."; 1526 } 1528 container proceedings { 1529 uses proceedings; 1530 description "Proceedings of the VNF-BD. 1531 The proceedings are utilized by the Manager component 1532 to execute a benchmarking Test. They consist of 1533 agent(s)/monitor(s) settings, detailing their 1534 prober(s)/listener(s) specification and 1535 running parameters."; 1536 } 1538 description "A single VNF-BD. 1539 A VNF-BD contains all required definitions and 1540 requirements to deploy, configure, execute, and 1541 reproduce VNF benchmarking tests."; 1542 } 1544 uses vnf-bd; 1545 } 1547 Figure 5
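   The following non-normative example is a minimal sketch of what
   instance data of the vnf-bd module could look like when encoded in
   JSON.  All identifiers, image names, and parameter values are
   hypothetical and only illustrate the structure of a simple scenario
   with one Agent, one SUT, and a single prober; they are not
   prescribed by the module above.

   {
     "vnf-bd:experiments": { "trials": 3, "tests": 1 },
     "vnf-bd:scenario": {
       "nodes": [
         {
           "id": "agent01",
           "image": "gym-agent",
           "format": "container",
           "role": "agent",
           "resources": {
             "cpu": { "vcpus": 2 },
             "memory": { "size": 2048, "unit": "MB" }
           },
           "connection_points": [
             { "id": "agent01:eth1", "interface": "eth1",
               "type": "internal" }
           ]
         },
         {
           "id": "sut01",
           "image": "vnf-example",
           "format": "container",
           "role": "sut",
           "resources": {
             "cpu": { "vcpus": 4 },
             "memory": { "size": 4096, "unit": "MB" }
           },
           "connection_points": [
             { "id": "sut01:eth1", "interface": "eth1",
               "type": "internal" }
           ]
         }
       ],
       "links": [
         { "id": "link01", "network": "net01",
           "connection_points": [ "agent01:eth1", "sut01:eth1" ] }
       ]
     },
     "vnf-bd:proceedings": {
       "agents": [
         {
           "uuid": "agent01",
           "name": "agent01",
           "probers": [
             { "id": 1, "instances": 1, "name": "iperf3",
               "parameters": [ { "input": "duration", "value": "30" } ],
               "sched": { "from": 0, "duration": 30, "repeat": 1 } }
           ]
         }
       ]
     }
   }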
1549 10.2. VNF Performance Profile 1551 module vnf-pp { 1552 namespace "urn:ietf:params:xml:ns:yang:vnf-pp"; 1553 prefix "vnf-pp"; 1555 organization "IETF/BMWG"; 1556 contact "Raphael Vicente Rosa <rvrosa@dca.fee.unicamp.br>, 1557 Manuel Peuster <manuel.peuster@upb.de>"; 1559 description "YANG module for a VNF Performance Profile (VNF-PP)."; 1561 revision "2019-10-15" { 1562 description "Reviewed VNF-PP structure - 1563 defines reports, snapshots, evaluations"; 1564 reference ""; 1565 } 1567 revision "2019-08-13" { 1568 description "V0.1: First release"; 1569 reference ""; 1570 } 1572 grouping tuple { 1573 description "A tuple used as key-value."; 1574 leaf key { 1575 type string; 1576 description "Tuple key."; 1577 } 1579 leaf value { 1580 type string; 1581 description "Tuple value."; 1582 } 1583 } 1585 grouping metric { 1586 leaf name { 1587 type string; 1588 description "The metric name."; 1589 } 1591 leaf unit { 1592 type string; 1593 description "The unit of the metric value(s)."; 1594 } 1595 leaf type { 1596 type string; 1597 mandatory true; 1598 description "The data type encoded in the value. 1599 It must refer to a known variable type, i.e., 1600 string, float, uint, etc."; 1601 } 1603 choice value { 1604 case scalar { 1605 leaf scalar { 1606 type string; 1607 mandatory true; 1608 description "A single scalar value."; 1609 } 1610 } 1611 case vector { 1612 leaf-list vector { 1613 type string; 1614 min-elements 1; 1615 description "A list of scalar values."; 1616 } 1617 } 1618 case series { 1619 list series { 1620 key "key"; 1621 uses tuple; 1622 description "A list of key/values, 1623 e.g., a timeseries."; 1624 } 1625 } 1627 mandatory true; 1628 description "Value choice: scalar, vector, series. 1629 A metric can only contain a value with one of them."; 1630 } 1632 description "A metric that holds the recorded benchmarking 1633 results; it can be a single value (scalar), a list of values 1634 (vector), or a list of key/value 1635 data (series), e.g., for timeseries."; 1637 } 1639 grouping evaluation { 1640 leaf id { 1641 type string; 1642 description "The evaluation 1643 unique identifier."; 1644 } 1646 leaf instance { 1647 type uint32; 1648 description "The unique identifier of the 1649 parallel instance of the prober/listener that 1650 was executed and created the evaluation."; 1651 } 1653 leaf repeat { 1654 type uint32; 1655 description "The unique identifier of the 1656 prober/listener repetition instance that 1657 was executed and created the evaluation."; 1658 } 1660 container source { 1662 leaf id { 1663 type string; 1664 description "The unique identifier of the source 1665 of the evaluation, 1666 i.e., the prober/listener unique identifier."; 1667 } 1669 leaf name { 1670 type string; 1671 description "The name of the source of the evaluation, 1672 i.e., the prober/listener name."; 1673 } 1675 leaf type { 1676 type string; 1677 description "The type of the source of the evaluation, 1678 i.e., one of prober or listener, that was used to obtain 1679 it."; 1680 } 1682 leaf version { 1683 type string; 1684 description "The version of the tool interfacing 1685 the prober/listener that was used to obtain 1686 the evaluation."; 1687 } 1689 leaf call { 1690 type string; 1691 description "The full call of the tool realized by 1692 the source of the evaluation that performed 1693 the acquisition of the metrics."; 1694 } 1696 description "The details regarding the 1697 source of the evaluation."; 1698 } 1700 container timestamp { 1702 leaf start { 1703 type string; 1704 description "Time (date, hour, minute, second) 1705 when the evaluation started.";
1706 } 1708 leaf stop { 1709 type string; 1710 description "Time (date, hour, minute, second) 1711 when the evaluation stopped."; 1712 } 1714 description "Timestamps of the procedures 1715 that realized the extraction of the evaluation."; 1716 } 1718 list metrics { 1719 key "name"; 1720 uses metric; 1721 description "List of metrics obtained 1722 from a single evaluation."; 1723 } 1725 leaf error { 1726 type string; 1727 description "Error, if existent, 1728 when obtaining the evaluation."; 1729 } 1731 description "The set of metrics and their source 1732 associated with a single Trial."; 1733 } 1735 grouping snapshot { 1736 leaf id { 1737 type string; 1738 description "The snapshot 1739 unique identifier."; 1740 } 1742 leaf trial { 1743 type uint32; 1744 description "The identifier of the trial 1745 when the snapshot was obtained."; 1746 } 1748 container origin { 1750 leaf id { 1751 type string; 1752 description "The unique identifier of the 1753 component of the origin of the snapshot, 1754 i.e., the agent or monitor unique identifier."; 1755 } 1757 leaf role { 1758 type string; 1759 description "The role of the component, 1760 origin of the snapshot, i.e., 1761 one of agent or monitor."; 1762 } 1764 leaf host { 1765 type string; 1766 description "The hostname where the 1767 source of the snapshot was placed."; 1768 } 1770 description "The detailed origin of 1771 the snapshot."; 1773 } 1775 list evaluations { 1776 key "id"; 1777 uses evaluation; 1778 description "The list of evaluations 1779 contained in a single snapshot."; 1780 } 1782 leaf timestamp { 1783 type string; 1784 description "Time (date, hour, minute, second) 1785 when the snapshot was created."; 1787 } 1789 leaf error { 1790 type string; 1791 description "Error, if existent, 1792 when obtaining the snapshot."; 1793 } 1795 description "The set of evaluations and their origin, 1796 output of the execution of a single trial."; 1797 } 1799 grouping report { 1800 leaf id { 1801 type string; 1802 description "The report unique identifier."; 1803 } 1805 leaf test { 1806 type uint32; 1807 description "The identifier of the Test 1808 when the snapshots were obtained."; 1809 } 1811 list snapshots { 1812 key "id"; 1813 uses snapshot; 1814 description "List of snapshots contained 1815 in a single report."; 1816 } 1818 leaf timestamp { 1819 type string; 1820 description "Time (date, hour, minute, second) 1821 when the report was created."; 1822 } 1824 leaf error { 1825 type string; 1826 description "Error, if existent, 1827 when obtaining the report."; 1828 } 1830 description "The set of snapshots output 1831 of a single Test."; 1832 } 1833 grouping header { 1834 leaf id { 1835 type string; 1836 description "Unique identifier of the VNF-PP."; 1837 } 1838 leaf name { 1839 type string; 1840 description "Name of the VNF-PP."; 1841 } 1842 leaf version { 1843 type string; 1844 description "Version of the VNF-PP."; 1845 } 1846 leaf description { 1847 type string; 1848 description "Description of the VNF-PP."; 1849 } 1850 leaf timestamp { 1851 type string; 1852 description "Time (date, hour, minute, second) 1853 when the VNF-PP was created."; 1854 } 1856 description "The header content of a VNF-PP."; 1857 } 1859 grouping vnf-pp { 1861 uses header; 1863 list reports { 1864 key "id"; 1865 uses report; 1866 description "List of the reports of a VNF-PP."; 1867 } 1869 description "A single VNF-PP."; 1870 } 1872 uses vnf-pp; 1873 } 1875 Figure 6
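   Analogously, the following non-normative sketch shows hypothetical
   VNF-PP instance data in JSON, reporting a single Test with one
   snapshot and one evaluation that carries a scalar throughput metric.
   All identifiers, timestamps, and values are illustrative only.

   {
     "vnf-pp:id": "vnf-pp-001",
     "vnf-pp:name": "example-profile",
     "vnf-pp:version": "0.1",
     "vnf-pp:timestamp": "2020-10-20 10:05:00",
     "vnf-pp:reports": [
       {
         "id": "report-1",
         "test": 1,
         "timestamp": "2020-10-20 10:05:00",
         "snapshots": [
           {
             "id": "snapshot-1",
             "trial": 1,
             "origin": { "id": "agent01", "role": "agent",
                         "host": "server-a" },
             "timestamp": "2020-10-20 10:04:30",
             "evaluations": [
               {
                 "id": "evaluation-1",
                 "instance": 1,
                 "repeat": 1,
                 "source": { "id": "1", "name": "iperf3",
                             "type": "prober", "version": "3.7",
                             "call": "iperf3 -c 10.0.0.2 -t 30" },
                 "timestamp": { "start": "2020-10-20 10:04:00",
                                "stop": "2020-10-20 10:04:30" },
                 "metrics": [
                   { "name": "throughput", "unit": "Gbps",
                     "type": "float", "scalar": "9.4" }
                 ]
               }
             ]
           }
         ]
       }
     ]
   }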
1877 10.3. VNF Benchmarking Report 1879 module vnf-br { 1880 namespace "urn:ietf:params:xml:ns:yang:vnf-br"; 1881 prefix "vnf-br"; 1883 import vnf-bd { 1884 prefix "vnfbd"; 1885 revision-date 2019-08-13; 1886 } 1888 import vnf-pp { 1889 prefix "vnfpp"; 1890 revision-date 2019-10-15; 1891 } 1893 organization "IETF/BMWG"; 1894 contact "Raphael Vicente Rosa <rvrosa@dca.fee.unicamp.br>, 1895 Manuel Peuster <manuel.peuster@upb.de>"; 1896 description "YANG module for a VNF Benchmarking Report (VNF-BR)."; 1898 revision "2020-09-09" { 1899 description "V0.2: Review the structure 1900 and the grouping/leaf descriptions."; 1901 reference ""; 1902 } 1904 revision "2020-09-09" { 1905 description "V0.1: First release"; 1906 reference ""; 1907 } 1909 grouping variable { 1910 leaf name { 1911 type string; 1912 description "The name of the variable."; 1913 } 1914 leaf path { 1915 type string; 1916 description "The VNF-BD YANG path of the 1917 variable."; 1918 } 1919 leaf type { 1920 type string; 1921 description "The type of the 1922 variable values."; 1923 } 1924 leaf class { 1925 type string; 1926 description "The class of the 1927 variable (one of resource, stimulus, 1928 configuration)."; 1929 } 1930 leaf-list values { 1931 type string; 1932 description "The list of values 1933 of the variable."; 1934 } 1935 } 1937 grouping output { 1938 leaf id { 1939 type string; 1940 description "The output unique identifier."; 1941 } 1942 list variables { 1943 key "name"; 1944 leaf name { type string; } 1945 leaf value { type string; } 1946 description "The list of instances of variables 1947 from VNF-BR:inputs utilized by a VNF-BD to 1948 generate a VNF-PP."; 1949 } 1951 container vnfbd { 1952 uses vnfbd:vnf-bd; 1953 description "The VNF-BD that was executed 1954 to generate an output."; 1955 } 1957 container vnfpp { 1958 uses vnfpp:vnf-pp; 1959 description "The output VNF-PP of the 1960 execution of a VNF-BD."; 1961 } 1962 } 1964 grouping vnf { 1965 leaf id { 1966 type string; 1967 description "The VNF unique identifier."; 1968 } 1969 leaf name { 1970 type string; 1971 description "The VNF name."; 1972 } 1973 leaf version { 1974 type string; 1975 description "The VNF version."; 1976 } 1977 leaf author { 1978 type string; 1979 description "The author of the VNF."; 1980 } 1981 leaf description { 1982 type string; 1983 description "The description of the VNF."; 1984 } 1985 description "The details of the VNF SUT."; 1986 } 1988 grouping header { 1989 leaf id { 1990 type string; 1991 description "The unique identifier of the VNF-BR."; 1992 } 1993 leaf name { 1994 type string; 1995 description "The name of the VNF-BR."; 1996 } 1997 leaf version { 1998 type string; 1999 description "The VNF-BR version."; 2000 } 2001 leaf author { 2002 type string; 2003 description "The VNF-BR author."; 2004 } 2005 leaf description { 2006 type string; 2007 description "The description of the VNF-BR."; 2008 } 2010 container vnf { 2011 uses vnf; 2012 description "The VNF-BR target SUT VNF."; 2013 } 2015 container environment { 2016 leaf name { 2017 type string; 2018 description "The environment name."; 2019 } 2020 leaf description { 2021 type string; 2022 description "A description 2023 of the environment."; 2024 } 2025 leaf deploy { 2026 type boolean; 2027 description "Defines whether (true) the environment enables 2028 the automated deployment by an orchestrator platform."; 2029 } 2030 container orchestrator { 2031 leaf name { 2032 type string; 2033 description "Name of the orchestrator 2034 platform."; 2035 } 2037 leaf type { 2038 type string; 2039 description "The type of the orchestrator 2040 platform.";
2041 } 2043 leaf description { 2044 type string; 2045 description "The description of the 2046 orchestrator platform."; 2047 } 2049 list parameters { 2050 key "input"; 2051 leaf input { 2052 type string; 2053 description "The name of the parameter."; 2054 } 2055 leaf value { 2056 type string; 2057 description "The value of the parameter."; 2058 } 2060 description "List of orchestrator 2061 input parameters."; 2062 } 2064 description "The specification of the orchestration platform 2065 settings of a VNF-BR."; 2066 } 2068 description "The environment settings of a VNF-BR."; 2070 } 2072 description "Defines the content of a VNF-BR header."; 2073 } 2075 grouping vnf-br { 2076 description "Grouping for a single VNF-BR."; 2078 uses header; 2080 container inputs { 2081 list variables { 2082 key "name"; 2083 uses variable; 2084 description "The list of 2085 input variables."; 2086 } 2088 container vnfbd { 2089 uses vnfbd:vnf-bd; 2090 description "The input VNF-BD."; 2091 } 2093 description "The inputs needed to 2094 realize a VNF-BR."; 2095 } 2097 list outputs { 2098 key "id"; 2099 uses output; 2100 description "The list of outputs 2101 of a VNF-BR."; 2102 } 2104 container timestamp { 2105 leaf start { 2106 type string; 2107 description "Time (date, hour, minute, second) 2108 when the VNF-BR realization started."; 2109 } 2111 leaf stop { 2112 type string; 2113 description "Time (date, hour, minute, second) 2114 when the VNF-BR realization stopped."; 2115 } 2117 description "Timestamps of the procedures that 2118 performed the realization of a VNF-BR."; 2119 } 2121 leaf error { 2122 type string; 2123 description "The VNF-BR error, 2124 if one occurred during its realization."; 2125 } 2126 } 2128 uses vnf-br; 2129 } 2131 Figure 7 2133 11. Acknowledgement 2135 The authors would like to thank Ericsson Research, Brazil, for its 2136 support. Parts of this work have received funding from the European 2137 Union's Horizon 2020 research and innovation programme under grant 2138 agreement No. H2020-ICT-2016-2 761493 (5GTANGO: https://5gtango.eu). 2140 12. References 2142 12.1. Normative References 2144 [ETS14a] ETSI, "Architectural Framework - ETSI GS NFV 002 V1.2.1", 2145 Dec 2014, . 2148 [ETS14b] ETSI, "Terminology for Main Concepts in NFV - ETSI GS NFV 2149 003 V1.2.1", Dec 2014, 2150 . 2153 [ETS14c] ETSI, "NFV Pre-deployment Testing - ETSI GS NFV TST001 2154 V1.1.1", April 2016, 2155 . 2158 [ETS14d] ETSI, "Network Functions Virtualisation (NFV); Virtual 2159 Network Functions Architecture - ETSI GS NFV SWA001 2160 V1.1.1", December 2014, 2161 . 2165 [ETS14e] ETSI, "Report on CI/CD and Devops - ETSI GS NFV TST006 2166 V0.0.9", April 2018, 2167 . 2170 [ETS19f] ETSI, "Specification of Networking Benchmarks and 2171 Measurement Methods for NFVI - ETSI GS NFV-TST 009 2172 V3.2.1", June 2019, 2173 . 2177 [ETS19g] ETSI, "NFVI Compute and Network Metrics Specification - 2178 ETSI GS NFV-TST 008 V3.2.1", March 2019, 2179 . 2183 [RFC1242] S. Bradner, "Benchmarking Terminology for Network 2184 Interconnection Devices", July 1991, 2185 . 2187 [RFC6020] Bjorklund, M., Ed., "YANG - A Data Modeling Language for 2188 the Network Configuration Protocol (NETCONF)", October 2189 2010, . 2191 [RFC8172] A. Morton, "Considerations for Benchmarking Virtual 2192 Network Functions and Their Infrastructure", July 2017, 2193 . 2195 [RFC8204] M. Tahhan, B. O'Mahony, A. Morton, "Benchmarking Virtual 2196 Switches in the Open Platform for NFV (OPNFV)", September 2197 2017, . 2199 12.2.
Informative References 2201 [gym] "Gym Framework Source Code", 2202 . 2204 [Peu-a] M. Peuster, H. Karl, "Understand Your Chains: Towards 2205 Performance Profile-based Network Service Management", 2206 Fifth European Workshop on Software Defined Networks 2207 (EWSDN) , 2016, 2208 . 2210 [Peu-b] M. Peuster, H. Karl, "Profile Your Chains, Not Functions: 2211 Automated Network Service Profiling in DevOps 2212 Environments", IEEE Conference on Network Function 2213 Virtualization and Software Defined Networks (NFV-SDN) , 2214 2017, . 2216 [Peu-c] M. Peuster, H. Karl, "Understand your chains and keep your 2217 deadlines: Introducing time-constrained profiling for 2218 NFV", IEEE/IFIP 14th International Conference on Network 2219 and Service Management (CNSM) , 2018, 2220 . 2222 [Peu-d] M. Peuster and S. Schneider and H. Karl, "The Softwarised 2223 Network Data Zoo", IEEE/IFIP 15th International Conference 2224 on Network and Service Management (CNSM) , 2019, 2225 . 2227 [RFC3688] Mealling, M., "The IETF XML Registry", January 2004, 2228 . 2230 [Rosa-a] R. V. Rosa, C. E. Rothenberg, R. Szabo, "VBaaS: VNF 2231 Benchmark-as-a-Service", Fourth European Workshop on 2232 Software Defined Networks , Sept 2015, 2233 . 2235 [Rosa-b] R. Rosa, C. Bertoldo, C. Rothenberg, "Take your VNF to the 2236 Gym: A Testing Framework for Automated NFV Performance 2237 Benchmarking", IEEE Communications Magazine Testing 2238 Series , Sept 2017, 2239 . 2241 [Rosa-c] R. V. Rosa, C. E. Rothenberg, "Taking Open vSwitch to the 2242 Gym: An Automated Benchmarking Approach", IV Workshop pre- 2243 IETF/IRTF, CSBC Brazil, July 2017, 2244 . 2247 [tango] "5GTANGO: Development and validation platform for global 2248 industry-specific network services and apps", 2249 . 2251 [tng-bench] 2252 "5GTANGO VNF/NS Benchmarking Framework", 2253 . 2255 Authors' Addresses 2257 Raphael Vicente Rosa (editor) 2258 University of Campinas 2259 Av. Albert Einstein, 400 2260 Campinas, Sao Paulo 13083-852 2261 Brazil 2263 Email: rvrosa@dca.fee.unicamp.br 2264 URI: https://intrig.dca.fee.unicamp.br/raphaelvrosa/ 2266 Christian Esteve Rothenberg 2267 University of Campinas 2268 Av. Albert Einstein, 400 2269 Campinas, Sao Paulo 13083-852 2270 Brazil 2272 Email: chesteve@dca.fee.unicamp.br 2273 URI: http://www.dca.fee.unicamp.br/~chesteve/ 2275 Manuel Peuster 2276 Paderborn University 2277 Warburgerstr. 100 2278 Paderborn 33098 2279 Germany 2281 Email: manuel.peuster@upb.de 2282 URI: https://peuster.de 2284 Holger Karl 2285 Paderborn University 2286 Warburgerstr. 100 2287 Paderborn 33098 2288 Germany 2290 Email: holger.karl@upb.de 2291 URI: https://cs.uni-paderborn.de/cn/