BMWG                                                       R. Rosa, Ed.
Internet-Draft                                            C. Rothenberg
Intended status: Informational                                  UNICAMP
Expires: September 3, 2018                                March 2, 2018

                      VNF Benchmarking Methodology
                       draft-rosa-bmwg-vnfbench-01

Abstract

   This document describes a common methodology for benchmarking
   Virtualized Network Functions (VNFs) on general-purpose hardware.
   Specific benchmarking methodologies for particular VNFs can be
   derived from this document.  An open source reference
   implementation called Gym is reported as a running-code embodiment
   of the proposed methodology for VNFs.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on September 3, 2018.

Copyright Notice

   Copyright (c) 2018 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Scope
   4.  Considerations
     4.1.  VNF Testing Methods
     4.2.  Generic VNF Benchmarking Setup
     4.3.  Deployment Scenarios
     4.4.  Influencing Aspects
   5.  Methodology
     5.1.  General Description
       5.1.1.  Configurations
       5.1.2.  Testing Procedures
     5.2.  Particular Cases
   6.  VNF Benchmark Report
   7.  Open Source Reference Implementation
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgement
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   The Benchmarking Methodology Working Group (BMWG) has initiated
   efforts, following the considerations in [RFC8172], to develop
   methodologies for benchmarking VNFs.  As described in [RFC8172],
   the motivating aspects of VNF benchmarking include: (i)
   dimensioning the infrastructure before deployment in order to
   realize the associated VNF performance profiles; (ii) providing a
   factor of comparison with physical network functions; and (iii)
   producing output results for analytical VNF development.

   Unlike the earlier self-contained, black-box benchmarking
   methodologies described in BMWG documents, a VNF has no strict and
   clear execution boundaries: it depends on the parameters of the
   underlying virtualized environment [ETS14a], which are intrinsic
   considerations for any performance analysis.  This document stands
   as a baseline methodology guide for VNF benchmarking.  It builds on
   the state-of-the-art publications and on the current developments
   of similar standardization efforts (e.g., [ETS14c] and [RFC8204])
   towards benchmarking VNFs.

2.  Terminology

   The common benchmarking terminology used in this document is
   derived from [RFC1242].  The reader is also assumed to be familiar
   with the terminology defined in the European Telecommunications
   Standards Institute (ETSI) NFV document [ETS14b].  Some of these
   terms, and others commonly used in this document, are defined
   below.

   NFV:  Network Function Virtualization - the principle of separating
      network functions from the hardware they run on by using virtual
      hardware abstraction.

   NFVI PoP:  NFV Infrastructure Point of Presence - any combination
      of virtualized compute, storage and network resources.

   NFVI:  NFV Infrastructure - collection of NFVI PoPs under one
      orchestrator.

   VIM:  Virtualized Infrastructure Manager - functional block that is
      responsible for controlling and managing the NFVI compute,
      storage and network resources, usually within one operator's
      Infrastructure Domain (e.g., NFVI-PoP).

   VNFM:  Virtualized Network Function Manager - functional block that
      is responsible for controlling and managing the VNF life-cycle.
   NFVO:  NFV Orchestrator - functional block that manages the Network
      Service (NS) life-cycle and coordinates the management of the NS
      life-cycle, the VNF life-cycle (supported by the VNFM) and the
      NFVI resources (supported by the VIM) to ensure an optimized
      allocation of the necessary resources and connectivity.

   VNF:  Virtualized Network Function - a software-based network
      function.

   VNFD:  Virtualised Network Function Descriptor - configuration
      template that describes a VNF in terms of its deployment and
      operational behaviour, and is used in the process of VNF
      on-boarding and managing the life-cycle of a VNF instance.

   VNF-FG:  Virtualized Network Function Forwarding Graph - an ordered
      list of VNFs creating a service chain.

3.  Scope

   This document treats VNFs as black boxes when defining VNF
   benchmarking methodologies.  White-box approaches are considered
   and analysed as a particular case, under proper consideration of
   internal VNF instrumentation.

4.  Considerations

   VNF benchmarking considerations are defined in [RFC8172].  In
   addition, VNF pre-deployment testing considerations are explored in
   detail in [ETS14c].

4.1.  VNF Testing Methods

   Following ETSI's model in [ETS14c], we distinguish three methods
   for VNF evaluation:

   Benchmarking:  parameters (e.g., CPU, memory, storage) are provided
      and the corresponding performance metrics (e.g., latency,
      throughput) are obtained.  Note that such a request might
      produce multiple reports, for example one with minimal latency
      and one with maximum throughput results.

   Verification:  both parameters and performance metrics are
      provided, and a stimulus verifies whether the given association
      is correct or not.

   Dimensioning:  performance metrics are provided and the
      corresponding parameters are obtained.  Note that multiple
      deployment iterations may be required or, if possible, the
      underlying allocated resources need to be altered dynamically.

   Note: Verification and Dimensioning can be reduced to Benchmarking,
   as sketched below.  Therefore, the remainder of this document
   details Benchmarking.
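   The reduction of Verification and Dimensioning to Benchmarking can
   be illustrated with the following non-normative Python sketch.  The
   benchmark() stub, the parameter names (e.g., "vcpus") and the
   threshold values are hypothetical placeholders for a real
   benchmarking procedure executed by the components described in
   Section 4.2.

      def benchmark(params):
          # Placeholder: a real run would deploy the VNF, apply the
          # Agents' stimuli and collect the resulting metrics.
          return {"throughput_mbps": 900.0 * params["vcpus"],
                  "latency_ms": 2.0 / params["vcpus"]}

      def verify(params, targets):
          # Verification: parameters and target metrics are given; one
          # benchmarking run checks whether the association holds.
          measured = benchmark(params)
          return all(check(measured[name])
                     for name, check in targets.items())

      def dimension(targets, candidates):
          # Dimensioning: target metrics are given; benchmarking runs
          # are repeated over candidate parameter sets (multiple
          # deployment iterations) until one satisfies the targets.
          for params in candidates:
              if verify(params, targets):
                  return params
          return None

      targets = {"throughput_mbps": lambda v: v >= 1800.0,
                 "latency_ms": lambda v: v <= 1.5}
      print(dimension(targets, [{"vcpus": n} for n in (1, 2, 4)]))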
4.2.  Generic VNF Benchmarking Setup

   A generic VNF benchmarking setup is shown in Figure 1, and its
   components are explained below.  Note that not all components are
   mandatory; the VNF benchmarking scenarios explained further on can
   arrange the components in varied settings.

                          +---------------+
                          |    Manager    |
              Control     | (Coordinator) |
             Interface    +-------+-------+
            +---------------------+---------------------+
            |                     |                     |
            |        +------------+------------+        |
            |        |  System Under Test      |        |
            |        |    +-------+-------+    |        |
            |        |    |    Monitor    |    |        |
            |        |    |  {listeners}  |    |        |
            |        |    +-------+-------+    |        |
       +----+-----+  |    +-------+-------+    |  +-----+----+
       |  Agent   |  |    |      VNF      |    |  |  Agent   |
       | (Sender) |  |    +---.-------.---+    |  |(Receiver)|
       |{Probers} |  |    +---.-------.---+    |  |{Probers} |
       +----.-----+  |    |   Execution   |    |  +----.-----+
            :        |    |  Environment  |    |       :
            :        |    +---.-------.---+    |       :
            :        +--------^-------V--------+       :
            :.............>...:       :............>...:
                     Stimulus Traffic Flow

              Figure 1: Generic VNF Benchmarking Setup

   Agent -- executes active stimulus using probers, i.e., benchmarking
      tools, to benchmark and collect network and system performance
      metrics.  While a single Agent is capable of performing
      localized benchmarks (e.g., stress tests on CPU, memory, disk
      I/O), the interaction among distributed Agents enables the
      generation and collection of end-to-end metrics (e.g., frame
      loss rate, latency).  In a deployment scenario, one Agent can
      create the benchmark stimuli and the other end can be the VNF
      itself where, for example, one-way latency is evaluated.  A
      prober defines a software- or hardware-based tool able to
      generate traffic specific to a VNF (e.g., sipp) or generic to
      multiple VNFs (e.g., pktgen); see the illustrative prober sketch
      at the end of this section.  An Agent can be realized as a
      physical or virtual network function.

   Monitor -- when possible, it is instantiated inside the target VNF
      or NFVI PoP (e.g., as a plug-in process in a virtualized
      environment) to perform passive monitoring, using listeners, for
      the collection of metrics in benchmark tests run according to
      the Agents' stimuli.  Different from the active approach of
      Agents, which can be seen as generic benchmarking VNFs, a
      Monitor observes particular properties according to the
      capabilities of NFVI PoPs and VNFs.  A listener defines one or
      more interfaces for the extraction of particular metrics
      monitored in a target VNF and/or execution environment.
      Logically, a Monitor is defined as a virtual network function.

   Manager -- in a VNF benchmarking deployment scenario, it is
      responsible for (i) the coordination and synchronization of the
      activities of Agents and Monitors, (ii) collecting and parsing
      all VNF benchmarking results, and (iii) aggregating the inputs
      and the parsed benchmark outputs to construct a VNF performance
      profile, a report that correlates the VNF stimuli and the
      monitored metrics.  A Manager executes the main configuration,
      operation and management actions to deliver the VNF benchmarking
      results.  A Manager can be realized as a physical or virtual
      network function.

   Virtualized Network Function (VNF) -- consists of one or more
      software components adequate for performing a network function
      according to the allocated virtual resources and satisfied
      requirements in an execution environment.  A VNF can demand
      particular configurations for benchmarking specifications,
      demonstrating variable performance profiles based on the
      available virtual resources/parameters and on configured
      enhancements targeting specific technologies.

   Execution Environment -- defines a virtualized and controlled
      composition of the capabilities necessary for the execution of a
      VNF.  An execution environment stands as a general-purpose level
      of virtualization with abstracted resources available for one or
      more VNFs.  It can also enable specific technologies, resulting
      in viable settings for enhancing VNF performance profiles.
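   The prober abstraction used by an Agent can be sketched as follows.
   This is an illustrative, non-normative example: the class and
   method names are hypothetical, and it assumes a Linux environment
   where the iputils ping tool is available as a stand-in for more
   specialized probers (e.g., pktgen, sipp).

      import re
      import subprocess

      class Prober:
          # A prober wraps a benchmarking tool; the Agent exposes it
          # to the Manager and reports the metrics it extracts.
          def probe(self, params):
              raise NotImplementedError

      class PingProber(Prober):
          # Example prober: round-trip latency towards a VNF port.
          def probe(self, params):
              out = subprocess.run(
                  ["ping", "-c", str(params.get("count", 10)),
                   params["target"]],
                  capture_output=True, text=True, check=True).stdout
              # Parse the summary line, e.g.:
              # rtt min/avg/max/mdev = 0.041/0.062/0.094/0.018 ms
              match = re.search(
                  r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+)", out)
              keys = ("rtt_min_ms", "rtt_avg_ms",
                      "rtt_max_ms", "rtt_mdev_ms")
              return (dict(zip(keys, map(float, match.groups())))
                      if match else {})

      # An Agent advertises its available probers to the Manager:
      agent_probers = {"ping": PingProber()}
      print(agent_probers["ping"].probe({"target": "127.0.0.1",
                                         "count": 3}))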
4.3.  Deployment Scenarios

   A VNF benchmark deployment scenario establishes the physical and/or
   virtual instantiation of the components defined in a VNF
   benchmarking setup.

   Based on the generic VNF benchmarking setup, the following
   considerations hold for deployment scenarios:

   o  Components can be composed in a single entity and defined as
      black or white boxes.  For instance, Manager and Agent could
      jointly define a software entity that performs a VNF benchmark
      and presents the results.

   o  The Monitor is not a mandatory component and needs to be
      considered only when white-box benchmarking approaches are
      applied to a VNF and/or its execution environment.

   o  The Monitor can be defined by multiple instances of software
      components, each addressing a VNF or an execution environment
      and their respective open interfaces for the extraction of
      metrics.

   o  Agents can be arranged in varied topology setups, including the
      possibility of each of the multiple input and output ports of a
      VNF being directly connected to its own Agent.

   o  All benchmarking components defined in a deployment scenario
      must synchronize their clocks to an international time standard.

4.4.  Influencing Aspects

   In general, VNF benchmarks must capture relevant causes of
   performance variability.  Examples of aspects influencing VNF
   performance can be observed in:

   Deployment Scenario Topology:  The orchestrated arrangement of
      components can define particular interconnections among them,
      composing a specific case/method of VNF benchmarking.

   Execution Environment:  The availability of generic and specific
      capabilities satisfying VNF requirements defines the range of
      options for the allocation of VNF resources.  In addition,
      particular cases can define multiple VNFs interacting in the
      same execution environment of a benchmarking setup.

   VNF:  A detailed description of the functionalities performed by a
      VNF establishes the possible traffic forwarding and processing
      operations it can perform on packets; together with its runtime
      requirements and specific configurations, these might affect and
      help compose a benchmarking setup.

   Agent:  The toolset available to generate the benchmarking stimulus
      for a VNF, and its characteristics in terms of packet format,
      arrangement, and workload, can interfere with a benchmarking
      setup.  VNFs may support only specific traffic formats as
      stimulus.

   Monitor:  In a particular benchmarking setup where measurements of
      VNF and/or execution environment metrics are available for
      extraction, an important analysis consists in verifying whether
      the Monitor components themselves impact the performance metrics
      of the VNF and of the underlying execution environment.

   Manager:  The overall composition of the VNF benchmarking
      procedures can determine arrangements of internal states inside
      a VNF, which can interfere with the observed benchmark metrics.

5.  Methodology

   Portability is an intrinsic characteristic of VNFs: they can be
   deployed in multiple environments, which enables benchmarking
   procedures, even in parallel, in varied deployment scenarios.  A
   VNF benchmarking methodology must be described in a clear and
   objective manner in order to allow effective repeatability and
   comparability of the test results.

5.1.  General Description

   For the sake of clarity and generalization of VNF benchmarking
   tests, consider the following definitions.

   VNF Benchmarking Layout (VNF-BL) -- a setup that specifies a method
      of how to measure a VNF Performance Profile.  The specification
      includes structural and functional instructions as well as
      variable parameters at different abstractions (e.g., topology of
      the deployment scenario, benchmarking target metrics, parameters
      of the benchmarking components).  A VNF-BL may be specific to a
      VNF or applicable to several VNF types.  A VNF-BL can be used to
      elaborate a VNF benchmark deployment scenario aiming at the
      extraction of particular VNF performance metrics.

   VNF Performance Profile (VNF-PP) -- defines a mapping between the
      capabilities allocated to a VNF (e.g., CPU, memory) and the VNF
      performance metrics (e.g., throughput, latency between in/out
      ports) obtained in a benchmarking test elaborated based on a
      VNF-BL.  Logically, packet processing metrics are presented in a
      specific format addressing statistical significance, in which a
      correspondence between the VNF parameters and the delivery of a
      measured/qualified VNF performance exists.
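   As an illustration only, a VNF-BL and the skeleton of the VNF-PP it
   produces could be expressed as the following Python structures.
   The field names and values are assumptions made for this sketch,
   not an information model defined by this document.

      # Hypothetical VNF-BL: what to deploy, how to stimulate it, and
      # which metrics to extract over how many trials.
      vnf_bl = {
          "id": "vnf-bl-example",
          "target_vnf": {"type": "virtual-switch", "version": "x.y"},
          "deployment_scenario": {
              "topology": "Agent(Sender) -> VNF -> Agent(Receiver)",
              "execution_environment": {"vcpus": 2,
                                        "memory_mb": 4096},
          },
          "stimulus": {"prober": "packet-generator",
                       "frame_size_bytes": [64, 512, 1500],
                       "duration_s": 60},
          "metrics": ["throughput_mbps", "latency_ms",
                      "frame_loss_rate"],
          "trials": 10,
      }

      # Skeleton of the corresponding VNF-PP: the mapping between the
      # allocated capabilities and the measured metrics is filled in
      # once the trials complete.
      vnf_pp = {
          "vnf_bl": "vnf-bl-example",
          "allocated_resources": {"vcpus": 2, "memory_mb": 4096},
          "metrics": {"throughput_mbps": {}, "latency_ms": {},
                      "frame_loss_rate": {}},
      }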
5.1.1.  Configurations

   In addition to a VNF-BL, the items listed below, together with
   their associated settings (the lists are not exhaustive), must be
   contained in the annotations describing a VNF benchmark deployment
   scenario.  Ideally, any person in possession of such annotations
   and of the necessary/associated set of hardware and software
   components should be able to reproduce the same deployment scenario
   and VNF benchmarking test.

   VNF:  type, model, version/release, allocated resources, specific
      parameters, technology requirements, software details.

   Execution Environment:  type, model, version/release, available
      resources, technology capabilities, software details.

   Agents:  toolset of available probers and related benchmarking
      metrics, workload, traffic formats, virtualization layer (if
      existent), hardware capabilities (if existent).

   Monitors:  toolset of available listeners and related monitoring
      metrics, monitoring target (VNF and/or execution environment),
      virtualization layer (if existent), hardware capabilities (if
      existent).

   Manager:  procedures utilized during the benchmark test, set of
      events and settings exchanged with Agents/Monitors, established
      sequence of possible states triggered in the target VNF.

5.1.2.  Testing Procedures

   Consider the following definitions:

   Trial:  a single process or iteration that obtains VNF benchmarking
      metrics as a singular measurement.

   Test:  defines strict parameters under which the benchmarking
      components perform one or more Trials.

   Method:  consists of a VNF-BL targeting one or more Tests to
      achieve VNF benchmarking measurements.  A Method makes explicit
      the ranges of parameter values for the configuration of the
      benchmarking components realized in each Test.

   The following sequence of events composes the basic general
   procedures that must be performed for the execution of a VNF
   benchmarking test.

   1.  The sketch of a VNF benchmarking setup must be defined so that
       it can later be translated into a deployment scenario.  Such a
       sketch must contain all the structural and functional settings
       composing a VNF-BL.  At the end of this step, the complete
       Method of benchmarking the target VNF is defined.

   2.  Via an automated orchestrator or in a manual process, all the
       components of the VNF benchmark setup must be allocated and
       interconnected.  The VNF and the execution environment must be
       configured to properly address the VNF benchmark stimuli.

   3.  The Manager, Agent(s) and Monitor(s) (if present) must be
       started and configured to execute the benchmark stimuli and to
       retrieve the expected/target metrics captured during and at the
       end of the VNF benchmarking test.  One or more Trials realize
       the measurement of the VNF performance metrics.

   4.  The output results of each benchmarking test must be received
       by the Manager.  In an automated or manual process, the metrics
       to be extracted, as defined in the VNF-BL, must compose a
       VNF-PP, resulting in a VNF benchmark report.
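   The relation between Method, Test and Trial can be summarized by
   the following non-normative Python sketch.  The run_trial() stub
   stands in for the stimulus and metric collection performed by the
   Agents and Monitors; the parameter names and returned values are
   hypothetical.

      import itertools
      import statistics

      def run_trial(params):
          # Single iteration: apply the stimulus and return one
          # measurement per metric (placeholder values here).
          return {"throughput_mbps": 940.0, "latency_ms": 0.8}

      def run_test(params, trials):
          # A Test fixes the parameters and repeats the Trial.
          samples = [run_trial(params) for _ in range(trials)]
          return {name: statistics.mean(s[name] for s in samples)
                  for name in samples[0]}

      def run_method(vnf_bl):
          # A Method expands the parameter ranges of the VNF-BL into
          # individual Tests and collects their results.
          names = sorted(vnf_bl["parameter_ranges"])
          results = []
          for values in itertools.product(
                  *(vnf_bl["parameter_ranges"][n] for n in names)):
              params = dict(zip(names, values))
              results.append({"params": params,
                              "metrics": run_test(params,
                                                  vnf_bl["trials"])})
          return results

      # Example: sweep vCPUs and frame size, three Trials per Test.
      report = run_method({"parameter_ranges": {
                               "vcpus": [1, 2],
                               "frame_size": [64, 1500]},
                           "trials": 3})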
5.2.  Particular Cases

   Configurations and procedures concerning particular cases of VNF
   benchmarks address the testing methodologies proposed in [RFC8172].
   In addition to the general description previously defined, some
   details must be taken into consideration in the following VNF
   benchmarking cases.

   Noisy Neighbor:  An Agent can take the role of a noisy neighbor,
      generating a particular workload in synchrony with a
      benchmarking procedure over a VNF.  Adjustments of the noisy
      workload stimulus (type, frequency, virtualization level, among
      others) must be detailed in the VNF-BL.

   Representative Capacity:  An average workload value must be
      specified as the Agent stimulus.  Considering a long-term
      analysis, the VNF must be configured to properly deliver the
      desired average performance behavior relative to the value of
      the workload stimulus.

   Flexibility and Elasticity:  Since a VNF may be composed of
      multiple components, internal events of the VNF might trigger
      varied behaviors that activate functionalities associated with
      elasticity, such as load balancing.  In these terms, a detailed
      characterization of the VNF must be specified and contained in
      the VNF-PP and the benchmarking report.

   On Failures:  Similarly to the previous case, VNF benchmarking
      setups must also capture the dynamics involved in the VNF
      behavior.  In case of failures, a VNF may restart itself,
      possibly resulting in an off-line period.  The VNF-PP and the
      benchmarking report must clearly capture such variations of VNF
      state.

   White Box VNF:  A benchmarking setup must define deployment
      scenarios to be compared with and without Monitor components
      placed in the VNF and/or the execution environment, in order to
      analyze whether the VNF performance is affected.  The VNF-PP and
      the benchmarking report must contain such an analysis of
      performance variability, together with all the targeted VNF
      performance metrics.

6.  VNF Benchmark Report

   For the extraction of VNF and execution environment performance
   metrics, multiple Trials must be performed so that the obtained
   benchmarking results are statistically significant.  Each Trial
   must be executed following a particular deployment scenario
   composed from a VNF-BL.

   A VNF benchmark report correlates the structural and functional
   parameters of the VNF-BL with the targeted/extracted VNF
   benchmarking metrics of the obtained VNF-PP.

   A VNF performance profile must address the combined set of
   classified items in the 3x3 Matrix Coverage defined in [RFC8172].
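   The aggregation of per-Trial measurements into a VNF-PP entry of
   the benchmark report can be sketched as follows.  The structure and
   the sample values are illustrative assumptions; the point is that
   the report carries the statistical context (number of Trials,
   spread) of every reported metric.

      import json
      import statistics

      def summarize(samples):
          # Aggregate one metric's per-Trial measurements.
          return {"trials": len(samples),
                  "min": min(samples),
                  "max": max(samples),
                  "mean": round(statistics.mean(samples), 3),
                  "stdev": (round(statistics.stdev(samples), 3)
                            if len(samples) > 1 else 0.0)}

      def build_report(vnf_bl_id, parameters, trial_results):
          # trial_results: one {metric: value} dict per executed
          # Trial.
          metrics = {name: summarize([t[name] for t in trial_results])
                     for name in trial_results[0]}
          return {"vnf_bl": vnf_bl_id,
                  "parameters": parameters,
                  "vnf_pp": metrics}

      # Placeholder measurements standing in for three Trials:
      trials = [{"throughput_mbps": 938.2, "latency_ms": 0.81},
                {"throughput_mbps": 941.0, "latency_ms": 0.79},
                {"throughput_mbps": 939.5, "latency_ms": 0.80}]
      print(json.dumps(build_report("vnf-bl-example", {"vcpus": 2},
                                    trials), indent=2))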
7.  Open Source Reference Implementation

   The software, named Gym, is a framework for automated benchmarking
   of Virtualized Network Functions (VNFs).  It was coded following
   the initial ideas presented in a 2015 scientific paper entitled
   "VBaaS: VNF Benchmark-as-a-Service" [Rosa-a].  Later, the evolved
   design and prototyping ideas were presented at IETF/IRTF meetings
   seeking impact on NFVRG and BMWG.

   Gym was built to receive high-level test descriptors and execute
   them to extract VNF profiles containing measurements of performance
   metrics, especially to associate resource allocation (e.g., vCPU)
   with packet processing metrics (e.g., throughput) of VNFs.  From
   the original research ideas [Rosa-a], such output profiles might be
   used by orchestrator functions to perform VNF lifecycle tasks
   (e.g., deployment, maintenance, tear-down).

   The guiding principles proposed to design and build Gym, elaborated
   in [Rosa-b], can be combined in multiple practical ways for
   multiple VNF testing purposes:

   o  Comparability: Output of tests shall be simple to understand and
      process, in a human-readable format, coherent, and easily
      reusable (e.g., as inputs for analytic applications).

   o  Repeatability: Test setup shall be comprehensively defined
      through a flexible design model that can be interpreted and
      executed by the testing platform repeatedly, while supporting
      customization.

   o  Configurability: Open interfaces and extensible messaging models
      shall be available between components for flexible composition
      of test descriptors and platform configurations.

   o  Interoperability: Tests shall be portable to different
      environments using lightweight components.

   In [Rosa-b], Gym was utilized to benchmark a decomposed IP
   Multimedia Subsystem VNF, and in [Rosa-c] a virtual switch (Open
   vSwitch - OVS) was the target VNF of Gym for the analysis of VNF
   benchmarking automation.  These articles validated Gym as a
   prominent open source reference implementation for VNF benchmarking
   tests.  They also contribute a discussion of the lessons learned
   and of the overall NFV performance testing landscape, including
   automation.

   Gym stands as the open source reference implementation that
   realizes the VNF benchmarking methodology presented in this
   document.  Gym is being released as open source at [Gym].  The code
   repository also includes VNF Benchmarking Layout (VNF-BL) examples
   for the vIMS and OVS targets as described in [Rosa-b] and [Rosa-c].
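   As a usage illustration of the profiles mentioned above (and not of
   Gym's actual API), the following sketch shows how an orchestration
   function might consume VNF-PP entries to select the smallest
   resource allocation that meets a service target.  All structures,
   names and values are hypothetical.

      def select_allocation(profiles, min_throughput_mbps):
          # Pick the smallest allocation (vCPU count as a simple cost
          # proxy) whose mean throughput meets the service target.
          eligible = [p for p in profiles
                      if p["vnf_pp"]["throughput_mbps"]["mean"]
                      >= min_throughput_mbps]
          return min(eligible, key=lambda p: p["parameters"]["vcpus"],
                     default=None)

      profiles = [{"parameters": {"vcpus": 1},
                   "vnf_pp": {"throughput_mbps": {"mean": 480.0}}},
                  {"parameters": {"vcpus": 2},
                   "vnf_pp": {"throughput_mbps": {"mean": 950.0}}}]
      print(select_allocation(profiles, 900.0))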
8.  Security Considerations

   TBD

9.  IANA Considerations

   This document does not require any IANA actions.

10.  Acknowledgement

   The authors would like to thank the support of Ericsson Research,
   Brazil.

11.  References

11.1.  Normative References

   [ETS14a]   ETSI, "Architectural Framework - ETSI GS NFV 002
              V1.2.1", Dec 2014.

   [ETS14b]   ETSI, "Terminology for Main Concepts in NFV - ETSI GS
              NFV 003 V1.2.1", Dec 2014.

   [ETS14c]   ETSI, "NFV Pre-deployment Testing - ETSI GS NFV TST001
              V1.1.1", April 2016.

   [RFC1242]  Bradner, S., "Benchmarking Terminology for Network
              Interconnection Devices", RFC 1242, July 1991.

   [RFC8172]  Morton, A., "Considerations for Benchmarking Virtual
              Network Functions and Their Infrastructure", RFC 8172,
              July 2017.

   [RFC8204]  Tahhan, M., O'Mahony, B., and A. Morton, "Benchmarking
              Virtual Switches in the Open Platform for NFV (OPNFV)",
              RFC 8204, September 2017.

11.2.  Informative References

   [Gym]      "Gym Home Page".

   [Rosa-a]   Rosa, R. V., Rothenberg, C. E., and R. Szabo, "VBaaS:
              VNF Benchmark-as-a-Service", Fourth European Workshop on
              Software Defined Networks, Sept 2015.

   [Rosa-b]   Rosa, R., Bertoldo, C., and C. Rothenberg, "Take your
              VNF to the Gym: A Testing Framework for Automated NFV
              Performance Benchmarking", IEEE Communications Magazine
              Testing Series, Sept 2017.

   [Rosa-c]   Rosa, R. V. and C. E. Rothenberg, "Taking Open vSwitch
              to the Gym: An Automated Benchmarking Approach", IV
              Workshop pre-IETF/IRTF, CSBC Brazil, July 2017.

Authors' Addresses

   Raphael Vicente Rosa (editor)
   University of Campinas
   Av. Albert Einstein, 400
   Campinas, Sao Paulo  13083-852
   Brazil

   Email: rvrosa@dca.fee.unicamp.br
   URI:   https://intrig.dca.fee.unicamp.br/raphaelvrosa/

   Christian Esteve Rothenberg
   University of Campinas
   Av. Albert Einstein, 400
   Campinas, Sao Paulo  13083-852
   Brazil

   Email: chesteve@dca.fee.unicamp.br
   URI:   http://www.dca.fee.unicamp.br/~chesteve/