2 Benchmarking Working Group M. Konstantynowicz, Ed. 3 Internet-Draft V. Polak, Ed. 
4 Intended status: Informational Cisco Systems 5 Expires: January 9, 2020 July 08, 2019 7 Multiple Loss Ratio Search for Packet Throughput (MLRsearch) 8 draft-vpolak-mkonstan-bmwg-mlrsearch-02 10 Abstract 12 This document proposes changes to RFC 2544, specifically to the packet 13 throughput search methodology, by defining a new search algorithm 14 referred to as Multiple Loss Ratio search (MLRsearch for short). 15 Instead of relying on a binary search with a pre-set starting offered 16 load, it proposes a novel approach that discovers the starting point in 17 the initial phase, and then searches for packet throughput based on a 18 defined packet loss ratio (PLR) input criterion and a defined final 19 trial duration. One of the key design principles behind 20 MLRsearch is minimizing the total test duration and searching for 21 multiple packet throughput rates (each with a corresponding PLR) 22 concurrently, instead of doing it sequentially. 24 The main motivation behind MLRsearch is the new set of challenges and 25 requirements posed by NFV (Network Function Virtualization), 26 specifically software-based implementations of NFV data planes. 27 In the experience of the authors, using the RFC 2544 methodology often 28 yields end results that are neither repeatable nor replicable, due to a 29 large number of factors that are out of scope for this draft. MLRsearch aims to 30 address this challenge and define a common (standard?) way to 31 evaluate NFV packet throughput performance that takes into account the 32 varying characteristics of NFV systems under test. 34 Status of This Memo 36 This Internet-Draft is submitted in full conformance with the 37 provisions of BCP 78 and BCP 79. 39 Internet-Drafts are working documents of the Internet Engineering 40 Task Force (IETF). Note that other groups may also distribute 41 working documents as Internet-Drafts. The list of current 42 Internet-Drafts is at https://datatracker.ietf.org/drafts/current/. 
44 Internet-Drafts are draft documents valid for a maximum of six months 45 and may be updated, replaced, or obsoleted by other documents at any 46 time. It is inappropriate to use Internet-Drafts as reference 47 material or to cite them other than as "work in progress." 48 This Internet-Draft will expire on January 9, 2020. 50 Copyright Notice 52 Copyright (c) 2019 IETF Trust and the persons identified as the 53 document authors. All rights reserved. 55 This document is subject to BCP 78 and the IETF Trust's Legal 56 Provisions Relating to IETF Documents 57 (https://trustee.ietf.org/license-info) in effect on the date of 58 publication of this document. Please review these documents 59 carefully, as they describe your rights and restrictions with respect 60 to this document. Code Components extracted from this document must 61 include Simplified BSD License text as described in Section 4.e of 62 the Trust Legal Provisions and are provided without warranty as 63 described in the Simplified BSD License. 65 Table of Contents 67 1. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 2 68 2. MLRsearch Background . . . . . . . . . . . . . . . . . . . . 4 69 3. MLRsearch Overview . . . . . . . . . . . . . . . . . . . . . 5 70 4. Sample Implementation . . . . . . . . . . . . . . . . . . . . 8 71 4.1. Input Parameters . . . . . . . . . . . . . . . . . . . . 8 72 4.2. Initial Phase . . . . . . . . . . . . . . . . . . . . . . 9 73 4.3. Non-Initial Phases . . . . . . . . . . . . . . . . . . . 10 74 4.4. Sample MLRsearch Run . . . . . . . . . . . . . . . . . . 12 75 5. Known Implementations . . . . . . . . . . . . . . . . . . . . 12 76 5.1. FD.io CSIT Implementation Deviations . . . . . . . . . . 12 77 6. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 13 78 7. Security Considerations . . . . . . . . . . . . . . . . . . . 14 79 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 14 80 9. References . . . . . . . . . . . . . . . . . . . . . . 
. . . 14 81 9.1. Normative References . . . . . . . . . . . . . . . . 14 82 9.2. Informative References . . . . . . . . . . . . . . . . . 14 83 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 15 85 1. Terminology 87 o Frame size: size of an Ethernet Layer-2 frame on the wire, 88 including any VLAN tags (dot1q, dot1ad) and Ethernet FCS, but 89 excluding Ethernet preamble and inter-frame gap. Measured in 90 bytes. 92 o Packet size: same as frame size; both terms are used interchangeably. 94 o Inner L2 size: for tunneled L2 frames only, size of an 95 encapsulated Ethernet Layer-2 frame, preceded by a tunnel header 96 and followed by a tunnel trailer. Measured in bytes. 98 o Inner IP size: for tunneled IP packets only, size of an 99 encapsulated IPv4 or IPv6 packet, preceded by a tunnel header and 100 followed by a tunnel trailer. Measured in bytes. 102 o Device Under Test (DUT): In software networking, "device" denotes 103 a specific piece of software tasked with packet processing. Such 104 a device is surrounded by other software components (such as the 105 operating system kernel). It is not possible to run devices 106 without also running the other components, and hardware resources 107 are shared between both. For the purposes of testing, the whole set 108 of hardware and software components is called the "system under test" 109 (SUT). As the SUT is the part of the whole test setup whose 110 performance can be measured by [RFC2544] methods, this document uses SUT 111 instead of the [RFC2544] DUT. The device under test (DUT) can be 112 re-introduced when analysing test results using whitebox techniques, 113 but this document sticks to blackbox testing. 115 o System Under Test (SUT): System under test (SUT) is the part of the 116 whole test setup whose performance is to be benchmarked. The 117 complete methodology contains other parts, whose performance is 118 either already established or does not affect the benchmarking 119 result. 
121 o Bi-directional throughput tests: involve packets/frames flowing in 122 both transmit and receive directions over every tested interface 123 of the SUT/DUT. Packet flow metrics are measured per direction, and 124 can be reported as an aggregate for both directions (i.e. throughput) 125 and/or separately for each measured direction (i.e. latency). In 126 most cases bi-directional tests use the same (symmetric) load in 127 both directions. 129 o Uni-directional throughput tests: involve packets/frames flowing 130 in only one direction, i.e. either the transmit or the receive direction, 131 over every tested interface of the SUT/DUT. Packet flow metrics are 132 measured and reported for the measured direction. 134 o Packet Loss Ratio (PLR): ratio of packets lost to 135 packets transmitted over the test trial duration, calculated using the 136 formula: PLR = ( pkts_transmitted - pkts_received ) / 137 pkts_transmitted. For bi-directional throughput tests, the aggregate 138 PLR is calculated based on the aggregate number of packets 139 transmitted and received. 141 o Packet Throughput Rate: maximum offered packet load that the DUT/SUT 142 forwards within the specified Packet Loss Ratio (PLR). In many 143 cases the rate depends on the frame size processed by the DUT/SUT. 144 Hence the packet throughput rate MUST be quoted with the specific frame 145 size as received by the DUT/SUT during the measurement. For 146 bi-directional tests, the packet throughput rate should be reported as an 147 aggregate for both directions. Measured in packets-per-second 148 (pps) or frames-per-second (fps); the two metrics are equivalent. 150 o Bandwidth Throughput Rate: a secondary metric calculated from the 151 packet throughput rate using the formula: bw_rate = pkt_rate * 152 (frame_size + L1_overhead) * 8, where L1_overhead for Ethernet 153 includes the preamble (8 bytes) and inter-frame gap (12 bytes). For 154 bi-directional tests, the bandwidth throughput rate should be reported 155 as an aggregate for both directions. 
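As a non-normative illustration, the PLR and bandwidth-rate formulas above can be sketched in Python; the function names are hypothetical, not taken from any implementation:

```python
def packet_loss_ratio(pkts_transmitted, pkts_received):
    # PLR = (pkts_transmitted - pkts_received) / pkts_transmitted
    return (pkts_transmitted - pkts_received) / pkts_transmitted

def bandwidth_rate_bps(pkt_rate_pps, frame_size_bytes):
    # L1 overhead for Ethernet: preamble (8 bytes) + inter-frame gap (12 bytes).
    l1_overhead_bytes = 8 + 12
    return pkt_rate_pps * (frame_size_bytes + l1_overhead_bytes) * 8
```

For example, 14.88 Mpps of 64-byte frames gives 14_880_000 * (64 + 20) * 8 = 9_999_360_000 bps, i.e. roughly the 10GE link rate.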
Expressed in bits-per-second 156 (bps). 158 o Non Drop Rate (NDR): maximum packet/bandwidth throughput rate 159 sustained by the DUT/SUT at a PLR equal to zero (zero packet loss), specific 160 to the tested frame size(s). MUST be quoted with the specific packet size 161 as received by the DUT/SUT during the measurement. Packet NDR is 162 measured in packets-per-second (or fps), bandwidth NDR is expressed 163 in bits-per-second (bps). 165 o Partial Drop Rate (PDR): maximum packet/bandwidth throughput rate 166 sustained by the DUT/SUT at a PLR greater than zero (non-zero packet 167 loss), specific to the tested frame size(s). MUST be quoted with the 168 specific packet size as received by the DUT/SUT during the 169 measurement. Packet PDR is measured in packets-per-second (or fps), 170 bandwidth PDR is expressed in bits-per-second (bps). 172 o Maximum Receive Rate (MRR): packet/bandwidth rate, regardless of 173 PLR, sustained by the DUT/SUT under the specified Maximum Transmit Rate 174 (MTR) packet load offered by the traffic generator. MUST be quoted 175 with both the specific packet size and the MTR as received by the DUT/SUT 176 during the measurement. Packet MRR is measured in packets-per-second 177 (or fps), bandwidth MRR is expressed in bits-per-second (bps). 179 o Trial: a single measurement step. 181 o Trial duration: amount of time over which packets are transmitted 182 and received in a single throughput measurement step. 184 2. MLRsearch Background 186 Multiple Loss Ratio search (MLRsearch) is a packet throughput search 187 algorithm suitable for deterministic systems (as opposed to 188 probabilistic systems). MLRsearch discovers multiple packet 189 throughput rates in a single search, with each rate associated with a 190 distinct Packet Loss Ratio (PLR) criterion. 192 For cases when multiple rates need to be found, this property makes 193 MLRsearch more efficient in terms of execution time, compared to 194 traditional throughput search algorithms that discover a single 195 packet rate per defined search criterion (e.g. 
a binary search 196 specified by [RFC2544]). MLRsearch reduces execution time even 197 further by relying on shorter trial durations for intermediate steps, 198 with only the final measurements conducted at the specified final 199 trial duration. This results in a shorter overall search execution 200 time when compared to a traditional binary search, while guaranteeing 201 the same results for deterministic systems. 203 In practice, two rates with distinct PLRs are commonly used for packet 204 throughput measurements of NFV systems: Non Drop Rate (NDR) with 205 PLR=0 and Partial Drop Rate (PDR) with PLR>0. The rest of this 206 document describes MLRsearch for NDR and PDR. If needed, MLRsearch 207 can be easily adapted to discover more throughput rates with 208 different pre-defined PLRs. 210 Similarly to other throughput search approaches like binary search, 211 MLRsearch is effective for SUTs/DUTs with a PLR curve that is 212 consistently flat or increasing with growing offered load. It may 213 not be as effective for SUTs/DUTs with abnormal PLR curves. 215 MLRsearch relies on the traffic generator to qualify the received packet 216 stream as error-free, and to invalidate the results if any disqualifying 217 errors are present, e.g. out-of-sequence frames. 219 MLRsearch can be applied to both uni-directional and bi-directional 220 throughput tests. 222 For bi-directional tests, MLRsearch rates and ratios are aggregates 223 of both directions, based on the following assumptions: 225 o Packet rates transmitted by the traffic generator and received by the SUT/ 226 DUT are the same in each direction; in other words, the load is 227 symmetric. 229 o SUT/DUT packet processing capacity is the same in both directions, 230 resulting in the same packet loss under load. 232 3. MLRsearch Overview 234 The main properties of MLRsearch: 236 o MLRsearch is a duration-aware, multi-phase, multi-rate search 237 algorithm: 239 * The Initial Phase determines a promising starting interval for the 240 search. 
242 * Intermediate Phases progress towards the defined final search 243 criteria. 245 * The Final Phase executes measurements according to the final search 246 criteria. 248 * The final search criteria are defined by the following inputs: 250 + PLRs associated with NDR and PDR. 252 + Final trial duration. 254 + Measurement resolution. 256 o Initial Phase: 258 * Measure MRR over the initial trial duration. 260 * Measured MRR is used as an input to the first intermediate 261 phase. 263 o Multiple Intermediate Phases: 265 * Trial duration: 267 + Start with the initial trial duration in the first intermediate 268 phase. 270 + Converge geometrically towards the final trial duration. 272 * Track two values for NDR and two for PDR: 274 + The values are called lower_bound and upper_bound. 276 + Each value comes from a specific trial measurement: 278 - Most recent for that transmit rate. 280 - As such, the value is associated with that measurement's 281 duration and loss. 283 + A bound can be valid or invalid: 285 - A valid lower_bound must conform with the PLR search criteria. 287 - A valid upper_bound must not conform with the PLR search 288 criteria. 290 - An example of an invalid NDR lower_bound is one that has been 291 measured with non-zero loss. 293 - Invalid bounds are not real boundaries for the searched 294 value: 296 o They are needed to track interval widths. 298 - Valid bounds are real boundaries for the searched value. 300 - Each non-initial phase ends with all bounds valid. 302 - A bound can become invalid if it is re-measured at a longer 303 trial duration in a subsequent phase. 305 * Search: 307 + Start with a large (lower_bound, upper_bound) interval 308 width that determines the measurement resolution. 310 + Geometrically converge towards the width goal of the phase. 312 + Each phase halves the previous width goal. 314 * Use of internal and external searches: 316 + External search: 318 - Measures at transmit rates outside the (lower_bound, 319 upper_bound) interval. 
321 - Activated when a bound is invalid, to search for a new 322 valid bound by doubling the interval width. 324 - It is a variant of "exponential search". 326 + Internal search: 328 - A "binary search" that measures at transmit rates within 329 the valid (lower_bound, upper_bound) interval, halving 330 the interval width. 332 o Final Phase: 334 * Executed with the final test trial duration, and the final 335 width goal that determines the resolution of the overall search. 337 o Intermediate Phases together with the Final Phase are called Non- 338 Initial Phases. 340 The main benefits of MLRsearch vs. binary search include: 342 o In general, MLRsearch is likely to execute more trials overall, but 343 likely fewer trials at the set final trial duration. 345 o In well-behaving cases, e.g. when results do not depend on trial 346 duration, it greatly reduces (>50%) the overall duration compared 347 to a single PDR (or NDR) binary search over duration, while 348 finding multiple drop rates. 350 o In all cases MLRsearch yields the same or similar results to 351 binary search. 353 o Note: both binary search and MLRsearch are susceptible to 354 reporting non-repeatable results across multiple runs for very badly 355 behaving cases. 357 Caveats: 359 o In the worst case, MLRsearch can take longer than a binary search, e.g. in 360 the case of drastic changes in behaviour for trials at varying 361 durations. 363 4. Sample Implementation 365 The following is a brief description of a sample MLRsearch implementation 366 based on the open-source code running in the FD.io CSIT project as part 367 of a Continuous Integration / Continuous Delivery (CI/CD) 368 framework. 370 4.1. Input Parameters 372 1. *maximum_transmit_rate* - Maximum Transmit Rate (MTR) of packets 373 to be used by the external traffic generator implementing MLRsearch, 374 limited by the actual Ethernet link(s) rate, NIC model, or traffic 375 generator capabilities. 
Sample defaults: 2 * 14.88 Mpps for 64B 376 10GE link rate, 2 * 18.75 Mpps for 64B 40GE NIC (specific model) 377 maximum rate (lower than the 2 * 59.52 Mpps 40GE link rate). 379 2. *minimum_transmit_rate* - minimum packet transmit rate to be used 380 for measurements. MLRsearch fails if a lower transmit rate would need 381 to be used to meet the search criteria. Default: 2 * 10 kpps (could 382 be higher). 384 3. *final_trial_duration* - required trial duration for final rate 385 measurements. Default: 30 sec. 387 4. *initial_trial_duration* - trial duration for the initial MLRsearch 388 phase. Default: 1 sec. 390 5. *final_relative_width* - required measurement resolution 391 expressed as (lower_bound, upper_bound) interval width relative 392 to upper_bound. Default: 0.5%. 394 6. *packet_loss_ratio* - maximum acceptable PLR search criterion for 395 PDR measurements. Default: 0.5%. 397 7. *number_of_intermediate_phases* - number of phases between the 398 initial phase and the final phase. Impacts the overall MLRsearch 399 duration. Fewer phases are required for well-behaving cases; more 400 phases may be needed to reduce the overall search duration for 401 worse-behaving cases. Default: 2. (Value chosen based on 402 limited experimentation to date. More experimentation is needed to 403 arrive at clearer guidelines.) 405 4.2. Initial Phase 407 1. First trial measures at the configured maximum transmit rate (MTR) 408 and discovers the maximum receive rate (MRR). 410 * IN: trial_duration = initial_trial_duration. 412 * IN: offered_transmit_rate = maximum_transmit_rate. 414 * DO: single trial. 416 * OUT: measured loss ratio. 418 * OUT: MRR = measured receive rate. 420 2. Second trial measures at MRR and discovers MRR2. 422 * IN: trial_duration = initial_trial_duration. 424 * IN: offered_transmit_rate = MRR. 426 * DO: single trial. 428 * OUT: measured loss ratio. 430 * OUT: MRR2 = measured receive rate. 432 3. Third trial measures at MRR2. 434 * IN: trial_duration = initial_trial_duration. 
436 * IN: offered_transmit_rate = MRR2. 438 * DO: single trial. 440 * OUT: measured loss ratio. 442 4.3. Non-Initial Phases 444 1. Main loop: 446 1. IN: trial_duration for the current phase. Set to 447 initial_trial_duration for the first intermediate phase; to 448 final_trial_duration for the final phase; or to an element 449 of the interpolating geometric sequence for other intermediate 450 phases. For example, with two intermediate phases, the 451 trial_duration of the second intermediate phase is the 452 geometric average of initial_trial_duration and 453 final_trial_duration. 455 2. IN: relative_width_goal for the current phase. Set to 456 final_relative_width for the final phase; doubled for each 457 preceding phase. For example, with two intermediate phases, 458 the first intermediate phase uses quadruple the 459 final_relative_width and the second intermediate phase uses 460 double the final_relative_width. 462 3. IN: ndr_interval, pdr_interval from the previous main loop 463 iteration or the previous phase. If the previous phase is 464 the initial phase, both intervals have lower_bound = MRR2, 465 upper_bound = MRR. Note that the initial phase is likely to 466 create intervals with invalid bounds. 468 4. DO: According to the procedure described in point 2., either 469 exit the phase (by jumping to 1.7.), or calculate a new 470 transmit rate to measure at. 472 5. DO: Perform the trial measurement at the new transmit rate 473 and trial_duration, and compute its loss ratio. 475 6. DO: Update the bounds of both intervals, based on the new 476 measurement. The actual update rules are numerous, as the NDR 477 external search can affect the PDR interval and vice versa, but 478 the result agrees with the rules of both internal and external 479 search. For example, any new measurement below an invalid 480 lower_bound becomes the new lower_bound, while the old 481 measurement (previously acting as the invalid lower_bound) 482 becomes a new and valid upper_bound. 
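As a non-normative sketch, the per-phase trial durations and width goals described in points 1.1. and 1.2. can be computed as follows (the function and variable names are illustrative, not taken from the CSIT code):

```python
def phase_schedule(initial_trial_duration, final_trial_duration,
                   final_relative_width, number_of_intermediate_phases):
    # Returns one (trial_duration, relative_width_goal) pair per
    # non-initial phase: intermediate phases first, final phase last.
    phases = number_of_intermediate_phases + 1
    schedule = []
    for i in range(1, phases + 1):
        # Trial durations interpolate geometrically from the initial
        # duration (first intermediate phase) to the final duration.
        exponent = 1.0 if phases == 1 else (i - 1) / (phases - 1)
        ratio = final_trial_duration / initial_trial_duration
        duration = initial_trial_duration * ratio ** exponent
        # Width goals halve with each phase, ending at the final width.
        width_goal = final_relative_width * 2 ** (phases - i)
        schedule.append((duration, width_goal))
    return schedule
```

With the sample defaults (1 s initial and 30 s final duration, 0.5% final width, 2 intermediate phases), this yields durations of 1 s, sqrt(30) = ~5.48 s, and 30 s, with width goals of 2%, 1%, and 0.5%.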
Go to the next iteration 483 (1.3.), taking the updated intervals as new input. 485 7. OUT: current ndr_interval and pdr_interval. In the final 486 phase this is also considered to be the result of the whole 487 search. For other phases, the next phase loop is started 488 with the current results as an input. 490 2. New transmit rate (or exit) calculation (for point 1.4.): 492 1. If there is an invalid bound, then prepare for external 493 search: 495 + IF the most recent measurement at the NDR lower_bound transmit 496 rate had a loss higher than zero, then the new transmit 497 rate is the NDR lower_bound decreased by two NDR interval 498 widths or by the amount needed to hit the current width goal, 499 whichever is larger. 501 + Else, IF the most recent measurement at the PDR lower_bound 502 transmit rate had a loss higher than the PLR, then the new 503 transmit rate is the PDR lower_bound decreased by two PDR 504 interval widths. 506 + Else, IF the most recent measurement at the NDR upper_bound 507 transmit rate had no loss, then the new transmit rate is 508 the NDR upper_bound increased by two NDR interval widths. 510 + Else, IF the most recent measurement at the PDR upper_bound 511 transmit rate had a loss lower than or equal to the PLR, then the 512 new transmit rate is the PDR upper_bound increased by two PDR 513 interval widths. 515 2. Else, if an interval width is higher than the current phase 516 width goal, prepare for internal search: 517 + IF the NDR interval does not meet the current phase 518 width goal, the new transmit 519 rate is the geometric average of the NDR lower_bound and NDR 520 upper_bound. 522 + Else, IF the PDR interval does not meet the current phase 523 width goal, the new transmit 524 rate is the geometric average of the PDR lower_bound and PDR 525 upper_bound. 527 3. Else, IF some bound has still only been measured at a lower 528 duration, prepare to re-measure at the current duration (and 529 the same transmit rate). 
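The external and internal rate selection rules above can be sketched as follows; this is a non-normative illustration of the simplified linear form described in this section (the CSIT implementation works on a logarithmic rate scale instead), and the function names are hypothetical:

```python
def external_rate_down(lower_bound, upper_bound):
    # External search below an invalid lower bound: step down by twice
    # the current interval width, doubling the interval.
    return lower_bound - 2 * (upper_bound - lower_bound)

def external_rate_up(lower_bound, upper_bound):
    # External search above an invalid upper bound: step up by twice
    # the current interval width.
    return upper_bound + 2 * (upper_bound - lower_bound)

def internal_rate(lower_bound, upper_bound):
    # Internal (bisection-style) search between two valid bounds:
    # the geometric average halves the interval on a logarithmic scale.
    return (lower_bound * upper_bound) ** 0.5
```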
The order of priorities is: 531 + NDR lower_bound, 533 + PDR lower_bound, 535 + NDR upper_bound, 537 + PDR upper_bound. 539 4. Else, do not prepare any new rate, to exit the phase. This 540 ensures that at the end of each non-initial phase all 541 intervals are valid, narrow enough, and measured at the current 542 phase trial duration. 544 4.4. Sample MLRsearch Run 546 TODO add a sample MLRsearch run with values. 548 5. Known Implementations 550 The only known working implementation of MLRsearch is in the Linux 551 Foundation FD.io CSIT project [FDio-CSIT-MLRsearch]. MLRsearch is 552 also available as a Python package in [PyPI-MLRsearch]. 554 5.1. FD.io CSIT Implementation Deviations 556 So far, this document has described a simplified version of the 557 MLRsearch algorithm. The full algorithm as implemented contains 558 additional logic, which makes some of the details (but not the general 559 ideas) above incorrect. Here is a short description of the 560 additional logic as a list of principles, explaining their main 561 differences from (or additions to) the simplified description, but 562 without detailing their mutual interaction. 564 1. Logarithmic transmit rate. 566 * In order to better fit the relative width goal, the interval 567 doubling and halving are done differently. 569 * For example, the middle of 2 and 8 is 4, not 5. 571 2. Optimistic maximum rate. 573 * The increased rate is never higher than the maximum rate. 575 * An upper bound at that rate is always considered valid. 577 3. Pessimistic minimum rate. 579 * The decreased rate is never lower than the minimum rate. 581 * If a lower bound at that rate is invalid, a phase stops 582 refining the interval further (until it gets re-measured). 584 4. Conservative interval updates. 586 * Measurements above the current upper bound never update a valid 587 upper bound, even if the drop ratio is low. 589 * Measurements below the current lower bound always update any lower 590 bound if the drop ratio is high. 592 5. 
Ensure sufficient interval width. 594 * Narrow intervals make the external search take more time to find a 595 valid bound. 597 * If the new increased or decreased transmit rate would result 598 in a width less than the current goal, increase or decrease it more. 600 * This can happen if the measurement for the other interval 601 makes the current interval too narrow. 603 * Similarly, take care that the measurements in the initial phase 604 create a wide enough interval. 606 6. Timeout for bad cases. 608 * The worst case for MLRsearch is when each phase converges to 609 intervals very different from the results of the previous 610 phase. 612 * Rather than suffer a total search time several times larger than 613 that of a pure binary search, the implemented tests fail themselves when 614 the search takes too long (given by the argument _timeout_). 616 6. IANA Considerations 618 This document makes no requests of IANA. 620 7. Security Considerations 622 Benchmarking activities as described in this memo are limited to 623 technology characterization of a DUT/SUT using controlled stimuli in 624 a laboratory environment, with dedicated address space and the 625 constraints specified in the sections above. 627 The benchmarking network topology will be an independent test setup 628 and MUST NOT be connected to devices that may forward the test 629 traffic into a production network or misroute traffic to the test 630 management network. 632 Further, benchmarking is performed on a "black-box" basis, relying 633 solely on measurements observable external to the DUT/SUT. 635 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 636 benchmarking purposes. Any implications for network security arising 637 from the DUT/SUT SHOULD be identical in the lab and in production 638 networks. 640 8. Acknowledgements 642 Many thanks to Alec Hothan of the OPNFV NFVbench project for a thorough 643 review and numerous useful comments and suggestions. 645 9. References 647 9.1. Normative References 649 [RFC2544] Bradner, S. and J. 
McQuaid, "Benchmarking Methodology for 650 Network Interconnect Devices", RFC 2544, 651 DOI 10.17487/RFC2544, March 1999, 652 <https://www.rfc-editor.org/info/rfc2544>. 654 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 655 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 656 May 2017, <https://www.rfc-editor.org/info/rfc8174>. 658 9.2. Informative References 660 [FDio-CSIT-MLRsearch] 661 "FD.io CSIT Test Methodology - MLRsearch", June 2019, 662 . 666 [PyPI-MLRsearch] 667 "MLRsearch 0.2.0, Python Package Index", August 2018, 668 . 670 Authors' Addresses 672 Maciek Konstantynowicz (editor) 673 Cisco Systems 675 Email: mkonstan@cisco.com 677 Vratko Polak (editor) 678 Cisco Systems 680 Email: vrpolak@cisco.com