Benchmarking Working Group                        M. Konstantynowicz, Ed.
Internet-Draft                                              V. Polak, Ed.
Intended status: Informational                              Cisco Systems
Expires: January 13, 2022                                   July 12, 2021

        Multiple Loss Ratio Search for Packet Throughput (MLRsearch)
                       draft-ietf-bmwg-mlrsearch-01

Abstract

   This document proposes changes to RFC 2544, specifically to the packet
   throughput search methodology, by defining a new search algorithm
   referred to as Multiple Loss Ratio search (MLRsearch for short).
   Instead of relying on binary search with a pre-set starting offered
   load, it proposes a novel approach that discovers the starting point in
   an initial phase, and then searches for packet throughput based on
   defined packet loss ratio (PLR) input criteria and a defined final
   trial duration.  Key design principles behind MLRsearch are minimizing
   the total test duration and searching for multiple packet throughput
   rates (each with a corresponding PLR) concurrently, instead of doing it
   sequentially.

   The main motivation behind MLRsearch is the new set of challenges and
   requirements posed by NFV (Network Function Virtualization),
   specifically software based implementations of NFV data planes.  In the
   experience of the authors, using the RFC 2544 methodology often yields
   end results that are neither repeatable nor replicable, due to a large
   number of factors that are out of scope for this draft.
   MLRsearch aims to address this challenge in a simple way: getting the
   same result sooner, so that more repetitions can be done to describe
   the replicability.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on January 13, 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.  Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1.  Terminology
   2.  MLRsearch Background
   3.  MLRsearch Overview
   4.  Sample Implementation
       4.1.  Input Parameters
       4.2.  Initial Phase
       4.3.  Non-Initial Phases
   5.  FD.io CSIT Implementation
       5.1.  Additional details
             5.1.1.  FD.io CSIT Input Parameters
       5.2.  Example MLRsearch Run
   6.  IANA Considerations
   7.  Security Considerations
   8.  Acknowledgements
   9.  References
       9.1.  Normative References
       9.2.  Informative References
   Authors' Addresses

1.  Terminology

   o  Frame size: size of an Ethernet Layer-2 frame on the wire,
      including any VLAN tags (dot1q, dot1ad) and Ethernet FCS, but
      excluding Ethernet preamble and inter-frame gap.  Measured in
      bytes (octets).

   o  Packet size: same as frame size; both terms are used
      interchangeably.

   o  Device Under Test (DUT): In software networking, "device" denotes
      a specific piece of software tasked with packet processing.  Such
      a device is surrounded by other software components (such as the
      operating system kernel).  It is not possible to run devices
      without also running the other components, and hardware resources
      are shared between both.
      For purposes of testing, the whole set of hardware and software
      components is called the "system under test" (SUT).  As the SUT is
      the part of the whole test setup whose performance can be measured
      by [RFC2544] methods, this document uses SUT instead of the
      [RFC2544] DUT.  The device under test (DUT) can be re-introduced
      when analysing test results using whitebox techniques, but this
      document sticks to blackbox testing.

   o  System Under Test (SUT): System under test (SUT) is a part of the
      whole test setup whose performance is to be benchmarked.  The
      complete test setup contains other parts, whose performance is
      either already established, or not affecting the benchmarking
      result.

   o  Bi-directional throughput tests: involve packets/frames flowing in
      both transmit and receive directions over every tested interface
      of SUT/DUT.  Packet flow metrics are measured per direction, and
      can be reported as aggregate for both directions and/or separately
      for each measured direction.  In most cases bi-directional tests
      use the same (symmetric) load in both directions.

   o  Uni-directional throughput tests: involve packets/frames flowing
      in only one direction, i.e. either transmit or receive direction,
      over every tested interface of SUT/DUT.  Packet flow metrics are
      measured and are reported for the measured direction.

   o  Packet Loss Ratio (PLR): ratio of packets received relative to
      packets transmitted over the test trial duration, calculated using
      the formula: PLR = ( pkts_transmitted - pkts_received ) /
      pkts_transmitted.  For bi-directional throughput tests the
      aggregate PLR is calculated based on the aggregate number of
      packets transmitted and received.

   o  Effective loss ratio: a corrected value of the measured packet
      loss ratio, chosen to avoid difficulties if the SUT exhibits
      decreasing loss with increasing load.  It is the maximum of the
      packet loss ratios measured at the same duration on all loads
      smaller than (and including) the current one.

   o  Target loss ratio: a packet loss ratio value acting as an input
      for the search.  The search finds tight enough lower and upper
      bounds in intended load, so that the lower bound has a smaller or
      equal loss ratio, and the upper bound has a strictly larger loss
      ratio.  For the tightest upper bound, the effective loss ratio is
      the same as the packet loss ratio.  For the tightest lower bound,
      the effective loss ratio can be higher than the packet loss ratio,
      but still not larger than the target loss ratio.

   o  Packet Throughput Rate: maximum packet offered load the DUT/SUT
      forwards within the specified Packet Loss Ratio (PLR).  In many
      cases the rate depends on the frame size processed by the DUT/SUT.
      Hence the packet throughput rate MUST be quoted with the specific
      frame size as received by the DUT/SUT during the measurement.  For
      bi-directional tests, packet throughput rate should be reported as
      aggregate for both directions.  Measured in packets-per-second
      (pps) or frames-per-second (fps), equivalent metrics.

   o  Bandwidth Throughput Rate: a secondary metric calculated from the
      packet throughput rate using the formula: bw_rate = pkt_rate *
      (frame_size + L1_overhead) * 8, where L1_overhead for Ethernet
      includes preamble (8 octets) and inter-frame gap (12 octets).  For
      bi-directional tests, bandwidth throughput rate should be reported
      as aggregate for both directions.  Expressed in bits-per-second
      (bps).
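      To make the preceding three definitions concrete, the following
      non-normative Python sketch shows one way the packet loss ratio,
      the effective loss ratio, and the bandwidth throughput rate could
      be computed.  Function and variable names are illustrative only
      and do not refer to any specific implementation.

         def packet_loss_ratio(pkts_transmitted, pkts_received):
             """PLR = (pkts_transmitted - pkts_received) / pkts_transmitted."""
             return (pkts_transmitted - pkts_received) / pkts_transmitted

         def effective_loss_ratios(trial_results):
             """For (intended_load, loss_ratio) pairs measured at the same
             trial duration, return pairs carrying the effective loss ratio:
             the maximum of loss ratios at all loads smaller than (and
             including) the current one."""
             effective = []
             running_max = 0.0
             for load, ratio in sorted(trial_results):
                 running_max = max(running_max, ratio)
                 effective.append((load, running_max))
             return effective

         def bandwidth_rate_bps(pkt_rate, frame_size, l1_overhead=20):
             """bw_rate = pkt_rate * (frame_size + L1_overhead) * 8; the
             default L1 overhead is Ethernet preamble (8 octets) plus
             inter-frame gap (12 octets)."""
             return pkt_rate * (frame_size + l1_overhead) * 8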
   o  Non Drop Rate (NDR): maximum packet/bandwidth throughput rate
      sustained by the DUT/SUT at PLR equal to zero (zero packet loss),
      specific to the tested frame size(s).  MUST be quoted with the
      specific packet size as received by the DUT/SUT during the
      measurement.  Packet NDR is measured in packets-per-second (or
      fps), bandwidth NDR is expressed in bits-per-second (bps).

   o  Partial Drop Rate (PDR): maximum packet/bandwidth throughput rate
      sustained by the DUT/SUT at PLR greater than zero (non-zero packet
      loss), specific to the tested frame size(s).  MUST be quoted with
      the specific packet size as received by the DUT/SUT during the
      measurement.  Packet PDR is measured in packets-per-second (or
      fps), bandwidth PDR is expressed in bits-per-second (bps).

   o  Maximum Receive Rate (MRR): packet/bandwidth rate, regardless of
      PLR, sustained by the DUT/SUT under the specified Maximum Transmit
      Rate (MTR) packet load offered by the traffic generator.  MUST be
      quoted with both the specific packet size and MTR as received by
      the DUT/SUT during the measurement.  Packet MRR is measured in
      packets-per-second (or fps), bandwidth MRR is expressed in
      bits-per-second (bps).

   o  Trial: a single measurement step.  See [RFC2544] section 23.

   o  Trial duration: amount of time over which packets are transmitted
      in a single measurement step.

2.  MLRsearch Background

   Multiple Loss Ratio search (MLRsearch) is a packet throughput search
   algorithm suitable for deterministic systems (as opposed to
   probabilistic systems).  MLRsearch discovers multiple packet
   throughput rates in a single search, each rate associated with a
   distinct Packet Loss Ratio (PLR) criterion.

   For cases when multiple rates need to be found, this property makes
   MLRsearch more efficient in terms of execution time, compared to
   traditional throughput search algorithms that discover a single
   packet rate per defined search criteria (e.g. the binary search
   specified by [RFC2544]).  MLRsearch reduces execution time even
   further by relying on shorter trial durations for intermediate steps,
   with only the final measurements conducted at the specified final
   trial duration.  This results in a shorter overall search execution
   time when compared to a traditional binary search, while guaranteeing
   the same results for deterministic systems.

   In practice two rates with distinct PLRs are commonly used for packet
   throughput measurements of NFV systems: Non Drop Rate (NDR) with
   PLR=0 and Partial Drop Rate (PDR) with PLR>0.  The rest of this
   document describes MLRsearch using the NDR and PDR pair as an
   example.

   Similarly to other throughput search approaches like binary search,
   MLRsearch is effective for SUTs/DUTs with a PLR curve that is non-
   decreasing with growing offered load.  It may not be as effective for
   SUTs/DUTs with abnormal PLR curves, although it will always converge
   to some value.

   MLRsearch relies on the traffic generator to qualify the received
   packet stream as error-free, and to invalidate the results if any
   disqualifying errors are present, e.g. out-of-sequence frames.

   MLRsearch can be applied to both uni-directional and bi-directional
   throughput tests.

   For bi-directional tests, MLRsearch rates and ratios are aggregates
   of both directions, based on the following assumptions:

   o  Traffic transmitted by the traffic generator and received by the
      SUT/DUT has the same packet rate in each direction; in other
      words, the offered load is symmetric.
   o  SUT/DUT packet processing capacity is the same in both directions,
      resulting in the same packet loss under load.

   MLRsearch can be applied even without those assumptions, but in that
   case the aggregate loss ratio is less useful as a metric.

   MLRsearch can be used for network transactions consisting of more
   than just one packet, or for anything else that has intended load as
   input and loss ratio as output (duration as input is optional).  This
   text uses mostly packet-centric language.

3.  MLRsearch Overview

   The main properties of MLRsearch:

   o  MLRsearch is a duration aware multi-phase multi-rate search
      algorithm:

      *  Initial Phase determines a promising starting interval for the
         search.

      *  Intermediate Phases progress towards the defined final search
         criteria.

      *  Final Phase executes measurements according to the final search
         criteria.

      *  Final search criteria are defined by the following inputs:

         +  Target PLRs (e.g. 0.0 and 0.005 when searching for NDR and
            PDR).

         +  Final trial duration.

         +  Measurement resolution.

   o  Initial Phase:

      *  Measure MRR over the initial trial duration.

      *  Measured MRR is used as an input to the first intermediate
         phase.

   o  Multiple Intermediate Phases:

      *  Trial duration:

         +  Start with the initial trial duration in the first
            intermediate phase.

         +  Converge geometrically towards the final trial duration.

      *  Track all previous trial measurement results:

         +  Duration, offered load and loss ratio are tracked.

         +  Effective loss ratios are tracked.

            -  While in practice real loss ratios can decrease with
               increasing load, effective loss ratios never decrease.
               This is achieved by sorting results by load, and using
               the effective loss ratio of the previous load if the
               current loss ratio is smaller than that.

         +  The algorithm queries the results to find the best lower and
            upper bounds.

            -  Effective loss ratios are always used.

         +  The phase ends if all target loss ratios have tight enough
            bounds.

      *  Search:

         +  Iterate over target loss ratios in increasing order.

         +  If both upper and lower bounds are in the measurement
            results for this duration, apply bisection until the bounds
            are tight enough, and continue with the next loss ratio.

         +  If a bound is missing for this duration, but there exists a
            bound from the previous duration (compatible with the other
            bound at this duration), re-measure at the current duration.

         +  If a bound in one direction (upper or lower) is missing for
            this duration, and the previous duration does not have a
            compatible bound, compute the current "interval size" from
            the second tightest bound in the other direction (lower or
            upper respectively) for the current duration, and choose the
            next offered load for external search.

         +  The logic guarantees that a measurement is never repeated
            with both duration and offered load being the same.

         +  The logic guarantees that measurements for higher target
            loss ratio iterations (still within the same phase duration)
            do not affect the validity and tightness of bounds for
            previous target loss ratio iterations (at the same
            duration).

      *  Use of internal and external searches:

         +  External search:

            -  It is a variant of "exponential search".

            -  The "interval size" is multiplied by a configurable
               constant (powers of two work well with the subsequent
               internal search).
         +  Internal search:

            -  A variant of binary search that measures at an offered
               load between the previously found bounds.

            -  The interval does not need to be split into exact halves,
               if another split can get to the target width goal faster.

               o  The idea is to avoid returning an interval narrower
                  than the current width goal.  See the sample
                  implementation details below.

   o  Final Phase:

      *  Executed with the final test trial duration, and the final
         width goal that determines the resolution of the overall
         search.

   o  Intermediate Phases together with the Final Phase are called Non-
      Initial Phases.

   o  The returned bounds stay within the prescribed min_rate and
      max_rate.

      *  When returning min_rate or max_rate, the returned bounds may be
         invalid.

         +  E.g. an upper bound at max_rate may come from a measurement
            with a loss ratio still not higher than the target loss
            ratio.

   The main benefits of MLRsearch vs. binary search include:

   o  In general MLRsearch is likely to execute more trials overall, but
      likely fewer trials at a set final trial duration.

   o  In well behaving cases, e.g. when results do not depend on trial
      duration, it greatly reduces (>50%) the overall duration compared
      to a single PDR (or NDR) binary search over duration, while
      finding multiple drop rates.

   o  In all cases MLRsearch yields the same or similar results to
      binary search.

   o  Note: both binary search and MLRsearch are susceptible to
      reporting non-repeatable results across multiple runs for very
      badly behaving cases.

   Caveats:

   o  In the worst case MLRsearch can take longer than a binary search,
      e.g. in case of drastic changes in behaviour for trials at varying
      durations.

      *  Re-measurement at a higher duration can trigger a long external
         search.  That never happens in binary search, which uses the
         final duration from the start.

4.  Sample Implementation

   Following is a brief description of a sample MLRsearch
   implementation, which is a simplified version of the existing
   implementation.

4.1.  Input Parameters

   1.  *max_rate* - Maximum Transmit Rate (MTR) of packets to be used by
       the external traffic generator implementing MLRsearch, limited by
       the actual Ethernet link(s) rate, NIC model or traffic generator
       capabilities.

   2.  *min_rate* - minimum packet transmit rate to be used for
       measurements.  MLRsearch fails if a lower transmit rate needs to
       be used to meet the search criteria.

   3.  *final_trial_duration* - required trial duration for final rate
       measurements.

   4.  *initial_trial_duration* - trial duration for the initial
       MLRsearch phase.

   5.  *final_relative_width* - required measurement resolution
       expressed as (lower_bound, upper_bound) interval width relative
       to upper_bound.

   6.  *packet_loss_ratios* - list of maximum acceptable PLR search
       criteria.

   7.  *number_of_intermediate_phases* - number of phases between the
       initial phase and the final phase.  Impacts the overall MLRsearch
       duration.  Fewer phases are required for well behaving cases;
       more phases may be needed to reduce the overall search duration
       for worse behaving cases.
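   As a non-normative illustration, the input parameters above could be
   grouped into a single configuration object along the following lines.
   The class and attribute names are hypothetical and do not describe
   the API of the CSIT code or of the PyPI package referenced later.

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class MlrSearchConfig:
          """Illustrative container for the MLRsearch input parameters."""
          max_rate: float                # MTR, e.g. in packets per second
          min_rate: float                # lowest transmit rate allowed
          final_trial_duration: float    # seconds, used in the final phase
          initial_trial_duration: float  # seconds, used in the initial phase
          final_relative_width: float    # e.g. 0.005 for 0.5% resolution
          # Example defaults below mirror the CSIT values given in 5.1.1.
          packet_loss_ratios: List[float] = field(
              default_factory=lambda: [0.0, 0.005])
          number_of_intermediate_phases: int = 2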
4.2.  Initial Phase

   1.  First trial measures at the configured maximum transmit rate
       (MTR) and discovers the maximum receive rate (MRR).

       *  IN: trial_duration = initial_trial_duration.

       *  IN: offered_transmit_rate = maximum_transmit_rate.

       *  DO: single trial.

       *  OUT: measured loss ratio.

       *  OUT: MRR = measured receive rate.  The receive rate is
          computed as the intended load multiplied by the pass ratio
          (which is one minus the loss ratio).  This is useful when the
          loss ratio is computed from a different metric than the
          intended load.  For example, the intended load can be in
          transactions (multiple packets each), but the loss ratio is
          computed on the level of packets, not transactions.

       *  Example: If MTR is 10 transactions per second, each
          transaction has 10 packets, and the receive rate is 90 packets
          per second, then the loss ratio is 10%, and MRR is computed to
          be 9 transactions per second.

       If MRR is too close to MTR, MRR is set below MTR so that the
       interval width is equal to the width goal of the first
       intermediate phase.  If MRR is less than min_rate, min_rate is
       used.

   2.  Second trial measures at MRR and discovers MRR2.

       *  IN: trial_duration = initial_trial_duration.

       *  IN: offered_transmit_rate = MRR.

       *  DO: single trial.

       *  OUT: measured loss ratio.

       *  OUT: MRR2 = measured receive rate.  If MRR2 is less than
          min_rate, min_rate is used.  If the loss ratio is less than or
          equal to the smallest target loss ratio, MRR2 is set to a
          value above MRR, so that the interval width is equal to the
          width goal of the first intermediate phase.  MRR2 could end up
          being equal to MTR (for example if both measurements so far
          had zero loss); as MTR was already measured, step 3 is skipped
          in that case.

   3.  Third trial measures at MRR2.

       *  IN: trial_duration = initial_trial_duration.

       *  IN: offered_transmit_rate = MRR2.

       *  DO: single trial.

       *  OUT: measured loss ratio.

       *  OUT: MRR3 = measured receive rate.  If MRR3 is less than
          min_rate, min_rate is used.  If step 3 is not skipped, the
          first trial measurement is forgotten.  This is done because in
          practice (if MRR2 is above MRR), an external search from MRR
          and MRR2 is likely to lead to a faster intermediate phase than
          a bisect between MRR2 and MTR.
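   The initial phase can be summarized by the following non-normative
   Python sketch, reusing the hypothetical MlrSearchConfig object from
   the previous sketch.  The measure() callable stands for a single
   trial performed by the traffic generator and returns the measured
   loss ratio; the width-goal adjustments described above are simplified
   to plain relative arithmetic here.

      def initial_phase(measure, config):
          """Sketch of the three initial trials; returns (load, loss) pairs
          handed to the first intermediate phase."""
          duration = config.initial_trial_duration
          smallest_target = min(config.packet_loss_ratios)
          # Width goal of the first intermediate phase: final width doubled
          # once per preceding phase.
          first_phase_width = (config.final_relative_width
                               * 2 ** config.number_of_intermediate_phases)

          # Trial 1: measure at MTR, derive MRR from the pass ratio.
          loss1 = measure(config.max_rate, duration)
          mrr = config.max_rate * (1.0 - loss1)
          # If MRR is too close to MTR, set it below MTR (simplified).
          mrr = min(mrr, config.max_rate * (1.0 - first_phase_width))
          mrr = max(mrr, config.min_rate)

          # Trial 2: measure at MRR, derive MRR2 the same way.
          loss2 = measure(mrr, duration)
          mrr2 = max(config.min_rate, mrr * (1.0 - loss2))
          if loss2 <= smallest_target:
              # Loss already acceptable: move MRR2 above MRR (simplified).
              mrr2 = min(config.max_rate, mrr * (1.0 + first_phase_width))
          if mrr2 >= config.max_rate:
              # MTR was already measured in trial 1, so trial 3 is skipped.
              return [(config.max_rate, loss1), (mrr, loss2)]

          # Trial 3: measure at MRR2; the first trial is forgotten, so the
          # first intermediate phase starts from the (MRR, MRR2) results.
          loss3 = measure(mrr2, duration)
          return [(mrr, loss2), (mrr2, loss3)]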
4.3.  Non-Initial Phases

   1.  Main phase loop:

       1.  IN: trial_duration for the current phase.  Set to
           initial_trial_duration for the first intermediate phase; to
           final_trial_duration for the final phase; or to an element of
           the interpolating geometric sequence for other intermediate
           phases.  For example with two intermediate phases,
           trial_duration of the second intermediate phase is the
           geometric average of initial_trial_duration and
           final_trial_duration.

       2.  IN: relative_width_goal for the current phase.  Set to
           final_relative_width for the final phase; doubled for each
           preceding phase.  For example with two intermediate phases,
           the first intermediate phase uses quadruple of
           final_relative_width and the second intermediate phase uses
           double of final_relative_width.

       3.  IN: Measurement results from the previous phase (previous
           duration).

       4.  Internal target ratio loop:

           1.  IN: Target loss ratio for this iteration of the ratio
               loop.

           2.  IN: Measurement results from all previous ratio loop
               iterations of the current phase (current duration).

           3.  DO: According to the procedure described in point 2:

               1.  either exit the phase (by jumping to 1.5),

               2.  or exit the loop iteration (by continuing with the
                   next target loss ratio, jumping to 1.4.1),

               3.  or calculate a new transmit rate to measure with.

           4.  DO: Perform the trial measurement at the new transmit
               rate and the current trial duration, compute its loss
               ratio.

           5.  DO: Add the result and go to the next iteration (1.4.1),
               including the added trial result in 1.4.2.

       5.  OUT: Measurement results from this phase.

       6.  OUT: In the final phase, bounds for each target loss ratio
           are extracted and returned.

           1.  If a valid bound does not exist, use min_rate or
               max_rate.

   2.  New transmit rate (or exit) calculation (for point 1.4.3):

       1.  If the previous duration has the best upper and lower bound,
           select the middle point as the new transmit rate.

           1.  See 2.5.3 below for the exact splitting logic.

           2.  This can be a no-op if the interval is narrow enough
               already; in that case continue with 2.2.

           3.  Discussion, assuming the middle point is selected and
               measured:

               1.  Regardless of the loss ratio measured, the result
                   becomes either the best upper or the best lower bound
                   at the current duration.

               2.  So this condition is satisfied at most once per
                   iteration.

               3.  This also explains why the previous phase has a
                   double width goal:

                   1.  We avoid one more bisection at the previous
                       phase.

                   2.  At most one bound (per iteration) is re-measured
                       with the current duration.

                   3.  Each re-measurement can trigger an external
                       search.

                   4.  Such surprising external searches are the main
                       hurdle in achieving low overall search durations.

                   5.  Even without 1.1, there is at most one external
                       search per phase and target loss ratio.

                   6.  But without 1.1 there can be two re-measurements,
                       each coming with a risk of triggering an external
                       search.

       2.  If the previous duration has one bound best, select its
           transmit rate.  In the deterministic case this is the last
           measurement needed in this iteration.

       3.  If only an upper bound exists in the current duration
           results:

           1.  This can only happen for the smallest target loss ratio.

           2.  If the upper bound was measured at min_rate, exit the
               whole phase early (not investigating other target loss
               ratios).

           3.  Select a new transmit rate using external search:

               1.  For computing the previous interval size, use:

                   1.  the second tightest bound at the current
                       duration,

                   2.  or the tightest bound of the previous duration,
                       if compatible and giving a more narrow interval,

                   3.  or the target interval width if none of the above
                       is available.

                   4.  In any case increase to the target interval width
                       if smaller.

               2.  Quadruple the interval width.

               3.  Use min_rate if the new transmit rate is lower.

       4.  If only a lower bound exists in the current duration results:

           1.  If the lower bound was measured at max_rate, exit this
               iteration (continue with the next target loss ratio).

           2.  Select a new transmit rate using external search:

               1.  For computing the previous interval size, use:

                   1.  the second tightest bound at the current
                       duration,

                   2.  or the tightest bound of the previous duration,
                       if compatible and giving a more narrow interval,

                   3.  or the target interval width if none of the above
                       is available.

                   4.  In any case increase to the target interval width
                       if smaller.

               2.  Quadruple the interval width.

               3.  Use max_rate if the new transmit rate is higher.

       5.  The only remaining option is both bounds being in the current
           duration results.

           1.  This can happen in two ways, depending on how the lower
               bound was chosen.

               1.  It could have been selected for the current loss
                   ratio, e.g. in re-measurement (2.2) or in the initial
                   bisect (2.1).

               2.  It could have been found as an upper bound for the
                   previous smaller target loss ratio, in which case it
                   might be too low.
               3.  The algorithm does not track which one is the case,
                   as the decision logic works well regardless.

           2.  Compute the "extending down" candidate transmit rate
               exactly as in 2.3.

           3.  Compute the "bisecting" candidate transmit rate:

               1.  Compute the current interval width from the two
                   bounds.

               2.  Express the width as a (float) multiple of the target
                   width goal for this phase.

               3.  If the multiple is not higher than one, it means the
                   width goal is met.  Exit this iteration and continue
                   with the next higher target loss ratio.

               4.  If the multiple is two or less, use half of that for
                   the new width of the lower subinterval.

               5.  Otherwise, round the multiple up to the nearest even
                   integer.

               6.  Use half of that for the new width of the lower
                   subinterval.

               7.  Example: If the lower bound is 2.0, the upper bound
                   is 5.0, and the width goal is 1.0, the new candidate
                   transmit rate will be 4.0.  This can save a
                   measurement when 4.0 has small loss.  Selecting the
                   average (3.5) would never save a measurement, giving
                   more narrow bounds instead.

           4.  If either candidate computation wants to exit the
               iteration, do as the bisecting candidate computation
               says.

           5.  The remaining case is both candidates wanting to measure
               at some rate.  Use the higher rate.  This prefers an
               external search down a narrow enough interval over a
               perfectly sized lower bisect subinterval.
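   The splitting logic of point 2.5.3 can be illustrated with the
   following non-normative sketch (linear rate arithmetic is used here
   for simplicity; as noted in Section 5.1, the CSIT implementation
   works with logarithmic transmit rates).  The function name is
   illustrative only.

      import math

      def bisect_candidate(lower_bound, upper_bound, width_goal):
          """Candidate transmit rate per point 2.5.3, or None if the
          width goal is already met."""
          width = upper_bound - lower_bound
          multiple = width / width_goal
          if multiple <= 1.0:
              return None  # width goal met, exit this iteration
          if multiple <= 2.0:
              # Plain halving of the interval.
              lower_width = (multiple / 2.0) * width_goal
          else:
              # Round the multiple up to the nearest even integer,
              # then use half of that for the lower subinterval width.
              even_multiple = 2 * math.ceil(multiple / 2.0)
              lower_width = (even_multiple / 2.0) * width_goal
          return lower_bound + lower_width

      # Example from point 2.5.3.7: bounds 2.0 and 5.0 with width goal
      # 1.0 yield a candidate rate of 4.0 (not the plain average 3.5).
      assert bisect_candidate(2.0, 5.0, 1.0) == 4.0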
5.  FD.io CSIT Implementation

   The only known working implementation of MLRsearch is in the open-
   source code running in the Linux Foundation FD.io CSIT project
   [FDio-CSIT-MLRsearch] as part of a Continuous Integration /
   Continuous Development (CI/CD) framework.

   MLRsearch is also available as a Python package in [PyPI-MLRsearch].

5.1.  Additional details

   This document has so far been describing a simplified version of the
   MLRsearch algorithm.  The full algorithm as implemented in CSIT
   contains additional logic, which makes some of the details (but not
   the general ideas) above incorrect.  Here is a short description of
   the additional logic as a list of principles, explaining their main
   differences from (or additions to) the simplified description, but
   without detailing their mutual interaction.

   1.  Logarithmic transmit rate.

       *  In order to better fit the relative width goal, the interval
          doubling and halving is done differently.

       *  For example, the middle of 2 and 8 is 4, not 5.

   2.  Timeout for bad cases.

       *  The worst case for MLRsearch is when each phase converges to
          intervals far away from the results of the previous phase.

       *  Rather than suffer a total search time several times larger
          than pure binary search, the implemented tests fail themselves
          when the search takes too long (given by the argument
          _timeout_).

   3.  Intended count.

       *  The number of packets to send during the trial should be equal
          to the intended load multiplied by the duration.

          +  Also multiplied by a coefficient, if the loss ratio is
             calculated from a different metric.

             -  Example: If a successful transaction uses 10 packets,
                the load is given in transactions per second, but the
                loss ratio is calculated from packets, then the
                coefficient to get the intended count of packets is 10.

       *  But in practice that does not work.

          +  It could result in a fractional number of packets,

          +  so it has to be rounded in a way the traffic generator
             chooses,

          +  which may depend on the number of traffic flows and traffic
             generator worker threads.

   4.  Attempted count.  As the real number of intended packets is not
       known exactly, the computation uses the number of packets the
       traffic generator reports as sent, unless overridden by the next
       point.

   5.  Duration stretching.

       *  In some cases, the traffic generator may get overloaded,
          causing it to take significantly longer (than the duration) to
          send all packets.

       *  The implementation uses an explicit stop,

          +  causing a lower attempted count in those cases.

       *  The implementation tolerates some small difference between
          attempted count and intended count.

          +  10 microseconds worth of traffic is sufficient for our
             tests.

       *  If the difference is higher, the unsent packets are counted as
          lost.

          +  This forces the search to avoid the regions of high
             duration stretching.

          +  The final bounds describe the performance not just of the
             SUT, but of the whole system, including the traffic
             generator.

   6.  Excess packets.

       *  In some tests (e.g. using TCP flows) the traffic generator
          reacts to packet loss by retransmission.  Usually, such packet
          loss is already affecting the loss ratio.  If a test also
          wants to treat retransmissions due to heavily delayed packets
          as a failure, this is once again visible as a mismatch between
          the intended count and the attempted count.

       *  The CSIT implementation simply looks at the absolute value of
          the difference, so it offers the same small tolerance before
          it starts marking a "loss".

   7.  For result processing, we use lower bounds and ignore upper
       bounds.

5.1.1.  FD.io CSIT Input Parameters

   1.  *max_rate* - Typical values: 2 * 14.88 Mpps for 64B 10GE link
       rate, 2 * 18.75 Mpps for 64B 40GE NIC (specific model).

   2.  *min_rate* - Value: 2 * 9001 pps (we reserve 9000 pps for latency
       measurements).

   3.  *final_trial_duration* - Value: 30.0 seconds.

   4.  *initial_trial_duration* - Value: 1.0 second.

   5.  *final_relative_width* - Value: 0.005 (0.5%).

   6.  *packet_loss_ratios* - Value: 0.0, 0.005 (0.0% for NDR, 0.5% for
       PDR).

   7.  *number_of_intermediate_phases* - Value: 2.  The value has been
       chosen based on limited experimentation to date.  More
       experimentation is needed to arrive at clearer guidelines.

   8.  *timeout* - Limit for the overall search duration (for one
       search).  If MLRsearch oversteps this limit, it immediately
       declares the test failed, to avoid wasting even more time on a
       misbehaving SUT.  Value: 600.0 (seconds).

   9.  *expansion_coefficient* - Width multiplier for external search.
       Value: 4.0 (the interval width is quadrupled).  A value of 2.0 is
       best for well-behaved SUTs, but a value of 4.0 has been found to
       decrease the overall search time for worse-behaved SUT
       configurations, which contribute more to the overall duration of
       the whole set of different SUT configurations tested.

5.2.  Example MLRsearch Run

   The following list describes a search from a real test run in CSIT
   (using the default input values as above).

   o  Initial phase, trial duration 1.0 second.

   Measurement 1, intended load 18750000.0 pps (MTR), measured loss
   ratio 0.7089514628479618 (valid upper bound for both NDR and PDR).

   Measurement 2, intended load 5457160.071600716 pps (MRR), measured
   loss ratio 0.018650817320118702 (new tightest upper bounds).
   Measurement 3, intended load 5348832.933500009 pps (slightly less
   than MRR2, in preparation for the first intermediate phase target
   interval width), measured loss ratio 0.00964383362905351 (new
   tightest upper bounds).

   o  First intermediate phase starts, trial duration still 1.0 seconds.

   Measurement 4, intended load 4936605.579021453 pps (no lower bound,
   performing external search downwards, for NDR), measured loss ratio
   0.0 (valid lower bound for both NDR and PDR).

   Measurement 5, intended load 5138587.208637197 pps (bisecting for
   NDR), measured loss ratio 0.0 (new tightest lower bounds).

   Measurement 6, intended load 5242656.244044665 pps (bisecting),
   measured loss ratio 0.013523745379347257 (new tightest upper bounds).

   o  Both intervals are narrow enough.

   o  Second intermediate phase starts, trial duration 5.477225575051661
      seconds.

   Measurement 7, intended load 5190360.904111567 pps (initial bisect
   for NDR), measured loss ratio 0.0023533920869969953 (NDR upper bound,
   PDR lower bound).

   Measurement 8, intended load 5138587.208637197 pps (re-measuring NDR
   lower bound), measured loss ratio 1.2080222912800403e-06 (new
   tightest NDR upper bound).

   o  The two intervals have separate bounds from now on.

   Measurement 9, intended load 4936605.381062318 pps (external NDR
   search down), measured loss ratio 0.0 (new valid NDR lower bound).

   Measurement 10, intended load 5036583.888432355 pps (NDR bisect),
   measured loss ratio 0.0 (new tightest NDR lower bound).

   Measurement 11, intended load 5087329.903232804 pps (NDR bisect),
   measured loss ratio 0.0 (new tightest NDR lower bound).

   o  NDR interval is narrow enough, PDR interval not ready yet.

   Measurement 12, intended load 5242656.244044665 pps (re-measuring PDR
   upper bound), measured loss ratio 0.0101174866190136 (still valid PDR
   upper bound).

   o  Also PDR interval is narrow enough, with valid bounds for this
      duration.

   o  Final phase starts, trial duration 30.0 seconds.

   Measurement 13, intended load 5112894.3238511775 pps (initial bisect
   for NDR), measured loss ratio 0.0 (new tightest NDR lower bound).

   Measurement 14, intended load 5138587.208637197 pps (re-measuring NDR
   upper bound), measured loss ratio 2.030389804256833e-06 (still valid
   NDR upper bound).

   o  NDR interval is narrow enough, PDR interval not yet.

   Measurement 15, intended load 5216443.04126728 pps (initial bisect
   for PDR), measured loss ratio 0.005620871287975237 (new tightest PDR
   upper bound).

   Measurement 16, intended load 5190360.904111567 pps (re-measuring PDR
   lower bound), measured loss ratio 0.0027629971184465604 (still valid
   PDR lower bound).

   o  PDR interval is also narrow enough.

   o  Returning bounds:

   o  NDR_LOWER = 5112894.3238511775 pps; NDR_UPPER = 5138587.208637197
      pps;

   o  PDR_LOWER = 5190360.904111567 pps; PDR_UPPER = 5216443.04126728
      pps.

6.  IANA Considerations

   No requests of IANA.

7.  Security Considerations

   Benchmarking activities as described in this memo are limited to
   technology characterization of a DUT/SUT using controlled stimuli in
   a laboratory environment, with dedicated address space and the
   constraints specified in the sections above.

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.
   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the DUT/SUT.

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
   benchmarking purposes.  Any implications for network security arising
   from the DUT/SUT SHOULD be identical in the lab and in production
   networks.

8.  Acknowledgements

   Many thanks to Alec Hothan of the OPNFV NFVbench project for a
   thorough review and numerous useful comments and suggestions.

9.  References

9.1.  Normative References

   [RFC2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544,
              DOI 10.17487/RFC2544, March 1999.

9.2.  Informative References

   [FDio-CSIT-MLRsearch]
              "FD.io CSIT Test Methodology - MLRsearch", February 2021.

   [PyPI-MLRsearch]
              "MLRsearch 0.4.0, Python Package Index", April 2021.

Authors' Addresses

   Maciek Konstantynowicz (editor)
   Cisco Systems

   Email: mkonstan@cisco.com

   Vratko Polak (editor)
   Cisco Systems

   Email: vrpolak@cisco.com