2 Network Working Group A. Morton 3 Internet-Draft J. Uttaro 4 Updates: ???? (if approved) AT&T Labs 5 Intended status: Informational October 23, 2018 6 Expires: April 26, 2019 8 Benchmarks and Methods for Multihomed EVPN 9 draft-morton-bmwg-multihome-evpn-00 11 Abstract 13 Fundamental Benchmarking Methodologies for Network Interconnect 14 Devices of interest to the IETF are defined in RFC 2544. Key 15 benchmarks applicable to restoration and multi-homed sites are in RFC 16 6894. This memo applies these methods to Multihomed nodes 17 implemented on Ethernet Virtual Private Networks (EVPN).
19 Requirements Language 21 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 22 "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and 23 "OPTIONAL" in this document are to be interpreted as described in BCP 24 14 [RFC2119] [RFC8174] when, and only when, they appear in all 25 capitals, as shown here. 27 Status of This Memo 29 This Internet-Draft is submitted in full conformance with the 30 provisions of BCP 78 and BCP 79. 32 Internet-Drafts are working documents of the Internet Engineering 33 Task Force (IETF). Note that other groups may also distribute 34 working documents as Internet-Drafts. The list of current Internet- 35 Drafts is at https://datatracker.ietf.org/drafts/current/. 37 Internet-Drafts are draft documents valid for a maximum of six months 38 and may be updated, replaced, or obsoleted by other documents at any 39 time. It is inappropriate to use Internet-Drafts as reference 40 material or to cite them other than as "work in progress." 42 This Internet-Draft will expire on April 26, 2019. 44 Copyright Notice 46 Copyright (c) 2018 IETF Trust and the persons identified as the 47 document authors. All rights reserved. 49 This document is subject to BCP 78 and the IETF Trust's Legal 50 Provisions Relating to IETF Documents 51 (https://trustee.ietf.org/license-info) in effect on the date of 52 publication of this document. Please review these documents 53 carefully, as they describe your rights and restrictions with respect 54 to this document. Code Components extracted from this document must 55 include Simplified BSD License text as described in Section 4.e of 56 the Trust Legal Provisions and are provided without warranty as 57 described in the Simplified BSD License. 59 Table of Contents 61 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 62 2. Scope and Goals . . . . . . . . . . . . . . . . . . . . . . . 3 63 3. Motivation . . . . . . . . . . . . . . . . . . . . . . . . . 3 64 4. Test Setups . . . . .
. . . . . . . . . . . . . . . . . . . . 3 65 5. Procedure for Throughput Characterization . . . . . . . . . . 5 66 5.1. Address Learning Phase . . . . . . . . . . . . . . . . . 5 67 5.2. Test for a Single Frame Size and Number of Flows . . . . 5 68 5.3. Test Repetition . . . . . . . . . . . . . . . . . . . . . 6 69 5.4. Benchmark Calculations . . . . . . . . . . . . . . . . . 6 70 6. Procedure for Mass Withdrawal Characterization . . . . . . . 6 71 6.1. Address Learning Phase . . . . . . . . . . . . . . . . . 6 72 6.2. Test for a Single Frame Size and Number of Flows . . . . 6 73 6.3. Test Repetition . . . . . . . . . . . . . . . . . . . . . 7 74 6.4. Benchmark Calculations . . . . . . . . . . . . . . . . . 7 75 7. Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . 7 76 8. Security Considerations . . . . . . . . . . . . . . . . . . . 8 77 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 8 78 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 8 79 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 9 80 11.1. Normative References . . . . . . . . . . . . . . . . . . 9 81 11.2. Informative References . . . . . . . . . . . . . . . . . 10 82 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 10 84 1. Introduction 86 The IETF's fundamental Benchmarking Methodologies are defined 87 in [RFC2544], supported by the terms and definitions in [RFC1242]; 88 [RFC2544] obsoletes an earlier specification, [RFC1944]. 90 This memo recognizes the importance of Ethernet Virtual Private 91 Network (EVPN) Multihoming connectivity scenarios, where a CE device 92 is connected to two or more PEs using an instance of an Ethernet 93 Segment. 95 In an all-active or Active-Active scenario, CE-PE traffic is load- 96 balanced across the two or more PEs.
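As a purely illustrative aside (not part of the benchmarking method), all-active load balancing is commonly realized as a per-flow hash over the flow 5-tuple, so that every frame of a given flow takes the same CE-PE link. The sketch below assumes a hypothetical hash scheme; real devices use vendor-specific hash functions:

```python
import hashlib

def select_pe(five_tuple, num_pes):
    """Map a flow 5-tuple to one of num_pes PEs (illustrative all-active
    load balancing; real PEs use vendor-specific hash functions)."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    # Take the first 4 bytes of the digest as an unsigned int, mod fan-out.
    return int.from_bytes(digest[:4], "big") % num_pes

# A given flow always maps to the same PE, so its frames are not reordered.
flow = ("10.0.0.1", "10.0.1.1", 1234, 80, "tcp")
pe_index = select_pe(flow, 2)
```

Because the mapping is deterministic per flow, load balancing is statistical: it evens out only over many flows, which is one reason the procedures below use large numbers of flows.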
98 Mass-withdrawal of routes may take place when an autodiscovery route 99 is used on a per Ethernet Segment basis, and there is a link failure 100 on one of the Ethernet Segment links (or configuration changes 101 take place). 103 Although EVPN depends on address-learning in the control-plane, 104 the Ethernet Segment Instance is permitted to use "the method best 105 suited to the CE: data-plane learning, IEEE 802.1x, the Link Layer 106 Discovery Protocol (LLDP), IEEE 802.1aq, Address Resolution Protocol 107 (ARP), management plane, or other protocols" [RFC7432]. 109 This memo seeks to benchmark these important cases (and others). 111 2. Scope and Goals 113 The scope of this memo is to define a method to unambiguously perform 114 tests, measure the benchmark(s), and report the results for Capacity 115 of EVPN Multihoming connectivity scenarios, and other key restoration 116 activities covering link failure in the Active-Active scenario. 118 The goal is to provide more efficient test procedures where possible, 119 and to expand reporting with additional interpretation of the 120 results. The tests described in this memo address cases where a CE 121 is multihomed to two or more PE devices on an Ethernet Segment. 122 3. Motivation 124 To be provided. 128 4. Test Setups 130 For simple Capacity/Throughput Benchmarks, the Test Setup MUST be 131 consistent with Figure 1 of [RFC2544], or Figure 2 when the tester's 132 sender and receiver are different devices. 134 +--------+ ,-----. +--------+ 135 | | / \ | | 136 | | /( PE ....| | 137 | | / \ 1 / | | 138 | Test | ,-----. / `-----' | Test | 139 | | / \ / | | 140 | Device |...( CE X | Device | 141 | | \ 1 / \ | | 142 | | `-----' \ ,-----. | | 143 | | \ / \ | | 144 | | \( PE ....| | 145 +--------+ \ 2 / +--------+ 146 `-----' 148 Figure 1 SUT for Throughput and other Ethernet Segment Tests 150 In this case, the System Under Test (SUT) comprises a single CE 151 device and two or more PE devices.
The tester SHALL be connected to 152 all CE and PE, and be capable of simultaneously sending and 153 receiving frames on all ports in use. The tester SHALL be capable of 154 generating multiple flows (according to a 5-tuple definition, or any 155 subset of the 5-tuple). The tester SHALL be able to control the IP 156 capacity of sets of individual flows, and the presence of sets of 157 flows on specific interface ports. 159 Other mandatory testing aspects described in [RFC2544] MUST be 160 included, unless explicitly modified in the next section. 162 The ingress and egress link speeds and link layer protocols MUST be 163 specified and used to compute the maximum theoretical frame rate when 164 respecting the minimum inter-frame gap. 166 A second test case is where a BGP backbone using MPLS-LDP 167 interconnects multiple PE - ESI - CE locations. 169 Test Test 170 Device Device 171 EVI-1 172 +---+ ,-----. +---+ 173 | | ESI / \ | | 174 | | 1 /( PE ..... ESI | | 175 | | / \ 1 / \ EVI 2 | | 176 | | ,-----. / `-----' \ ,-----. +--+| | 177 | | / \ / \ / \ | || | 178 | |...( CE X X...( PE ...|CE|| | 179 | | \ 1 / \ / \ 3 / | 2|| | 180 | | `-----' \ ,-----. / `-----' +--+| | 181 | | \ / \ / | | 182 | | \( PE ..../ | | 183 +---+ \ 2 / +---+ 184 `-----' 185 EVI-2 187 Figure 2 SUT with BGP & MPLS interconnecting multiple PE-ESI-CE 188 locations 190 All Link speeds MUST be reported, along with complete device 191 configurations in the SUT and Test Device(s). 193 Additional Test Setups and configurations will be provided in this 194 section, after review. 196 One capacity benchmark pertains to the number of ESI that a network 197 with multiple PE - ESI - CE locations can support. 199 5. Procedure for Throughput Characterization 201 Objective: To characterize the ability of a DUT to process frames 202 between CE and one or more PE in a multihomed connectivity scenario. 203 Figure 1 gives the test setup. 205 The Procedure follows. 207 5.1.
Address Learning Phase 209 "For every address, learning frames MUST be sent to the DUT/SUT to 210 allow the DUT/SUT update its address tables properly." [RFC2889] 212 5.2. Test for a Single Frame Size and Number of Flows 214 Each trial in the test requires configuring a number of flows (from 215 100 to 100k) and a fixed frame size (64, 128, 256, 512, 216 1024, 1280, and 1518 octets, as per [RFC2544]). 218 The Procedure SHALL follow section 5.1 of [RFC2889]. 220 5.3. Test Repetition 222 The test MUST be repeated N times for each frame size in the subset 223 list, and each Throughput value made available for further processing 224 (below). 226 5.4. Benchmark Calculations 228 For each Frame size, calculate the following summary statistics for 229 Throughput values over the N tests: 231 o Average (Benchmark) 233 o Minimum 235 o Maximum 237 o Standard Deviation 239 Comparison of per-PE results will determine how the load was 240 balanced among the PEs. 241 6. Procedure for Mass Withdrawal Characterization 243 Objective: To characterize the ability of a DUT to process frames 244 between CE and one or more PE in a multihomed connectivity scenario 245 when a mass withdrawal takes place. Figure 2 gives the test setup. 247 The Procedure follows. 249 6.1. Address Learning Phase 251 "For every address, learning frames MUST be sent to the DUT/SUT to 252 allow the DUT/SUT update its address tables properly." [RFC2889] 254 6.2. Test for a Single Frame Size and Number of Flows 256 Each trial in the test requires configuring a number of flows (from 257 100 to 100k) and a fixed frame size (64, 128, 256, 512, 258 1024, 1280, and 1518 octets, as per [RFC2544]). 260 The Offered Load SHALL be offered at the Throughput level 261 previously determined for the selected Frame size and 262 number of Flows in use. 264 The Procedure SHALL follow section 5.1 of [RFC2889] (except there is 265 no need to search for the Throughput level).
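The summary statistics specified in the Benchmark Calculations (average, minimum, maximum, and standard deviation over the N repetitions) could be computed, for example, as follows; this sketch and its sample values are illustrative only:

```python
import statistics

def summarize(values):
    """Summary statistics over N repeated Throughput (or time) results."""
    return {
        "average": statistics.mean(values),  # reported as the Benchmark
        "minimum": min(values),
        "maximum": max(values),
        "stdev": statistics.stdev(values),   # sample standard deviation
    }

# Hypothetical example: Throughput in frames per second from N=5 repetitions.
result = summarize([25500, 26000, 26200, 25800, 27000])
```

A small standard deviation relative to the average suggests a repeatable benchmark; a large one may indicate unstable load balancing across the PEs.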
267 When traffic has been sent for 5 seconds, one of the CE-PE links on 268 the ESI SHALL be disabled, and the time of this action SHALL be 269 recorded for further calculations. For example, if the CE1 link to 270 PE1 is disabled, this should trigger a Mass withdrawal of EVI-1 271 addresses, and the subsequent re-routing of traffic to PE2. 273 Frame losses are expected to be recorded during the restoration time. 274 Time for restoration may be estimated as described in section 3.5 275 of [RFC6412]. 277 6.3. Test Repetition 279 The test MUST be repeated N times for each frame size in the subset 280 list, and each restoration time value made available for further 281 processing (below). 283 6.4. Benchmark Calculations 285 For each Frame size and number of flows, calculate the following 286 summary statistics for Loss (or Time to return to Throughput level 287 after restoration) values over the N tests: 289 o Average (Benchmark) 291 o Minimum 293 o Maximum 295 o Standard Deviation 297 7. Reporting 299 The results SHOULD be reported in the format of a table with a row 300 for each of the tested frame sizes and Number of Flows. There SHOULD 301 be columns for the frame size with number of flows, and for the 302 resultant average frame count (or time) for each type of data stream 303 tested. 305 The number of tests averaged for the Benchmark, N, MUST be reported. 307 The Minimum, Maximum, and Standard Deviation across all complete 308 tests SHOULD also be reported. 310 The Corrected DUT Restoration Time SHOULD also be reported, as 311 applicable.
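A loss-derived estimate of restoration time, in the spirit of section 3.5 of [RFC6412], divides the frames lost during failover by the (constant) offered frame rate. The numbers in this sketch are hypothetical:

```python
def restoration_time(frames_lost, offered_load_fps):
    """Loss-derived restoration time estimate: frames lost during the
    failover divided by the constant offered frame rate (fps)."""
    return frames_lost / offered_load_fps

# Hypothetical example: 1300 frames lost at an offered load of 26000 fps.
t = restoration_time(1300, 26000)  # 0.05 seconds
```

This estimate assumes the offered load is constant and that all loss during the trial is attributable to the withdrawal and re-routing event.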
313 +----------------+-------------------+----------------+-------------+ 314 | Frame Size, | Ave Benchmark, | Min,Max,StdDev | Calculated | 315 | octets + # | fps, frames or | | Time, Sec | 316 | Flows | time | | | 317 +----------------+-------------------+----------------+-------------+ 318 | 64,100 | 26000 | 25500,27000,20 | 0.00004 | 319 +----------------+-------------------+----------------+-------------+ 321 Throughput or Loss/Restoration Time Results 323 Static and configuration parameters: 325 Number of test repetitions, N 327 Minimum Step Size (during searches), in frames. 329 8. Security Considerations 331 Benchmarking activities as described in this memo are limited to 332 technology characterization using controlled stimuli in a laboratory 333 environment, with dedicated address space and the other constraints 334 [RFC2544]. 336 The benchmarking network topology will be an independent test setup 337 and MUST NOT be connected to devices that may forward the test 338 traffic into a production network, or misroute traffic to the test 339 management network. See [RFC6815]. 341 Further, benchmarking is performed on a "black-box" basis, relying 342 solely on measurements observable external to the DUT/SUT. 344 Special capabilities SHOULD NOT exist in the DUT/SUT specifically for 345 benchmarking purposes. Any implications for network security arising 346 from the DUT/SUT SHOULD be identical in the lab and in production 347 networks. 349 9. IANA Considerations 351 This memo makes no requests of IANA. 353 10. Acknowledgements 355 Thanks to 357 11. References 359 11.1. Normative References 361 [RFC1242] Bradner, S., "Benchmarking Terminology for Network 362 Interconnection Devices", RFC 1242, DOI 10.17487/RFC1242, 363 July 1991, <https://www.rfc-editor.org/info/rfc1242>. 365 [RFC1944] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 366 Network Interconnect Devices", RFC 1944, 367 DOI 10.17487/RFC1944, May 1996, 368 <https://www.rfc-editor.org/info/rfc1944>.
370 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 371 Requirement Levels", BCP 14, RFC 2119, 372 DOI 10.17487/RFC2119, March 1997, 373 <https://www.rfc-editor.org/info/rfc2119>. 375 [RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for 376 Network Interconnect Devices", RFC 2544, 377 DOI 10.17487/RFC2544, March 1999, 378 <https://www.rfc-editor.org/info/rfc2544>. 380 [RFC2889] Mandeville, R. and J. Perser, "Benchmarking Methodology 381 for LAN Switching Devices", RFC 2889, 382 DOI 10.17487/RFC2889, August 2000, 383 <https://www.rfc-editor.org/info/rfc2889>. 385 [RFC5180] Popoviciu, C., Hamza, A., Van de Velde, G., and D. 386 Dugatkin, "IPv6 Benchmarking Methodology for Network 387 Interconnect Devices", RFC 5180, DOI 10.17487/RFC5180, May 388 2008, <https://www.rfc-editor.org/info/rfc5180>. 390 [RFC6201] Asati, R., Pignataro, C., Calabria, F., and C. Olvera, 391 "Device Reset Characterization", RFC 6201, 392 DOI 10.17487/RFC6201, March 2011, 393 <https://www.rfc-editor.org/info/rfc6201>. 395 [RFC6412] Poretsky, S., Imhoff, B., and K. Michielsen, "Terminology 396 for Benchmarking Link-State IGP Data-Plane Route 397 Convergence", RFC 6412, DOI 10.17487/RFC6412, November 398 2011, <https://www.rfc-editor.org/info/rfc6412>. 400 [RFC6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton, 401 "Applicability Statement for RFC 2544: Use on Production 402 Networks Considered Harmful", RFC 6815, 403 DOI 10.17487/RFC6815, November 2012, 404 <https://www.rfc-editor.org/info/rfc6815>. 406 [RFC6985] Morton, A., "IMIX Genome: Specification of Variable Packet 407 Sizes for Additional Testing", RFC 6985, 408 DOI 10.17487/RFC6985, July 2013, 409 <https://www.rfc-editor.org/info/rfc6985>. 411 [RFC8174] Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 412 2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174, 413 May 2017, <https://www.rfc-editor.org/info/rfc8174>. 415 11.2. Informative References 417 [OPNFV-2017] 418 Cooper, T., Morton, A., and S. Rao, "Dataplane 419 Performance, Capacity, and Benchmarking in OPNFV", June 420 2017, 421 . 424 [RFC8239] Avramov, L. and J. Rapp, "Data Center Benchmarking 425 Methodology", RFC 8239, DOI 10.17487/RFC8239, August 2017, 426 <https://www.rfc-editor.org/info/rfc8239>.
428 [TST009] ETSI Network Function Virtualization ISG, "ETSI GS NFV-TST 429 009 V3.1.1 (2018-10), "Network Functions Virtualisation 430 (NFV) Release 3; Testing; Specification of Networking 431 Benchmarks and Measurement Methods for NFVI"", October 432 2018, . 435 [VSPERF-b2b] 436 Morton, A., "Back2Back Testing Time Series (from CI)", 437 June 2017, . 441 [VSPERF-BSLV] 442 Morton, A. and S. Rao, "Evolution of Repeatability in 443 Benchmarking: Fraser Plugfest (Summary for IETF BMWG)", 444 July 2018, 445 . 449 Authors' Addresses 450 Al Morton 451 AT&T Labs 452 200 Laurel Avenue South 453 Middletown, NJ 07748 454 USA 456 Phone: +1 732 420 1571 457 Fax: +1 732 368 1192 458 Email: acm@research.att.com 460 Jim Uttaro 461 AT&T Labs 462 200 Laurel Avenue South 463 Middletown, NJ 07748 464 USA 466 Email: uttaro@att.com